Civitai automatically tags bounties requesting deepfakes and offers a way for the person featured in the content to manually request its removal. This system gives Civitai a fairly reliable view of which deepfake bounties exist on its platform, but it still leaves moderation up to the general public rather than actively enforcing its rules.
A company’s legal liability for the actions of its users is not entirely clear. Generally, tech companies have broad protections against liability for user-generated content under Section 230 of the Communications Decency Act, but those protections are not unlimited. For example, “You cannot knowingly facilitate illegal transactions on your website,” says Ryan Calo, a professor specializing in technology and AI at the University of Washington School of Law. (Calo was not involved in the new study.)
Civitai joined OpenAI, Anthropic, and other AI companies in adopting design principles in 2024 to protect against the creation and dissemination of AI-generated child sexual abuse material. That step followed a 2023 report from the Stanford Internet Observatory, which found that most of the AI models named in pedophile communities were Stable Diffusion–based models obtained “primarily through Civitai.”
But deepfakes of adults have not received the same level of attention from content platforms or the venture capital firms that fund them. “They’re not afraid of it that much. They’re very tolerant of it,” Calo says. “Neither law enforcement nor civil courts adequately protect against this. It’s night and day.”
In November 2023, Civitai received a $5 million investment from Andreessen Horowitz (a16z). In a video shared by a16z, Justin Maier, cofounder and CEO of Civitai, described the goal of creating the main place where people find and share AI models for their personal purposes. “Our goal is to make this space that’s, I think, unique and engineering-centric to as many people as possible,” he said.
Civitai is not the only company in a16z’s investment portfolio with a deepfake problem. In February, MIT Technology Review first reported that another portfolio company, Botify AI, was hosting AI companions resembling real actors that presented themselves as under 18, engaged in sexually charged conversations, offered “hot photos,” and in some cases described age-of-consent laws as “arbitrary” and “breakable.”
