Even though OpenAI has yet to publicly release its text-to-video model known as Sora, a group of testers who claim to have gotten early access to the tool say they’ve leaked it to the public in protest. Sora was first announced by OpenAI in February 2024.
The artists used the opportunity to make a greater point about how the AI industry exploits creative labor by turning artists into “PR puppets.”
“We received access to Sora with the promise to be early testers, red teamers, and creative partners,” reads an open letter posted to public AI model repository Hugging Face.
“However, we believe instead we are being lured into ‘art washing’ to tell the world that Sora is a useful tool for artists,” they added, referring to the process of covering up the shortcomings of a corporation by employing art in a positive way.
“We are not your: free bug testers, PR puppets, training data, validation tokens,” the letter reads.
The artists complained that OpenAI, despite having raised a substantial amount of money, is relying on their free labor. “Hundreds of artists provide unpaid labor through bug testing, feedback, and experimental work for the program for a $150B valued company,” the letter reads.
The artists also criticized OpenAI for requiring “every output” to be screened “before sharing.”
“This early access program appears to be less about creative expression and critique, and more about PR and advertisement,” the letter reads.
In a statement to The Verge, OpenAI spokesperson Niko Felix argued that participation in the preview is “voluntary, with no obligation to provide feedback or use the tool.”
“Sora is still in research preview, and we’re working to balance creativity with robust safety measures for broader use,” the statement reads.
Besides safety concerns, OpenAI may also be delaying the rollout of Sora for a much more benign reason: the astronomical amount of computing power required to generate high-resolution videos with AI.