How the Infinite-Generating AI Version of 'Seinfeld' Got Transphobic
AI can be really fun until it unpacks the worst parts of humanity.
The AI version of Seinfeld called Nothing, Forever has been suspended from Twitch for 14 days after its AI clone of Jerry Seinfeld, named Larry Feinberg, made transphobic statements during a stand-up bit.
The creators have appealed the ban and said they will work to prevent similar incidents in the future.
Xander, one of the creators of Nothing, Forever, said on Discord, “Hey everybody. Here's the latest: we received a 14-day suspension due to what Larry Feinberg said tonight during a club bit. We've appealed the ban, and we'll let you know as we know more on what Twitch decides. Regardless of the outcome of the appeal, we'll be back and will spend the time working to ensure to the best of our abilities that nothing like that happens again.”
We won't be posting the transphobic joke here. It wasn't even a joke, just a few remarks that were pretty dumb and not remotely comedy.
So, how did this happen?
'Nothing, Forever' (Credit: Discord)
Why Did the AI Version of Seinfeld Become Transphobic?
The AI behind the show was fed classic episodes of Seinfeld and trained to mimic them. The creators believe the root cause of the show turning transphobic was that they had to switch the AI model they were using.
Tinylobsta, a staff member of Nothing, Forever, wrote on Discord, “We’ve been investigating the root cause of the issue. Earlier tonight, we started having an outage using OpenAI’s GPT-3 Davinci model, which caused the show to exhibit errant behaviors (you may have seen empty rooms cycling through)."
"OpenAI has a less sophisticated model, Curie, which was the predecessor to Davinci," Tinylobsta explained. "When Davinci started failing, we switched over to Curie to try to keep the show running without any downtime. The switch to Curie was what resulted in the inappropriate text being generated. We leverage OpenAI’s content moderation tools, which have worked thus far for the Davinci model but were not successful with Curie. We’ve been able to identify the root cause of our issue with the Davinci model, and will not be using Curie as a fallback in the future. We hope this sheds a little light on how this happened.”
Another staff member posted that nothing the AI wrote was indicative of the opinions of the people behind the show.
AI is such a tricky thing. It can be a fun tool to experiment with, but because it's not socially trained, we've seen many instances where it turns racist, sexist, and homophobic incredibly quickly. We're still so far away from autonomy in this space.
As computers get smarter, we still have to edit and explain because there's so much more nuance to being a human being than just feeding ideas into an algorithm.
Let me know your thoughts in the comments.
Source: Vice