Out at dinner on Monday night in Amagansett, Ellis showed me his phone. On it were videos his friends had made on Sora 2 using his cameo. If none of this makes sense, you, dear reader, are not alone. He explained, and what struck me most was how ominous it all has become, very quickly.
Sora 2 is a so-called artificial intelligence that can make lifelike videos from a few brief instructions, otherwise known as prompts. To allow friends to put him in their videos, Ellis uploaded short clips of himself moving around and speaking. In the resulting creations, the figure looked and sounded like my own son, albeit doing bizarre things.
At the level of the individual, Sora 2 and the other up-and-coming A.I. video generators are unleashing an unprecedented potential for abuse. Among the threats is an old scam: someone pretending to be a grandchild in trouble. A plea from an A.I.-generated relative in need of money becomes far more convincing when the con arrives as a video.
Sora 2 stamps what are called watermarks on its creations, but Ellis told me there are plenty of apps that can make them disappear. One way to tell whether a video was conjured up with A.I., he said, is that a subject's hair moves strangely. But they have probably fixed that already, he thought. More broadly, in a country already on edge, such videos could lead to real-life violence.
Fake news reports, fake newspaper articles, fake incendiary acts by political rivals: each of these could ignite pre-existing hatreds into dangerous, real-life uprisings.
My hope is that people who prefer to live in the real world will turn back to trusted, old-fashioned newspapers and to in-person experience to tell fact from fiction. That could be A.I.'s silver lining for civil discourse. But perhaps I am too optimistic.