WIRED Roundup: The New Fake World of OpenAI’s Social Video App
OpenAI recently released Sora, a social video app that uses artificial intelligence to generate realistic video of people, reproducing their likenesses and voices.
The app has raised concerns about the potential for misuse and the spread of manipulated content.
Users can create videos featuring AI-generated likenesses that look and sound like real people, including people who never consented to being portrayed this way.
Critics worry that Sora could be used to spread disinformation, perpetuate hoaxes, or harass individuals with fabricated videos of them.
OpenAI has responded to these concerns with moderation policies and tools meant to detect and remove harmful content from the platform.
Even with these safeguards, the ethical and legal implications of the technology remain a subject of debate.
Some users have praised Sora as an innovative approach to social interaction and storytelling, while others remain wary of its consequences.
As the app gains popularity, it will be crucial for both developers and users to consider the impact of their actions on the broader digital landscape.
Ultimately, the emergence of tools like Sora raises important questions about the boundaries of the technology and the responsibilities that come with using it.