When generative artificial intelligence (AI) company OpenAI released its text-to-video app Sora 2 in September of this year, it promised that “you are in control of your likeness end-to-end” in its cameos: short clips featuring users’ likenesses built from facial scans. However, deepfake-detection company Reality Defender claims it bypassed Sora’s security safeguards within 24 hours of release, creating cameos of celebrities and CEOs from publicly available footage found on the Internet.
This is a reminder not only to think carefully before posting photos and videos of yourself, your family, and your friends online, but also that creating AI content carries real responsibility. Always consider the potential impact of any AI-generated images and videos you make available, and ensure your work respects the rights and dignity of others. Failing to do so can spread misinformation, damage reputations, violate individuals’ privacy, and erode the trust you’ve built online.
Creating Deepfakes with AI Text-to-Video Apps Isn’t Worth the Risk
27 Dec, 2025