AI Virtual Personalities and Community Ethics

1. The Rise of Virtual Characters and the Dilemma of "Indistinguishable Reality"

A few weeks ago, I was chatting with Dagen when we brought up the many strikingly realistic AI-generated accounts appearing on Instagram. These characters come with complete identity profiles; for example, we saw an account presenting itself as a Korean woman who described herself as a "weather forecaster." Her posts were indistinguishable from those of a typical influencer: gym photos, coffee-shop check-ins, and snapshots of daily life.

We even discussed whether to research how we might use AI generation technology to create virtual characters of our own and manage their communities. This raised an ethical concern for me, though: while such behavior doesn't directly "harm" anyone, it undeniably makes it much harder for the world to distinguish "real" from "fake."

2. Digital Identity Theft: BBC Exposes Color-Changing Forgery Case

I just saw a BBC report on a case where a female online celebrity's videos were stolen and her skin was "darkened" using AI technology to make her appear as a Black woman.

Aside from skin color, the lip movements, expressions, and background in the video were completely identical. This fake account attracted traffic by tagging controversial topics (such as racial issues or specific preferences), accumulating 3 million followers in a short period.

3. The Commercial Interests Behind the Fake Content

After establishing influence, these accounts would direct traffic to third-party platforms (such as OnlyFans or other subscription-based websites) to earn substantial revenue. Experts point out that these accounts, which steal others' images and process them with AI, are extremely profitable. This is not just a technical issue, but has evolved into serious infringement and fraud.

4. The Failure of Platform Guidelines: Vulnerabilities in the Tagging System

Although platforms like Instagram currently have guidelines for tagging "AI-generated" content, two major problems exist:

  • Reliance on Self-Tagging: Currently, most tagging relies on users' voluntary labeling. Someone intending to profit from fake content is unlikely to voluntarily tag it as AI-generated.

  • Insufficient Detection Capabilities: Platforms lack robust detection systems and cannot reliably identify AI-generated works on their own. Even when followers questioned an account's authenticity, the operator would reply manually with emojis, sustaining the illusion of a "real person."
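The self-tagging gap described above can be sketched in a few lines: a platform that trusts only a creator-supplied flag catches honest accounts and misses adversarial ones by construction. All field names and posts here are hypothetical, invented for illustration; they are not any real platform's API. (Real systems would need to combine this with provenance metadata, such as C2PA content credentials, and classifier-based detection.)

```python
# Hypothetical sketch: why voluntary self-tagging fails as a moderation signal.
# Field names and example posts are illustrative only.

def needs_ai_label(post: dict) -> bool:
    """A platform that trusts only the creator's own declaration."""
    return post.get("creator_declared_ai", False)

honest_post = {"author": "virtual_weather_girl", "creator_declared_ai": True}
adversarial_post = {"author": "stolen_face_account"}  # simply omits the flag

print(needs_ai_label(honest_post))       # True: honest accounts get labeled
print(needs_ai_label(adversarial_post))  # False: forgers slip through untouched
```

The asymmetry is the whole problem: the check only binds the accounts that cooperate, which is exactly the population that didn't need checking.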

This raises the question: Should AI-generated content appear on social media? This misleading behavior severely damages the ability to distinguish between genuine and fake content in the community.

5. Contradictory Signals from Corporate Giants: Disney and OpenAI

A few days ago, I saw news about the changing relationship between Disney and OpenAI, mentioning that Disney+ originally intended to use AI to create video content to attract subscribers.

This signal from the capital markets seems, to some extent, to "endorse" netizens using AI for derivative creation, even pushing the boundaries of copyright and into controversial territory. When large corporations begin to treat AI content as a commercial driver, how do we distinguish creativity from plagiarism? And how do we protect the digital environment we live in?

6. The Gap Between Fragmented Memory and Verification

This phenomenon reminds me of the discussion I had a few days ago about [[Reflections on Minecraft Short Video Plagiarism and Content Ecosystem]]. In the fragmented world of modern life, with its fragmented memories and fragmented time, we often experience a sense of déjà vu when scrolling through Reels or Shorts: "I feel like I've seen this before, but I can't remember where."

However, it's almost impossible for us to invest library-level research effort in verifying the source of a video. There are simply too many short videos online; the time cost of finding and comparing them far outweighs their entertainment value.

7. Entertainment First: Scrolling Instead of Verifying

Faced with this situation, my psychological reaction is: "I have more important things to do; I don't have time to research." Instead of spending time searching through viewing history to satisfy curiosity, I'd rather use that time to scroll through more short videos for more direct entertainment. This leads us to unconsciously abandon our commitment to verifying the accuracy of information.

8. The Stealth of AI Face-Swapping: A Case Study of Influencer Dance Videos

Suppose I see a video of an influencer dancing today, and five days later, I see a version where the skin tone has been AI-altered. While I might have a vague recollection—"This background looks familiar," "The movements and expressions seem similar"—I would never specifically go back to my viewing history from five days ago to verify whether this person's content has been stolen or their face has been swapped by AI.

This gap of "familiarity without verification" allows plagiarists and AI forgers to easily hide behind massive data flows, continuously profiting from public forgetfulness and laziness.
