The synthetic AI order: the future of social media

Naved Akhtar Khan

AI-generated influencers, avatars and deepfakes are reshaping social media, raising questions about trust, regulation and power.

Social media feeds are filling up with people who do not exist. The smiling faces selling products, the influencers sharing daily struggles and even the news anchors reading headlines are increasingly computer-generated. Analysts say this marks the start of a shift toward what some researchers call a “synthetic AI order,” in which large parts of online life are created and managed by machines.

At its core, synthetic AI refers to artificial intelligence systems that do more than analyze or recommend existing information. Unlike earlier forms of AI that mainly sorted, predicted, or optimized human-made data, synthetic AI can generate entire realities — faces, voices, personalities, texts, videos, and even social interactions — from scratch. These systems do not merely assist humans; they increasingly act in place of them, producing content, identities, and behavior at scale. In practical terms, synthetic AI is what allows a non-existent person to speak convincingly, react to audiences, learn from engagement, and continue operating with little or no human oversight.

The trend began with virtual influencers. In 2016, an Instagram account under the name Lil Miquela appeared in Los Angeles. She posted photos, promoted brands and gained millions of followers. Later it was revealed she was not real but a digital character created by a startup. Soon after, Shudu Gram, marketed as the “world’s first digital supermodel,” was launched by photographer Cameron-James Wilson and booked for luxury fashion campaigns.

In 2018, China’s state-run Xinhua News Agency introduced AI-powered anchors who could deliver scripted news 24 hours a day in Mandarin and English. By the early 2020s, websites publishing AI-written news stories began surfacing in large numbers. The U.S.-based watchdog NewsGuard has identified hundreds of such outlets, some producing thousands of articles a day with little or no human involvement.

Tech companies have since started experimenting with ways to integrate AI personas into mainstream social media platforms. In 2023, Meta rolled out chatbots based on celebrity personas, including rapper Snoop Dogg and model Kendall Jenner, on Instagram and Messenger. TikTok launched “Symphony Avatars,” tools that let brands create AI presenters who can speak in multiple languages. In China, Douyin, TikTok’s domestic version, has gone further, using AI-powered “virtual humans” in livestream e-commerce. These digital hosts can sell products, answer viewer questions and perform around the clock.

The use of synthetic AI has spread into politics. In early 2024, a robocall using a cloned version of President Joe Biden's voice told New Hampshire voters not to cast ballots in the primary election. The call was quickly labeled a deepfake, but the incident triggered investigations and led the U.S. Federal Communications Commission to ban AI voice robocalls. Researchers monitoring the wars in Ukraine and the Middle East say parties on all sides are using AI-generated images and videos to spread propaganda online.

Analysts argue that what makes this phase different from earlier disinformation campaigns is scale and speed. Troll farms once relied on thousands of workers to produce content; now AI can generate convincing posts, identities and videos almost instantly. The content can also be tested and adapted in real time. An AI-run account could, for example, create hundreds of versions of a message, analyze which one gets the most engagement and then amplify the winner.
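The generate-test-amplify loop described above can be sketched in a few lines. This is a hypothetical illustration, not any real platform's code: the variant generator and the engagement metric are stand-ins (a real system would call a generative model and read platform analytics), and all names here are invented for the example.

```python
import random

def generate_variants(base_message: str, n: int) -> list[str]:
    """Stand-in for a generative model producing n rewrites of a message."""
    return [f"{base_message} (variant {i})" for i in range(n)]

def measure_engagement(variant: str) -> float:
    """Stand-in for platform analytics (e.g. likes or shares per impression).
    Simulated here with a random score."""
    return random.random()

def select_and_amplify(base_message: str, n_variants: int = 100) -> str:
    """Generate many versions, score each, and return the best performer.
    In the scenario analysts describe, this winner would then be boosted
    across many synthetic accounts."""
    variants = generate_variants(base_message, n_variants)
    scored = [(measure_engagement(v), v) for v in variants]
    _, best_variant = max(scored)
    return best_variant
```

The point of the sketch is the asymmetry it exposes: each loop iteration costs a human troll farm real labor, while for an automated system it is effectively free, so the cycle can run continuously.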

For creators, the technology offers both opportunities and risks. A single person can now use AI to edit video, write captions, run analytics and manage audiences, reducing costs and expanding reach. But the flood of synthetic content may make it harder for smaller human creators to be noticed. Industry observers expect demand for “authentic” human content to rise in some communities, even as other users show little concern about whether they are interacting with humans or machines.

Regulators and platforms are responding with efforts to label or trace synthetic content. Adobe, OpenAI and Google have joined industry coalitions to embed provenance markers into AI-generated images and videos. Meta and TikTok have tested labels indicating when content is AI-made. The European Union has introduced AI transparency rules, and India’s Election Commission has issued warnings about deepfakes during campaigns. Enforcement, however, remains inconsistent, and detection systems often lag behind new techniques.
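The provenance markers mentioned above bind a claim ("this was AI-generated, by this tool") to the content itself so tampering is detectable. The sketch below is a deliberately simplified illustration of that idea, loosely inspired by signed-manifest standards such as C2PA; real systems use certificate-based signatures and embed the manifest in the media file, whereas this toy version uses an HMAC with a hypothetical shared key purely to show the verify-or-reject logic.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # hypothetical; real systems use PKI, not shared keys

def make_provenance_record(content: bytes, generator: str) -> dict:
    """Build a signed manifest asserting where a piece of content came from."""
    manifest = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,   # e.g. the AI tool that produced the content
        "ai_generated": True,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the manifest is authentic and matches the content bytes."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["sha256"] == hashlib.sha256(content).hexdigest())
```

Even this toy version shows why enforcement lags: verification only works if the marker survives re-encoding, screenshots and cropping, and if platforms actually check it, which is exactly where current detection systems fall behind.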

Some experts worry this environment could erode public trust. If people assume that any video or voice recording might be fake, they may dismiss genuine evidence as fabricated. That could weaken democratic debate and accountability. “The danger is not only that people will believe falsehoods, but that they will stop believing in the possibility of truth,” one analyst said.

Looking ahead, observers outline several possible outcomes. In one scenario, AI becomes a background tool that helps human creators but is clearly disclosed. In another, synthetic content overwhelms platforms, undermining authenticity and trust. A third path envisions a hybrid environment where AI dominates mass feeds, while verified human content becomes a premium product.

The contest over synthetic social media is also part of a larger geopolitical struggle. Decisions over who controls watermarking standards, who sets platform rules and who has access to the most advanced AI models are already becoming issues of national interest. Analysts compare the fight over synthetic AI governance to earlier battles over oil, shipping lanes and telecommunications infrastructure.

For now, platforms and regulators are trying to catch up with a fast-moving technology. TikTok is investing in AI avatars for advertisers, Meta is shifting toward an “AI studio” where users can build their own agents, and watchdogs are flagging an increasing number of AI-run news outlets. With each step, the internet becomes more synthetic.

The transformation is already reshaping politics, business and culture. Whether it enhances creativity and communication or deepens manipulation and distrust will depend on how governments, companies and users manage the transition. For many, the most important question is no longer what they see in their feed, but whether the entity posting it is human at all.