A YouTube channel posing as Avi Loeb has used AI to create convincing impersonations of the Harvard astronomer, highlighting how easy it has become to mimic real people online. Generative AI now enables seamless facsimiles of a person's face and voice, a trend that ranges from scam artists cloning voices for phishing calls to deepfakes of public figures circulating across platforms.
The impersonation targets Avi Loeb, who has drawn media attention this year for his provocative idea that the interstellar object 3I/ATLAS could be an alien spacecraft. The channel named “Dr. Avi Loeb” appears to leverage AI tools to clone Loeb’s likeness and voice, suggesting there’s money to be made from sensational content about the object.
Loeb confirmed to Futurism that the videos are fake and AI-generated, and he has reported them to YouTube. Unlike his cautious scientific stance, which acknowledges that 3I/ATLAS could be either natural or technological in origin, the videos push far more dramatic claims, such as declaring 3I/ATLAS a probe with "New Data Leaves No Doubt."
In a recent blog addendum discussing the latest Hubble Space Telescope images of 3I/ATLAS, Loeb considered the broader consequences of impersonation. He warned about the danger of videos featuring scientists who look and sound real spreading false information, and asked how the public can discern who to trust.
Loeb noted that he has received hundreds of messages from fans who discovered a YouTube channel bearing his name that hosts AI-generated videos about 3I/ATLAS. Observers also noted odd artifacts in the videos, like jerky movements and a clock in the background that appears frozen, all pointing to AI rendering or manipulation.
Loeb is pursuing legal action against the creator(s) of the fake content, and he and his fans have filed multiple reports with YouTube, though enforcement has been slow. The channel seems to infringe YouTube’s impersonation policy, which warns that content designed to impersonate a person or channel may lead to the termination of the offending account.
The motive behind the impersonation remains unclear. Loeb suggested profit as one likely driver: advertising revenue from a popular channel, given his large following (over 5 million readers per month on Medium). He also pointed to the broader risk such channels pose of spreading misinformation online.
The impersonating channel, launched in September, has accumulated more than 1.4 million views. If monetized, potential earnings could have ranged from roughly $14,000 to $42,000, depending on the pay-per-view rate.
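The back-of-the-envelope math behind that range can be sketched as follows, assuming an ad revenue rate of roughly $10 to $30 per thousand views (a typical industry range assumed for illustration, not a figure confirmed for this channel):

```python
# Back-of-the-envelope YouTube ad revenue estimate.
# rpm_low and rpm_high are assumed revenue-per-1,000-views rates,
# not figures reported for the impersonating channel.

def earnings_range(views: int, rpm_low: float = 10.0, rpm_high: float = 30.0):
    """Return (low, high) estimated ad revenue in dollars for a view count."""
    return views / 1000 * rpm_low, views / 1000 * rpm_high

low, high = earnings_range(1_400_000)
print(f"${low:,.0f} to ${high:,.0f}")  # matches the roughly $14,000 to $42,000 cited above
```

Actual payouts vary widely by audience geography, ad formats, and whether the channel is in YouTube's Partner Program, so this is an upper-level estimate rather than a payout figure.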
The channel began with videos in which a man in a lab coat, presented as "Dr. Ricardo Reyes," offered health advice in Tagalog, before shifting to the Loeb impersonation. This sequence underscores a troubling precedent: platforms often move slowly to remove even blatant AI-generated impersonations.
Loeb concluded that we now inhabit a reality where AI can generate fake content at scale, raising serious questions about how to verify information online. Science, he emphasized, aims to uncover physical reality through facts; AI-fabricated content that mimics scientists poses a direct threat to that enterprise. The conversation surrounding these issues is not merely theoretical; it demands practical solutions for authentication and accountability on the internet.