She thought she had found love online. He was attentive, kind, and appeared in every video call. When he asked for help sending money to finish a work project, she trusted him. The day the funds disappeared, so did he. The face, the voice, and every conversation had been created by artificial intelligence.
This is the new frontier of cybercrime. Deepfakes and disinformation have become powerful tools for scammers and criminals. Instead of hacking systems, they hack our minds. The modern attacker does not need to steal your password when they can steal your confidence, a lesson well known to cyber and physical security penetration testers who employ social engineering.
For years, cybersecurity focused on protecting technology. Firewalls, encryption and software updates were enough. Today, criminals target people instead of computers. Deepfake videos, cloned voices, and AI-generated profiles and conversations make deception easy to produce and difficult to detect. When combined with false information online, these tools create plausible truths that appeal to human emotions instead of logic.
Romance scams have evolved into something far more convincing. Last year, I uncovered evidence of a romance scam group likely harvesting cheesy lines, financial-literacy talking points and character personas from bad romance novels and get-rich-quick guides on an online self-publishing and reading platform. The victims targeted with those sappy or money-savvy lines from titles like The Profit Playbook or Second Chances in Sonoma were likely swindled out of their savings several years ago. Today, scammers increasingly rely on synthetic identities created with AI. They build fake social media profiles, generate realistic faces, employ generative AI to craft messages and use voice cloning to speak in real time. Victims often believe they are communicating with a genuine person. The financial losses of cybercrime are staggering, with over $16 billion in total losses in 2024 according to the FBI’s Internet Crime Report, but the emotional damage is even worse. Many victims lose faith not only in others but in their own judgment, and some have tragically died by suicide.
This same manipulation now threatens businesses and public trust. Deepfake video of company executives has already been used to impersonate a CFO and trick staff into authorizing a fraudulent $25 million wire transfer. Fabricated news clips spread misinformation about markets, politics, and public safety. Every false image and video makes it harder to know what is real.
And now, deepfakes have entered the realm of cyberbullying. In K–12 schools across the U.S., AI-generated videos depicting students in compromising or false scenarios are becoming a weapon of choice. Deepfake bullying doesn’t rely solely on rumors; it fabricates "evidence." One Texas Christian University researcher recently studied how synthetic harassment can humiliate students in ways no text message ever could.
In Florida, two middle school boys were charged with creating deepfake nudes of classmates to shame them. A New Jersey teen was also the victim of AI-generated nudes, a case that contributed to the “Take It Down Act” being signed into law.
These acts are not just pranks. They are nonconsensual intimate imagery (NCII), image-based sexual abuse and psychological attacks wrapped in technology. Many victims suffer lasting trauma as the content keeps resurfacing months or years later.
How OSINT Can Fight Back
Amid this synthetic chaos, Open-Source Intelligence (OSINT) remains one of our most powerful weapons. OSINT draws on public information available to anyone: web searches, public records, digital metadata, social media profiles and other open data that can expose deception.
Here’s how:
- Reverse image searches can show where a photo first appeared (or if it’s been used elsewhere under different names).
- Metadata analysis can reveal when, where and how a file was created (see the first sketch after this list).
- Cross-platform corroboration examines the consistency of usernames, accounts, biographies and posting patterns. Synthetic accounts often lack depth or continuity.
- Network and cluster analysis can detect coordinated networks spreading disinformation or harassment.
- Linguistic and behavioral analytics can flag content that doesn’t match human style: unexpected syntax, uniform posting cadence or repetitive phrases (see the second sketch after this list).
- Temporal and geolocation cues can expose impossible or conflicting timelines in fabricated content.
- Screenshots and documentation of scam communications, affiliated websites and other information can help law enforcement investigators track down bad actors.
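As a concrete illustration of the metadata point above, here is a minimal sketch of reading a photo's embedded EXIF data with Python and the Pillow library. The file name is a hypothetical placeholder; many AI-generated or deliberately scrubbed images carry no EXIF data at all, which is itself a signal worth recording.

```python
# Minimal sketch: dump EXIF metadata from an image using Pillow.
# "suspect_profile_photo.jpg" is a placeholder file name for illustration.
from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path: str) -> None:
    with Image.open(path) as img:
        exif = img.getexif()
        if not exif:
            # Synthetic or scrubbed images often carry no metadata at all.
            print("No EXIF metadata found.")
            return
        for tag_id, value in exif.items():
            # Translate numeric EXIF tag IDs into readable names.
            print(f"{TAGS.get(tag_id, tag_id)}: {value}")

if __name__ == "__main__":
    dump_exif("suspect_profile_photo.jpg")
```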
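Behavioral signals can be checked with very little code as well. The sketch below uses made-up timestamps and an arbitrary threshold to flag an account whose posts arrive at suspiciously regular intervals; real people rarely post on a near-perfect schedule.

```python
# Minimal sketch: flag suspiciously uniform posting cadence.
# The timestamps and the 0.1 threshold are illustrative assumptions, not real data.
from datetime import datetime
from statistics import mean, stdev

def cadence_is_uniform(timestamps: list[datetime], cv_threshold: float = 0.1) -> bool:
    """Return True if the gaps between posts are unusually regular.

    Automated accounts often post at near-constant intervals, producing a
    low coefficient of variation (stdev / mean) across posting gaps.
    """
    if len(timestamps) < 3:
        return False
    ordered = sorted(timestamps)
    gaps = [(later - earlier).total_seconds() for earlier, later in zip(ordered, ordered[1:])]
    avg = mean(gaps)
    if avg == 0:
        return True
    return stdev(gaps) / avg < cv_threshold

posts = [
    datetime(2025, 10, 1, 9, 0),
    datetime(2025, 10, 1, 12, 0),
    datetime(2025, 10, 1, 15, 0),
    datetime(2025, 10, 1, 18, 1),
]
print(cadence_is_uniform(posts))  # True: near-identical three-hour gaps
```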
These tools let investigators, journalists and even individuals verify, challenge, and dismantle synthetic lies.
You don’t need to be a professional to use basic OSINT. Before accepting a profile or video at face value, pause. Reverse-search images. Cross-check claims. Look for inconsistencies. In a world full of synthetic content, skepticism is security.
Protecting Perception in the Digital Age
Cybersecurity Awareness Month calls us to protect passwords and enable multifactor authentication. But in 2025, we must protect something more fragile: our perception of truth.
If something is urgent, emotionally charged or too convenient, slow down. Ask for verification in another way; a real person will not resist scrutiny. Pause before sharing images or videos. Don’t tolerate harassment or defamation; document it and report it. If you’re in the United States, submit a complaint to the FBI’s Internet Crime Complaint Center.
Artificial intelligence created new methods of deception, but it also provides the tools to fight back. With OSINT, awareness, and a habit of digital verification, we can rebuild trust online and protect ourselves from AI-driven scams and misinformation. In this new reality, verification of online content has become the most valuable form of cybersecurity.