Just a few years ago, deepfake technology seemed like something out of a science fiction movie. The ability to create hyper-realistic fake videos and audio was limited to Hollywood studios and high-tech labs. Today, it’s available at the click of a button—and it’s being exploited in alarming ways.

Real estate transactions have always been a target for fraud because of the large sums of money changing hands, but deepfake technology has made them even more vulnerable. Fraudsters now use deepfakes to impersonate buyers, sellers, and agents; create fake documents such as deeds; and forge signatures on contracts, loan agreements, and closing documents. In some cases, deepfakes are so convincing that even seasoned title & escrow agents can be fooled.

As the technology becomes more widespread, it’s critical to understand how it’s used and how to detect it before it causes financial loss. In this article, we’ll explore how deepfake technology is impacting real estate closings and outline practical steps you can take to protect yourself and your clients.

What is deepfake technology?

At its core, deepfake technology uses machine learning models to analyze large amounts of data, such as photos, videos, and audio recordings of a person’s voice. Once the system has enough information, it can create realistic images and videos or generate lifelike voices that are hard to distinguish from the real thing.
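
To make this concrete, here is a minimal sketch, in PyTorch, of the shared-encoder, per-identity-decoder design popularized by early face-swap tools. Every name and dimension below is illustrative; production deepfake systems add face alignment, masking, adversarial losses, and far larger networks.

```python
# Minimal sketch of the classic face-swap design: a shared encoder learns a
# common face representation, and one decoder per identity learns to
# reconstruct that identity. "Swapping" encodes person A's face and decodes
# it through person B's decoder. Dimensions are toy-sized and illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(), nn.Linear(64 * 16 * 16, 256),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training minimizes reconstruction error for each person separately, e.g.
# mse(decoder_a(encoder(faces_a)), faces_a). After training, routing person
# A's encoding through person B's decoder produces the fake frame:
fake_frame = decoder_b(encoder(torch.rand(1, 3, 64, 64)))  # dummy 64x64 input
```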

Fueled by global competition among AI developers, artificial intelligence (AI) and deepfake technology are advancing rapidly. It’s now easier than ever for anyone, good actors and bad alike, to access powerful tools that can create convincing forgeries.

Deepfake fraud trends

In real estate, several new fraud trends are emerging due to the accessibility of AI and the general ease of creating deepfakes.

1. Deepfake phishing attacks and social engineering scams

What it is: Fraudsters impersonate parties involved in real estate transactions (e.g., buyers, sellers, or agents) using AI-generated videos or voice cloning to authorize fraudulent activities. The fraudster only needs a short, pre-recorded sample of the target’s voice and a single photo to create a deepfake in their likeness. The more samples the fraudster has, the better the quality of the deepfake will be.

Real-life example: In a recent case in Hallandale Beach, Florida, a scammer used deepfake technology to impersonate a property owner during a video call with a title & escrow company. The fraudster created a synthetic video of a missing woman to appear as the legitimate property owner, attempting to authorize the fraudulent sale of a vacant lot. The title & escrow company became suspicious due to inconsistencies and was able to halt the transaction before any financial loss occurred.

Why it’s happening: Deepfake tools, like voice synthesis and facial reenactment software, are now more accessible and affordable. The number of deepfake videos online surged by 550% from 2019 to 2023. By 2023, approximately 500,000 deepfake videos were shared on social media platforms globally, with projections suggesting this number could double every six months.

Moreover, advancements in automated web scraping tools have made it easier than ever for fraudsters to collect images, videos, and audio clips en masse. AI-powered scrapers can sift through massive amounts of online content, extracting and compiling data in minutes. The rise of AI-assisted data harvesting means that even individuals who are cautious about sharing personal information online can be targeted through aggregated datasets from multiple sources.

2. Synthetic document and ID fraud

What it is: AI tools can now create hyper-realistic fake documents, such as property deeds or identification cards, making it increasingly challenging to validate property transactions. Fraudsters can use generative AI to bypass traditional verification methods by altering text, logos, and other elements on official documents.

Real-life example: In May 2024, scammers used forged documents to fraudulently list Elvis Presley’s Graceland mansion for auction. The perpetrators filed fake ownership deeds, claiming they had the right to sell the property. The scheme was ultimately uncovered, and a judge blocked the sale, preventing the fraudulent transaction from going through.

Why it’s happening: Advances in AI, particularly in generative adversarial networks (GANs), enable fraudsters to create hyper-realistic fake documents. GANs work by pitting two neural networks against each other—one generating fake content and the other evaluating its authenticity—allowing the system to refine forgeries until they are nearly indistinguishable from real documents. In 2024, digital document forgery surpassed physical counterfeits as the leading method of fraud, with digital forgeries accounting for 57% of all document fraud—a 1,600% surge since 2021.
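
To illustrate the contest described above, here is one GAN training step sketched in PyTorch. The toy dimensions and hyperparameters are illustrative only; document-forgery GANs operate on images at far higher resolution.

```python
# One GAN training step: the generator G fabricates samples, the
# discriminator D scores real vs. fake, and each update makes G's forgeries
# slightly harder for D to catch. Toy dimensions (28x28 = 784) only.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(32, 784)   # stand-in for a batch of real document scans
noise = torch.randn(32, 64)

# 1) Train D to label real samples 1 and generated samples 0.
fake = G(noise).detach()     # detach: don't update G on D's step
loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# 2) Train G to make D label its output as real.
loss_g = bce(D(G(noise)), torch.ones(32, 1))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```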

3. E-signature and digital authentication manipulation

What it is: Fraudsters use AI-generated handwriting and deepfake signatures to bypass traditional authentication methods in real estate transactions.

Why it’s happening: AI models can now precisely mimic handwriting and signatures. This technology allows scammers to generate fraudulent e-signatures that closely resemble authentic ones. For example, researchers at Abu Dhabi’s Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) developed technology to imitate someone’s handwriting based on just a few paragraphs of written material. Furthermore, AI tools have advanced to the point where they can simulate real-time signing movements, making the manipulation even more challenging to detect.

How to spot deepfakes during a real estate closing

Identity verification red flags

  • Video inconsistencies: Look for mismatches between lip movement and spoken words in video calls. AI-generated deepfakes often struggle with real-time adjustments, causing slight lagging or distortion.
  • Unnatural eye movement: Deepfake videos sometimes fail to replicate natural blinking patterns, making the subject’s gaze appear fixed or glassy (a simple blink-rate check is sketched after this list).
  • Unusual delays in conversations: If someone on a live video call exhibits delayed responses or long pauses before answering a question, it could be a sign of AI manipulation.
  • Inconsistent background or environment: Pay attention to any discrepancies in visual context, such as inconsistent lighting, mismatched decor, or unclear video resolution that doesn’t align with the claimed environment.
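
For teams that want an automated aid alongside manual review, the blink cue above can be roughly quantified. The sketch below assumes eye landmarks from any facial-landmark library and uses illustrative, uncalibrated thresholds; it computes the standard eye aspect ratio (EAR) and counts blinks.

```python
# Heuristic blink check: compute the eye aspect ratio (EAR) per frame from six
# eye landmarks (corners plus upper/lower lids, as produced by any facial
# landmark library) and count dips below a threshold. Humans typically blink
# every few seconds; a long stretch with no blinks is worth a closer look.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) points: [left corner, two upper-lid points,
    right corner, two lower-lid points]. EAR drops toward 0 as the eye closes."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])   # upper lid to lower lid
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])    # corner to corner
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, closed_thresh=0.2):
    """Count open-to-closed transitions across a sequence of EAR values."""
    closed = np.asarray(ear_series) < closed_thresh
    return int(np.sum(closed[1:] & ~closed[:-1]))

# Example: a minute of video with zero blinks would be a red flag.
ears = [0.31, 0.30, 0.32, 0.18, 0.30, 0.31]  # one blink at the fourth frame
print(count_blinks(ears))  # -> 1
```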

Signs of ID fraud

  • Font inconsistencies: Check for variations in the font used across the document. Government-issued IDs often have specific fonts that are hard to replicate. If the text on the ID seems off—either too clean, too blurry, or inconsistent with the rest of the document—it could be a sign of manipulation.
  • Misalignment: Scammers may crop, stretch, or distort images to fit the ID or to hide discrepancies. Pay attention to alignment, especially in areas where photo IDs and other text or elements should line up perfectly.
  • Holograms and security features: Many government-issued IDs have built-in holograms, watermarks, or other security features that are difficult to replicate. If these features appear blurry or faded, or if they don’t match the ID format used by the issuing agency, it could indicate tampering. Use a magnifying tool or UV light to inspect these features more closely.
  • Photo mismatch: If the photo on the ID seems poorly rendered or out of place (e.g., a low-resolution image on an otherwise high-quality document), it has likely been altered. Compare the photo to any known images of the person if possible; an automated recompression check is sketched after this list.
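
One automated companion to manual photo inspection is error level analysis (ELA), a common screening technique: recompress the image and look at where the compression error concentrates. The sketch below uses Pillow; the filename and quality setting are illustrative, and ELA output is a prompt for human review, not proof of tampering.

```python
# Error level analysis (ELA): recompress a suspect ID image as JPEG and diff it
# against the original. Regions pasted in from another source often recompress
# differently and "light up" in the difference image. Requires Pillow.
from PIL import Image, ImageChops
import io

def error_level_analysis(path, quality=90):
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    recompressed = Image.open(buf)
    diff = ImageChops.difference(original, recompressed)
    # Amplify the (usually faint) differences so edits are visible by eye.
    extrema = diff.getextrema()
    max_diff = max(hi for _, hi in extrema) or 1
    return diff.point(lambda px: min(255, px * (255 // max_diff)))

# error_level_analysis("suspect_id.jpg").show()  # hypothetical file;
# bright patches suggest regions that recompressed differently
```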

Signs of fraudulent e-signatures

  • Inconsistent signature style: AI-generated signatures may look uniform or overly consistent, whereas real human signatures tend to have slight variations due to the natural hand movement.
  • Abnormal stroke patterns: E-signatures created by AI might lack the natural flow and pressure variations of a handwritten signature. Look for stilted or robotic lines where the strokes appear too perfect.
  • Mismatch in signature timing: Some e-signature tools capture stroke dynamics such as timing and speed. An AI-generated signature may appear all at once, without the natural pauses and speed variations of real handwriting, or may be completed faster than a human could plausibly write. A simple timing check is sketched after this list.
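
Where a signing platform exposes raw stroke data, the timing cues above can be checked programmatically. The sketch below assumes (x, y, t) samples are available, which not every tool provides, and its thresholds are illustrative values that would need tuning against real signing data.

```python
# Heuristic check on captured signature dynamics: real handwriting shows
# irregular pen speed (fast strokes, pauses, hesitations), while synthesized
# signatures are often replayed at a uniform or implausibly fast rate.
import numpy as np

def signature_timing_flags(points, min_duration=0.5, min_speed_cv=0.3):
    """points: list of (x, y, t) tuples with t in seconds. Returns warnings."""
    pts = np.asarray(points, dtype=float)
    xy, t = pts[:, :2], pts[:, 2]
    duration = t[-1] - t[0]
    dist = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # distance per step
    dt = np.maximum(np.diff(t), 1e-6)                   # avoid divide-by-zero
    speeds = dist / dt
    cv = speeds.std() / speeds.mean() if speeds.mean() > 0 else 0.0
    flags = []
    if duration < min_duration:
        flags.append(f"signed in {duration:.2f}s: faster than natural writing")
    if cv < min_speed_cv:
        flags.append(f"speed variation {cv:.2f}: suspiciously uniform strokes")
    return flags

# A signature traced at perfectly even speed trips both checks:
trace = [(i, i, 0.01 * i) for i in range(50)]
print(signature_timing_flags(trace))
```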

Behavioral signs that could indicate deepfake deception

  • Avoiding live interactions: Deepfakes are sometimes pre-recorded, and scammers might avoid real-time verification. For example, a deepfake fraudster may resist if you request a live video or audio call to verify their identity.
  • Delaying or denying additional verification: If you ask for further verification steps, such as confirming identity via a second communication channel (e.g., asking them to call a landline or send an alternative identification), and they stall or refuse, it could indicate the use of deepfake deception.
  • Overly scripted conversations: Deepfake fraudsters often rely on pre-generated scripts. If their answers seem too rehearsed, excessively formal, or repetitive, it’s worth investigating. For example, if you ask multiple questions about different aspects of the transaction, the responses may sound too similar or not address specific details. 
  • Conflicting time zone information: If someone claims to be in a particular location but their actions don’t match that time zone—such as scheduling a call at odd hours or responding too quickly when they should be asleep—it could signal deception. 
  • Geographic inconsistencies: If the person claims to be in one place but the metadata of their email or their IP address reveals they’re actually in a completely different location, that could be a sign of fraud. A basic header check is sketched after this list.
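
A first-pass version of this check can be automated. The sketch below parses the UTC offset from an email’s Date header using only the Python standard library; header values can be spoofed, so a mismatch should trigger further verification rather than a conclusion.

```python
# Basic consistency check on email metadata: parse the Date header's UTC
# offset and compare it with the timezone implied by where the sender claims
# to be. Standard library only; addresses and offsets below are illustrative.
from email import message_from_string
from email.utils import parsedate_to_datetime

def sender_offset_hours(raw_email: str) -> float:
    """Return the UTC offset (in hours) embedded in the email's Date header."""
    msg = message_from_string(raw_email)
    sent = parsedate_to_datetime(msg["Date"])
    return sent.utcoffset().total_seconds() / 3600

raw = "Date: Tue, 04 Mar 2025 09:15:00 +0800\nFrom: seller@example.com\n\nHi"
claimed_offset = -5.0  # seller claims to be in New York (UTC-5 in winter)
actual = sender_offset_hours(raw)
if abs(actual - claimed_offset) >= 2:
    print(f"Header offset UTC{actual:+.0f} conflicts with claimed UTC{claimed_offset:+.0f}")
```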

What to do if you suspect a deepfake

  1. Pause the transaction immediately. This is crucial to prevent any fraudulent activity from progressing, such as transferring funds or finalizing a sale. Stop any wire transfers, document signings, or title exchanges until you can verify all parties and documents.
  2. Request a second communication channel to cross-verify identity. For example, if you’ve been communicating through email, request a phone call or ask the person to confirm a one-time code sent via SMS or generated by an authenticator app such as Google Authenticator. Scammers often use deepfake videos to simulate real-time communication, but they may struggle to keep the deception consistent across multiple platforms. (A minimal verification-code sketch follows this list.)
  3. Alert your internal fraud team (or proper internal stakeholders such as legal or compliance teams). These teams can help coordinate further verification steps, track the potential fraudster’s digital footprint, and analyze any other risk indicators in the transaction. They can also help collect evidence for law enforcement or other agencies if needed.
  4. Notify authorities. Start by contacting the FBI’s Internet Crime Complaint Center (IC3), which is often better equipped to handle cyber-related incidents than local police. In certain cases, it’s also worth notifying other agencies, such as the Federal Trade Commission (FTC) or the Consumer Financial Protection Bureau (CFPB), especially if the fraud is part of a larger scam.
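
For step 2, the mechanics of an out-of-band code check are simple enough to sketch. The delivery function below is a placeholder, and the code length and expiry window are illustrative; the two details worth keeping in any real implementation are constant-time comparison and expiry.

```python
# Sketch of an out-of-band identity check: generate a one-time code, deliver
# it over a second channel (phone call, SMS, a known-good email on file), and
# have the counterparty read it back on the original channel.
import secrets
import hmac
import time

def issue_code(length=6, ttl_seconds=300):
    """Create a numeric one-time code and its expiry timestamp."""
    code = "".join(secrets.choice("0123456789") for _ in range(length))
    return code, time.time() + ttl_seconds

def verify_code(expected: str, submitted: str, expires_at: float) -> bool:
    """Constant-time comparison; reject expired codes outright."""
    if time.time() > expires_at:
        return False
    return hmac.compare_digest(expected, submitted)

code, expires_at = issue_code()
# send_via_second_channel(code)  # placeholder: SMS, phone call, etc.
print(verify_code(code, code, expires_at))  # -> True when read back correctly
```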

Staying ahead of deepfake threats

Deepfake fraud in real estate transactions is a growing concern, but with vigilance and the right tools, title & escrow professionals can protect themselves and their clients. Qualia offers a range of security features designed to address these risks and enhance your fraud prevention efforts. If you want to learn more about how Qualia can help keep deepfake fraudsters out of real estate closings, click below to schedule a time with our team.

Speak With an Expert