Artificial Intelligence (AI), like any other technology, is not inherently good or bad; it is merely a tool that people can use for good or ill purposes.
For example, many companies use AI-powered biometrics solutions in speech and facial recognition to streamline login processes and enhance the customer experience by replacing tedious PINs, passwords and account numbers. Businesses can also leverage AI to uncover valuable insights from mountains of data and craft personalized customer experiences.
Beyond the customer experience, AI can analyze imaging data in medical settings to increase the accuracy of tumor identification and classification. Likewise, AI is augmenting language learning tools and programs, allowing more people access to life-enriching skills.
Of course, AI is available to not only well-meaning individuals but also malicious ones who commonly employ its capabilities to supercharge their fraudulent schemes.
How Bad Actors Use AI to Enhance Their Scams
Highly sophisticated and well-resourced criminal organizations have already begun to use AI for new and ingenious (or rather, insidious) attack vectors. These fraudsters will train their AI engines with terabytes or even petabytes of information to automate their various schemes, building exploits and scams at a scale unimaginably larger than the capabilities of a single human hacker.
Some hackers even turn these customer-experience technologies against themselves, using AI-generated deepfakes to target biometric authentication systems. In particular, savvy fraudsters use AI to create deepfake voice clones for robocall scams. Typically, scam calls or SMS texts pose as a trusted person or organization to trick the victim into divulging sensitive account information or clicking a malicious link.
In the past, people could usually tell when a call or text was suspicious, but this new breed of deepfake robocalls uses AI-generated clones of people’s voices. The applications of these voice clones are truly disturbing. Fraudsters will copy a child’s voice, pose as kidnappers and call the parent, demanding they pay a ransom for the release of their child.
Another common scheme uses an AI voice clone to call an employee while impersonating that person's boss or another senior figure, insisting the employee withdraw and transfer funds to cover some business-related expense.
These schemes are prolific and effective, with a 2023 survey from Regula discovering that 37% of organizations experienced deepfake voice fraud. Likewise, research from McAfee shows that 77% of victims of AI-enabled scam calls claimed to have lost money.
Organizations Must Verify Their Customers' Identities
The ongoing evolution of AI is akin to an arms race, with businesses constantly deploying the newest innovations and techniques to thwart fraudsters’ latest schemes.
For example, Know Your Customer (KYC) processes allow companies to verify a customer's identity and determine whether they are a legitimate customer or a scammer attempting to carry out fraudulent transactions or money laundering. KYC is mandatory for many industries; in the US, for instance, the Financial Crimes Enforcement Network (FinCEN) requires financial institutions to comply with KYC standards.
The introduction of AI has made the KYC battlefield more dynamic, with both sides (good and bad) utilizing the technology to achieve their aims. Innovative businesses have taken a multi-modal approach to KYC processes, where AI helps detect suspicious activity and then warns affected customers via text messages.
To prove their identity, customers must provide a form of identification, such as a date of birth, photo ID, license or address. After customers demonstrate they are who they say they are, this multi-modal KYC process then associates a phone number with the customer, and that number serves as a digital ID.
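The flow above can be sketched roughly as follows. This is an illustrative toy, not any vendor's API: the `KycSubmission` fields, the `complete_kyc` helper and the in-memory registry are all assumptions standing in for real document-verification and identity-storage services.

```python
from dataclasses import dataclass

# Hypothetical record of the identity evidence a customer submits during KYC.
@dataclass
class KycSubmission:
    name: str
    date_of_birth: str       # e.g. "1990-04-12"
    photo_id_verified: bool  # result of a document/selfie check (assumed done upstream)
    address_verified: bool   # result of an address check (assumed done upstream)
    phone_number: str        # the number to bind as the customer's digital ID

# In-memory stand-in for a digital-identity registry.
digital_ids: dict[str, str] = {}

def complete_kyc(submission: KycSubmission) -> bool:
    """If the identity evidence checks out, associate the phone number
    with the customer so the number can serve as their digital ID."""
    if submission.photo_id_verified and submission.address_verified:
        digital_ids[submission.phone_number] = submission.name
        return True
    return False
```

A customer whose document and address checks pass gets their number registered; anyone failing a check is rejected before any binding occurs, which is the point of running verification first.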
The convenience and simplicity of mobile phone numbers make them ideal digital identifiers in the KYC process. Likewise, mobile phones provide businesses with reliable, verifiable data and a global ubiquity that national registries cannot replicate.
Authoritative Phone Numbering Intelligence
Unfortunately, businesses aren’t the only ones who recognize the value of mobile numbers as digital identifiers. As mentioned, bad actors frequently target customers through fraudulent texts and phone calls. Research from Statista shows nearly half of all fraud reported to the US Federal Trade Commission starts with texts (22%) or a phone call (20%).
In the case of a ported phone number (i.e., one moved from one phone company to another), businesses have no way of knowing whether the port was simply a customer switching providers or a fraudster with malicious intent. Additionally, fraudsters can use SIM swaps and port-outs to hijack phone numbers and use those digital identifiers to masquerade as customers. With these numbers, they can receive the text messages companies use for multi-factor authentication (MFA) and engage in online payment fraud, which topped $38 billion globally in 2023.
Even though SIM swaps present an opportunity for number hijacking, organizations can effectively combat this scheme by using authoritative data. In other words, while phone numbers are still ideal digital identifiers, organizations need a trusted, authoritative, and independent resource for information about each telephone number to validate ownership. By leveraging authoritative phone numbering intelligence, businesses can determine if a customer is truly legitimate, protecting revenue and brand reputation while boosting customer confidence in voice and text communications.
Enterprises also need deterministic and authoritative data. More specifically, their AI solutions need access to data about each phone number, whether it was ported recently or is associated with a particular SIM, line type or location. If AI assesses that the data indicates deceitful activity, it will require the person to provide additional information, like mailing address, account number or mother’s maiden name as a further step in the verification process. Businesses must also leverage an authoritative resource that continually updates phone number information, enabling AI tools to recognize fraudulent tactics more effectively.
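A minimal sketch of that risk logic follows. The signal names (`ported_on`, `sim_changed_on`, `line_type`) and the seven-day lookback window are assumptions chosen for illustration; a real numbering-intelligence provider's schema and thresholds will differ.

```python
from datetime import date, timedelta

def assess_number(ported_on, sim_changed_on, line_type, today=None):
    """Return 'step_up' when recent port or SIM-swap activity (or a risky
    line type) suggests the caller may not be the number's legitimate
    owner, and 'allow' otherwise. Dates may be None when no event exists."""
    today = today or date.today()
    recent = timedelta(days=7)  # assumed lookback window for illustration
    if ported_on and today - ported_on <= recent:
        return "step_up"        # recent port-out: ask for extra proof of identity
    if sim_changed_on and today - sim_changed_on <= recent:
        return "step_up"        # recent SIM swap: ask for extra proof of identity
    if line_type == "voip":
        return "step_up"        # VoIP numbers are easier to obtain anonymously
    return "allow"
```

When the result is `"step_up"`, the business would prompt for the additional evidence described above (mailing address, account number, mother's maiden name) before trusting the number; continuously refreshed data keeps these signals current.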
Digital Identity and the Age of AI
The world is more connected than ever, with mobile devices powering this unprecedented interconnectivity. While this connectedness benefits organizations and consumers, it poses significant risks and responsibilities. Moreover, proving one’s digital identity is not as straightforward without a trusted and authoritative source.
In the age of AI, schemes like sophisticated AI-generated deepfakes, voice clones and highly tailored phishing emails further emphasize the need for enterprises to utilize authoritative phone numbering intelligence to empower their AI to protect against fraud. Such efforts will restore customers’ faith in business text messages and phone calls while safeguarding revenue and brand reputation.