Artificial intelligence (AI) is on everyone’s lips at the moment, and for good reason! With its swift development and human-like capabilities, AI is an extremely powerful tool…
AI’s Impact on Fraud and Scams
As AI becomes more sophisticated, so do the opportunities for fraudulent activity. However, AI can also be used as a tool to prevent fraud…
“AI will make it easier for criminals to perpetrate sophisticated scams at scale, and to impersonate trusted institutions, friends and family members, raising serious questions around trust in content. But as is made equally clear, AI will better enable banks and others to identify and prevent fraud and scams. Chatbots will engage with chatbots uncovering critical information about criminal operations. And AI will free up the time of specialist investigators to focus on those most difficult cases” – Ruth Evans, Chair of Stop Scams UK
“AI will have wide reaching impacts across the whole of society and has the potential both to create significant harm but also to drive significant improvement across a wide range of applications. PwC’s research highlights that while there is limited evidence that AI is behind large numbers of fraud attacks now, it will very likely drive an increase in the number and sophistication of fraud threats” – Andrew Bailey, Governor of the Bank of England
To What Extent Is AI Used to Perpetrate Scams?
Currently, it appears that AI is not extensively used to perpetrate scams, but it’s difficult to ascertain the true extent to which it’s used, because AI-generated scams can be almost indistinguishable from human-generated ones.
How is AI used in Scams?
Research conducted by PwC in conjunction with Stop Scams UK identified six key ways in which AI tools can be used to facilitate fraud and scams:
- Generating text and image content – AI can be used to create tailored emails, messages and image content in things like phishing attempts or fraudulent adverts
- AI enabled chatbots – AI models can be used to converse with people to try to manipulate the responder into making payments
- Deepfake videos – these are very lifelike manipulated images or recordings created using AI. Here, the image of a trusted person may be used as clickbait to redirect you to malicious websites, which harvest your card payment details when you enter them. In another example, deepfake videos of celebrities, such as the trusted money saving expert Martin Lewis, are used to promote investment scams. IDnow’s Fraud Awareness Report 2024 found that 47% of Brits don’t know what deepfakes are, so how can we be expected to spot something we can’t even identify?
- Voice cloning – using existing audio content of someone’s voice to replicate it and impersonate them
- Sophisticated victim targeting – AI can be used to trawl through lots of data to identify potential scam victims and tailor scam content towards them and their particular circumstances
- Pressure testing – attacks against things like bank systems may become more sophisticated with the use of AI, as it can develop strategies to exploit any vulnerabilities in the defence systems
How Can AI Protect Against Scams?
AI is already widely used to prevent and detect fraud, particularly by institutions such as banks.
The main way in which AI is used in fraud defence is through machine learning-based models, which flag transactions, behaviours, activity or content that appear to deviate from the norm.
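To make the idea concrete, here is a minimal sketch of how “deviation from the norm” detection can work on transaction amounts. This is an illustrative toy, not any bank’s actual model: real systems use far richer features (location, device, merchant, timing) and trained models rather than a simple three-standard-deviations rule, and all names and thresholds below are assumptions.

```python
from statistics import mean, stdev

def flag_anomalies(history, new_amounts, threshold=3.0):
    """Flag transactions whose amount deviates from the customer's
    typical spend by more than `threshold` standard deviations.
    Illustrative only -- real fraud models use many more signals."""
    mu = mean(history)
    sigma = stdev(history)
    flagged = []
    for amount in new_amounts:
        # z-score: how many standard deviations from the usual spend
        z = abs(amount - mu) / sigma if sigma else 0.0
        if z > threshold:
            flagged.append(amount)
    return flagged

# A customer who usually spends between £20 and £60 per transaction;
# a sudden £2,000 payment stands out, a £45 one does not.
history = [25.0, 40.0, 32.0, 55.0, 28.0, 47.0, 38.0, 60.0, 22.0, 35.0]
print(flag_anomalies(history, [45.0, 2000.0]))  # → [2000.0]
```

The same principle, scaled up with machine learning, lets banks score millions of transactions in real time and hold only the suspicious ones for review.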
Although AI can certainly be used to manipulate images and clone voices, it can also be used on the flip side: to identify synthetic content such as fake images and cloned voices.
AI can also be applied in various other ways to detect and prevent fraud, including, but not limited to:
- Bank transaction monitoring
- Filtering spam messages
- Blocking harmful content
- Detecting malware
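As a toy illustration of the spam-filtering item above, here is the simplest possible approach: scoring a message against suspicious keywords. This is purely illustrative; the keyword list and threshold are assumptions, and real filters use trained classifiers (such as naive Bayes or neural models) rather than a fixed word list.

```python
# Illustrative keyword list -- an assumption, not a real filter's vocabulary.
SUSPICIOUS = {"urgent", "verify", "prize", "click", "password", "winner"}

def looks_like_spam(message, threshold=2):
    """Flag a message if it contains at least `threshold` suspicious
    keywords. Toy sketch only; real spam filters are trained models."""
    words = {w.strip(".,:!?").lower() for w in message.split()}
    return len(words & SUSPICIOUS) >= threshold

print(looks_like_spam("URGENT: click here to verify your password!"))  # → True
print(looks_like_spam("Lunch at noon tomorrow?"))                      # → False
```

Even this crude rule shows the shape of the task: turn content into features, score them, and act on messages that cross a threshold.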
The capabilities of artificial intelligence are improving rapidly, which could cause problems in the future if our fraud defences are undone and we lack the right measures to defend against AI-driven attacks. Our defences must therefore develop just as rapidly, so they are not outpaced.