Future fakes: what to do when you can no longer believe your eyes

Deepfakes – hyper-realistic and convincing fake videos generated through AI technology – are potentially one of the most disturbing tech developments of recent times. Doctored video footage is nothing new, but AI significantly enhances the realism of fake video, enabling words an individual never said to be put into their mouth. From politicians and film stars to CEOs and CFOs, the ability to manipulate existing images of public figures – or simply those with a sufficient visual online presence – to create fake footage virtually indistinguishable from the real thing has far-reaching implications. Personal, financial and democratic credibility is at stake. What happens when you can no longer believe your eyes?

The role of artificial intelligence in enabling as well as detecting fakery and fraud was discussed at the recent 22nd International Fraud Group conference. The novelty of deepfake videos ensures considerable media interest, possibly distorting perceptions of the risk they pose. Producing the most realistic deepfakes requires substantial financial and technological resources, currently likely within the reach only of larger state actors. However, the technology will inevitably become cheaper and more widely accessible, resulting in highly sophisticated fakes and a marketplace to match. While that day is not yet here, now is the time – particularly for public figures – to prepare. Critical actions include ensuring a strong narrative underpins their online persona, preferably reinforced by third parties, and having a contingency plan ready. This puts individuals on the front foot in swiftly identifying and contradicting faked footage if it emerges.

More broadly, the defining change that AI brings to the fraudster’s toolkit is an ability to act at scale – to manipulate multiple sources of data, visual or otherwise, quickly and efficiently, creating ever more convincing and complex deceptions to achieve their aims. To combat this requires a collaborative, multidisciplinary response, in which the power of AI can be harnessed to accelerate fraud detection, mitigation and prevention.

While it is tempting to believe emerging technologies can be a silver bullet, even the most advanced machine learning programme will not be sufficient in isolation. Technology is one of three critical components in fraud detection: managing the human aspect through training and putting robust processes in place are essential complements to what AI can bring to the table. Organisations should also adopt a holistic, top-down approach to fraud – that is, elevate the issue to the boardroom and ensure an organisation-wide strategy for identifying, preventing and mitigating fraud. To combat increasingly organised and systematic criminals effectively, financial institutions and other organisations need to strategise outside business unit silos.

AI-enabled solutions are, if not a complete answer in themselves, nonetheless reaching an important tipping point in fraud investigations and litigation. Investigators and lawyers increasingly feel able to trust the tech tools at their disposal, enabling them to reach more accurate conclusions more quickly. Predictive coding and cognitive analytics – such as natural language processing (NLP) and sentiment analysis – are two areas where significant advances are being made. Machines can now analyse a mass of documents and determine not only which are relevant, but also detect relevant information and concepts embedded in their text with 60-70% confidence. The core characteristic underpinning these new technologies is looking for patterns in data to identify the people rather than the act – fraudsters instead of frauds.
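The intuition behind predictive coding can be sketched very simply: documents that reviewers have already tagged as relevant become a "seed set", and the remaining documents are ranked by their textual similarity to that set. The sketch below is an illustrative toy, not any specific product or the tools discussed at the conference; the function names, the sample seed and document texts, and the bag-of-words cosine-similarity approach are all assumptions chosen for brevity (real predictive coding platforms use far richer models).

```python
from collections import Counter
import math

def tokenise(text):
    # Lowercase the text and split on anything that is not a letter.
    cleaned = ''.join(c if c.isalpha() else ' ' for c in text.lower())
    return cleaned.split()

def relevance_score(document, seed_documents):
    """Score a document against reviewer-tagged seed documents using
    cosine similarity between simple bag-of-words term counts."""
    doc_counts = Counter(tokenise(document))
    seed_counts = Counter()
    for seed in seed_documents:
        seed_counts.update(tokenise(seed))
    shared = set(doc_counts) & set(seed_counts)
    dot = sum(doc_counts[t] * seed_counts[t] for t in shared)
    norm = (math.sqrt(sum(v * v for v in doc_counts.values()))
            * math.sqrt(sum(v * v for v in seed_counts.values())))
    return dot / norm if norm else 0.0

# Hypothetical seed set tagged as relevant by a human reviewer.
seeds = ["urgent wire transfer to offshore account",
         "please transfer funds immediately, keep confidential"]

# Unreviewed documents, ranked most-relevant first for human review.
docs = ["minutes of the quarterly marketing meeting",
        "confidential: wire the funds to the offshore account today"]
ranked = sorted(docs, key=lambda d: relevance_score(d, seeds), reverse=True)
```

The ranking, not the raw score, is what matters in practice: human reviewers work down the list from the top, which is how these tools deliver relevant documents "more quickly" rather than with certainty.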

So where next? A logical next step, as these tools evolve towards greater precision and accuracy, is to deploy them proactively to detect and stop fraud in its tracks earlier. Wholesale prevention of fraud is likely to be a challenge – fraudsters will inevitably adapt and evolve to evade detection – but earlier mitigation will reduce its financial impact.
