Deepfakes don’t disappear: future-proof digital identity


Deepfakes are not new, but this AI-powered technology has emerged as a ubiquitous threat in the spread of misinformation and in identity theft. The pandemic made matters worse by creating ideal conditions for bad actors to exploit the blind spots of organizations and consumers, further exacerbating fraud and identity theft. Deepfake-enabled fraud increased during the pandemic and poses significant challenges for financial institutions and fintechs that need to accurately verify and authenticate identities.

As cybercriminals continue to use tools such as deepfakes to fool identity verification solutions and gain unauthorized access to digital assets and online accounts, it is essential for organizations to automate the identity verification process to better detect and combat fraud.

When deepfake technology evades fraud detection

Fraud-related financial crime has steadily increased over the years, but the rise in deepfake fraud in particular poses a real danger and presents a variety of security challenges for everyone. Fraudsters use deepfakes for a number of purposes, from impersonating celebrities to posing as job applicants. Deepfakes have even been used to carry out scams with large-scale financial implications. In one case, fraudsters used deepfake voice technology to trick a bank manager in Hong Kong into transferring millions of dollars into fraudulent accounts.

Deepfakes have been a theoretical possibility for quite some time, but have only gained a lot of attention in recent years. The controversial technology is now much more widely used due to the accessibility of deepfake software. Everyone from ordinary consumers with little technical knowledge to state-sponsored actors has easy access to phone applications and computer software that can generate fraudulent content. In addition, it is becoming increasingly difficult for people and fraud detection software to distinguish between real video or audio and deepfakes, making the technology a particularly malicious fraud vector.


The Growing Fraud Risks Behind Deepfakes

Fraudsters use deepfake technology to perpetuate identity fraud and theft for personal gain, wreaking havoc across all industries. Deepfakes can be exploited in many industries; however, industries that work with large amounts of personally identifiable information (PII) and customer assets are particularly vulnerable.

For example, the financial services industry deals with customer data when onboarding new customers and opening new accounts, making financial institutions and fintechs susceptible to a wide range of identity theft. Fraudsters can use deepfakes as a vector to attack these organizations, which can lead to identity theft, fraudulent claims and new account fraud. Successful fraud attempts can be used to generate false identities on a large scale, allowing fraudsters to launder money or take over financial accounts.

Deepfakes can cause material damage to organizations through financial loss, reputational damage and diminished customer experiences.

Financial loss: Financial losses related to deepfake fraud and scams have ranged from $243K to as much as $35M in individual cases. In early 2020, a bank manager in Hong Kong received a call, ostensibly from a customer, asking him to approve money transfers for an upcoming acquisition. Using AI speech-generation software to mimic the customer’s voice, bad actors defrauded the bank of $35 million. Once transferred, the money was untraceable.

Reputation management: Misinformation from deepfakes causes hard-to-repair damage to an organization’s reputation. Successful fraud attempts resulting in financial loss can undermine customer confidence in, and overall perception of, a company, making it difficult for companies to recover.

Impact on customer experiences: The pandemic challenged organizations to detect sophisticated fraud attempts while ensuring a smooth customer experience. Those who don’t rise to the challenge and become riddled with fraud will leave customers with unwanted experiences at almost every stage of the customer journey. Organizations need to add new layers of defense to their onboarding processes to detect and block deepfake scam attempts from the start.


Future-proof identity: how organizations can fight deepfake fraud

Current fraud detection methods cannot verify 100% of real identities online, but organizations can protect against deepfake fraud with a very high degree of effectiveness and minimize the impact of future identity-based attacks. Financial institutions and fintechs must be particularly vigilant when acquiring new customers to detect third-party fraud, synthetic identities and impersonation attempts. With the right technology, organizations can accurately detect deepfakes and fight further fraud.

In addition to validating PII in the onboarding process, organizations must verify customer identities through in-depth, multidimensional liveness testing, which assesses liveness by analyzing selfie quality and estimating depth cues for facial authentication. In many cases, fraudsters attempt to impersonate individuals using legitimate PII combined with a headshot that does not match the individual’s true identity. Traditional identity verification is imprecise and relies on manual processes, creating a larger attack surface. Deepfake technology can easily bypass flat images and even liveness tests in identity verification – in fact, the winning algorithm in Meta’s deepfake detection competition detected only 65% of the deepfakes analyzed.
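To illustrate the idea of multidimensional liveness testing, here is a minimal sketch of how several independent signals (image quality, depth cues, texture realism) might be combined into a single liveness decision. The signal names, weights and thresholds are purely illustrative assumptions, not any vendor’s actual scoring model; real systems derive these signals from trained computer-vision models.

```python
# Hypothetical sketch of multidimensional liveness scoring.
# All signal names, weights and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SelfieSignals:
    image_quality: float   # 0..1, sharpness/exposure of the selfie
    depth_score: float     # 0..1, estimated 3D depth cues of the face
    texture_score: float   # 0..1, skin-texture realism vs. screen replay

def liveness_score(s: SelfieSignals, weights=(0.3, 0.4, 0.3)) -> float:
    """Weighted combination of independent liveness signals."""
    wq, wd, wt = weights
    return wq * s.image_quality + wd * s.depth_score + wt * s.texture_score

def is_live(s: SelfieSignals, threshold: float = 0.7) -> bool:
    # A single weak signal (e.g. a flat depth profile from a replayed
    # photo or video) should fail the check even if other signals are fine.
    if s.depth_score < 0.2:
        return False
    return liveness_score(s) >= threshold
```

The hard floor on the depth signal reflects the point above: a flat image or on-screen replay can score well on quality while still lacking the 3D cues of a live face.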


This is where graph-defined digital identity verification comes in. By continuously collecting digital data during the photo validation process, organizations gain confidence in the identities they do business with and reduce their risk of fraud. They also gain a holistic, accurate view of consumer identity, can identify more good customers and are less likely to be misled by deepfake attempts.
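A minimal sketch of the graph idea, under stated assumptions: identity elements (email, phone, device) become nodes, co-observed elements become edges, and an applicant whose claimed elements are rarely linked to one another scores low. The class, scoring rule and example identifiers below are hypothetical, not the method of any specific product.

```python
# Hypothetical sketch of graph-based identity corroboration.
# The scoring rule and identifiers are illustrative assumptions.
from collections import defaultdict

class IdentityGraph:
    def __init__(self):
        self.edges = defaultdict(set)  # identity element -> linked elements

    def link(self, a: str, b: str) -> None:
        """Record that two identity elements were observed together."""
        self.edges[a].add(b)
        self.edges[b].add(a)

    def corroboration(self, elements: list[str]) -> float:
        """Fraction of claimed element pairs that are directly linked.
        A synthetic identity stitched from unrelated PII scores low."""
        pairs = [(a, b) for i, a in enumerate(elements)
                 for b in elements[i + 1:]]
        if not pairs:
            return 0.0
        linked = sum(1 for a, b in pairs if b in self.edges[a])
        return linked / len(pairs)
```

For example, an email, phone number and device that have all been seen together corroborate each other fully, while an email paired with a never-before-linked phone number would yield a corroboration of zero and warrant closer review.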

While it is difficult to fight any type of fraud, security teams can stop deepfake technology by moving beyond legacy approaches and adopting identity verification processes with predictive AI/ML analytics to accurately identify fraud and build digital trust.

Mike Cook is VP Fraud Solutions, Commercialization at Socure

DataDecisionMakers

Welcome to the VentureBeat Community!

DataDecisionMakers is where experts, including the technical people who do data work, can share data-related insights and innovation.

If you want to read about the very latest ideas and up-to-date information, best practices and the future of data and data technology, join us at DataDecisionMakers.

You might even consider contributing an article yourself!

Read more from DataDecisionMakers
