Working from home, chatting with friends, shopping and banking online – our lives increasingly take place in the digital space. This exposes a wealth of all too easily accessible data to cybercriminals, from which they can assemble fictitious – so-called synthetic – identities.
Unlike the theft of a real person's identity, synthetic identity fraud involves no real account or account owner who could notice unexplained account activity or unauthorized online purchases, or be surprised by a payment reminder. This makes the scam particularly hard to detect and is one reason it has become increasingly popular in criminal circles. The fraudsters combine stolen real data with fabricated information to create a fictitious identity.
The real personally identifiable information (PII) required for this – email addresses, social security numbers, passport numbers, home addresses or dates of birth – is harvested via phishing or is readily available on the dark web, so the fraudsters can pick and choose. With the help of deepfake technology, even matching photos can be generated. The resulting identity appears so deceptively real that it usually raises no suspicion when a bank account or shopping account is opened.
Once a bank account has been opened with this Frankenstein identity, the scammers can apply for a loan at their leisure, draw it down and disappear without a trace. In e-commerce, they often use a combination of legitimate payment details and false information to carry out transactions undetected for extended periods. Synthetic identities are also used in loyalty-program referral fraud, where fake new accounts are created to claim sign-up rewards.
Ease of use vs. security?
The growing number of devices, channels and access points plays into the hands of cybercriminals. At the same time, consumers are becoming less patient online: according to a study by Ping Identity, providers who fail to strike the right balance between user-friendliness and data security risk losing their customers to the competition. 45 percent of respondents said they had already abandoned an online service because they found logging in frustrating, and 53 percent would switch to a competing offering if identity and access management were noticeably easier there.
For online providers, this means managing a balancing act: minimizing the risk of fraud without deterring customers through cumbersome authentication measures such as CAPTCHAs or requests for PII. Capabilities such as voice recognition, fingerprint scanning and facial recognition ensure a smooth customer experience, but by themselves they do not guarantee complete security. Methods based purely on behavioral analysis also have their pitfalls, because human behavior is not static. During the corona pandemic, many people changed their digital habits: they logged in at different times or from different devices and bought different products, such as groceries. False alarms when securing online accounts can therefore be fatal – if a customer's account is wrongly blocked, that customer is probably alienated forever.
Behavioral biometrics reveal anomalies
New intelligent security mechanisms therefore monitor patterns in behavioral biometrics. Whether in the financial sector or in e-commerce, as soon as a user interacts with a device or an application, they generate hundreds of unique data points with every swipe on the smartphone and every mouse click on the PC. While fraud detection has so far been applied almost exclusively in the payment phase of the customer journey, modern online fraud detection examines the biometric behavioral data generated by interactions between people and devices, device attributes and account activity right from login. Security risks are stopped before they can cause damage.
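To make the idea of behavioral data points concrete, here is a minimal sketch of one such signal: the timing between keystrokes during login. The event schema and field names are hypothetical; real products capture far richer streams (pointer paths, touch pressure, device attributes) on the client side.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class KeyEvent:
    """A single keystroke captured client-side (hypothetical schema)."""
    timestamp_ms: int
    field: str  # which form field was being typed into

def inter_key_delays(events):
    """Delays between consecutive keystrokes – one simple behavioral signal."""
    times = sorted(e.timestamp_ms for e in events)
    return [b - a for a, b in zip(times, times[1:])]

# Example session: a user typing into the login form
session = [KeyEvent(0, "user"), KeyEvent(140, "user"),
           KeyEvent(310, "user"), KeyEvent(455, "pass")]
delays = inter_key_delays(session)
print(delays)        # [140, 170, 145] – irregular, human-like rhythm
print(mean(delays))
```

Statistics derived from such delays (mean, variance, per-field rhythm) are the raw material that the fraud-detection models described below consume.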
Behavior-based biometrics continuously learns with the help of machine learning and is able both to verify the identity of regular users and to recognize bots by their non-human behavior. For example, the use of copy-paste or the auto-complete function, the typing speed on a smartphone or the speed at which the mouse moves across the screen can indicate misuse of an identity, because a real human differs from bots, scripts or emulators in the way they click and scroll.
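The signals just listed can be combined into a risk score. The sketch below uses fixed thresholds purely for illustration; the feature names and weights are hypothetical, and a production system would feed such features into a trained machine-learning model rather than hand-written rules.

```python
def bot_score(features):
    """Toy risk score over behavioral features (names and thresholds hypothetical)."""
    score = 0.0
    # Humans rarely type with machine-like regularity.
    if features["keystroke_delay_stddev_ms"] < 5:
        score += 0.4
    # Credentials pasted or auto-filled in one go are suspicious at sign-up.
    if features["used_paste"]:
        score += 0.3
    # Perfectly straight, constant-speed mouse paths suggest a script.
    if features["mouse_path_linearity"] > 0.99:
        score += 0.3
    return score

suspicious = bot_score({"keystroke_delay_stddev_ms": 0,
                        "used_paste": True,
                        "mouse_path_linearity": 1.0})
print(suspicious)  # close to 1.0 – all three bot indicators fired
```

The point is not the specific thresholds but the principle: no single feature is conclusive, so evidence from many small behavioral signals is accumulated before a session is challenged or blocked.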
A legitimate user who is new to an account is not yet familiar with the order of the fields in the registration form and therefore fills it out more slowly. Other indicators of illegal activity include repeatedly restarting a registration process with different data, unnaturally smooth, rehearsed navigation between pages, and deviations from the average duration of an ordering process.
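These registration- and checkout-level heuristics can be expressed as simple session checks. Again, the session fields and cut-off values below are illustrative assumptions, not a vendor's actual rules.

```python
def registration_anomalies(session):
    """Flag the heuristic indicators described above (field names hypothetical)."""
    flags = []
    # Genuine first-time users need time to find their way around the form.
    if session["form_fill_seconds"] < 5:
        flags.append("form completed implausibly fast")
    # Restarting sign-up repeatedly with different data suggests trial runs.
    if session["restart_count"] >= 3:
        flags.append("repeated registration attempts with different data")
    # Finishing far below the average checkout duration hints at a rehearsed flow.
    if session["checkout_seconds"] < 0.5 * session["avg_checkout_seconds"]:
        flags.append("checkout much faster than the average user")
    return flags

flags = registration_anomalies({"form_fill_seconds": 2,
                                "restart_count": 4,
                                "checkout_seconds": 20,
                                "avg_checkout_seconds": 90})
print(len(flags))  # 3
```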
Cybercriminals will continue to come up with new and more sophisticated scams. To keep pace, companies' security methods must evolve at least as dynamically. By combining modern identity and access management with fraud prevention based on behavioral biometric technologies, it is possible to stay one step ahead of the attackers.