Artificial Intelligence vs. Human Vision

Cybersecurity is constantly evolving: as cybercriminals become more sophisticated, digital security tools must mitigate risk as quickly as possible.

2020 presented even more opportunities for hackers to strike, for example through phishing emails spoofing genuine PPE providers or HMRC to deceive unsuspecting victims. More recently, phishers have even exploited the vaccination campaign, scamming people into paying for fake vaccines.

Artificial intelligence and machine learning have proven to be innovative technologies for foiling planned attacks and are a key part of any cybersecurity strategy. But AI is not necessarily the right tool for every task. Humans are still far more capable than machines of making complicated decisions, especially when determining whether data can safely be sent outside an organization. Relying on AI to make these decisions can therefore cause problems, or worse, data loss, if the AI is not yet mature enough to fully understand which data is sensitive and which is not. So where can AI contribute effectively to a cyber defense strategy, and where can it pose a challenge for users? Fiete Marohn, Senior Sales DACH at VIPRE Security, explains.

Identifying similarities

One of the biggest challenges facing AI in reducing the risk of insiders violating policy is identifying similarities between documents, or determining whether it is acceptable to send a specific document to a specific recipient. Business templates such as invoices look very similar every time they are sent, with subtle differences that machine learning and AI often overlook. The technology registers the document as routine, apart from small differences in numbers or words, and would usually allow the user to send the attachment. A human, by contrast, would know which invoice or offer belongs with which (potential) customer.
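The invoice problem described above can be illustrated with a toy similarity check (a hypothetical sketch, not VIPRE's actual method, with made-up invoice text): two invoices that differ only in number, recipient, and amount produce nearly identical word-count vectors, so a naive similarity-based filter would treat the second invoice as routine.

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between word-count vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Two invoices from the same template; only the number, recipient, and amount differ.
invoice_a = ("Invoice number 1041 billed to Acme Ltd net total 4,200 EUR payment due "
             "within 30 days please remit to the account below thank you for your business")
invoice_b = ("Invoice number 1042 billed to Brightside GmbH net total 9,750 EUR payment due "
             "within 30 days please remit to the account below thank you for your business")

# The score is close to 1.0 even though the business-critical details differ,
# which is exactly the distinction a human reader would catch.
print(round(cosine_similarity(invoice_a, invoice_b), 2))
```

The company names and amounts here are invented for illustration; the point is only that template boilerplate dominates the vectors, so the differing tokens barely move the score.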

Implementing AI in a large company for this purpose would prevent only a small proportion of emails from being sent. And even when the AI does find a problem worth reporting, it informs the administration team rather than the user: if the AI judges that an email should not be sent, it does not allow the user to override that decision and send it anyway. This creates additional work for the admin team and frustrates the user at the same time.

Data storage

Using AI for defense can also be very data intensive. With this concept, every email has to be sent to an external system in a different location to be analyzed. Especially for industries that handle very sensitive data, the fact that their data is scanned elsewhere is a cause for concern. In addition, a machine learning system has to store part of the sensitive data in order to learn rules from it and reuse them to make the correct decision next time. Because of this learning aspect, such solutions cannot work out of the box but require a training period of at least two months, so they cannot provide immediate security checks.

Understandably, many companies, particularly at the corporate level, are uncomfortable with their sensitive data being sent elsewhere. The last thing they want is for the data to be stored somewhere else, even if it’s just for analysis. AI therefore adds an unnecessary and unwanted element of risk to sensitive materials.

The role of AI in cybersecurity

AI plays an important role in many elements of an organization’s cyber defense strategy. Antivirus technology, for example, operates a strict “yes or no” policy when deciding whether a file is potentially malicious. This is not subjective: against a strict set of parameters, something is either perceived as a threat or it is not. The AI can quickly determine whether to shut down the device, lock the machine, or disconnect it from the network, and thus block or allow the file. Notably, VIPRE uses AI and ML as key components of its email and endpoint security services, for example in its sandboxing solution for email attachment security, where an attachment is opened and checked by an AI in an isolated environment separated from the customer network.
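The strict “yes or no” logic described above can be sketched as a parameter-driven check (a simplified illustration; the rules and thresholds are invented, not any vendor's real detection engine): every attribute either trips a rule or it does not, and the verdict is binary with no human judgment in the loop.

```python
# Minimal sketch of a parameter-driven "yes or no" malware verdict.
# Extensions, the entropy threshold, and the rules are illustrative assumptions.

SUSPICIOUS_EXTENSIONS = {".exe", ".scr", ".js", ".vbs"}
MAX_TRUSTED_ENTROPY = 7.5  # packed/encrypted payloads tend to have high byte entropy

def is_blocked(filename: str, entropy: float, signature_match: bool) -> bool:
    """Return True (block) or False (allow): no middle ground."""
    if signature_match:          # known-bad signature: always block
        return True
    ext = filename[filename.rfind("."):].lower() if "." in filename else ""
    if ext in SUSPICIOUS_EXTENSIONS and entropy > MAX_TRUSTED_ENTROPY:
        return True              # risky file type AND packed-looking content
    return False

print(is_blocked("report.pdf", 4.1, signature_match=False))  # False: allowed
print(is_blocked("update.exe", 7.9, signature_match=False))  # True: blocked
```

The design choice worth noting is that the function returns a hard boolean, which mirrors why this style of decision suits AI well: unlike the invoice example, there is no contextual judgment for a human to add.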

So while AI is not the ideal method to protect against accidental data loss through email, it still plays an important role in certain areas such as virus detection, sandboxing, and threat analysis.


With such heavy reliance on email in business, data loss is an inevitable risk. Reputational damage, non-compliance, and the associated financial costs can be devastating. A cyber-aware culture with constant training is essential, and so is the right technology.

Providing technology that alerts users when they might be making a mistake, whether by sending an email to the wrong recipient or by sharing sensitive data about the company, its customers, or its employees, not only minimizes errors but also helps create a better email culture. In a fast-paced, high-pressure work environment, mistakes happen quickly, especially with the rise of remote working, where the instant peer review many are accustomed to cannot occur. Rather than leaving that responsibility to artificial intelligence, this type of technology, combined with a trained human eye, can help users make more informed decisions about the nature and legitimacy of their emails before they act. Ultimately, it helps organizations mitigate this risky element of their operations and strengthen compliance through a cyber-aware culture.
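A warn-not-block pre-send check of the kind described here might look like the following sketch (the domain list, keywords, and function name are hypothetical assumptions, not a real product's logic): it surfaces possible mistakes to the sender, who then makes the final call, instead of routing the decision to an admin team.

```python
# Sketch of a pre-send email check that warns the user instead of blocking.
# KNOWN_DOMAINS and SENSITIVE_KEYWORDS are illustrative placeholder values.

KNOWN_DOMAINS = {"example.com", "partner.example.org"}
SENSITIVE_KEYWORDS = {"salary", "iban", "passport", "confidential"}

def pre_send_warnings(recipients: list[str], body: str) -> list[str]:
    """Return human-readable warnings; an empty list means no prompt is shown."""
    warnings = []
    for addr in recipients:
        domain = addr.rsplit("@", 1)[-1].lower()
        if domain not in KNOWN_DOMAINS:
            warnings.append(f"Unfamiliar recipient domain: {domain}")
    hits = SENSITIVE_KEYWORDS & set(body.lower().split())
    if hits:
        warnings.append(f"Possible sensitive content: {', '.join(sorted(hits))}")
    return warnings  # the user reviews these and still decides whether to send

print(pre_send_warnings(["alice@example.com"], "Meeting notes attached"))
print(pre_send_warnings(["bob@gmial.com"], "Here is the IBAN and salary data"))
```

The key contrast with the AI approach discussed earlier is in the return value: warnings go back to the sender as a prompt, so the trained human eye stays in the decision loop.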
