The use of artificial intelligence (AI)-enabled cybersecurity systems is increasing dramatically; by 2018, sixty-two percent of all companies are projected to be using AI technologies.

The use of AI cybersecurity systems provides greater efficiency through automation, the ability to evaluate larger data sets, and, in many cases, a faster way to identify the “cyberattack needle in the big data haystack.” For example, some credit card companies use AI systems to scan large data banks for abnormal transactions and to evaluate the gravity of a potential large-scale cyber threat.
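To make the credit card example concrete, the sketch below shows one common way such an abnormal-transaction scan can work: an unsupervised anomaly detector trained on recent activity flags outliers for review. The transaction data is synthetic, and the feature set (amount and hour of day) is a hypothetical simplification; real systems use far richer features.

```python
# A minimal sketch of an abnormal-transaction scan, assuming scikit-learn's
# IsolationForest as the anomaly detector. Data and features are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulate normal card activity: modest amounts, daytime hours.
normal = np.column_stack([
    rng.normal(60, 20, 1000),   # transaction amount (USD)
    rng.normal(14, 3, 1000),    # hour of day
])
# Inject a few abnormal transactions: large amounts at odd hours.
abnormal = np.array([[2500, 3], [1800, 4], [3200, 2]])
transactions = np.vstack([normal, abnormal])

# Fit an unsupervised detector; contamination is the assumed anomaly rate.
detector = IsolationForest(contamination=0.01, random_state=42)
labels = detector.fit_predict(transactions)  # -1 marks suspected anomalies

flagged = transactions[labels == -1]
print(f"Flagged {len(flagged)} of {len(transactions)} transactions for review")
```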

However, companies should not treat AI systems alone as a cybersecurity panacea. Cybersecurity solutions always require a human touch, such as risk analysis and case-specific strategies for responding to individual cyberattacks. AI systems can identify potential risk situations, but evaluating each individual case and crafting the proper individualized response still requires human analysis and participation.
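One way to structure this division of labor is a human-in-the-loop triage queue: the model only flags and prioritizes events, and every response decision is deferred to an analyst. The sketch below illustrates that pattern; the Alert type, scores, and queue design are illustrative assumptions, not a prescribed architecture.

```python
# A sketch of a human-in-the-loop triage queue: the model flags and ranks,
# a human decides. Alert fields and scores are hypothetical.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Alert:
    priority: float                          # sort key for the heap
    description: str = field(compare=False)  # not used for ordering

class TriageQueue:
    """Routes model-flagged events to analysts instead of auto-responding."""
    def __init__(self):
        self._heap: list[Alert] = []

    def flag(self, score: float, description: str) -> None:
        # Negate the score so heapq (a min-heap) pops the highest risk first.
        heapq.heappush(self._heap, Alert(-score, description))

    def next_for_review(self) -> Alert | None:
        return heapq.heappop(self._heap) if self._heap else None

queue = TriageQueue()
queue.flag(0.97, "3,200 USD charge at 02:00 from a new device")
queue.flag(0.41, "Password reset from usual location")

alert = queue.next_for_review()
print(f"Analyst reviews first: {alert.description}")
```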

AI cybersecurity systems should act as a “safety net” for assessing potential large-scale risks. In addition, AI systems can be adjusted through human intervention to distinguish malicious attacks from low-risk normal behavior. Cybersecurity experts also expect that criminals will inevitably use AI to automate their own attacks. Because of this anticipated criminal use of AI, fully automated cybersecurity will never, by definition, be possible. AI systems are well equipped to increase detection rates, but human testers will always be needed to find holes in programs and fortify AI defenses accordingly.
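The human-adjustment step described above often takes the form of threshold tuning: analysts label a sample of flagged events, and the alerting threshold is re-tuned so that low-risk normal behavior stops tripping alerts. The sketch below shows one simple version of that loop; the scores and labels are synthetic, and the five percent false-positive budget is an illustrative assumption.

```python
# A sketch of re-tuning an alert threshold from human-labeled feedback.
# Score distributions and the false-positive budget are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Model risk scores for events analysts have already labeled.
benign_scores = rng.beta(2, 8, 500)      # normal behavior skews low
malicious_scores = rng.beta(8, 2, 50)    # attacks skew high

def tune_threshold(benign, max_false_positive_rate=0.05):
    # Pick the lowest threshold that keeps benign alerts under budget.
    return float(np.quantile(benign, 1 - max_false_positive_rate))

threshold = tune_threshold(benign_scores)
caught = (malicious_scores >= threshold).mean()
false_alarms = (benign_scores >= threshold).mean()
print(f"threshold={threshold:.2f}  detection={caught:.0%}  "
      f"false alarms={false_alarms:.0%}")
```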

Bottom line: AI-enabled cybersecurity systems should be explored and evaluated, but always used in conjunction with human training and individualized response strategies.