AI for cybersecurity was greeted with a degree of skepticism from experts at a recent industry conference, with one organization outlining a tool to assess the AI content of a solution. (Credit: Getty Images) 

By John P. Desmond, AI Trends Editor 

How effectively AI can be applied to cybersecurity was the subject of some debate at the RSA Conference, held virtually from May 17-20.  

The event featured a virtual show “floor” of some 45 vendors offering various AI and machine learning capabilities for cybersecurity, and a program track dedicated to security-focused AI, according to an account in VentureBeat.  

Skepticism was high enough that Mitre Corp. developed an assessment tool to help buyers assess the AI and machine learning content of cybersecurity offerings. The AI Relevance Competence Cost Score (ARCCS) seeks to give defenders a way to question vendors about their AI claims, in much the same way they would assess other basic security functionality. Mitre is a US non-profit organization that manages federally funded R&D centers supporting several US government agencies. 

Anne Townsend, Department Manager, NIST Cyber Partnerships, Mitre

“We want to be able to jump into the dialog with cybersecurity vendors and understand the security and also what’s going on with the AI component as well,” stated Anne Townsend, department manager and head of NIST cyber partnerships at Mitre. “Is something really AI-enabled, or is it really just hype?” 

ARCCS will provide an evaluation methodology for AI in information security, measuring the relevance, competence, and relative cost of an AI-enabled product. The process will determine how necessary an AI component is to the performance of a product; whether the product is using the right kind of AI and doing it responsibly; and whether the added cost of the AI capability is justified for the benefits derived. 
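
Mitre has not published the internals of ARCCS, so the sketch below is only a hypothetical illustration of how a buyer might record answers along the three dimensions the tool names; the field names, the 0-to-5 scale, and the example scores are assumptions, not the official methodology.

```python
from dataclasses import dataclass

@dataclass
class ArccsAssessment:
    """Hypothetical record of a buyer's answers about one AI-enabled product.

    The fields and the 0-5 scales are illustrative assumptions only;
    Mitre has not published the actual ARCCS scoring scheme.
    """
    relevance: int     # 0-5: how necessary is the AI component to the product's function?
    competence: int    # 0-5: is the right kind of AI used, and used responsibly?
    cost_benefit: int  # 0-5: do the benefits justify the added cost of the AI capability?

    def summary(self) -> str:
        # ARCCS evaluates rather than passes or fails, so report each
        # dimension separately instead of collapsing to a single verdict.
        scores = {"relevance": self.relevance,
                  "competence": self.competence,
                  "cost/benefit": self.cost_benefit}
        return ", ".join(f"{name}: {value}/5" for name, value in scores.items())

print(ArccsAssessment(relevance=4, competence=3, cost_benefit=2).summary())
```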

“You need to be able to ask vendors the right questions and ask them consistently,” stated Michael Hadjimichael, principal computer scientist at Mitre, of the AI framework effort. “Not all AI-enabled claims are the same. By using something like our ARCCS tool, you can start to understand if you got what you paid for and if you’re getting what you need.” 

Mitre’s ARCCS research is in its early stages, so it is not yet easy to say how most products claiming AI enhancements would fare under the assessment. “The tool does not pass or fail products—it evaluates,” Townsend stated to VentureBeat. “Right now, what we are noticing is there isn’t as much information out there on products as we’d like.” 

One cybersecurity software supplier whose product incorporates AI was glad to hear about the assessment framework. Hunters of Tel Aviv, Israel, offers advanced machine learning capability in its Extended Detection and Response (XDR) platform. Founded in 2018, the company has raised $20.4 million to date, according to Crunchbase. 

“In a world where AI and machine learning are liberally used by security vendors to describe their technology, creating an assessment framework for buyers to evaluate the technology and its value is essential,” stated Hunters CEO and cofounder Uri May to VentureBeat. “Customers should demand that vendors provide clear, easy-to-understand explanations of the results obtained by the algorithm.” 

Marketing analytics company AppsFlyer connects all of its security information and event management (SIEM) tools to Hunters XDR. The primary goal was to scale up its security operations center (SOC) and to move from a reactive defense posture to a proactive security approach.  

“The main value that Hunters XDR provides me as a CISO is that it connects the dots across solutions. My teams now have better visibility into the issues that we have in our environment, and we can connect alerts to a bigger story,” stated Guy Flechter, CISO at AppsFlyer, in an account on the Hunters website.  

Few Developer Resources Found Around AI and Cybersecurity 

While cybersecurity providers are increasingly adopting machine learning and AI as a way to provide anomaly detection, security professionals are typically challenged to apply AI techniques to their own work. Jess Garcia, founder of One eSecurity, a Madrid, Spain-based firm offering digital forensics and incident response services, attempted to integrate automation and ML into his threat hunting work at the beginning of the pandemic, according to an account in TechBeacon. 
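
As a rough illustration of the kind of machine learning technique vendors describe for anomaly detection, and not any particular company's pipeline, the sketch below fits an isolation forest to invented per-host event counts and flags an unusual observation; the features, numbers, and threshold are all assumptions.

```python
# Minimal sketch of ML-based anomaly detection on security telemetry.
# This illustrates the general technique only; it is not any vendor's
# actual pipeline, and the features and data below are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy per-host features: [logins_per_hour, failed_logins, bytes_out_mb]
baseline = np.array([
    [5, 0, 12.0],
    [6, 1, 9.5],
    [4, 0, 11.2],
    [7, 2, 14.1],
    [5, 1, 10.8],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

# A host suddenly showing many failed logins and a large outbound transfer.
suspect = np.array([[40, 25, 950.0]])
print(model.predict(suspect))  # -1 flags the observation as anomalous
```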

He found few resources, and few companies willing to share the algorithms and technologies they used for cybersecurity. The open-source community around the development of ML for cybersecurity is far less developed than the community around the development of web applications, he found. 

“Everything is very fuzzy; everything is very generic or very obscure. I was pretty surprised to see that everyone is talking about AI and is supposed to be using AI and machine learning in their technologies,” he stated. “So you would expect a big community around AI and cybersecurity. To my surprise, there was almost nothing.”  

Schneier Warns of Potential for AI to Become the Hacker  

Bruce Schneier, computer security professional and writer

Another potential downside of AI in cybersecurity is the prospect of AI itself becoming the hacker, suggests Bruce Schneier, computer security professional and writer, who delivered a keynote address at the 2021 RSA Conference entitled “The Coming AI Hackers.”  

Schneier addressed this topic in a recent essay he wrote for the Cyber Project and Council for the Responsible Use of AI at the Belfer Center for Science and International Affairs at Harvard Kennedy School, according to an account in DarkReading. 

He addressed the question of what would happen if AI systems could hack social, economic, and political systems at computer scale, speed, and scope, such that humans could not detect the hacks in time. He described it as AI evolving into “the creative process of finding hacks.” 

“They’re already doing that in software, finding vulnerabilities in computer code. They’re not that good at it, but eventually they will get better [while] humans stay the same” in their vulnerability discovery capabilities, he stated. 

The AI does not know when it is breaking the rules unless its programmers tell it so. He recalled the Volkswagen scandal of 2015, when the automaker was caught cheating on emissions tests after its engineers programmed the cars’ computer systems to activate emissions controls only during testing and not during normal operation.  

In his essay, The Coming AI Hackers, Schneier states, “If I asked you to design a car’s engine control software to maximize performance while still passing emissions control tests, you wouldn’t design the software to cheat without understanding that you were cheating. This simply isn’t true for an AI; it doesn’t understand the abstract concept of cheating. It will think ‘out of the box’ simply because it won’t have a conception of the box, or of the limitations of existing human solutions. Or of ethics. It won’t understand that the Volkswagen solution harms others, that it undermines the intent of the emissions control tests, or that it is breaking the law.”  
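
Schneier's point, that an optimizer with no concept of the rules will exploit whatever its objective rewards, can be illustrated with a toy search. The following sketch is not a model of any real engine software; the policies, scores, and penalty values are invented for illustration, and the objective deliberately measures emissions only during the test, mirroring the flawed specification.

```python
# Toy illustration of specification gaming, in the spirit of Schneier's
# Volkswagen example. Nothing here models real engine software; the
# policies, numbers, and objective are invented for illustration.
from itertools import product

# A policy decides whether emissions controls are ON in test mode and on the road.
policies = list(product([True, False], repeat=2))  # (controls_in_test, controls_on_road)

def objective(controls_in_test: bool, controls_on_road: bool) -> int:
    # Performance is better with controls off; the "regulator" only
    # measures emissions during the test, so only that is penalized.
    performance = (0 if controls_on_road else 10) + (0 if controls_in_test else 2)
    test_emissions_penalty = 0 if controls_in_test else 100
    return performance - test_emissions_penalty

best = max(policies, key=lambda p: objective(*p))
print(best)  # (True, False): controls only during the test, i.e. the optimizer
             # "cheats" without having any notion that it is cheating
```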

While he concedes the idea of AI as a hacker is “super speculative” for now, the issue needs consideration. “We need to think about this,” he stated. “And I’m not sure that you can stop this. The ease of this [AIs hacking] happening depends a lot on the domain [in question]: How can we codify the rules of the system?” 

If AI could be harnessed for finding and fixing all vulnerabilities in a software program before it gets released, “We’d live in a world where software vulnerabilities were a thing of the past,” Schneier stated. 

Read the source articles and information in VentureBeat, from the cybersecurity software supplier Hunters, in TechBeacon and in DarkReading. 

This post was first published on: AI Trends