The convergence of AI, 5G and augmented reality poses new security and privacy risks, challenging organizations to keep pace. (Credit: Getty Images) 

By John P. Desmond, AI Trends Editor  

Some 500 C-level business and security experts from companies with over $5 billion in revenue in multiple industries expressed concern in a recent survey from Accenture about the potential security vulnerabilities posed by the pursuit of AI, 5G and augmented reality technologies all at the same time.  

Claudio Ordóñez, Cybersecurity Leader for Accenture in Chile

To properly train AI models, for example, a company needs to protect both the data used to train the AI and the environment where the model is created. When the model is in use, the data in motion needs to be protected. Often data cannot be collected in one place, whether for technical or security reasons or to protect intellectual property. “Therefore, it forces companies to insert safe learning so that the different parties can collaborate,” stated Claudio Ordóñez, Cybersecurity Leader for Accenture in Chile, in a recent account in Market Research Biz.  

Companies need to extend secure software development practices, known as DevSecOps, to protect AI through the life cycle. “Unfortunately, there is no silver bullet to defend against AI manipulations, so it will be necessary to use layered capabilities to reduce risk in business processes powered by artificial intelligence,” he stated. Measures include common security functions and controls such as input data sanitization, hardening of the application, and setting up security analysis. In addition, steps must be taken to ensure data integrity, accuracy control, tamper detection, and early response capabilities.    
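
As a rough illustration of two of those controls, the sketch below shows a basic input-sanitization pass and a hash-based fingerprint that could flag later tampering with a training file; the record schema, value ranges, and function names are assumptions for illustration, not details from Accenture.

```python
import hashlib

# Illustrative only: the schema and ranges below are assumptions.
EXPECTED_RANGES = {"age": (0, 120), "income": (0, 10_000_000)}

def sanitize_records(records):
    """Drop records with missing or out-of-range values before they reach training."""
    return [
        rec for rec in records
        if all(
            rec.get(field) is not None and lo <= rec[field] <= hi
            for field, (lo, hi) in EXPECTED_RANGES.items()
        )
    ]

def dataset_fingerprint(path):
    """Hash a dataset file so later tampering can be detected by re-checking the digest."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()
```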

Risk of Model Extraction and Attacks on Privacy  

Machine learning models have demonstrated some unique security and privacy issues. “If a model is exposed to external data providers, you may be at risk of model extraction,” Ordóñez warned. In that case, the hacker may be able to reverse engineer the model and generate a surrogate model that reproduces the function of the original model, but with altered results. “This has obvious implications for the confidentiality of intellectual property,” he stated.  

To guard against model extraction and attacks on privacy, controls are needed. Some are easy to apply, such as rate limiting, but some models may require more sophisticated security, such as abnormal-usage analysis. If the AI model is being delivered as a service, companies need to consider the security controls in place in the cloud service environment. “Open source or externally generated data and models provide attack vectors for organizations,” Ordóñez stated, because attackers may be able to insert manipulated data and bypass internal security.   
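
One of the simpler controls mentioned, rate limiting, might look roughly like the sliding-window sketch below for a model-serving endpoint; the class name and thresholds are hypothetical, not drawn from Accenture's guidance. Abnormal-usage analysis would sit on top of something like this, profiling query patterns rather than just counting requests.

```python
import time
from collections import defaultdict

class QueryRateLimiter:
    """Hypothetical per-client limiter to slow down bulk querying of a served model."""

    def __init__(self, max_queries=1000, window_seconds=3600):
        self.max_queries = max_queries
        self.window = window_seconds
        self.history = defaultdict(list)  # client_id -> request timestamps

    def allow(self, client_id: str) -> bool:
        now = time.time()
        # Keep only requests inside the sliding window.
        self.history[client_id] = [t for t in self.history[client_id] if now - t < self.window]
        if len(self.history[client_id]) >= self.max_queries:
            return False  # sustained heavy querying is a common sign of extraction attempts
        self.history[client_id].append(now)
        return True
```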

Asked how their organizations plan to build the technical knowledge needed to support emerging technologies, most respondents to the Accenture survey said they would train existing employees (77%), collaborate or partner with organizations that have the experience (73%), hire new talent (73%), and acquire new businesses or startups (49%).  

The time it takes to train professionals in these skills is being underestimated, in the view of Ordóñez. In addition, “Respondents assume that there will be vast talent available to hire from AI, 5G, quantum computing, and extended reality, but the reality is that there is and will be a shortage of these skills in the marketplace,” he stated. “Compounding the problem, finding security talent with these emerging tech skills will be even more difficult,” he added.  

Features of 5G technology raise new security issues, including virtualization that expands the attack surface and “hyper-accurate” location tracking that increases privacy concerns for users. “Like the growth of cloud services, 5G has the potential to create shadow networks that operate outside the knowledge and management of the company,” Ordóñez stated.  

“Device registration must include authentication to handle the enterprise attack surface. Without it, the integrity of the messages and the identity of the user cannot be assured,” he stated. Companies will need the commitment of the chief information security officer (CISO) to be effective. “Success requires significant CISO commitment and expertise in cyber risk management from the outset and throughout the day-to-day of innovation, including having the right mindset, behaviors and culture to make it happen.”  
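
A minimal sketch of what authenticated device registration could involve, assuming each device holds a pre-provisioned shared secret; the message format and key handling here are illustrative rather than anything prescribed in the article.

```python
import hashlib
import hmac

def sign_registration(device_id: str, payload: bytes, device_key: bytes) -> str:
    """Device side: sign the registration message with its provisioned secret."""
    return hmac.new(device_key, device_id.encode() + payload, hashlib.sha256).hexdigest()

def verify_registration(device_id: str, payload: bytes, signature: str, device_key: bytes) -> bool:
    """Server side: check both the device's identity and the message's integrity."""
    expected = sign_registration(device_id, payload, device_key)
    return hmac.compare_digest(expected, signature)
```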

Augmented reality also introduces a range of new security risks, with issues of security around location, trust recognition, the content of images and surrounding sound, and “content masking.” In regard to this, “The command ‘open this valve’ can be directed to the wrong object and generate a catastrophic activation,” Ordóñez suggested.  

Techniques to Guard Data Privacy in 5G Era 

Jiani Zhang, President, Alliance and Industrial Solution Unit, Persistent Systems

Data privacy is one of the most important issues of the decade, as AI expands and more regulatory frameworks are being put in place at the same time. Several data management techniques can help organizations stay in compliance and be secure, suggested Jiani Zhang, President of the Alliance and Industrial Solution Unit at Persistent Systems, where she works closely with IBM and Red Hat to develop solutions for clients, as reported recently in The Enterprisers Project. 

Federated Learning. In a field with sensitive user data such as healthcare, the traditional wisdom of the last decade was to “unsilo” data whenever possible. However, the aggregation of data necessary to train and deploy machine learning algorithms has created “serious privacy and security problems,” especially when data is being shared within organizations. 

In a federated learning model, data stays secure in its environment. Local ML models are trained on private data sets, and only the model updates flow out to be aggregated centrally. “The data never has to leave its local environment,” stated Zhang.   

“In this way, the data remains secure while still giving organizations the ‘wisdom of the crowd,’” she stated. “Federated learning reduces the risk of a single attack or leak compromising the privacy of all the data because instead of sitting in a single repository, the data is spread out among many.”  
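
A stripped-down sketch of that idea, using a linear model and plain federated averaging; the model, learning rate, and weighting scheme are assumptions for illustration, and real deployments would use a framework built for federated training.

```python
import numpy as np

def local_update(global_weights, X_local, y_local, lr=0.01, epochs=5):
    """Train on one site's private data; only the updated weights leave the site."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = X_local.T @ (X_local @ w - y_local) / len(y_local)
        w -= lr * grad
    return w

def federated_round(global_weights, sites):
    """Aggregate per-site updates into a new global model, weighted by sample count."""
    updates, sizes = [], []
    for X_local, y_local in sites:  # each tuple is one organization's private data set
        updates.append(local_update(global_weights, X_local, y_local))
        sizes.append(len(y_local))
    sizes = np.array(sizes, dtype=float)
    return np.average(np.stack(updates), axis=0, weights=sizes / sizes.sum())
```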

Explainable AI (XAI). Many AI/ML models, neural networks in particular, are black boxes whose inner workings are not visible to interested parties. A growing area of research is explainability, which uses techniques such as a decision tree that represents a complex system to bring transparency and make the system more accountable.   

“In sensitive fields such as healthcare, banking, financial services, and insurance, we can’t blindly trust AI decision-making,” Zhang stated. A consumer rejected for a bank loan, for example, has a right to know why. “XAI should be a major area of focus for organizations developing AI systems in the future,” she suggested. 
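
As a hedged example of the surrogate-tree idea mentioned above, the sketch below fits a shallow, inspectable decision tree to mimic an opaque model's predictions; `loan_model` and `X_applicants` are hypothetical stand-ins, and scikit-learn is assumed to be available.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

def fit_surrogate_tree(black_box_model, X, max_depth=3):
    """Train an interpretable tree to approximate the black-box model's decisions."""
    mimic_labels = black_box_model.predict(X)  # what the opaque model actually decides
    surrogate = DecisionTreeClassifier(max_depth=max_depth)
    surrogate.fit(X, mimic_labels)
    return surrogate

# Hypothetical usage with a trained loan model and applicant features:
# tree = fit_surrogate_tree(loan_model, X_applicants)
# print(export_text(tree, feature_names=list(X_applicants.columns)))
```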

AIOps/MLOps. The idea is to accelerate the entire ML model lifecycle by standardizing operations, measuring performance, and automatically remediating issues. AIOps can be applied to the following three layers: 

  • Infrastructure: Automated tools allow organizations to scale their infrastructure and keep up with capacity demands. Zhang mentioned an emerging subset of DevOps called GitOps, which applies DevOps principles to cloud-based microservices running in containers.  
  • Application Performance Management (APM): Organizations are applying APM to manage downtime and maximize performance. APM solutions incorporate an AIOps approach, using AI and ML to proactively identify issues rather than take a reactive approach; a minimal sketch of that idea follows this list.  
  • IT service management (ITSM): IT services span hardware, software and computing resources in massive systems. ITSM applies AIOps to automate ticketing workflows, manage and analyze incidents, and authorize and monitor documentation among its responsibilities. 
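
As a rough illustration of the proactive detection mentioned in the APM item, the sketch below flags latency readings that drift far from their recent baseline; the metric, window size, and threshold are assumptions, not details from Zhang.

```python
import statistics
from collections import deque

class LatencyAnomalyDetector:
    """Hypothetical rolling z-score check on a service's response-time metric."""

    def __init__(self, window=100, threshold=3.0):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, latency_ms: float) -> bool:
        """Return True if the new reading looks anomalous versus the recent baseline."""
        anomalous = False
        if len(self.samples) >= 30:  # need enough history for a stable baseline
            mean = statistics.fmean(self.samples)
            stdev = statistics.pstdev(self.samples)
            if stdev > 0 and abs(latency_ms - mean) / stdev > self.threshold:
                anomalous = True
        self.samples.append(latency_ms)
        return anomalous
```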

Read the source articles in Market Research Biz, in the related report from Accenture and in The Enterprisers Project. 

This post was first published on: AI Trends