Questions of Ethics, such as gender bias, ethnic bias, and human condition impact, also have Privacy components where individuals may not want their human characteristics shared. (Credit: Getty Images) 

By Dawn Fitzgerald, the AI Executive Leadership Insider

Dawn Fitzgerald, VP of Engineering and Technical Operations, Homesite

Part Four of a Four-Part Series: “AI Holistic Adoption for Manufacturing and Operations” focuses on the executive leadership perspective, including key execution topics required for the enterprise digital transformation journey and AI Holistic Adoption. The planned topics are Value, Program, Data, and Ethics. Here we address Ethics.  

The Executive Leadership Perspective  

Executive leaders have the responsibility to guide their organization’s AI Holistic Adoption journey. In previous articles of this series, we began with the foundation of Value, moved to the framework of the Program, and then addressed some specific AI Holistic Adoption aspects of Data. We now discuss the most complex topic of all, the executive leader’s response to the Ethics of AI.  

 

AI Trust Triad: Ethics, Security and Privacy 

AI Holistic Adoption means that we are taking a holistic view of the multi-faceted aspects involved in bringing our organizations through Digital Transformation and subsequent AI solution execution. On our AI Holistic Adoption journey thus far, we looked not only at creating high value AI Solutions (Value Analytics) by defining and measuring their value, but also at life cycle maintenance. We looked at the needs and contributions of all stakeholders plus the visibility and access from both corporate and business points of view. In addition, we addressed multi-sourcing of Value Analytics and security from a system and design component perspective. As we embark on the topic of Ethics our view must broaden again to include the intertwined topics of Security and Privacy.  

Ethics, Security, and Privacy are indeed the AI Trust Triad. It is imperative to understand that these topics are inseparable, and ALL must be addressed as we architect our AI Solutions. In an AI system, when we address one, we must touch all three. To ignore any of the Triad will, at best, lead to no adoption of our AI solutions, and at worst, lead to catastrophic unintended consequences (the kind of adoption you do NOT want). Although the AI Trust Triad is everyone’s responsibility, the executive leader is in the unique position to lead this awareness and provide the framework, governance, and guidance to enable implementation. 

An IDC 2020 survey conducted with Microsoft found, “Trustworthy AI is fast becoming a business imperative. Fairness, explainability, robustness, data lineage, and transparency, including disclosures, are critical requirements that need to be addressed now.” IDC also forecasted that “Lack of Trust in AI/ML will inhibit Adoption.” Trust affects stakeholders at every level: “most sophisticated users trust AI-based recommendations the least” and “trust is paramount in expanding adoption of AI/ML-infused technologies.” 

The common questions of Ethics, such as gender bias, ethnic bias, and human condition impact, also have Privacy components. Individuals may not want to be associated with a category or have their human characteristics shared. Privacy regulations like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) of 2018 are helping to improve personal data protection. We must, however, go beyond basic legal compliance by considering the ethical implications and fundamental challenges to the human condition when algorithmically driven behavioral tracking or persuasive computing are used. Finally, we must ensure the security of our data and AI Systems from unauthorized access, use, and cyber-crimes. All three components of the AI Trust Triad are required to achieve Trustworthy AI.  

 

The Challenge of Digitizing Ethics   

As executive leaders driving digital transformation and AI Holistic Adoption into our organizations, we find ourselves with the challenge of digitizing our corporation’s Code of Ethics. In the digital era, executive leaders have the responsibility to drive both corporate ethics policies and privacy policies into all digital aspects of our corporations. The power of AI solutions makes this obligation all the more imperative. Unlike Security, the third component of the AI Trust Triad, incorporating both Privacy and Ethics into our corporation’s digital aspects (both operations and offer related) is a new concept for most organizations. To complicate things further, Ethics especially can be susceptible to subjective interpretation. 

As society evolves, so do ethics, varying by time, geography, culture, political influence, and age demographics. Our definition of what is ethical evolves as humans evolve. Just look at the recent rise in diversity and inclusion movements across employers. These changes must be reflected in our corporate code of ethics and must translate to a digital interpretation.  

As leaders, it is our business and moral obligation to exercise the process of digitizing the AI Trust Triad. We must build the ability for our organizations to frequently define, digitize, and mature our ethical and privacy stance. We must synchronize the technical implementation of our ethics with our policy evolution. Once digitized and coded into our technology, we must judge its quality, protect it, and mature it. To do this we must introduce design for controlled and ethical AI practices into our organizations. 

 

Design for Controlled and Ethical AI (DCE_AI) 

Just as we have had Design for Test, Design for Manufacturing, and Design for Security, AI Holistic Adoption requires that teams engage in Design for Controlled and Ethical AI (DCE_AI). Since Security is part of the AI Trust Triad and the most closely aligned discipline, we can leverage experiences from Design for Security.  

Like security initiatives, DCE_AI requires a focused effort and will impact development timelines. And just like the Design for Security discipline, adopting DCE_AI will become an offer acceptance criterion with a specified level of mandatory adoption.   

Since DCE_AI is just as mandatory and as extensive as Design for Security, we have the benefit of leveraging the systems of governance already used for our security execution.  

The basic components of DCE_AI initiatives include:  

  • Early inclusion of ethics and privacy criteria as requirements in the design cycle. 
  • Transparency for stakeholders, with role-based visibility (see the intent of the AI solution) and control points (stop it / redirect it) built into solutions.  
  • Metrics and management of the Analytics Design Package associated with each Value Analytic (AI Solution) in our Value Analytics Library (as defined in the column on Program in AI Trends in August 2020).  
  • Certification governance with evidence-based test criteria.   
  • Periodic Ethics and Privacy Policy Alignment Reviews to synchronize evolution.
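The per-analytic bookkeeping these components imply can be sketched in code. The class and field names below are hypothetical illustrations of an Analytics Design Package record, not an existing framework:

```python
from dataclasses import dataclass, field

@dataclass
class AnalyticsDesignPackage:
    """Hypothetical record of DCE_AI artifacts for one Value Analytic."""
    analytic_id: str
    ethics_requirements: list = field(default_factory=list)
    privacy_requirements: list = field(default_factory=list)
    visibility_roles: list = field(default_factory=list)     # who may see intent/trends
    control_points: list = field(default_factory=list)       # where humans may intervene
    certification_evidence: dict = field(default_factory=dict)
    last_policy_review: str = ""  # date of most recent alignment review

    def is_certifiable(self) -> bool:
        """Every ethics and privacy requirement must have attached evidence."""
        required = self.ethics_requirements + self.privacy_requirements
        return all(req in self.certification_evidence for req in required)
```

A package with a requirement but no evidence would fail `is_certifiable()`, making the evidence-based certification gate explicit in the data model rather than in tribal knowledge.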

Role-Based Visibility and Control Points 

Trustworthy AI requires transparency, accountability, and explainability. In AI Holistic Adoption, this is achieved with role-based visibility and control points. The concept of Visibility and Control Points, introduced in the Program column, aligns with the concept of the Human-in-the-Loop. For Ethics and Privacy, the Human-in-the-Loop is fundamental.  

The importance of the Human-in-the-Loop was also emphasized in the AI Trends interview with John Havens, Executive Director of the IEEE, on Ethics by Design in May 2019. As stated by Havens, “technology needs to be human-centric. That often means a Human-in-the-Loop (HITL) mentality is used in the technology, which means there can always be some form of intervention in a system where humans maintain control.”   

The Visibility and Control Points in AI Holistic Adoption give humans an intervention mechanism based on role-defined access and control privileges. The corporation’s Analytics Library consists of Value Analytics (VA) which must be designed with stakeholder Role-Based Visibility and Control Points. These are required to ensure that AI evolution stays on track with business objectives, including those requirements derived from Ethics and Privacy Policies.   

Management of Visibility and Control Points is key and must be part of the AI system architecture and design from the beginning. The definition of the roles, and their corresponding Visibility and Control Points, is highly sensitive, as it influences the direction of the AI’s evolution. 

Design teams must determine where Role-Based Visibility and Control Points will reside in the AI system, along with the associated governance mechanism. A straightforward solution is to maintain Role-Based Visibility and Control Point data in the Analytics Design Package of each Value Analytic and have a platform-based delivery mechanism via an API. The API provides role-based access for visibility and for changes to the Analytics Design Package.   

The platform API must have access to historical trend data associated with the AI solution. Standard Identity Access Management (IAM) is engaged for visibility and control access to both the Analytics Design Package elements and historic data trends database. Control Point adjustments will alter the Analytics Design Package components, most likely the algorithm code, training model, baseline dataset and user value configuration.  
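As a minimal sketch of this pattern, the role-permission table and gated update below are illustrative: the role and parameter names follow the examples in this article, while the function names and structure are assumptions rather than a real platform API.

```python
# Hypothetical role-permission table: which visibility points each role
# may see, and which control points each role may adjust.
ROLE_PERMISSIONS = {
    "technician":   {"view": {"machine_status", "predictive_trends"},
                     "adjust": {"air_cooling_parameters"}},
    "site_manager": {"view": {"maintenance_schedule"},
                     "adjust": {"maintenance_window"}},
    "hr_manager":   {"view": {"hiring_decision_criteria", "hiring_trends"},
                     "adjust": {"diversity_guidelines"}},
}

def can_view(role: str, visibility_point: str) -> bool:
    """Role-based visibility check (the IAM lookup is stubbed as a dict)."""
    return visibility_point in ROLE_PERMISSIONS.get(role, {}).get("view", set())

def apply_control_point(role: str, control_point: str, package: dict, value) -> dict:
    """Alter the Analytics Design Package only if the role is authorized."""
    if control_point not in ROLE_PERMISSIONS.get(role, {}).get("adjust", set()):
        raise PermissionError(f"{role} may not adjust {control_point}")
    updated = dict(package)          # leave the original package untouched
    updated[control_point] = value   # e.g. new air cooling setpoint
    return updated
```

In a production system the dictionary lookup would be replaced by the corporation’s IAM service, and each `apply_control_point` call would be logged against the historical trend data for auditability.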

Visibility and control over Ethics and Privacy concerns can be enabled for stakeholders both in the corporation and at the consumer level. Some examples of role-based visibility and control points are below: 

Role: Technician on manufacturing floor 
Scenario Example: AI predicts and adjusts equipment air flow based on seasonal temperature fluctuations and the current manufacturing floor layout. The technician knows that new neighboring equipment is arriving, which will change the temperature and air flow, and wants to use visibility and control points to adjust and aid new predictions of equipment performance. 
Visibility Point: See machinery status and predictive trends. 
Control Point: Adjust air cooling parameters based on site changes. 

Role: Manufacturing Site Manager 
Scenario Example: A manager has unexpected expenses which have hit the site budget. AI predicts and schedules maintenance of the floor equipment based on equipment parameters and the manufacturer’s recommended time window. The manager wants to use visibility and control points to avoid the calendar quarter boundary, while staying within the recommended window, to optimize budget management. 
Visibility Point: See the predicted equipment maintenance window schedule for optimal equipment lifetime. 
Control Point: Adjust the timing of the equipment maintenance expenditure within the window. 

Role: HR Manager 
Scenario Example: An HR manager is rolling out new diversity hiring guidelines. Algorithms have learned the characteristics of historic hiring successes based on past demographics. The new corporate diversity hiring program aims to increase diversity from past trends, so the learned ethnic parameters must be adjusted to account for the new policies. 
Visibility Point: See the decision criteria, resumes, and resulting hiring trends based on the existing AI algorithms. 
Control Point: Adjust to incorporate the new diversity hiring program guidelines. 

Role: Consumer 
Scenario Example: The consumer is going out for the evening to an establishment that requires age ID to enter. They wish to provide only the relevant age data, but not their address. 
Visibility Point: See the personal data provided. 
Control Point: Select that age data alone is shared, and only for the purpose of entry (i.e., personal data and individual agency). 

 

With AI Holistic Adoption’s implementation of DCE_AI with role-based visibility and control points, humans maintain authority over how data is used, how models are evolving, and how influence is being applied, ensuring that all three components of the AI Trust Triad are incorporated.  

Stakeholder roles and their corresponding visibility and control points may be managed with Identity Access Management (IAM) techniques for less complex systems; more sophisticated techniques, such as blockchain, may be necessary for highly expansive systems.  

 

Certification 

For decades, the Design for Security movement has seen corporate security initiatives include programs, certifications, penetration (pen) testing, and audits with mitigation and remediation plans.  

Just as with security criteria, the execution of ethical and privacy designs will be governed through certification. The goal is to evolve standard corporate security certification to a Trust Triad Certification scope. The certification criteria must be defined by the corporation’s Security Policies, Ethics Policies and Privacy Policies. When designing solutions, the checklist must include mandatory compliance with evidence, addressing all requirements in each category.  

AI Trust Triad Certification 

Policy: Security 
Example Policy Question: Firewall implementation? 
Evidence: Pen Test Results 

Policy: Privacy 
Example Policy Question: Data Agency rules implemented? 
Evidence: Data Privacy Test Results 

Policy: Ethics 
Example Policy Question: Complies with gender inclusion guidelines? 
Evidence: Gender Inclusion Test Results 
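The certification gate described above can be sketched as a simple checklist check. The questions and evidence names mirror the table; the code structure itself is an assumed illustration, not an existing certification tool:

```python
# Each policy category maps to (question, required evidence) pairs.
CHECKLIST = {
    "Security": [("Firewall implementation?", "Pen Test Results")],
    "Privacy":  [("Data Agency rules implemented?", "Data Privacy Test Results")],
    "Ethics":   [("Complies with gender inclusion guidelines?",
                  "Gender Inclusion Test Results")],
}

def certify(evidence_store: dict):
    """Return (certified, missing): certified only when every mandatory
    question in every category has its required evidence attached."""
    missing = []
    for category, items in CHECKLIST.items():
        for question, required_evidence in items:
            if required_evidence not in evidence_store.get(category, set()):
                missing.append((category, question))
    return (not missing, missing)
```

Because the gate iterates over all three categories, a solution with flawless security evidence but no privacy or ethics evidence still fails, which is exactly the inseparability the AI Trust Triad demands.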

 

Just as with security policies, standards for what is to be included in privacy and ethics policies must be defined by the corporation, either independently or by leveraging the work of international standards bodies such as IEEE or ISO. 

 

The Un-Asked Questions and Deeper Implications 

Simply taking an organization’s high-level Ethics policies and unpacking and digitizing them is not enough. The executive leader driving AI Holistic Adoption needs to look deeper and dig out the not-so-obvious and unknown questions, using the AI Trust Triad as the guide.  

For example, in the AI Holistic Adoption: Data column in AI Trends, we explored the Baseline Data Set of a Value Analytic and the use of smaller data sets to achieve first-mover advantage for our AI solutions. This is not only good for business; recently, a potentially deeper implication of data set size was brought forward by AI ethicist Timnit Gebru, formerly of Google (see AI Trends, Dec. 10, 2020), who argued that avoiding large language models may be more ethical due to their environmental impact and inequitable access to resources.   

An additional example is seen in the common concern, ‘Will AI replace the workforce?’ If the answer unfolds as, ‘No, but those who know how to work with AI will replace those who do not,’ the deeper implication is that the corporation bears the responsibility, in part, to teach the existing workforce how to work with AI prior to the AI solution launch.   

To unearth these questions and implications, the organization must bring diversity of thinking to their teams, master the AI Trust Triad and execute Design for Controlled and Ethical AI. In driving these, executive leaders will ensure the success of their organization’s Digital Transformation and Holistic AI Adoption journey. 

 

Dawn Fitzgerald is VP of Engineering and Technical Operations at Homesite, an American Family Insurance company, where she is focused on Digital Transformation. Prior to this role, Dawn was a Digital Transformation & Analytics executive at Schneider Electric for 11 years. She is also currently the Chair of the Advisory Board for MIT’s Machine Intelligence for Manufacturing and Operations program. All opinions in this article are solely her own and are not reflective of any organization. 

This post was first published on: AI Trends