AI World Government speakers addressed the challenges of getting AI engineers to incorporate ethical considerations into the design of AI systems.

By John P. Desmond, AI Trends Editor  

Engineers tend to see things in unambiguous terms, which some may call black-and-white terms: a choice between right and wrong, good and bad. The consideration of ethics in AI is highly nuanced, with vast gray areas, making it challenging for AI software engineers to apply it in their work.

That was a takeaway from a session on the Future of Standards and Ethical AI at the AI World Government conference, held in-person and virtually in Alexandria, Va., this week.

An overall impression from the conference is that the discussion of AI and ethics is happening in virtually every quarter of the federal government's vast AI enterprise, and the consistency of the points being made across these many different, independent efforts stood out.

Beth-Anne Schuelke-Leech, associate professor, engineering management, University of Windsor

“We engineers often think of ethics as a fuzzy thing that no one has really explained,” stated Beth-Anne Schuelke-Leech, an associate professor of Engineering Management and Entrepreneurship at the University of Windsor, Ontario, Canada, speaking at the Future of Ethical AI session. “It can be difficult for engineers looking for solid constraints to be told to be ethical. That becomes really complicated because we don’t know what it really means.”  

Schuelke-Leech started her career as an engineer, then decided to pursue a PhD in public policy, a background that enables her to see things both as an engineer and as a social scientist. “I got a PhD in social science, and have been pulled back into the engineering world, where I am involved in AI projects but based in a mechanical engineering faculty,” she said.   

An engineering project has a goal, which describes the purpose; a set of needed features and functions; and a set of constraints, such as budget and timeline. “The standards and regulations become part of the constraints,” she said. “If I know I have to comply with it, I will do that. But if you tell me it’s a good thing to do, I may or may not adopt that.”  

Schuelke-Leech also serves as chair of the IEEE Society’s Committee on the Social Implications of Technology Standards. She commented, “Voluntary compliance standards such as those from the IEEE are essential, coming from people in the industry getting together to say this is what we think we should do as an industry.”  

Some standards, such as those around interoperability, do not have the force of law, but engineers comply with them so their systems will work. Other standards are described as good practices but are not required to be followed. “Whether it helps me to achieve my goal or hinders me from getting to the objective is how the engineer looks at it,” she said.   

The Pursuit of AI Ethics Described as “Messy and Difficult”  

Sara Jordan, senior counsel, Future of Privacy Forum

Sara Jordan, senior counsel with the Future of Privacy Forum, who spoke in the session with Schuelke-Leech, works on the ethical challenges of AI and machine learning and is an active member of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. “Ethics is messy and difficult, and is context-laden. We have a proliferation of theories, frameworks, and constructs,” she said, adding, “The practice of ethical AI will require repeatable, rigorous thinking in context.”  

Schuelke-Leech offered, “Ethics is not an end outcome. It is the process being followed. But I’m also looking for someone to tell me what I need to do to do my job, to tell me how to be ethical, what rules I’m supposed to follow, to take away the ambiguity.”  

“Engineers shut down when you get into funny words that they don’t understand, like ‘ontological.’ They’ve been taking math and science since they were 13 years old,” she said.  

She has found it difficult to get engineers involved in attempts to draft standards for ethical AI. “Engineers are missing from the table,” she said. “The debates about whether we can get to 100% ethical are conversations engineers do not have.”  

She concluded, “If their managers tell them to figure it out, they will do so. We need to help the engineers cross the bridge halfway. It is essential that social scientists and engineers don’t give up on this.”  

Leader’s Panel Described Integration of Ethics into AI Development Practices  

The topic of ethics in AI is coming up more in the curriculum of the US Naval War College in Newport, R.I., which was established to provide advanced study for US Navy officers and now educates leaders from all services. Ross Coffey, a military professor of National Security Affairs at the institution, participated in a Leader’s Panel on AI, Ethics and Smart Policy at AI World Government.  

“The ethical literacy of students increases over time as they are working with these ethical issues, which is why it is an urgent matter because it will take a long time,” Coffey said.  

Panel member Carole Smith, a senior research scientist with Carnegie Mellon University who studies human-machine interaction, has been involved in integrating ethics into AI systems development since 2015. She cited the importance of “demystifying” AI.    

“My interest is in understanding what kind of interactions we can create where the human is appropriately trusting the system they are working with, not over- or under-trusting it,” she said, adding, “In general, people have higher expectations than they should for the systems.”  

As an example, she cited the Tesla Autopilot features, which implement some self-driving capability but not full autonomy. “People assume the system can do a much broader set of activities than it was designed to do. Helping people understand the limitations of a system is important. Everyone needs to understand the expected outcomes of a system and what some of the mitigating circumstances might be,” she said.   

Panel member Taka Ariga, the first chief data scientist appointed to the US Government Accountability Office and director of the GAO’s Innovation Lab, sees a gap in AI literacy among the young workforce coming into the federal government. “Data scientist training does not always include ethics. Accountable AI is a laudable construct, but I’m not sure everyone buys into it. We need their responsibility to go beyond technical aspects and be accountable to the end user we are trying to serve,” he said.  

Panel moderator Alison Brooks, PhD, research VP of Smart Cities and Communities at the IDC market research firm, asked whether principles of ethical AI can be shared across the boundaries of nations.   

“We will have a limited ability for every nation to align on the same exact approach, but we will have to align in some ways on what we will not allow AI to do, and what people will also be responsible for,” stated Smith of CMU.   

The panelists credited the European Commission for being out front on these issues of ethics, especially in the enforcement realm. 

Coffey of the Naval War College acknowledged the importance of finding common ground around AI ethics. “From a military perspective, our interoperability needs to go to a whole new level. We need to find common ground with our partners and our allies on what we will allow AI to do and what we will not allow AI to do.” Unfortunately, “I don’t know if that discussion is happening,” he said.  

Discussion of AI ethics could perhaps be pursued as part of certain existing treaties, Smith suggested.  

The many AI ethics principles, frameworks, and roadmaps being offered across federal agencies can be challenging to follow and to make consistent. Ariga said, “I am hopeful that over the next year or two, we will see a coalescing.”  

For more information and access to recorded sessions, go to AI World Government. 

Read more about this on AI Trends.