Most companies are deploying AI at significant risk due to immature processes for AI governance, finds a new report from the Fair Isaac Corp. (Credit: Getty Images) 

By John P. Desmond, AI Trends Editor  

A new report on responsible AI from the Fair Isaac Corp. (FICO), the company that brings you credit ratings, finds that most companies are deploying AI at significant risk. 

The report, The State of Responsible AI: 2021, assesses how well companies are doing in adopting responsible AI, making sure they are using AI ethically, transparently, securely, and in their customers’ best interest.  

Scott Zoldi, Chief Analytics Officer, FICO

“The short answer: not great,” states Scott Zoldi, Chief Analytics Officer at FICO, in a recent account on the blog of Fair Isaac. Working with market intelligence firm Corinium for the second edition of the report, the analysts surveyed 100 AI-focused leaders from financial services, insurance, retail, healthcare and pharma, manufacturing, public and utilities sectors in February and March 2021.  

Among the highlights: 

  • 65% of respondents’ companies cannot explain how specific AI model decisions or predictions are made; 
  • 73% have struggled to get executive support for prioritizing AI ethics and Responsible AI practices; and  
  • Only 20% actively monitor their models in production for fairness and ethics. 

With worldwide revenues for the AI market, including software, hardware, and services, forecast by IDC market researchers to grow 16.4% in 2021 to $327.5 billion, reliance on AI technology is increasing. Along with this, the report’s authors cite “an urgent need” to elevate the importance of AI governance and Responsible AI to the boardroom level.  

Defining Responsible AI 

Zoldi, who holds more than 100 authored patents in areas including fraud analytics, cybersecurity, collections, and credit risk, studies unpredictable behavior. He has published a definition of Responsible AI and has given many talks on the subject around the world.  

“Organizations are increasingly leveraging AI to automate key processes that, in some cases, are making life-altering decisions for their customers,” he stated. “Not understanding how these decisions are made, and whether they are ethical and safe, creates enormous legal vulnerabilities and business risk.” 

The FICO study found executives have no consensus about what a company’s responsibilities should be when it comes to AI. Almost half (45%) said they had no responsibility beyond regulatory compliance to ethically manage AI systems that make decisions which could directly affect people’s livelihoods. “In my view, this speaks to the need for more regulation,” he stated.  

AI model governance frameworks are needed to monitor AI models to ensure the decisions they make are accountable, fair, transparent and responsible. Only 20% of respondents are actively monitoring the AI in production today, the report found. “Executive teams and Boards of Directors cannot succeed with a ‘do no evil’ mantra without a model governance enforcement guidebook and corporate processes to monitor AI in production,” Zoldi stated. “AI leaders need to establish standards for their firms where none exist today, and promote active monitoring.” 
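
As a concrete illustration of what such production monitoring might involve, the sketch below checks one common fairness measure, demographic parity, over a batch of model decisions. It is a minimal, hypothetical example: the group labels, the decision data, and the 0.8 alert threshold (the informal “80% rule”) are assumptions for illustration, not FICO’s methodology.

```python
# Minimal sketch of one fairness check a production monitor might run:
# demographic parity, i.e., comparing approval rates across groups.
# All data and thresholds here are hypothetical illustrations.
from collections import defaultdict

def demographic_parity_ratio(decisions):
    """decisions: list of (group_label, approved: bool) pairs.
    Returns min(approval rate) / max(approval rate) across groups."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    rates = [approved / total for approved, total in counts.values() if total > 0]
    return min(rates) / max(rates)

# Hypothetical batch of production decisions.
batch = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]

ratio = demographic_parity_ratio(batch)
# The "80% rule" is a common (not universal) screening threshold.
if ratio < 0.8:
    print(f"ALERT: demographic parity ratio {ratio:.2f} is below 0.8")
```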

Business is recognizing that things need to change. Some 63% believe that AI ethics and Responsible AI will become core to their organization’s strategy within two years.  

Cortnie Abercrombie, Founder and CEO, AI Truth

“I think there’s now much more awareness that things are going wrong,” stated Cortnie Abercrombie, Founder and CEO of responsible AI advocacy group AI Truth, and a contributor to the FICO report. “But I don’t know that there is necessarily any more knowledge about how that happens.” 

Some companies are experiencing tension between management leaders who may want to get models into production quickly, and data scientists who want to take the time to get things right. “I’ve seen a lot of what I call abused data scientists,” Abercrombie stated. 

Little Consensus on Ethical Responsibilities Around AI  

Ganna Pogrebna, Lead for Behavioral Data Science, The Alan Turing Institute

Regarding the lack of consensus about the ethical responsibilities around AI, companies need to work on that, the report suggested. “At the moment, companies decide for themselves whatever they think is ethical and unethical, which is extremely dangerous. Self-regulation does not work,” stated Ganna Pogrebna, Lead for Behavioral Data Science at the Alan Turing Institute, also a contributor to the FICO report. “I recommend that every company assess the level of harm that could potentially come with deploying an AI system, versus the level of good that could potentially come,” she stated.   

To combat AI model bias, the FICO report found that more companies are bringing the process in-house, with only 10% of the executives surveyed relying on a third-party firm to evaluate models for them.   

The research shows that enterprises are using a range of approaches to root out causes of AI bias during model development, and that few organizations have a comprehensive suite of checks and balances in place.  

Only 22% of respondents said their organization has an AI ethics board to consider questions on AI ethics and fairness. One in three report having a model validation team to assess newly developed models, and 38% report having data bias mitigation steps built into model development.  

This year’s research shows a surprising shift in business priorities away from explainability and toward model accuracy. “Companies must be able to explain to people why whatever resource was denied to them by an AI was denied,” stated Abercrombie of AI Truth.  
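
For linear scoring models of the kind long used in credit decisions, one common way to produce the explanation Abercrombie calls for is “reason codes”: ranking the features that pulled a score below the cutoff. The sketch below is hypothetical; the weights, features, and cutoff are invented for illustration and are not FICO’s scoring method.

```python
# Hypothetical sketch: deriving "reason codes" for a denial from a
# linear scoring model by ranking each feature's negative
# contribution to the score. Weights, features, and the cutoff
# are invented for illustration.
weights = {"utilization": -2.0, "late_payments": -1.5, "income": 0.8}
applicant = {"utilization": 0.9, "late_payments": 3, "income": 0.4}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

if score < 0:  # hypothetical denial cutoff
    # Report the features that hurt the score most, worst first.
    reasons = sorted(contributions, key=contributions.get)[:2]
    print(f"Denied (score {score:.2f}). Top reasons: {reasons}")
```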

Adversarial AI Attacks Reported to Be on the Rise  

Adversarial AI attacks, in which inputs to machine learning models are hacked in an effort to thwart the correct operation of the model, are on the increase, the report found, with 30% of organizations reporting an increase, compared to 12% in last year’s survey. Zoldi stated that the result surprised him, and suggested that the survey needs a set of definitions around adversarial AI.  

Data poisoning and other adversarial AI techniques overlap with cybersecurity. “This may be an area where cybersecurity is not where it needs to be,” Zoldi stated.  
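
To make the survey’s subject concrete, the toy sketch below shows an evasion-style adversarial attack on a simple logistic model: each input feature is nudged in the direction that most increases the model’s error (the fast-gradient-sign method). All weights, inputs, and the perturbation budget are invented for illustration; this is not an attack on any production system.

```python
# Toy illustration of an adversarial evasion attack on a logistic
# model: perturb the input along the sign of the loss gradient
# (FGSM-style) to push the model's score toward the wrong answer.
import numpy as np

w = np.array([1.5, -2.0, 0.5])   # hypothetical trained weights
b = 0.1                          # hypothetical bias
x = np.array([1.0, 0.5, -1.0])   # legitimate input; true label y = 1

def predict(x):
    """Probability the model assigns to the positive class."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# For logistic loss with true label y = 1, the gradient of the loss
# with respect to the input is (p - 1) * w.
eps = 0.3                        # perturbation budget per feature
grad_x = (predict(x) - 1.0) * w
x_adv = x + eps * np.sign(grad_x)

print(f"clean score: {predict(x):.3f}")            # ~0.525
print(f"adversarial score: {predict(x_adv):.3f}")  # ~0.250
```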

Organizational politics was cited as the number one barrier to establishing Responsible AI practices. “What we’re missing today is honest and straight talk about which algorithms are more responsible and safe,” stated Zoldi. 

Respondents from companies that must comply with regulations have little confidence they are doing a good job: only 31% report that the processes they use to ensure projects comply with regulations are effective, and 68% report their model compliance processes are ineffective.  

As for model development audit trails, 4% admit to not maintaining standardized audit trails, which means some AI models being used in business today are understood only by the data scientists who originally coded them.  

This falls short of what could be described as Responsible AI, in the view of Melissa Koide, CEO of the AI research organization FinRegLab, and a contributor to the FICO report. “I deal primarily with compliance risk and the fair lending sides of banks and fintechs,” she stated. “I think they’re all quite attuned to, and quite anxious about, how they do governance around using more opaque models successfully.”  
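
One lightweight form the standardized audit trails discussed above can take is a training-time record that fingerprints a model’s inputs and settings, so a deployed model can be traced back to exactly what produced it. The sketch below is a hypothetical illustration, not FICO’s or FinRegLab’s practice; every field name and value in it is invented.

```python
# Hypothetical sketch of a minimal, standardized model-development
# audit record: hash the training data and capture the hyperparameters
# alongside a timestamp. Field names and values are invented.
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_name, train_rows, hyperparams):
    data_digest = hashlib.sha256(
        json.dumps(train_rows, sort_keys=True).encode()
    ).hexdigest()
    return {
        "model": model_name,
        "trained_at": datetime.now(timezone.utc).isoformat(),
        "training_data_sha256": data_digest,
        "hyperparams": hyperparams,
    }

record = audit_record(
    "credit_risk_v1",  # hypothetical model name
    train_rows=[{"income": 40_000, "late_payments": 1, "default": 0}],
    hyperparams={"learning_rate": 0.1, "max_depth": 4},
)
print(json.dumps(record, indent=2))
```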

More organizations are coalescing around the move to Responsible AI, including the Partnership on AI, formed in 2016 and including Amazon, Facebook, Google, Microsoft, and IBM. The European Commission in 2019 published a set of non-binding ethical guidelines for developing trustworthy AI, with input from 52 independent experts, according to a recent report in VentureBeat. In addition, the Organization for Economic Cooperation and Development (OECD) has created a global framework for AI around common values.  

Also, the World Economic Forum is developing a toolkit for corporate officers for operationalizing AI in a responsible way. Leaders from around the world are participating.   

“We launched the platform to create a framework to accelerate the benefits and mitigate the risks of AI and ML,” stated Kay Firth-Butterfield, Head of AI and Machine Learning and Member of the Executive Committee at the World Economic Forum. “The first place for every company to start when deploying responsible AI is with an ethics statement. This sets up your AI roadmap to be successful and responsible.” 

Wilson Pang, the CTO of Appen, a machine learning development company, who authored the VentureBeat article, cited three focus areas for a move to Responsible AI: risk management, governance, and ethics.  

“Companies that integrate pipelines and embed controls throughout building, deploying, and beyond are more likely to experience success,” he stated.  

Read the source articles and information on the blog of Fair Isaac, in the Fair Isaac report, The State of Responsible AI: 2021, on the definition of Responsible AI, and in VentureBeat. 

This post was first published on: AI Trends