AI Is a Tool, Not a Magic Trick

  • Artificial Intelligence
  • 13.12.2023 12:15 pm

Having become more readily available, generative AI has captured the public’s attention and sparked concerns around its accuracy and bias. Global analytics software leader FICO is highlighting that all data is biased, and that when it is used to build AI, appropriate interpretable machine learning algorithms and guardrails need to be employed.

“As the power of generative AI has become increasingly available to everyone, and increasingly popular, responsible usage concerns have also increased,” commented Dr. Scott Zoldi, Chief Analytics Officer at FICO, who has authored more than 130 patents related to AI and machine learning. “AI relies on data, and all data should be considered dangerous. This means that AI models need to be interpretable and inspected and continually monitored for bias. We cannot blindly apply AI to data and assume the AI is safe to use for important operations.”

When developing and building new models, Zoldi said businesses should assume that all data is biased, dangerous and a liability. This perspective requires deep inspection of the models being developed, in particular through interpretable machine learning, which allows the business to understand what the model has learned, judge whether it is a valid tool, and then apply that tool.
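As an illustration of what "interpretable" can mean in practice, the sketch below fits a transparent linear model and reads back its learned weights so a reviewer can see exactly what each input contributes. The dataset, feature names and model choice are assumptions made purely for illustration, not FICO's methodology.

```python
# Illustrative sketch only: a transparent (linear) model whose learned weights
# can be inspected directly. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
feature_names = ["utilisation", "payment_delays", "account_age_months"]  # hypothetical
X = rng.normal(size=(1_000, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(scale=0.5, size=1_000)) > 0

# Standardise so coefficient magnitudes are comparable across features.
X_scaled = StandardScaler().fit_transform(X)
model = LogisticRegression().fit(X_scaled, y)

# Because the model is linear, each coefficient states how a feature pushes the
# score up or down -- this is what a reviewer can inspect, judge and challenge.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>20}: {coef:+.3f}")
```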

“It is vital that machine learning models are not built naively on data — you must assume all data contains a variety of biases that could be learned by the machine learning model,” Zoldi said. “If such models are deployed, they will systematically reapply biases in the models used in making decisions. Organisations need to understand and take responsibility for the fact that they are deploying human-in-the-loop machine learning development processes that are interpretable. Businesses cannot hide behind the black box, but instead must use transparent technologies that allow concrete demonstration that these models are not causing a disparate impact or discrimination towards one group versus another.”
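One common way to make a "no disparate impact" claim concrete is to compare positive-outcome rates across groups. The sketch below computes a simple disparate-impact ratio over hypothetical decisions; the group labels, data and the 0.8 threshold (the widely cited four-fifths rule of thumb) are assumptions for illustration, not FICO's published standard.

```python
# Illustrative sketch: a disparate-impact ratio over model decisions.
# Group labels, decisions and the 0.8 ("four-fifths rule") threshold are
# illustrative assumptions, not FICO's methodology.
import numpy as np

def disparate_impact_ratio(decisions: np.ndarray, groups: np.ndarray,
                           protected: str, reference: str) -> float:
    """Ratio of positive-outcome rates: protected group vs. reference group."""
    rate_protected = decisions[groups == protected].mean()
    rate_reference = decisions[groups == reference].mean()
    return rate_protected / rate_reference

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])   # 1 = favourable outcome
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

ratio = disparate_impact_ratio(decisions, groups, protected="B", reference="A")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb
    print("Potential disparate impact -- flag for review.")
```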

A recent FICO survey carried out with Corinium showed that just 8% of organisations have even codified AI development standards. In the future, consumers will need to be able to ask whether organisations using AI have defined model development standards – in the same way that they currently have expectations around how their data is being used and protected. Consumers and businesses alike also need to understand that all AI makes mistakes. Governance of AI use includes the ability to challenge the model and to leverage auditability to challenge the key data used to make decisions about a consumer. In a similar way to how consumers provide consent to share their data for a specific purpose, they should also have some knowledge of the AI techniques a financial institution is using and be able to challenge the model, and this requires built-in transparency.
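To make the auditability point more tangible, the sketch below shows the kind of per-decision record that would let a consumer or auditor later challenge both the decision and the data behind it. The field names and values are hypothetical assumptions, not a prescribed FICO schema.

```python
# Illustrative sketch: a decision record that makes a model decision auditable
# and challengeable later. All field names and values are hypothetical.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    model_id: str      # which model (and version) made the decision
    inputs: dict       # the key data the decision was based on
    score: float       # raw model output
    decision: str      # the resulting business decision
    timestamp: str     # when the decision was made

record = DecisionRecord(
    model_id="credit-risk-v4.2",                    # hypothetical identifier
    inputs={"utilisation": 0.41, "payment_delays": 1},
    score=0.73,
    decision="approved",
    timestamp=datetime.now(timezone.utc).isoformat(),
)

# Persisting each record gives auditors and consumers a concrete artefact to
# challenge: the decision, the score and the specific data that drove it.
print(json.dumps(asdict(record), indent=2))
```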

Scott Zoldi continued: “If you think about machine learning as a tool, rather than a magic box, you will have a very different mentality, which is based on needing to understand how the tool works and how differences in data inputs impact that tool. This leads us to choose to use technologies that are transparent. It will take time, but the more conversations we have about interpretable machine learning technologies, the more organisations can start to demonstrate that they meet the necessary model transparency and governance principles, and the more customer confidence will improve. What is fundamental to this is ensuring that models are being built properly and safely, and not creating bias. This is what will start to establish trust.”
