Governance: Companies that are mature in their use of AI know it needs guardrails


Quality governance ensures responsible data models and AI execution, and helps those data models stay true to the business objectives.



The fundamentals of traditional IT governance have focused on service-level agreements like uptime and response time, and also on oversight of areas such as security and data privacy. The beauty of these goals is that they are concrete and easy to understand. This makes them attainable with minimal confusion if an organization is committed to getting the job done.

SEE: TechRepublic Premium editorial calendar: IT policies, checklists, toolkits, and research for download (TechRepublic Premium)

Unfortunately, governance becomes a much less definable task in the world of artificial intelligence (AI), and a premature one for many organizations.

“This can come down to the level of AI maturity that a company is at,” said Scott Zoldi, chief analytics officer at FICO. “Companies are in a variety of stages of the AI lifecycle, from exploring use cases and hiring staff, to building the models, and having a couple of instances deployed but not widely across the organization. Model governance comes into play when companies are mature in their use of AI technology, are invested in it, and realize that AI’s predictive and business value should be accompanied by guardrails.”

“Because AI is more opaque than enterprise IT environments, AI requires a governance strategy that asks questions of architectures and that requires architectures to be more transparent,” Zoldi said.

SEE: 3 steps for better data modeling with IT and data science (TechRepublic)

Achieving transparency in AI governance begins with being able to explain, in plain language, the technology behind AI and how it operates to board members, senior management, end users, and non-AI IT staff. Questions that AI practitioners should be able to answer include, but are not limited to: how data is prepared and taken into AI systems, which data is being taken in and why, and how the AI operates on the data to return answers to the questions the business is asking. AI practitioners should also explain how both the data and what the business asks of it continuously change over time as business and other conditions change.
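One way to make those answers concrete is to keep a plain-language record alongside each model. The sketch below shows a minimal "model card" style summary in Python; the schema and names (`ModelCard`, `summarize`, the example fields) are hypothetical illustrations, not a standard or FICO's practice:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Plain-language record of a model's data and purpose (hypothetical schema)."""
    model_name: str
    business_objective: str
    data_sources: list = field(default_factory=list)       # which data is taken in, and why
    preparation_steps: list = field(default_factory=list)  # how data is prepared
    last_reviewed: str = ""

card = ModelCard(
    model_name="credit_risk_v2",
    business_objective="Rank loan applications by repayment likelihood",
    data_sources=["application form fields", "bureau scores"],
    preparation_steps=["drop rows with missing income", "scale numeric fields"],
    last_reviewed="2021-06",
)

def summarize(card: ModelCard) -> str:
    """Produce the plain-language summary practitioners owe their stakeholders."""
    return (
        f"{card.model_name}: {card.business_objective}. "
        f"Inputs: {', '.join(card.data_sources)}. "
        f"Prep: {', '.join(card.preparation_steps)}."
    )

print(summarize(card))
```

Because conditions change over time, the record should be re-reviewed and updated on a schedule, which is what the `last_reviewed` field is meant to capture.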

This is a pathway to ensuring responsible data models and AI execution, and also a way to ensure that the data models that a company develops for its AI stay true to its business objectives.

One central AI governance challenge is ensuring that the data and the AI operating on it are as bias-free as possible.

“AI governance is a board-level responsibility to mitigate pressures from regulators and advocacy groups,” Zoldi said. “Boards of directors should care about AI governance because AI technology makes decisions that profoundly affect everyone. Will a borrower be invisibly discriminated against and denied a loan? Will a patient’s disease be incorrectly diagnosed, or a citizen unjustly arrested for a crime he did not commit? The increasing magnitude of AI’s life-altering decisions underscores the urgency with which AI fairness and bias should be ushered onto boards’ agendas.”

How to achieve AI fairness

SEE: Equitable tech: AI-enabled platform to reduce bias in datasets released  (TechRepublic)

Zoldi said that to eliminate bias, boards must understand and enforce auditable, immutable AI model governance based on four classic tenets of corporate governance: accountability, fairness, transparency, and responsibility. He believes this can be achieved if organizations focus their AI governance on ethical, efficient, and explainable AI.

Ethical AI ensures that models operate without bias toward a protected group, and are used only in areas where we have confidence in the decisions the models generate. These issues have strong business implications; models that make biased decisions against protected groups aren’t just wrong, they are illegal.
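One common way to put a number on this kind of bias check (not necessarily the method FICO uses) is a demographic parity ratio, sketched below with toy data. A ratio near 1.0 means groups receive positive outcomes at similar rates; the informal "four-fifths rule" used in US employment law treats a ratio below 0.8 as a red flag for disparate impact:

```python
def demographic_parity_ratio(outcomes, groups, positive=1):
    """Ratio of positive-outcome rates between groups; 1.0 means parity.

    outcomes: list of model decisions (e.g. 1 = loan approved)
    groups:   list of protected-attribute labels, same length as outcomes
    """
    rates = {}
    for g in set(groups):
        decisions = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(d == positive for d in decisions) / len(decisions)
    return min(rates.values()) / max(rates.values())

# Toy data: group "b" is approved far less often than group "a".
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_ratio(outcomes, groups))  # 0.25 / 0.75, well below 0.8
```

A single metric like this is only a screening tool; governance teams typically examine several fairness measures, since they can conflict with one another.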

Efficient AI helps models make the leap from the development lab to production decisions that can be accepted with confidence. Otherwise, an inordinate amount of time and resources is invested in models that don’t deliver real-world business value.
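A simple production guardrail behind this idea is monitoring input drift: if the data a deployed model sees diverges from what it was trained on, its decisions can no longer be accepted with confidence. The sketch below computes a population stability index (PSI), a common drift metric; the equal-width binning and the rule-of-thumb thresholds in the docstring are conventions, not prescriptions from the article:

```python
import math

def population_stability_index(expected, actual, bins=5):
    """PSI between a training-time distribution and a production sample.

    Rule of thumb (informal): < 0.1 stable, 0.1-0.25 watch, > 0.25 drift.
    """
    lo, hi = min(expected), max(expected)
    step = (hi - lo) / bins
    edges = [lo + i * step for i in range(1, bins)]  # internal bin edges

    def proportions(values):
        counts = [0] * bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1
        return [max(c / len(values), 1e-6) for c in counts]  # avoid log(0)

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Hypothetical scores: production distribution shifted upward from training.
train_scores = [x / 100 for x in range(100)]
prod_scores = [x / 100 + 0.4 for x in range(100)]
print(population_stability_index(train_scores, prod_scores))
```

When PSI crosses the alert threshold, typical responses are retraining the model or escalating to the governance team for review.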

Explainable AI ensures that companies using AI models can meet a growing list of regulations, starting with GDPR, by being able to explain how a model made its decision, and why.
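As a toy illustration of the per-decision explanations regulators expect, the sketch below derives "reason codes" from a linear model by ranking each feature's contribution to the score. The model, weights, and feature names are hypothetical, and real systems use far richer attribution methods:

```python
def reason_codes(weights, feature_values, top_n=2):
    """Rank features by their contribution (weight * value) to a linear score.

    Returns the top_n most influential features as (name, contribution) pairs,
    a simple stand-in for the reason codes attached to a credit decision.
    """
    contributions = {
        name: weights[name] * feature_values[name] for name in weights
    }
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return ranked[:top_n]

# Hypothetical scoring model and one applicant's features.
weights = {"debt_ratio": -2.0, "years_employed": 0.5, "late_payments": -1.5}
applicant = {"debt_ratio": 0.9, "years_employed": 1, "late_payments": 3}
print(reason_codes(weights, applicant))
# Late payments (-4.5) outweigh the debt ratio (-1.8) in this decision.
```

The value of structuring explanations this way is that the same ranked output can serve an adverse-action notice to the applicant and an audit trail for the regulator.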

SEE: Encourage AI adoption by moving shadow AI into the daylight (TechRepublic)

Some organizations are already tackling these AI governance challenges, while others are just beginning to think about them.

This is why, when putting together an internal team to address governance, a best practice approach is a three-tiered structure that begins with an executive sponsor at the top to champion AI at a corporate level.

“One tier down, executives such as the CAO, CTO, CFO, and head of legal should lead the oversight of AI governance from a policy and process perspective,” Zoldi said. “Finally, at the blocking-and-tackling level, senior practitioners from the various model development and model delivery areas, who work together with AI technology on a daily basis, should hash out how to meet those corporate governance standards.”
