Robert Kugel's Analyst Perspectives

Risk Management is Essential for AI and Generative AI Development

Written by Robert Kugel | Jan 4, 2024 11:00:00 AM

The AI Alliance, a coalition of more than 50 corporations and research institutions engaged in artificial intelligence (AI) development (including AMD, CERN, Cornell University, Dell Technologies, IBM, Intel, Linux Foundation, Meta, NASA, Oracle, ServiceNow and Sony Group), recently launched with the following objectives:

  • Provide independent benchmarks and evaluation standards, tools, and other resources for the responsible development and use of AI systems.  
  • Deliver a set of vetted safety, security and trust tools.   
  • Create an ecosystem of open foundation models with diverse modalities, including highly capable multilingual, multi-modal, and science models.  
  • Promote an AI hardware accelerator ecosystem through the adoption of core enabling software technology. 
  • Support AI skills building and exploratory research globally.  
  • Foster initiatives to encourage open development of AI in safe and beneficial ways. 

The AI Alliance has commercial motivations and, I hope, a collective voice loud enough to build understanding and trust in the technology, preventing or at least mitigating the potentially negative impact of ill-informed regulation from either the United States or the European Union. Moreover, all the discussion of AI as a shiny new object misses the reality that the technology is already at work delivering practical and safe results that improve productivity and performance.

One motivating factor behind the creation of this group is a competitive concern that the tie-in between Microsoft and OpenAI, along with the latter's expected pivot to more of a commercial endeavor, could make generative AI technology more proprietary and less open. Few people under the age of 50 can recall a time when computing systems were closed, based on proprietary standards that enforced vendor lock-in for hardware and software. Until the early 1990s, there was little room for third parties to develop software or create peripheral devices that worked with a vendor's system unless that vendor granted permission and collected extortionate fees. Then technology buyers rebelled, and systems that were far more open (though not completely open) became the norm. This was followed by the launch of the Open Source Initiative (OSI) in 1998, a milestone for open source software, a construct in which the copyright holder grants users the rights to use, study, change and distribute the software and its source code to anyone and for any purpose, often in a public, collaborative fashion.

Beyond spurring competition, one of the most consequential objectives of the AI Alliance is developing the tools and methods for safety, security and trust that are essential to the continued rapid advancement and adoption of AI and generative AI. This work is necessary because of what I see as an overblown reaction to the purported dangers of generative AI and AI generally. Public awareness of AI changed permanently a year ago when ChatGPT went viral and became something people could touch and use. Lacking any background in work that had been underway for quite some time, the general public and politicians were susceptible to fear of the unknown. A large, diverse, self-interested body dedicated to furthering the use of AI can be a useful counterweight to those looking to serve their own interests by sowing fear and doubt about an innovative technology. This is especially important for users of business computing, who have a great deal to gain from the rapid expansion of AI-enabled features in the software they use to run their organizations.

Those who see grave danger in AI and generative AI miss the technology's potential to reduce the time spent on the vast quantities of simple and seemingly inconsequential activities that sap productivity, raise costs and prevent individuals from focusing on more difficult issues that require training, skill and experience. To be sure, there are risks with any technology, but the application of existing AI technology in business software promises to boost productivity, so much so that, for example, Ventana Research asserts that by 2027, almost all vendors of business applications will use some form of generative AI to enhance their capabilities and functionality in order to remain competitive.

The work that the AI Alliance proposes complements what the National Institute of Standards and Technology (NIST) has already done with its AI Risk Management Framework, released in early 2023 along with a video explaining the approach. The framework is general enough to apply to most situations and use cases. It focuses on how developers of the technology should approach the risks of harm to people, organizations and ecosystems (for example, the risk to interdependent social structures such as financial institutions when individual actions could create a software-driven cascade of negative outcomes).

As with all risk management in complex systems and environments, there is a significant challenge in developing accurate and meaningful methods of defining and measuring risks. Given the immaturity of the technology, there are plenty of unknowns that will require management, but for the vast number of use cases now contemplated, there seem to be very few unknown unknowns. Beyond that, individual organizations have their own tolerance for risk and prioritize risks differently, so the process of integrating AI risk management needs to be defined in ways that achieve each organization's objectives while following general guidelines.

At its core, the objective of applying risk management to AI is to ensure that the results of any AI system are valid and reliable as well as accountable and transparent. The last two mean that systems are not designed as black boxes immune to inspection and therefore to understanding: the outcome of using AI must be readily verifiable, so that results are explainable and interpretable. Like any enterprise software, AI systems must be designed to be secure and resilient, to prevent nefarious modifications and, especially where AI is part of a core system, to recover rapidly from shocks of all kinds. The systems also must respect individual privacy and, as much as possible, limit bias in their creation. On that last point, where there is transparency, a dispassionate application of machine learning (ML) is likely to be less biased than processes in which humans are involved. At the same time, there is also a danger that, consistent with human nature, AI systems will be accused of bias by those who find some objectively determined result unflattering or inconvenient.
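
To make "explainable and interpretable" a little more concrete, here is a minimal sketch (my own illustration, not anything prescribed by NIST or the AI Alliance) of a model whose behavior a reviewer can read directly off its coefficients, in contrast to a black box. The feature names and data are hypothetical.

```python
# Minimal sketch of interpretability: a linear model's drivers can be read
# directly from its coefficients, supporting human review and audit.
# Feature names and data are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # e.g., price, promotion, seasonality
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = LinearRegression().fit(X, y)
for name, coef in zip(["price", "promotion", "seasonality"], model.coef_):
    # Each coefficient states how much that driver moves the prediction,
    # the kind of disclosure that makes a result verifiable.
    print(f"{name}: {coef:+.2f}")
```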

What’s often overlooked in current discussions about AI and generative AI is that both are already being applied to day-to-day activities in enterprises. A quick survey of currently available vendor offerings reveals a range of AI-enabled capabilities including: 

  • Anomaly detection that flags data entries that appear to be wrong. AI goes beyond fixed data validation rules because it dynamically and automatically learns over time without the need for programming. Unlike standard validation rules, these systems can assess validity in a more specific context that takes multiple factors into account, such as the price of a product sold in a specific channel or to a class of customer. Immediately addressing out-of-alignment entries can substantially reduce errors, boosting productivity, reducing costs and accelerating processes (see the first sketch after this list).
  • AI-enabled predictive analytics that generate an unconstrained demand plan at a desired level of granularity, one that considers seasonality, promotions, events and external factors. These plans can be continuously tested against actual results, producing alerts when outcomes diverge significantly from the plan and allowing organizations to react faster to boost agility (see the second sketch after this list).
  • Driver-based forecasting that aligns sales, labor, material costs and inventory to facilitate more accurate plans that can be quickly altered when outcomes diverge from the forecast.  
  • Detailed cash management built on forecast changes in working capital, operating cash flows and investments while respecting legal entity, currency and location constraints.  
  • Conversational analytics that enable people to quickly have a dialog with the numbers in a report or document to understand the meaning behind them.
  • Automated annotations and storytelling. 
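
As a concrete illustration of the first item, here is a minimal sketch of context-aware anomaly detection on a simple transaction table. The column names and the use of scikit-learn's IsolationForest are my own illustrative choices, not a description of any particular vendor's implementation.

```python
# Minimal sketch: context-aware anomaly detection on data entries.
# Column names and data are hypothetical illustrations.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Toy transaction data: price varies by sales channel and customer class.
transactions = pd.DataFrame({
    "channel":        ["retail", "retail", "online", "online", "retail", "online"],
    "customer_class": ["standard", "standard", "premium", "premium", "standard", "premium"],
    "unit_price":     [10.05, 9.95, 14.90, 15.10, 10.00, 95.00],  # last row: likely typo
})

# One-hot encode the context so price is judged relative to channel and class,
# not against a single fixed validation rule.
features = pd.get_dummies(transactions, columns=["channel", "customer_class"])

# An unsupervised model learns what "normal" looks like from the data itself,
# so no hand-written rules are needed and it can be refit as data accumulates.
model = IsolationForest(contamination=0.2, random_state=0)
transactions["flagged"] = model.fit_predict(features) == -1

print(transactions[transactions["flagged"]])  # rows to route for human review
```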
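
And for the continuous testing described in the second item, a sketch this simple captures the core idea. The plan figures, actuals and 10% tolerance band are hypothetical; a real deployment would compare a statistical forecast rather than hard-coded values.

```python
# Minimal sketch: alert when actual demand diverges from the plan.
# Plan, actuals and tolerance are hypothetical illustrations.
planned = {"Jan": 1000, "Feb": 1100, "Mar": 1250, "Apr": 1300}
actuals = {"Jan": 980,  "Feb": 1120, "Mar": 1010, "Apr": 1290}

TOLERANCE = 0.10  # flag any month that misses plan by more than 10%

for month, plan in planned.items():
    deviation = (actuals[month] - plan) / plan
    if abs(deviation) > TOLERANCE:
        # In practice this would feed an alerting or workflow system.
        print(f"ALERT {month}: actual {actuals[month]} vs plan {plan} "
              f"({deviation:+.1%})")
```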

These items aren’t especially flashy, and they might not seem consequential because they address productivity and effectiveness at an atomic level. However, because these and other seemingly inconsequential improvements are multiplied tens of millions of times every day, their collective impact on commerce and the economy will be substantial. And all of these examples use technology in a way that can be easily explained and verified.

Risk management for AI and generative AI capabilities will be a key requirement for all business software vendors, so I recommend that they have a robust, customer-centric approach in place as they introduce features and capabilities. This will entail having: 

  • An emphasis on transparency (the ability to explain and interpret) with readily available disclosures that enable human review, which helps build trust. 
  • A framework and methodology that evaluate risks in the application of AI for specific users and use cases. These enable vendors, product managers and executives to assess risk sensitivity, which can lead to design modifications that mitigate or eliminate risk.
  • A fundamental approach to data governance (especially in defining security in the collection, management and application of data) that promotes corporate integrity and personal privacy. 
  • Tools that enable customers to test their own specific uses of the software to assess and manage AI-related risks. 
  • A tone at the top that prioritizes thoughtful use of AI technologies. 

Buyers must also have internal AI risk management systems, processes and culture in place to ensure that they can take maximum advantage of the technology as quickly as possible.  

Regards,

Robert Kugel