The rapid growth of emerging AI technology is expanding the constellation of AI tools in use at many companies faster than their capacity to ensure responsibility and accountability in AI systems. That is the conclusion of a recent survey of 1,240 executives published by MIT Sloan Management Review and Boston Consulting Group (MIT SMR and BCG), which examined the progress of responsible AI initiatives alongside the adoption of both internally built and externally sourced AI tools, which the researchers call "shadow AI." The risks of this ever-rising shadow AI are increasing as well: companies' growing dependence on a burgeoning supply of third-party AI tools, along with the rapid adoption of generative AI, exposes them to new commercial, legal, and reputational risks that are difficult to track.
The researchers stress the importance of responsible AI, which they define as "a framework with principles, policies, tools, and processes to ensure that AI systems are developed and operated in the service of good for individuals and society while still achieving transformative business impact." However, a number of companies "appear to be scaling back internal resources devoted to responsible AI as part of a broader trend in industry layoffs." These reductions in responsible AI investment are coming just when it is most needed.
The research found that 78% of organizations report accessing, buying, licensing, or otherwise using third-party AI tools, including commercial APIs, pretrained models, and data. More than half (53%) rely exclusively on third-party AI tools and have no internally designed or developed AI technologies of their own. Responsible AI programs "should cover both internally built and third-party AI tools," Renieris and her co-authors urge. Ultimately, if something were to go wrong, it wouldn't matter to the person being negatively affected whether the tool was built or bought.
The co-authors caution that "there is no silver bullet for mitigating third-party AI risks, or any type of AI risk for that matter," but they urge a multi-pronged approach to ensuring responsible AI in today's wide-open environment. Such approaches could include: evaluating a vendor's responsible AI practices; contractual language mandating adherence to responsible AI principles; vendor pre-certification and audits, where available; internal product-level reviews, where a third-party tool is integrated into a product or service; adherence to relevant regulatory requirements or industry standards; and a comprehensive set of policies and procedures, such as guidelines for ethical AI development, risk assessment frameworks, and monitoring and auditing protocols.
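To make the multi-pronged approach concrete, one way a team might operationalize these controls is as a simple vendor-intake checklist. The sketch below is purely illustrative; every field and function name is hypothetical and not drawn from the MIT SMR and BCG report or any formal framework.

```python
from dataclasses import dataclass

# Illustrative sketch only: a minimal intake checklist for a third-party AI
# vendor, mirroring the controls described above. All names are hypothetical.
@dataclass
class VendorAIAssessment:
    vendor: str
    practices_reviewed: bool = False        # vendor's responsible AI practices evaluated
    contract_has_rai_clause: bool = False   # contract mandates responsible AI principles
    certified_or_audited: bool = False      # pre-certification or audit, where available
    product_review_done: bool = False       # internal product-level review of the integration
    regs_and_standards_met: bool = False    # relevant regulatory/industry requirements met

    def open_items(self) -> list[str]:
        """Return the names of checklist items still outstanding."""
        return [name for name, done in vars(self).items()
                if name != "vendor" and not done]

    def approved(self) -> bool:
        """A vendor passes intake only when every checklist item is satisfied."""
        return not self.open_items()
```

In practice, a review board would track each assessment and block integration until `approved()` returns true, for example: `VendorAIAssessment("Acme ML API", practices_reviewed=True).open_items()` still lists the four remaining controls.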
The specter of legislation and government mandates might make such actions a necessity as AI systems are introduced, the co-authors warn.