Third-party AI Tools Responsible for Majority of Failures in Organizations, Survey Finds
A recent survey by MIT Sloan Management Review and Boston Consulting Group has found that 55% of AI-related failures in organizations stem from third-party AI tools. These failures not only cause reputational damage and financial losses but can also erode consumer trust and expose companies to litigation.
According to Philip Dawson, head of AI policy at Armilla AI, many enterprises have not fully adapted their third-party risk management programs to the context of AI or the challenges of deploying complex systems like generative AI products. As a result, they often fail to subject AI vendors or their products to the same level of assessment undertaken for cybersecurity, leaving them blind to the risks associated with deploying third-party AI solutions.
The release of ChatGPT, a generative AI tool, almost a year ago sparked a boom in the technology. Other companies quickly followed suit with their own AI chatbots, including Microsoft's Bing Chat and Google's Bard. While these bots gained popularity and showcased the technology's capabilities, they also brought ethical challenges and raised important questions.
The survey, which received responses from 1,240 participants across 87 countries, found that 78% of companies rely on third-party AI tools, and 53% use these tools exclusively, without any in-house AI technology. Yet despite this widespread reliance, 55% of AI-related failures trace back to these third-party tools.
Alarmingly, the study also revealed that 20% of organizations failed to evaluate the risks posed by third-party AI tools at all. This underscores the importance of conducting thorough risk assessments when engaging vendors and deploying their AI solutions.
Triveni Gandhi, responsible AI lead for AI company Dataiku, emphasized the need for a more comprehensive evaluation of third-party tools, especially in regulated industries such as financial services. She pointed out the correlation between model risk management practices and responsible AI, suggesting that organizations should adopt similar practices to ensure responsible deployment of AI technologies.
To address these issues, the researchers recommend risk assessment strategies such as vendor audits, internal reviews, and compliance with industry standards. They also stress the importance of prioritizing responsible AI throughout the organization, from regulatory departments up to the CEO. Indeed, organizations whose CEO is actively involved in responsible AI reported 58% more business benefits than those with a less engaged CEO.
The survey further revealed that organizations with a CEO involved in responsible AI are almost twice as likely to invest in responsible AI as those with a hands-off CEO.
In conclusion, while third-party AI tools can play a crucial role in organizational AI strategies, the risks that accompany them must be addressed. Thorough risk assessments, vendor evaluations, and compliance with industry standards are vital steps toward responsible AI deployment. Organizations should also prioritize responsible AI from top to bottom, with the CEO taking an active role in driving the adoption of responsible AI practices.