The adoption of advanced technologies such as Machine Learning (ML) has been rapid and widespread. More recently, ChatGPT, which relies on a subset of ML known as large language models, has proven both highly useful and potentially dangerous. Responsible Artificial Intelligence (RAI) is necessary to mitigate the technology's risks, including issues of bias, fairness, safety, privacy, and transparency. Yet RAI is by no means standard practice, and its adoption across organizations worldwide has so far been relatively limited.
As an industry leader in solutions for professionals, Wolters Kluwer has been at the forefront of embedding advanced technologies in its products, and the Wolters Kluwer Internal Audit team has played a key role in helping to develop a governance framework for RAI. Hear first-hand from Deep Nanda, AI Lead on the Wolters Kluwer Internal Audit team, about the work done and lessons learned in this critical new area of ESG (Environmental, Social, and Governance).
- What is Responsible AI?
- Why do we need Responsible AI programs?
- What role can auditors play in implementing Responsible AI?