CISI hosts launch of Investor Consensus on Responsible AI

Global investment heavyweights gathered at CISI headquarters in London on 17 October to launch a high-level consensus on what investors expect from their investee companies regarding the responsible development and deployment of AI.
by Jane Playdon, Review editor

Lori Heinel (pictured above right), global chief investment officer at State Street Global Advisors, chaired the event, which celebrated the third in a trio of achievements under the Lord Mayor’s Ethical AI Initiative – the first two being the broad reach of AI ethics courses, including our Certificate in Ethical Artificial Intelligence, and the Walbrook AI Accord.

The Investor Consensus on Responsible AI (ICRAI) was introduced to help investors and investees coordinate around the responsible development and deployment of AI. It aims to increase returns and reduce risks, and is based on input from investors with over US$30tn in assets under management or advisement – a significant proportion of the global total – many of whom attended the launch.

Also in attendance were the 695th Lord Mayor, Alderman Professor Michael Mainelli, Chartered FCSI(Hon), and Nicholas Beale, chair of the Ethical AI Initiative (both pictured above with Heinel); Sir Kenneth Olisa OBE, HM Lord-Lieutenant of Greater London and chair at Restoration Partners; Christine Chow, chair of the International Corporate Governance Network; Professor Chris Summerfield of Oxford University, technical director at the AI Safety Institute within the government’s Department for Science, Innovation and Technology; and Lydia Edmonds, senior investment partner at the government’s Office for Investment.

Responsibilities of relatively few AI providers

Explaining more about the need for the Consensus, Heinel – who had flown from her Boston HQ specially for the event – said that the relatively few AI providers have a responsibility to deploy the tech not only “in ways that provide the right societal benefits” but also in recognition of the “commercial and capitalistic sensibilities” arising from their dominance in the space today. And for businesses at different stages in adopting AI, “it's critical that the governance and oversight and end-to-end understanding of the upstream and downstream implications are vetted as these technologies are deployed”.

Short and not prescriptive

Nicholas Beale clarified that the Consensus is only two pages long and is “not meant to be a straitjacket” that imposes a set of values on investors. While it “doesn’t spell out every detail”, it is “a reasonable distillation of what most investors would like to see,” he said, and it isn’t constrained to a particular geography.

It aligns with the core responsibilities of investment managers in that it’s about achieving returns while understanding the risks, “in particular, systemic risk”, which is “super difficult” to diversify away from. The Ethical AI Initiative expects the Consensus to “evolve in cooperation with investors and people with deep technical insights, particularly the AI safety institutes,” he said.

He explained that ICRAI is also not tied to the current Lord Mayor, whose term finishes in November. Instead, it is a not-for-profit entity comprising two elements: a technical working group that liaises with AI safety institutes and an editing group responsible for maintaining the Consensus. The editing group’s next task is to create the ‘ICRAI Charter’, which will “define the terms of reference of an Investor Council on Responsible AI”. This council will “settle the rules” but will hopefully remain a “very lightweight coordinator of some very heavyweight investors”, said Beale. They hope to present the Charter at the Paris Summit in February 2025, the chief intergovernmental AI gathering of the year.

Beale was also keen to point out that one of ICRAI’s principles is likely to be “no sponsorship whatsoever”, because it is important to ensure the Consensus represents the investors and is not dependent on other interests that might be actively lobbying.

Influencing developers to build AI models that are safe for human use

Professor Chris Summerfield explained that the role of the AI Safety Institute within the government’s Department for Science, Innovation and Technology “is to equip the government with an empirical understanding of advanced AI”. This includes “primary research involving machine learning engineers … who evaluate advanced technologies as they arise … and write reports about the differing capabilities of these models and feed them back to developers in such a way that the developers are informed about where the potential risks may be arising with the technologies that they're building”.

They also engage directly with sector experts, he said, to understand what’s happening on the ground.

He identified two points of alignment between the goals of the Consensus and the AI Safety Institute – the danger of systemic risk referenced in the Consensus document and the “privileged access” and, therefore, influence that the AI Safety Institute has on developers “to encourage them to build models that are safe for human use”.

“The other major lever is investment in these companies themselves. The idea that the investment community can collectively decide to allocate funds to developers who behave and whose technologies are, as far as we can assess them, safe and fit for use – that's a new departure and something which, I think, is without precedent.”

Access public support through safe and responsible business

Lydia Edmonds explained that her team, based at No 10, works across Whitehall to engage all arms of government. They spend increasing amounts of time “talking to AI companies and companies that are investing in the infrastructure to enable AI”.

Reflecting Summerfield’s comments about access, she said, “Access to capital and access to computing are shaping this sector, and therefore, an initiative like this, which is evidence-based and focused on responsible deployment of capital, is very exciting and thoughtful, and I'm really keen that we are supporting it as best we can.”

She added that effective deployment of AI is very much contingent on consent and support from the public, which can only be accessed through “safe and responsible business”.

The importance of ISO/IEC 42001:2023

Regulation has a part to play in this, said Michael Mainelli, speaking of the “race to regulate” AI and drawing attention to ISO/IEC 42001:2023, an international standard for the ethical use of AI in organisations “that no one seems to have noticed”.

“We're very conscious that the UK public has serious misgivings about the deployment of AI,” he said. “What we're proposing doesn't cover some of the existential areas that need to be examined in the safety summits. It doesn't cover rogue actors. It simply covers responsible firms who want to look at themselves and their supply chain and see that they've got reasonable conformity assessment across that.” ICRAI largely but not exclusively promotes ISO/IEC 42001, he said, and “shows the commitment of the financial services sector to the responsible use of AI, not just directly, but through investee companies and the countries in which they invest”.

He highlighted the two other elements of the Ethical AI Initiative: the guidance on delivering ethics courses, such as the CISI’s Certificate in Ethical Artificial Intelligence, which has now been taken by around 6,000 people, including many global regulators, in over 60 countries; and the Walbrook AI Accord (named after the address of Mansion House, where the Lord Mayor lives), a collaboration aiming for safe, secure, ethical and sustainable AI, which has to date been signed by 38 countries.

To celebrate the high uptake of the Certificate in Ethical AI, Mainelli concluded his comments by presenting Aletta Ely, Chartered FCSI (pictured), chief of staff at J.P. Morgan AG and one of its latest achievers, with her certificate.

Think in systems, not silos

Sir Ken Olisa, a noted tech entrepreneur, banker, and philanthropist, concluded by drawing an analogy between climate change and AI, pointing out that they’re both systems. “Until climate change, we never had the chance to get people to think properly about the system we live in, but these things are all connected, and we should, therefore, be thinking about them collectively,” he said. He commended the Initiative and encouraged us all to keep the word ‘system’ at “the forefront of our thinking”.

We need a ‘Chapter AI’

One attendee, who invoked the Chatham House Rule, said that while we’re pushing for board oversight, the boards themselves are asking for help. “We investors have a duty while we're looking at systemic stewardship to support companies in finding ways to create the right kind of internal structure to oversee responsible AI, connecting what we call the top to bottom and back to front,” they said.

CISI chair Michael Cole-Fontayn MCSI agreed, noting similarities with boards regarding climate change: “We found independent directors didn’t have the knowledge or the skillset to challenge the executive, so a self-help group for non-executive directors was established: Chapter Zero. Chapter Zero, founded here, has mushroomed and spread worldwide now with various branches. Maybe there can be a ‘Chapter AI’.”

Over US$30tn AUM/AUA speaks volumes

Heinel concluded the launch by acknowledging the milestone of having drafted the Consensus, with the staggering numbers – over US$30tn AUM/AUA – behind it speaking “volumes”.

“This is a critical thing that we want to tackle together, and given the commitment of everybody around the table and on the call, I have every confidence that we will move forward with that.”

Jane is the editor of The Review.
She is a Professional Member of the Chartered Institute of Editing and Proofreading.

Published: 21 Oct 2024
Categories:
Integrity & Ethics
Wealth Management