Grey Matters ethical dilemma: Jane Doe – the verdict

Read the CISI's verdict and readers' comments on the ethical dilemma that appears in the October 2019 print edition of The Review

Read the October 2019 dilemma

This Grey Matter, published in the October 2019 print edition of The Review, presents a dilemma that arises when a firm replaces an employee with a chatbot. When things start to go wrong, senior managers must decide what to tell clients and the regulator.

Should you wish to suggest a dilemma or topic to be featured in a future Grey Matter, please contact us at ethics@cisi.org.

Suggested solutions and results

  1. If the firm does not consider that complaints about an AI programme fall under its escalation policy, the complaints should be assigned to Sammy, the person responsible for finding and implementing the programme, or to Alfie, the head of client services, and escalated to the FCA. (14%)
  2. The chatbot should be amended to inform clients that they are talking to a robot. However, Alfie has dealt with the complaints to the satisfaction of the clients, so the FCA need not be informed on this occasion. (9%)
  3. All clients should be written to, informing them that for the past two months, they have not been speaking with Jane, but with a robot. A message will be put on the website noting that the Jane programme has been running for two months, and it will be made clear on all future communications that clients are speaking with a robot. The complaints will be logged, in case of a visit by an FCA supervisor. (68%)
  4. Jane is clearly a liability. This is a failed experiment, and the company should either stop using the programme or hire a junior client services administrator to work alongside the bot and monitor its responses. (9%)

Responses received: 403

The CISI verdict

This dilemma highlights some of the difficulties associated with the increasing use of AI and chatbots. A key principle to remember is that just because clients are interacting with a bot does not mean company and professional values can be disregarded. In this case, Identity Finance should have been mindful of the values of honesty and transparency, which the CISI defines as follows:

Honesty: Have I been truthful about my action or decision … and told no lies or ‘half-truths’?

Transparency: Have I been clear and not misleading to any party involved?

The CISI Code of Conduct also sets out that professionals within financial services should comply with regulations and the law in both letter and spirit, which one reader astutely observed was a consideration within this dilemma: “By arguing that the chatbot is a program and not an employee, the firm is complying with the letter of its policy, but not the spirit.”

Our recommended solution is option 3, as it best encompasses the Institute's values of honesty, openness, transparency and fairness. Many respondents also suggested a mixture of all four options in their comments.

Selection of comments

“I would lean towards a mix of all four answers. By arguing that the chatbot is a program and not an employee, the firm is complying with the letter of its policy, but not the spirit. Clearly the policy is intended to ensure that the FCA is made aware of dissatisfied customers as a result of actions taken by the firm. In that respect, a human should be 'assigned' the complaints attributed to Jane. Equally, clients must be informed that the chatbot is indeed a bot – particularly as it carries human Jane's name and photo. Whether that needs to be retrospectively communicated to all clients is less clear cut – I would lean towards ensuring that going forwards, the bot is clearly labelled as such, but also to address this clearly in the response to any complaints that have been previously raised. Finally, the chatbot is a software tool that can act as a 'first line of defence' in handling customer queries – from a best practice perspective this should allow for escalation, monitoring and intervention by a 'second line', which in this instance should certainly be a human employee of the firm.”

“Someone in the firm has to be responsible for the actions/responses from the chatbot. I also feel that clients should be advised they are dealing with a robot and provided with an option to escalate to Alfie, if required.”

“This is a serious ethical situation and one that institutions will face in varying forms over the next few days, weeks, months and years. Three things to consider are: 1) The Board was apprehensive and allowed cost to rule the day when they worried about this very outcome. Oversight will be questioned by the FCA once this comes to light. 2) The use of Jane's tone and facial features can be considered as misleading, especially as it wasn't declared. 3) Having written policies that aren't implemented speaks volumes about ethics and standards. At the least, a report or call to the FCA based on their internal policy should have been pushed by either the Board or senior management.”

“Elements of all the options apply. By not disclosing that Jane is a chatbot, the firm has not been open with customers. If a chatbot is to be used going forward, it should only be used as a first stage, particularly if clients are making complaints. Every client should hear that they are about to speak to a bot – and be given the opportunity to speak to a human. I also think any client who has complained about Jane should be told the truth and that the system is being altered. Finally the complaints need to be recorded – possibly as complaints about the firm's systems.”

“Although option 3 is the one with the most relevance, I would also point out that the apology by Alfie, informing them that "the matter will be dealt with" is effectively a lie by omission. By not revealing the full nature of the matter which caused the clients to complain in the first place, he is, in fact, continuing to conceal the source of the client issues in a classic cover-up. When apologising to clients, the default position must be to acknowledge fully the clients' complaints, and the reasons that these arose. Half-truths or omissions will just make the situation worse, as concealment of the chatbot's existence just adds another potential layer of complaint. I would advise the Board to issue an apology 'over the head' of Alfie to all clients whether or not they have reported interactions with the chatbot, making it clear that Alfie's apology did not fully cover the circumstances. Only by doing this can the Board save face, and have any hope to preserve client loyalty longer-term.”

Published: 16 Dec 2019
Categories:
  • Fintech
  • Integrity & Ethics
Tags:
  • Code of Conduct
  • verdict
  • grey matters ethical dilemma
  • chatbot
  • AI
