"The 'Ethics of Code' is designed to protect the user and to ensure that tech giants, such as Sage, are building AI that is safe, secure, fits the use case and most importantly is inclusive and reflects the diversity of the users it serves," Kriti Sharma, VP of AI and bots, said at the company's recent Summit conference in Canada.
Sharma outlined the principles the company developed alongside its accounting chatbot, Pegg. These principles included the following:
1. AI should reflect the diversity of the users it serves and must not perpetuate stereotypes.
2. AI must be held to account, and so must users. Technology should not be allowed to become too clever to be accountable.
3. Reward AI for 'showing its workings'. An AI system that learns from bad examples could end up becoming socially inappropriate, so a reward mechanism must be built into training so that AI and robots align with human values.
4. AI should level the playing field, providing accessibility to those with sight problems, dyslexia and limited mobility.
5. AI will replace some jobs, but it will also create new ones through the robotification of tasks. If businesses and AI work together, people can focus on what they are good at: building relationships and caring for customers.
Sage has also announced a rolling program of BotCamps in the United Kingdom to teach people aged 16 to 25 basic bot and AI coding skills.