In September 2016, Salesforce founder and CEO Marc Benioff informed employees, customers, and investors that Salesforce would become an AI-driven company. Earlier that year, Microsoft had released its Tay research chatbot through a Twitter account. Microsoft shut Tay down after only 16 hours because it began mimicking the deliberately offensive behavior of other Twitter users; Microsoft had not equipped the bot to recognize such inappropriate behavior.
With chatbots among Salesforce's most promising customer service technologies, Kathy Baxter, then Principal User Researcher, wanted to understand what went wrong with Tay and how that type of AI-enabled system behavior could be avoided at Salesforce. Baxter dove deeper into the nascent field of AI ethics, including how to educate fellow employees about the ethical issues associated with AI and how to guide product engineering teams in designing technology ethically.
In April 2018, Baxter met Richard Socher, then Salesforce’s chief scientist. The two began discussing the cross-functional role ethical AI could play with Einstein (Salesforce’s AI technology across its sales, service, marketing and commerce offerings). By August 2018, she became Salesforce’s architect of ethical AI.
An “Architect” for Ethical AI
The role of an ethical AI architect is nascent but increasingly important in today's post-pandemic 'success from anywhere' world. Many companies are still in the early stages of their AI and automation journeys, and few have hired someone to internally evangelize the importance of ethical AI practices. However, an organizational leader with a dedicated focus is needed to create a sustainable practice that can scale across a company.
With an architect of ethical AI at the helm, Salesforce quickly set out to establish a framework for ethical AI, which led to the creation of its five Trusted AI Principles. Baxter worked with leaders throughout the company to publish a charter that specified how to turn those principles into practice.
It's important to hyper-contextualize what ethical AI means in the setting of a specific product, a specific type of prediction, or a specific application scenario. To address this, Baxter and teammate Yoav Schlesinger gave employees context tailored to their particular work situations. Baxter commented:
"With specific contextualization, people more easily understand how ethical AI works in practice, and the role they can play in creating ethical and responsible technology from the start. The idea is that this should not be an afterthought, but instead be infused into the genesis of a product, feature, or team from day one."
Today, Salesforce's Ethical AI Practice not only advises internal working groups but also guides customers of all sizes on how to create ethical AI practices within their own organizations. For example, the Chief Information Officer of a large company asked for insights about how to create, and pitch the need for, an ethical AI team to her C-suite. The team helped the CIO codify how a lack of ethical AI practices could harm the company's brand. They also helped detail the skill sets necessary for this type of work, and how to find and interview the few people with experience in the area.
"Today, ethical AI concepts and practices are akin to the level of understanding that industry had about cybersecurity in the 1980s, before cybersecurity practices evolved to where they are now," said Baxter.
Scaling the Impact of the Ethical AI Effort
The Salesforce website for AI Ethics summarizes the company’s effort in this area as follows: “We deliver tools to our employees, customers and partners for developing and using AI responsibly, accurately and ethically.”
Salesforce's ethical AI team needed a strategy to amplify its efforts and propagate its impact internally and across the external customer base. It scales its efforts through the company's Office of Ethical and Humane Use, an organization within Salesforce that works across product, law, policy, and ethics to develop and implement a strategic framework for the ethical and humane use of technology across the company.
The formula is one that other companies can emulate. Scaling an ethical AI practice can be achieved through three mechanisms:
- Engage: Conduct systematic outreach to all employees, including new hires.
- Advise: Serve as advisors to product and data science teams on practical ways to identify and address ethical issues associated with their projects.
- Adopt: Identify methods and practices to use internally to support ethical AI practices, and drive their adoption across teams and employees.
Baxter argues:
“Everyone in a company needs to have an ethical mindset. Each employee has to have a sense of ‘what is my responsibility for Ethical AI practices’. Ethical AI teams have to propagate a sense of responsibility for this to everyone in the company and to their customer base.”
New hire orientations, Baxter believes, should include sessions dedicated to the ethical and humane use of technology, including AI, automation, and related practices. Salesforce even offers bite-sized training modules on Responsible Creation of AI (four modules) and Ethics by Design (three modules), accessible internally and for free externally through the company's Trailhead learning platform. For internal employees, additional training modules reinforce best practices for the ethical and humane use of technology and the company's Trusted AI Charter.
In addition to these internal and external resources, Baxter, Schlesinger, and colleagues share articles about related topics to further contextualize the nascent field. These include AI in marketing, responsible chatbot design, and ethical considerations for AI in COVID-related back-to-work solutions and vaccine management, among others.
Ethical and Responsible Technology: An Organizational Imperative
Ethical AI teams must serve as ongoing advisors to product and data science teams. These teams engage in constructive dialogue through questions like: "How do we assess the degree of bias in the training data we are using, and in the model itself?" Many of these questions are intentionally broad; answering them is challenging and prompts further inquiry into the many ways an ethics-first design process might take shape.
Salesforce helps teams better understand the nature and degree of bias associated with the datasets they are using and the models trained on those datasets. It’s essential that ethical AI teams facilitate questions about how to make AI models more explainable, transparent, or auditable.
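To make that kind of bias assessment concrete, the sketch below computes one common check, the demographic parity gap: the spread in positive-prediction rates across groups. This is a minimal illustration of the general technique, not Salesforce's actual method; the column names `group` and `prediction` are hypothetical.

```python
# Minimal sketch of one common bias check: the demographic parity gap.
# Column names ("group", "prediction") are hypothetical, not Salesforce's schema.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Gap between the highest and lowest positive-prediction rates
    across groups; 0.0 means all groups see equal rates."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Example: binary model predictions (1 = favorable outcome) by protected group.
data = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "prediction": [1,   1,   0,   1,   0,   0],
})

print(f"Demographic parity gap: {demographic_parity_gap(data, 'group', 'prediction'):.2f}")
# Prints 0.33: group A receives the favorable outcome twice as often as group B.
```

A gap like this doesn't by itself prove unfairness, but it flags where a team should dig into the training data and model behavior before shipping.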
"Clarity and actionability stem from establishing well-defined, externally validated methods and practices for supporting decision making," said Baxter. "It's important to keep an eye on the great work other companies and organizations are doing; it's a small community, and it is important that we learn and share best practices."
How to Use “Model Cards” for Transparent and Ethical Reporting
As part of its corporate commitment to make AI models as transparent as possible, Salesforce uses "model cards" to document the performance characteristics of machine learning models and their associated training datasets, encouraging transparent model reporting.
In a Salesforce blog post listing all of the model cards published at Salesforce, Schlesinger explains:
“Model cards seek to standardize documentation procedures to communicate the performance characteristics of trained machine learning (ML) and artificial intelligence (AI) models. Think of them as a sort of nutrition label, designed to provide critical information about how our models work — including inputs, outputs, the conditions under which models work best, and ethical considerations in their use.”
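To illustrate the nutrition-label analogy, the sketch below renders a minimal, hypothetical model card as a small Python data structure. The fields and values are illustrative assumptions, not Salesforce's actual schema or a real Salesforce model.

```python
# A hypothetical, minimal model card as a Python dataclass; the fields and
# values below are illustrative, not Salesforce's actual schema or models.
from dataclasses import dataclass

@dataclass
class ModelCard:
    model_name: str
    intended_use: str            # what the model is for, and for whom
    training_data: str           # provenance of the training dataset
    performance: dict            # headline metrics, ideally broken out by subgroup
    ethical_considerations: str  # known risks, biases, and limitations

card = ModelCard(
    model_name="case-routing-classifier",
    intended_use="Route incoming support cases to the right queue; "
                 "not for decisions about individual people.",
    training_data="Historical support cases, 2019-2021, with PII removed.",
    performance={"accuracy": 0.91, "by_region": {"NA": 0.93, "EMEA": 0.89}},
    ethical_considerations="Accuracy drops on short, low-context cases; "
                           "keep a human review step for low-confidence routes.",
)

print(card.model_name, "accuracy:", card.performance["accuracy"])
```

The value of the format is less in any particular schema than in forcing the same questions (intended use, data provenance, subgroup performance, known limitations) to be answered for every published model.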
The growing practice of publishing model cards at Salesforce played an important role with Einstein Discovery, a product that brings trusted and transparent predictions and recommendations to everyone from data scientists to business users. The product team even launched a model card generator within the product, allowing customers to create model cards for their own models with the click of a button and be transparent with their own customers.
Baxter and Schlesinger aren't alone, but they are early examples of how companies can build teams of people to drive ethical AI at scale within companies, governments, and nonprofit organizations.
“Based on the number of students that have applied for internships on our team the last couple of years, I believe we are going to see a lot of people moving into this field. As AI regulation and customer demand for responsible technology grows, we will see many more companies building teams like ours,” said Baxter.
To meet this demand, companies will have to look both internally and externally, and hold themselves to higher standards to ensure the development of responsible technology. The costs of creating, selling, and implementing technology without a holistic understanding of its near- and long-term implications are far too great to ignore.