Three People-Centered Design Principles for Deep Learning
Over the past decade, organizations have come to rely on an ever-growing number of algorithms to assist in a wide range of business decisions, from delivery logistics, airline route planning, and risk detection to financial fraud detection and image recognition. We are seeing the end of the second wave of AI, which began several decades ago with the introduction of rule-based expert systems, and the start of a third wave, termed perception AI (https://fortune.com/2018/10/22/artificial-intelligence-ai-deep-learning-kai-fu-lee/). It is in this next wave that a specific subset of AI, called deep learning, will play an even more critical role.
Like other forms of AI, deep learning tunes itself and learns by using data sets to produce outputs — which are then compared with empirical facts. As organizations begin adopting deep learning, leadership must ensure that artificial neural networks are accurate and precise because poorly tuned networks can affect business decisions and potentially hurt customers, products, and services.
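The tuning loop described above can be sketched in a few lines. The following toy example is purely illustrative (a single linear "neuron," made-up data, and a hand-rolled gradient step, none of which come from the article): the model produces outputs, compares them with known target values standing in for empirical facts, and adjusts its parameters to shrink the gap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data set: inputs x and the "empirical facts" y they should map to
# (here y = 2x + 1 plus a little noise -- an assumed relationship).
x = rng.uniform(-1, 1, size=(100, 1))
y = 2.0 * x + 1.0 + rng.normal(0, 0.05, size=(100, 1))

w, b = 0.0, 0.0   # untuned parameters
lr = 0.1          # learning rate

for _ in range(500):
    pred = w * x + b                  # network output
    err = pred - y                    # gap versus the empirical facts
    loss = float(np.mean(err ** 2))   # mean squared error
    # Gradient step: nudge the parameters to reduce the loss.
    w -= lr * float(np.mean(2 * err * x))
    b -= lr * float(np.mean(2 * err))

print(w, b)  # the tuned parameters should approach 2 and 1
```

A poorly tuned network, in this picture, is one whose loss never shrinks, or shrinks only on unrepresentative data, which is exactly why the quality of the data sets used for tuning matters so much.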
The Importance of People-Centered Principles for AI
As we move into this next stage, the key question for organizations will be how to embrace deep learning for driving better business decisions while at the same time avoiding biases and potentially bad outcomes. In working with numerous clients across multiple industries, we have determined patterns that can help companies reduce error rates when implementing deep learning initiatives.
Our experiences working with organizations in these early stages of AI adoption have helped us create design principles for a people-centered approach to deep learning ethics, with a strong focus on the data used to tune networks. A people-centered design approach helps address both the short-term concern that poorly trained AI networks will produce spurious solutions and the long-term concern that machines might displace humans in business decision-making.
When we talk about people-centered design, we mean principles that benefit all individuals and communities, rather than a few individuals at the expense of others. Our people-centered design principles support the goal of using data to inform people and give them more opportunity in their work. In our experience, there are three key principles organizations need to hold up as pillars for any AI implementation:
- Transparency. Wherever possible, make the high-level implementation details of your AI project available to everyone involved. In the case of a deep learning initiative, people should understand what deep learning is, how it works, including how data sets are used to tune algorithms, and how deep learning may affect their work. When intellectual property or other sensitive information might be exposed, an organization may want to convene a panel of external stakeholders, keeping in mind that certain data sets may still need to be withheld because of sensitivity or privacy concerns.
- Explainability. Employees within an organization and external stakeholders, including potential customers, should be able to understand how any deep learning system arrives at its contextual decisions. The focus here is less on how the machine reached its conclusions, as deep learning often cannot be explained at that level of detail, and more on what method was used to tune the algorithms involved, what data sets were employed, and how human decision-makers chose to act on the algorithm's conclusions.
- Reversibility. Organizations also must be able to reverse what a deep learning effort ...