Ethics is Messy

By John Sumser

Jun 21, 2021

Of course, most of our latest technologies — particularly those that leverage AI — miss the mark on safety and ethics standards. How can we expect otherwise?

Safety and ethics require a financial investment, at the expense of margin. At the outset of any new business venture, stakeholders tend to favor profit over costs, and growth over precautions.

It’s not malevolent, it’s just business.

Consider the history of manufacturing, automobiles, or aerospace. We eventually moved towards high ethical and safety standards — over the course of decades. Most steps in that direction occurred in reaction to adverse consequences, not through an intentional adherence to principles.

Fatalities, for example, propelled the aerospace and automobile industries. The number of airplane crashes declined by over 50% between 1989 and 2019. Between 1913 and 2019, annual automotive deaths per 10,000 vehicles fell from 33.38 to 1.41, a 96% improvement.

These sectors began improving safety features 20+ years into their life cycles. But the real improvements and government regulations came much later. The first Model T Ford rolled onto the road in 1908. Universities began researching ‘crash test dummies’ in the late 1950s, and the federal government passed car safety standards in the late 1960s.

In recent decades, our industrial-era industries seem to have settled into a generally accepted level of ethical and safety standards. This has lulled us into an unfounded sense of security. We assume that other industries and current innovations conform to those same standards.

However, our new ‘Intelligent Tools’ powered by AI, machine learning, and predictive analytics are not designed with those traditionally accepted (and expected) standards in mind.

Safety and Ethics in AI & Predictive Analytics

The ways we collect, share, and use data about human beings raise ethical and safety issues such as privacy, bias, and fairness. Concerned people from a variety of fields and industries have already begun asking fundamental questions about these issues. As the conversations evolve, twelve values seem to surface fairly regularly:

12 VALUES OF SAFE AI

  1. Privacy

  2. Accuracy

  3. Fairness

  4. Security

  5. Explainability

  6. Transparency

  7. Responsibility

  8. Repeatability

  9. Trustworthiness

  10. Maintainability

  11. Improvability

  12. Governability

These values of “Safe AI” represent the ideals for the AI and predictive analytics industry. An ideal AI system would be the best possible demonstration of each principle.

Following a natural course of 30 to 100 years, AI tools will probably comply with these values eventually. After all, our transportation machines, appliances, power tools, manufacturing devices, and even most weapons adhere to many of these ideals.

Given the prevalence of this new tech and the all-encompassing impact it has — and will have — on our lives, we may not want to wait for negative consequences to gradually evolve products toward “Safe AI.” Instead, organizations can begin incorporating these values into their development process today.
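
One hypothetical way to make that concrete is to treat the twelve values as an explicit checklist that every proposed feature passes through during design review. The short Python sketch below is an illustration under that assumption; the names and structure are invented for this example and are not a prescribed framework.

from enum import Enum

class SafeAIValue(Enum):
    """The twelve Safe AI values listed above."""
    PRIVACY = "Privacy"
    ACCURACY = "Accuracy"
    FAIRNESS = "Fairness"
    SECURITY = "Security"
    EXPLAINABILITY = "Explainability"
    TRANSPARENCY = "Transparency"
    RESPONSIBILITY = "Responsibility"
    REPEATABILITY = "Repeatability"
    TRUSTWORTHINESS = "Trustworthiness"
    MAINTAINABILITY = "Maintainability"
    IMPROVABILITY = "Improvability"
    GOVERNABILITY = "Governability"

def review_gaps(assessment):
    """Return the values a proposed feature has not yet addressed."""
    return [value for value in SafeAIValue if not assessment.get(value, False)]

# Example: a design review that has so far considered only privacy and security.
draft_assessment = {SafeAIValue.PRIVACY: True, SafeAIValue.SECURITY: True}
print([value.value for value in review_gaps(draft_assessment)])
# Prints the ten remaining values the team still needs to address.

Even a list this simple changes the conversation: the gaps become visible before a product ships, rather than after the consequences arrive.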

Just because something was unforeseen doesn’t mean that it was unforeseeable

To do this, businesses must first reject the widely accepted notion that unintended consequences are inevitable. ‘Unintended consequences’ is a passive-aggressive description that means, ‘we didn’t really want that to happen, but we never actually thought about it.’ Just because something was unforeseen doesn’t mean that it was unforeseeable. Our new era demands a deeper look at things that can be difficult to see.

Keeping ethics in mind is not easy. It requires a continual effort to be aware of the consequences of our actions and decisions. Even more challenging, an ethical stance requires that we think beyond our ability to imagine.

That undoubtedly sounds odd. Just how would we think beyond our ability to imagine, you might ask? The premise sounds impossible.

To fully imagine the consequences of our actions and decisions, we need to elicit and incorporate input from diverse perspectives, values, expertise, and lived experiences. Often, what is unimaginable to me is perfectly apparent to someone of a different race, gender, orientation, or ability. For an organization to develop Safe AI, it must include diverse voices in the design and implementation processes.

In this way, the ethics function can be baked into the foundations of an organization. The alternative is an ethical framework that operates like a governance or oversight function, which creates as many ethical dilemmas as it solves. Once people figure out the boundaries of such a system, their natural and professional instincts inspire them to push the envelope and find workarounds. Implementing ethics as an organic, foundational element, however, fosters a way of working rather than a design constraint.

Over the next few articles, we are going to take a deeper look into Safe AI values, how AI works, and the ways in which these competing concepts work in practice. Stay tuned, and keep looking for those unintended consequences.

John Sumser

Advisor

John Sumser is a Senior Fellow in Human Capital at The Conference Board and the Principal Analyst at HRExaminer, a company focused on pushing the boundaries of HR and HR technology through research and publishing a regular flow of content about the leading edges of HR practice in research reports, podcasts, and articles. John maps emerging technologies in an annual research report based on demos and quantitative surveys. Currently, Sumser is in the midst of a multi-year exploration of AI and analytics in HR.