The digital revolution has added a new dimension to the challenge of ensuring responsible behaviour

The issue of corporate ethics is never far from the business media headlines. Take the troubles embroiling former Nissan chair Carlos Ghosn, or the accounting problems at Patisserie Valerie in the UK, to name just two recent examples. Despite the best intentions and efforts of policymakers, legislators, boards and professional consultants, the corporate scandals keep coming.

Now, to further complicate matters, the latest developments in the digital revolution are adding a new dimension to the challenge of ensuring companies and their executives behave responsibly. Ioannis Ioannou, Associate Professor of Strategy and Entrepreneurship at London Business School, and Sam Baker, Monitor Deloitte Partner, suggest that, while the widespread introduction of AI and machine learning technologies can be a force for good, without the right approach there is a risk that the corporate ethics waters become even murkier.

As Ioannou and Baker note, there is widespread pressure from stakeholders for corporations and their employees to behave in a responsible manner. So much so, that such behaviour is no longer optional but critical to sustaining financial performance. The pressure is evident from all directions: from consumer dissatisfaction aired via social media; from the letter to CEOs from Larry Fink, Chairman and CEO of asset manager BlackRock, highlighting the need for companies to have a positive social impact; from regulatory measures to ensure good environmental practices; and from millennials who insist on working for responsible organisations.

“All of the material stakeholders of a company today insist on more responsible behaviour,” says Ioannou. “Part of that is down to the size of many corporations today. Take the world’s largest 100 economic entities – according to Global Justice Now, sixty-nine of those are companies, the rest are countries. With that kind of size comes a huge footprint, impacting on the environment, labour markets, consumers, touching the lives of more people, creating an elevated sense of responsibility and demands for accountability.”

Size is not the only issue. The digital economy is another critically important factor, adds Baker. Digital technologies create greater visibility for corporate actions and adverse outcomes, enabling people to link the two more easily. Furthermore, the speed and breadth of communication afforded to stakeholders allows them to make their views known more persuasively and forcefully, and to take action more effectively.


The impact of AI

And this is far from the only impact that the digital revolution is having on corporate ethics. Consider the introduction of specific technologies such as artificial intelligence (AI) and machine learning. “To some extent digitisation codifies decisions,” says Baker. “Artificial intelligence and machine learning probably take that a bit further and devolve some of those decision-making powers to machines. We are not just codifying a decision that we would make, we are codifying the way we make decisions as well.”

Ioannou argues that AI and machine learning have the potential to optimise decision making for the benefit of organisations, as well as for society more generally. “If these tools can fundamentally increase the quality, accuracy and timeliness of the information that goes into decision making, then we are going to have higher quality decision making,” he says.

“To take just one example, that higher quality decision making might allow companies to run their fleets more efficiently, optimise routes so that they use less energy and there are fewer carbon emissions, collect and report on relevant fleet information in a more timely manner, and keep an audit trail on how that information is gathered. On a bigger scale, these algorithms can process, analyse and understand information, reducing uncertainties and creating incentives, in a way that people cannot.”

In an energy market, for example, AI and machine learning can be used to make more accurate forecasts about electricity generation, how that might be affected by complex weather patterns, and the consequent impact on availability and pricing. There are enormous cost benefits, too.

However, while optimised decision making via AI has the potential to embed ‘automated’ ethical decision making into organisations, helping to prevent ethical considerations from being lost in the myriad of daily activities, the reality, as both Baker and Ioannou point out, is more complicated.

“Whether optimisation is about making things better at the macro level for society, or for individuals or organisations, a beneficial outcome still depends on a decision being a ‘good’ decision, from an ethical perspective,” says Baker. “Without absolute clarity on what is good or bad, we may run into difficulty.”

Unfortunately, the dividing line between good and bad is not always clear cut, both in the case of the framework within which AI-aided decision making occurs and in the case of the actual AI algorithms that optimise the decision-making process. Setting aside the possibility of organisations deliberately using AI for purposes that general consensus might construe as ‘bad’, or of programmers purposefully coding unethical AI in bad faith, there are still issues such as conflicts of interest and unintended consequences to contend with.

Baker offers the example of AI used to make more accurate decisions about the health of individuals. Such information might legitimately be used by a hospital to improve a person’s healthcare outcomes. But that same information might equally be used to deny the same person health-related insurance. What is ethical or not may ultimately depend on the more broadly agreed framework within which AI-enabled decisions are made.

Then there are the possible unintended consequences associated with the creation and deployment of the AI itself. “Think about a situation in which an algorithm optimises mobility within a smart city,” says Ioannou. “Yes, it can run the entire network and do so efficiently, reducing carbon emissions. But what if people then start using this mobility network even more, cancelling out the environmental efficiencies? And how many people are going to lose their jobs because the mobility network is now being run by an algorithm? That is why we need to think about this not only from a strict efficiency and cost perspective but by accounting for these outcomes in a broader social context.”

And even with the best intentions it is possible to build inherent biases into AI. The debate around bias in facial recognition technology is a good example of the challenges involved. So much so that Brad Smith, Microsoft President, recently suggested that laws should be introduced to govern this use of AI technology.


Ethical safeguards

That is why both Ioannou and Baker stress the need for safeguards to accompany the growing deployment of AI. “First of all we want to be clear on what we think is ‘good’ or ‘bad’, what is ethical, across different cultures, and that means organisations using and referencing widely accepted frameworks, such as the UN Sustainable Development Goals, the UN Global Compact, and the Universal Declaration of Human Rights, as guidance,” says Baker.

Education is also essential, both inside organisations at senior management level at the very least, and for the broader public, too. “Certainly the top leadership of the organisation has a responsibility to understand the AI and monitor it, in the same way as it has a responsibility to monitor an organisation’s culture and values,” says Ioannou. “The added complexity is that it requires specialised knowledge. But leaders are taking on a huge risk if they adopt these technologies without profoundly understanding them.”

Ideally, adds Baker, we need technology solutions that help communicate the basis and implications of AI-based decisions to those responsible, in an automated way, as that basis evolves. Similarly, if unethical decision making is to be detected and exposed, the broader public needs to be educated about these complex issues, providing a second line of defence outside organisations. The challenge here may be reaching non-digital natives, and those who have not benefited from studying new technologies as part of their standard school curricula.

And there is a strong case for organisations having to use AI within well-defined regulatory frameworks. “There is a role for government and other institutions in terms of regulation and governance,” says Ioannou. “For instance, look what happened when the UK government decided to mandate reporting on the gender pay gap. It shines a light on the issue. Why not on AI and algorithms, especially when there are risks such as inherent bias?”

Indeed, given the widespread adoption of non-financial reporting in areas such as sustainability, it is easy to see a future in which reporting performance against a range of ethical metrics and frameworks becomes commonplace. And, ideally, organisations will continually review those metrics and frameworks against a changing business environment, including the emergence of new technologies, to ensure they remain relevant and account for ethical considerations.

Despite the risks, and assuming appropriate safeguards are introduced, there are good grounds for optimism that adopting AI technologies may lead to more responsible behaviour by organisations. Not necessarily because AI-fuelled decisions will ‘automate’ the ethical content of decision making, or because AI optimisation makes ethical decision making easier or more likely, although that may prove to be the case. It is as much that growing interest in the possible impact of AI is likely to prompt greater and much-needed awareness and discussion of the related ethical issues.

“We have got to break the idea that this is a black box that does not really affect us, or that it is too complicated,” says Baker. “Because we need to care about what is being done on our behalf, and about the influences we are being subjected to that we are unaware of.”

“You want to avoid a situation where you have a global elite that understands AI algorithms and their complex technologies and controls them without explicit accountability,” adds Ioannou. “Do we want private companies and their algorithms deciding what is and isn’t hate speech on social media, for example, or making those kinds of decisions on a range of other issues? What are the principles and safeguards we want around these kinds of algorithms?

“It is not that we are going to rediscover ethics, but as a society we need to have a conversation about ethics in this particular context, a situation which is different from anything we have seen before.”
