The Promise of AI with the World Economic Forum

Recently I had the pleasure of attending Dreamforce 2017 in San Francisco. Amongst the incredible line-up of speakers, awesome technology on show and amusing campground shenanigans, I came across a very insightful panel discussion on the promise of AI with the World Economic Forum. With panelists including Zvika Krieger, Head of Technology Policy and Partnerships at the World Economic Forum; Paul Daugherty, Chief Technology and Innovation Officer at Accenture; Liesl Yearsley, CEO of Akin.com; and Terah Lyons, Executive Director at the Partnership on AI, I just had to share some of the brain-tingling topics discussed on stage.

We are now well and truly in the depths of the Fourth Industrial Revolution – with amazing technology like Artificial Intelligence (AI), Machine Learning (ML), Blockchain and Neuro-technology taking centre stage, building on a digital revolution that has in a sense been going on for the past 50 years. We are more connected through global networks of processing and communication than ever before. But what does this really mean for us as a human race? Over time these technologies are becoming more accessible to the general population, and this panel tackled some of the hotly debated questions about the ethical and fair use of data, and whose responsibility it is to ensure it.

We are now into our third generation of AI. The first was essentially built around rules – “if this, then do that” – think Alexa, Siri and the like. The second generation is probabilistic, tackling unstructured problems where you don’t really know why the machine made the decision it did; it’s still quite limited, because you have to give the machine a specific problem and an enormous amount of data to solve it. The third generation focuses on solving complex problems with a limited amount of data, often in an ever-moving state – over time these machines will take on more decisions and drive entire organisations.
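To make the first two generations a little more concrete, here is a minimal, purely illustrative sketch – the handler names and keyword weights are my own invention, not anything discussed on the panel. A first-generation assistant follows hand-written rules, while a second-generation one scores an intent from learned weights and can’t really explain why:

```python
# Illustrative only: contrasting rule-based (gen 1) and
# probabilistic (gen 2) intent handling in a virtual assistant.

def rule_based_intent(utterance: str) -> str:
    """Generation one: explicit 'if this, then do that' rules."""
    text = utterance.lower()
    if "pizza" in text:
        return "order_pizza"
    if "weather" in text:
        return "get_weather"
    return "unknown"

# Hypothetical learned keyword weights standing in for a trained model.
INTENT_WEIGHTS = {
    "order_pizza": {"pizza": 2.0, "hungry": 1.2, "order": 0.8},
    "get_weather": {"weather": 2.0, "rain": 1.5, "umbrella": 1.0},
}

def probabilistic_intent(utterance: str) -> tuple[str, float]:
    """Generation two: score every intent and return the most likely.
    The weights came from data, so the 'why' behind a score is opaque."""
    words = utterance.lower().split()
    scores = {
        intent: sum(weights.get(w, 0.0) for w in words)
        for intent, weights in INTENT_WEIGHTS.items()
    }
    best = max(scores, key=scores.get)
    total = sum(scores.values()) or 1.0
    return best, scores[best] / total  # crude confidence estimate

print(rule_based_intent("Order my pizza!"))         # order_pizza
print(probabilistic_intent("Will it rain today?"))  # ('get_weather', 1.0)
```

Third-generation systems go further still – learning from limited data in constantly changing environments – which is much harder to capture in a few lines.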

Hype or Reality

There was no doubt amongst the panelists that AI was one of the most hyped technologies in the market today, but also that it was far more real than futuristic. We have never seen a technology move as fast as AI in impacting business and society as a whole, and it’s just getting started! Research into the macroeconomic impact of AI found a 38% improvement in productivity, driving an additional $20 trillion in increased economic output across the 12 largest economies in the world. Unlike other economic drivers, AI is unique in that it both allows labour to scale more dramatically and helps capital increase in value over time – ultimately AI multiplies both labour and capital to drive greater output.

The Future Workforce

The reality is, we cannot prepare for what is coming. Many futurists looking 50 years ahead talk about the singularity – the hypothesis that the invention of artificial super-intelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilisation. But there’s no evidence to suggest that we would ever actually step in and stop the evolution of technology, so instead we need to start planning for the values these systems will hold, how they will make decisions, and whether those decisions will be better than ours. We need to decide whether to give machines our values, knowing that the values of our society are flawed by the existence of sexism, racism, ageism and more. AI keeps moving up the skills chain and no profession is immune to disruption – we need to consider how humans will find meaning in a world where we do half the work we used to, and what we will do with that extra time to stay fulfilled.

The Role of Government

Governments have the incredibly important role of protecting their citizens – a particularly challenging one in a landscape where they don’t fully understand the current state of technology. Governments around the world need to get a better handle on how technology interacts with their citizens, and there needs to be more cross-talk between the parts of civil society working to accelerate these technologies and the governments formed to protect them. In practical terms, this is starting to materialise in fellowship programs designed to bring technologists and expertise in from industry, and in the introduction of regulatory systems for self-driving vehicles, unmanned aircraft and the like.

But is the talk of protection and regulation enough? Will it truly work for AI as it has for other technologies introduced in the past? The panel were split on this topic, and rightly so. Governance is no longer just about government. As government lags behind, an increasing responsibility falls on Enterprise, on Startups and on Academia to make sure we are optimising the benefits to society and mitigating the unintended negatives.

In the past we replaced muscle work with technology and created programs to bring people in from farms and re-skill them for office work. Now we are talking about replacing cognitive work with technology – we need to look at how we elevate human ability. We have to consider the bigger picture and what we are doing to the human state. For example, a young boy tells Alexa “Order my pizza!” and she does. The same boy tells Alexa to “Shut up!” and she apologises. What is this teaching our children about how men and women interact with each other? As a society, we can’t just think about the financial bottom line. AI is the most influential technology we have ever seen, one that will make complex decisions on our behalf – what impact will its deployment have on the human bottom line?

Taking Accountability

We can get caught in a trap if we expect government to regulate these technologies – they are simply moving too fast. We need to take accountability more seriously than we have in the past. Leaders today have a responsibility to do the right thing. It’s fine to talk about machines making decisions, but at the end of the day a person is responsible – whether in business or in the household. Alexa didn’t order the pizzas – you (or your sneaky little children) did. We need to better understand the human responsibility surrounding AI, and put a renewed focus on transparency – black-box AI should never be used where we need explainable outcomes in critical use cases such as criminal sentencing. Honesty is non-negotiable – a growing number of companies have been using AI to break laws and game the system, and that has to stop. And when it comes to discrimination, we need to look at how we use AI to reduce discrimination and move towards fairness, rather than perpetuate the discriminatory behaviours we already have.
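As one illustration of what “explainable” could mean in practice, here is a minimal sketch of my own – the factors and weights are entirely hypothetical, not any real sentencing or risk model. A transparent scoring function can report exactly which factors drove its output, something a black-box model cannot do:

```python
# Hypothetical example of an explainable decision: every factor's
# contribution to the outcome can be inspected and challenged.

FACTORS = {  # illustrative weights only, not a real risk model
    "prior_offences": 0.5,
    "age_under_25": 0.25,
    "stable_employment": -0.25,
}

def explainable_score(profile: dict) -> tuple[float, list[str]]:
    """Return a score plus a human-readable trace of why."""
    score, trace = 0.0, []
    for factor, weight in FACTORS.items():
        if profile.get(factor):
            score += weight
            trace.append(f"{factor}: {weight:+.2f}")
    return score, trace

score, reasons = explainable_score(
    {"prior_offences": True, "stable_employment": True}
)
print(score)    # 0.25
print(reasons)  # ['prior_offences: +0.50', 'stable_employment: -0.25']
```

The point is not the model itself but the trace: anyone affected by the decision can see, and dispute, what went into it.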

The reality is, humans are not always the best decision makers. In a recent study, participants were asked whether autonomous cars are better at making decisions – specifically, in the hotly debated scenario where the car must choose between killing a group of five pedestrians or diverting towards a single pedestrian. People categorically said the car should make the decision and was better suited to doing so. But when asked if they would buy a car that would choose to kill them over five others, the overwhelming response was “Hell no!”.

The Role of Enterprise

The role of companies is not just to drive the adoption and execution of AI, but to diversify the perspectives of the technologists whose responsibility it is to develop it – beyond just statisticians, data scientists and engineers. There is a critical need to involve social scientists and people from all backgrounds in the process to ensure AI technology develops for the greater good. This is a topic that has had a lot of discussion but very little action – more needs to be done to operationalise the philosophical questions so that engineers can bake solution sets into the technology and governments can address the challenges from a policy perspective.

It’s also critical that organisations understand the impact on their workforce and their consumers – it’s easy to see the jobs that may be eliminated, but harder to imagine the new jobs and new ways of working that will emerge over time. Ever heard of a Personality Trainer for chatbots? Me neither – but it’s one of those emerging careers that organisations need to be ready for, and start re-skilling employees towards. The panel talked about the “missing middle” – we tend to focus on what machines can automate and what people can do, but the future lies in the middle, in what people and machines can do together. We need to focus on creating jobs that are more meaningful – in some cases people will help machines be more effective, and in others machines will help people be more effective. It’s time to re-imagine what your business looks like, and how to invest in your people so they have the skills to be successful.

Perhaps the biggest challenge for organisations moving forward is how to address all of the issues above while remaining accountable to shareholders. A highly successful campaign that sells a million credit cards may not be such a good thing for society, but it’s a great thing for the bottom line. Enterprise needs a new way of measuring success, with a key metric focused on the human state – measuring what is happening in the lives of employees and customers as a result of AI. Just as organisations create “climate counterbalance projects” to offset the impact of their decisions, they need to reconsider how to succeed in a world where we might actually buy less pizza and eat more carrots!

In AI We Trust

People are more likely to trust a robot than a human, essentially because there is no person on the other side to judge, to react or to manipulate – there is a perception that there are no hidden agendas. We are wired like cavemen, designed to interact with a tribe of perhaps 12-20 people at most, and even in today’s society most of us really only interact with a small trusted group in our social networks on a daily basis. And yet we force people into fake interactions with strangers each and every day, whether it’s the call centre agent who wants to “make sure you’re getting the right product or service”, the personal trainer who takes an interest in your daily life, or the retailer who knows your favourite styles – but miss a payment and that personal interaction rapidly dissolves. Even the language of human-to-human communication is full of sludge: “I’m sorry for taking your time”, “I feel stupid for asking this”. With AI, we are perfectly able to accept that the machine is there to help us with no strings attached – we are immediately immersed in the interaction and at ease sharing intimate details about our lives in order to get a better outcome.

From a physical perspective, we have been forced to interact with a powerful computer using just our thumbs. The promise of human-computer interaction is exciting – we want to be able to talk to digital humans and use visual gestures to get the outcomes we want.

And when it comes to data, consumers want to be able to choose what is done with it. We want to choose who sees what and, more importantly, to revoke that access when we don’t get the level of service or interaction we expect. We need to give people more than just a simple opt-in when we ask them to give away their privacy.
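To sketch what “more than a simple opt-in” might look like – a purely hypothetical model, with purpose names of my own choosing – consent could be granted per purpose and revoked at any time:

```python
# Hypothetical sketch of granular, revocable consent (not a real API).
from dataclasses import dataclass, field

@dataclass
class ConsentLedger:
    """Tracks per-purpose data permissions instead of one global opt-in."""
    grants: set[str] = field(default_factory=set)

    def grant(self, purpose: str) -> None:
        self.grants.add(purpose)

    def revoke(self, purpose: str) -> None:
        self.grants.discard(purpose)  # e.g. after poor service

    def allowed(self, purpose: str) -> bool:
        return purpose in self.grants

consent = ConsentLedger()
consent.grant("style_recommendations")
consent.grant("payment_history_analysis")
consent.revoke("payment_history_analysis")
print(consent.allowed("style_recommendations"))     # True
print(consent.allowed("payment_history_analysis"))  # False
```

The design choice that matters is the default: nothing is shared unless a specific purpose has been granted, and every grant is reversible.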

But at what cost? Do people fully understand what it means to put their trust in AI? Organisations have a responsibility to educate people about privacy and machine objectivity – something is not objective just because a machine decided it. Individuals need to understand what types of data a machine has been trained on, how to interrogate the decisions machines make on our behalf, and at what threshold we have the power to step in and do so.

Ultimately, we have a responsibility to build AI that we can trust.

With all the challenges and concerns being hotly debated, it was great to finish on some examples of emerging AI that the panelists were most excited about. Top of the list were Education – re-inventing learning, particularly for mid-career workers who will be impacted by technology disruption; Healthcare – augmenting the knowledge of existing infrastructure and healthcare professionals to better diagnose and treat disease; and AI in the Home – addressing the 20 hours of cognitive load required to run a home (preparing food, making financial decisions, finding connections and love).

What I’m most excited about is that the Enterprise of the future is far more socially aware and accountable than ever before. I’m proud to work for an organisation that fights for equal pay, for marriage equality, for equal access to education, and that is prepared to take the unpopular (and sometimes less profitable) path when it’s the right thing to do. I’m heartened to see more and more organisations publicly speaking up against laws and policies that negatively impact their employees and customers, especially as many of these organisations are both developing and using emerging AI solutions – we need them to have a conscience to keep us on the right path. And I have hope that the open dialogue on morally bankrupt organisations continues to make them fewer and further between, because the last thing we need is for corrupt organisations to determine the future of AI.