AI: Where It’s At & Where It’s Headed

Twenty years on from Steven Spielberg’s A.I., the reality of artificial intelligence is only just becoming clear. According to Wired UK’s Matt Burgess, the technology has the potential to change our lives in many ways – if we develop it properly. Matt told Tobias Gourlay how it could make our jobs better and why we don’t need to worry about a Terminator-style apocalypse for now…

Let’s start with your working definition of AI, Matt…
There are a lot of different interpretations of artificial intelligence (AI) out there, and it can get super contentious for researchers trying to develop a definition. I think of AI as the automated processing of tasks by machines that work in a way similar to how we humans do.

And this automated processing is already embedded in our everyday lives, right?
Yes, I think that’s one of the really interesting things that a lot of people maybe don't realise: Facebook’s news feed is organised by AI; email spam filters are controlled and regulated by AI; Netflix’s recommendations are AI-based. There are a lot of use cases already out there, but they tend to impact our lives in fairly superficial ways. If you get a bad recommendation on Netflix, it shouldn’t alter your daily life too much. What we’re going to see in the future is AI embedded in the systems and structures around the way we interact with companies and governments. That’s where AI is going to become a lot more contentious.

What does the cutting edge of AI look like today?
It’s probably the big tech companies working on early use cases of self-driving vehicles – the caveat being that these are still trials and experiments, and there are lots of controls around the AI. They’re at the data gathering stage, and the prospect of a self-driving car being set free on an open road is still some way off.

Would you say we’re developing and deploying AI responsibly right now?
I think we're really behind on AI regulation. By its very nature, the law is always slower than the technology it governs, but a lot of big AI developments have happened over the last 20 years and lawmakers have only recently started talking about AI. In the last few weeks, politicians in Europe have proposed regulations to limit the use cases of high-risk AI. It’s pretty much the biggest and broadest piece of AI legislation proposed yet, but it’s going to be several years until we see it actually come into force. With this kind of retrospective approach, there’s an ongoing risk that huge deployments of AI that could endanger people’s liberties or rights are made before any laws exist to control them.

What could we be doing better?
Regulation is the big issue. At a smaller level, we could be auditing the AI systems we already have for fairness. For example, MIT researchers have found facial recognition systems to be severely wanting in their ability to accurately identify the faces of people of colour. I’d say some of that research has actually forced technology companies to improve their systems, so holding AI to account through auditing is one thing we should be doing more of. There should be a big onus on the tech companies developing AI to put in proper mechanisms to show that AI works in fair and transparent ways, and that the ethical implications of the technology are being considered. AI should be built to work for everybody, not just seen through the lens of big tech firms.

Where's AI headed next? Any looming breakthroughs we should be looking out for?
I think one of the big things we’ll see over the next few years is the development of medical AI. There are already plenty of case studies and examples of academics and healthcare professionals using AI in laboratory conditions to look at medical scans and images, and diagnose patients. I think they've been able to prove that AI can detect types of cancer as well as professionally trained doctors can. That sort of system could be one of the first AI use cases to really make a big impact in terms of benefiting society and helping out healthcare professionals. The medical world is ripe for this kind of disruption, which would enable doctors and medical professionals to spend more time with patients. The AI technology here could save the medical world a lot of time and training. I’m wary that this kind of AI has been predicted to arrive quite quickly in the past and there have been complications, but I do think we’re going to see these systems rolled out into real-world settings in the next few years.

"One of my biggest worries is the impact AI can have on the surveillance of people by private companies using things such as facial recognition."

Is this the most exciting area for AI right now?
Yes, probably. In this sphere there’s also huge potential for AI to analyse a lot of data and look for new drugs and ways to treat people, and this would hopefully have good results for society as a whole.

What worries you about AI right now?
One of my biggest worries is the impact AI can have on the surveillance of people by private companies using things such as facial recognition. I’d also be concerned about employees having their productivity analysed by AI systems, or anyone working remotely and having AI software that checks on their behaviour. We’re seeing this sort of thing happen already without AI. AI could supercharge the process with its ability to crunch data and really limit our potential to live privately.

What’s the solution? Is it regulation and control again?
With facial recognition, there are lots of ideas about individuals, for example, wearing t-shirts or face paint that can confuse AI cameras, but these are really just sticking plasters on a much bigger problem. Handling the wider issue of how these systems are used by companies and governments does come back to regulation and control.

With contact tracing apps and vaccine passports, the pandemic has perhaps shown us how new types of technology can help us, but it has also shown that we need to think carefully about treating technology as a solution to our problems. With a lot of the bigger problems we face as a society, technology is not going to give us an easy, single answer. You need to consider these problems from the perspectives of as many different groups as possible. Any group that’s going to be affected should be involved in the discussion.

Where do you reckon AI might be in five years’ time?
I'm hopeful it’ll be having a good and positive impact on our working lives. There’s a lot it can do to make our jobs better. It can automate some of the most boring, tedious processes we have to go through, potentially allowing those of us in office-based roles to focus on things like interpersonal connections and creativity.

So AI doesn’t mean robots taking our jobs and mass redundancies?
The world of work is going to change a lot, but I don’t think it’ll be a case of lots of robots coming to take our jobs. We’ll mostly work collaboratively with AI. For example, employment lawyers in London are now using AI to scan contracts for clauses like NDAs. The contracts still go to humans for review, but the AI spares lawyers the drudgery of sitting and reading hundreds of pages of contracts. That’s a pretty good vision of how AI and humans can work together. There will be some job displacement – in the delivery industry when self-driving vehicles arrive, for example – but AI is a chance to reshape work so we can free up some of our time for more creative things.

But some jobs could be at risk?
AI’s going to have different impacts on different industries. For now, I’d say AI is not in widespread use and, where it has been deployed, it’s been in limited use cases and it’s not ready to replace human workers. The goal of creating an AI that can do every single task a job role requires without being taught and trained each time is still quite a long way off. ‘Bringing context’, for example, is something that’s very human about the way we work. AIs at the moment are self-contained and have very little awareness of the world around them. If you had an AI system stock-picking in a warehouse, it might be good at finding the right item, but it would only really understand what was in its field of view. If its human supervisor disappeared for a toilet break or something like that, it wouldn’t really understand that. I’ve also seen research showing that a lot of the businesses that have deployed AI so far have failed to see any real-world return on their investment.

"The world of work is going to change a lot, but I don’t think it’ll be a case of lots of robots coming to take our jobs. We’ll mostly work collaboratively with AI."

Can machines ever match us for intelligence?
The goal of lots of AI research has essentially been to create humans in machine form. Whether we can do this remains to be seen – which I know is a bit of a cop-out answer, but there’s still a long time in which it could happen. Today, a lot of research is looking at creating a general AI – i.e. one that can do anything we ask it to – but that kind of system is still a very long way off, probably decades away.

Should we ever let machines match us for intelligence?
I think it's good that people – especially AI researchers – are talking about this. If we start to think about these issues and how to tackle them now, we can potentially get ahead of them if they become a reality. The problems and challenges that AI poses to society are something we really need to be focusing on and thinking about now.

How serious a concern is a Terminator-style apocalypse?
There’s no doubt AI systems are going to become much more capable. That’s when we really need to understand what could happen in this space and then make sure we have systems in place to guard against this sort of thing. I think preparing for a worst-case scenario is never a bad thing.

In The Terminator, the AI becomes self-aware and the robots rebel in the early 2020s… How close are we to that possible moment in our world?
These sorts of sci-fi visions have an outsized influence on our understanding of new technologies. Right now, I’d say just look at the leading voice assistants out there. They’re easily confused and can be mistakenly triggered by, for example, voices on television. I don’t think we can compare them to the sorts of sci-fi AIs we’ve seen elsewhere.
Let’s finish with a more utopian vision then, Matt… How could AI improve the world in the long term?
AI has huge potential to do a lot of good in the world. I’d like to see it being used to unlock new areas of knowledge. As discussed, it has huge potential to find new types of drugs, as well as analysing the world around us in ways we've not seen before. It allows us to look at a lot more data and understand that data in ways we otherwise couldn’t. For example, AI can look at satellite images to understand our impact on the planet and could help with a lot of the challenges around the climate crisis. Analysing such huge data sets and finding patterns in them can unlock new insights and be incredibly beneficial for us.

Before we get to the sci-fi versions of AI – if they are even possible – we need to tackle the problems and challenges in the here and now. AI is created by humans and it doesn’t operate independently from us, so it’s down to us to consider and reflect on the sort of society we want to live in. It’s down to us to make sure AI is fair and used to benefit society rather than amplifying the inequality we already have.
Artificial Intelligence: How Machine Learning Will Shape The Next Decade by Matt Burgess is out now. Buy it here.
