In our fast-moving industry we all need to be more informed than ever to ensure a diversity of experts taking a seat at the tech table. In a series of two interviews, the interviewer – with a non-technical background – was challenged to find out more about AI, with the aim of reducing its apparent technical complexity. Her research culminated in two interviews with professors of AI working at Radboud University in Nijmegen, the Netherlands: this is the first, with Professor Marcel van Gerven. (Link)

Interview with Marcel van Gerven: ‘Algorithms and Superintelligence’

Marcel van Gerven is a professor of Artificial Cognitive Systems and Head of the AI Department at the Donders Institute for Brain, Cognition and Behaviour, based at Radboud University in the Netherlands. His current research focuses on the interface between AI and neuroscience: understanding how our brains work by studying the neural mechanisms that give rise to thinking. The following is a snapshot from our hour-long discussion on 18 January 2019.

Brain-Inspired Computing
LL: It must be a really interesting time to head up an AI department, given the current interest and growth around it. Your current research focuses on brain-inspired computing – what can you tell me about that?

MvG: I am Head of the AI department of the Donders Institute, a large centre which aims to understand the human mind – the human brain – and in the AI department we try to mimic these cognitive processes in intelligent systems. We try to develop new AI algorithms that are able to act in a natural and intelligent manner. By doing this we hope to develop new applications that will help society in general, and humans specifically.

LL: OK! Applications such as?

MvG: My group has focused on brain-computer interfacing: the idea of interfacing brains and machines – which may sound quite scary – but at the same time it has positive applications, such as trying to make blind people see again, or paralysed people walk again. To do this we need intelligent algorithms that are able to interpret brain data and translate it into commands for a machine, such as a robot arm.

AI in the Here and Now

LL: You’ve talked a bit about the AI systems that are being developed here. How can they help people now, at this moment, rather than waiting for the future?

MvG: I can give you a concrete example of one of the projects I am involved in, and it is a really exciting one. The aim of the project is to restore vision in the blind. We do this by combining sophisticated AI techniques with new techniques in neuroscience which allow us to stimulate the human brain. The idea is that if you are blind, you could use a camera to detect the environment, and what we are developing is a system which translates this camera image into a stimulation pattern that directly stimulates the brain. If we stimulate the brain, we can start to restore vision – that is the goal we are after and would like to achieve in the coming years.
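
Purely as an illustration of the kind of translation step described here – and not the group's actual method – a minimal Python sketch might downsample a camera frame into a coarse grid of on/off 'dots', the sort of low-resolution pattern that could in principle drive a stimulation device. The grid size, threshold and synthetic frame below are assumptions made for the example.

```python
# Illustrative sketch only: turning a camera frame into a coarse "dot" pattern.
# The grid size, threshold and synthetic frame are assumptions for this example,
# not the actual brain-stimulation pipeline used in the research.
import numpy as np

def frame_to_stimulation_pattern(frame: np.ndarray, grid: int = 16) -> np.ndarray:
    """Downsample a grayscale frame (H x W, values 0-1) to a grid x grid
    binary pattern by average pooling and thresholding."""
    h, w = frame.shape
    bh, bw = h // grid, w // grid
    # Average pooling over non-overlapping blocks.
    pooled = frame[: bh * grid, : bw * grid].reshape(grid, bh, grid, bw).mean(axis=(1, 3))
    # Each "dot" is on if its block is brighter than the frame's mean brightness.
    return (pooled > pooled.mean()).astype(np.uint8)

if __name__ == "__main__":
    # A synthetic 128x128 frame with a bright rectangle standing in for an object.
    frame = np.zeros((128, 128))
    frame[40:90, 30:100] = 1.0
    pattern = frame_to_stimulation_pattern(frame, grid=16)
    print(pattern)  # 16x16 array of 0s and 1s: the low-resolution "dots"
```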

LL: So does that mean someone can actually ‘see’, or is it a stimulus they get?

MvG: They will see, but at a low resolution – if they look at you, they would see a number of dots that indicate your face, for instance.

LL: And will they see that in their eyes or within their brain?

MvG: In their brain, so we bypass the eyes and directly stimulate the brain, and I think that's a very valuable application of advanced technology.

LL: And how many people do you feel you’ll be able to help and in what timescale?

MvG: Of course this is a long-term process and we are currently performing initial tests, but you need approval to implant patients (some people were implanted in the 1980s and regained some sort of vision, to a very low degree, and we're trying to push this forward), and of course you need to go through the pipeline of getting medical approval and so on. I think if we implant patients in a couple of years then we'd definitely have a working system, but to then get this to market would probably take a matter of 10 years.

Algorithms
LL: I wanted to check my understanding of what an algorithm is. The development of artificial intelligence is about machine learning – it's not about killer robots, right?

MvG: True!

LL: So an algorithm is the same as a set of instructions – we could illustrate this by making paper planes, perhaps? We each fold one based on certain rules we learnt when we were young. Possibly your rules will be different from mine (a British paper-plane variant).

They fly the planes …

MvG: Very impressive!

LL: (Not so impressive, that's a Dutch design win!) I wanted to show that the output is not the same as the outcome – the learning a child goes through when making a plane can reflect the types of learning that a computer goes through. Is that right?

MvG: I would say there's a nice analogy between making these planes and building intelligent systems. Indeed, there's a set of instructions, and the final outcome is a machine that is able to fly. In AI we do the same: we define a set of instructions, and the hope is that we can build intelligent machines. Of course, if you want to build these planes you need to know the principles of aerodynamics, which is why these planes fly. If you want to build intelligent machines, you need to know the principles of intelligence. I think there are different fields which can make a large contribution: on the one hand there is mathematics, which allows us to formalise these rules; on the other hand, we can also gain insight by studying human brains – biological organisms are, after all, the only existence proof of a truly intelligent system. We can build these algorithms, but they are nowhere near what humans or animals can do. Decreasing this gap is the ambition of my department.
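
To make the 'algorithm as a set of instructions' idea concrete, here is a minimal, purely illustrative Python sketch (not the department's work): a learning algorithm really is just a short list of instructions repeated over and over – look at an example, measure the error, adjust. The data points and learning rate are made up for the example; the loop recovers the rule y ≈ 2x from the data.

```python
# Minimal illustration of an "algorithm as a set of instructions":
# fit y = w * x to some example data by repeatedly adjusting w.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, roughly y = 2x

w = 0.0                # initial guess
learning_rate = 0.05

for step in range(200):                      # instruction: repeat many times
    for x, y in data:                        # instruction: look at each example
        error = w * x - y                    # instruction: measure the error
        w -= learning_rate * error * x       # instruction: nudge w to reduce it

print(f"learned w ≈ {w:.2f}")                # close to 2.0 - the rule was learned from data
```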

Algorithmic Bias
LL: I am just curious to find out your opinion about biases. I have used my background, my childhood, to produce this plane; you will have done the same. If we are programming algorithms, surely environment and biases will come out in the training data and be reflected in the outputs and outcomes, if we are training computers to produce results for us?

MvG: The algorithms as such are objective; they don't have these biases. However, the machine learning algorithms we build need to be fed with lots of data; these data are used to detect patterns, and these patterns can be used to generate decisions. But the data that are fed to these systems can already be very biased, so it's the biased process of data collection that is reflected in the outputs of those systems. A simple example: in text mining, a concept like 'nurse' is related more closely to women, whereas the concept of a 'surgeon' is related more closely to men. It's in the data, and those associations are still there. We need to remove those associations to build better decision-making systems. That being said, these associations reflect the biases of humans, so even before the advent of machines, humans were making decisions based on these implicit biases. I think the nice thing is that we can now make these biases explicit and remove them, by identifying those correlations and figuring out whether they really reflect reality or reflect social constructs.
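
As a purely illustrative aside, the sketch below uses made-up toy word vectors (not a real trained model) to show the kind of association described here: 'nurse' sitting closer to 'woman' than to 'man' in an embedding space, and how such a bias can be made explicit and projected out – a simplified version of the 'hard debiasing' idea from the research literature.

```python
# Illustrative sketch of data-driven bias: in word embeddings learned from text,
# "nurse" can end up closer to "woman" than to "man". The vectors below are
# made-up toy values, not a real trained model.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

vectors = {
    "man":     np.array([ 1.0, 0.1, 0.3]),
    "woman":   np.array([-1.0, 0.1, 0.3]),
    "nurse":   np.array([-0.7, 0.8, 0.2]),   # leans toward "woman" in the toy data
    "surgeon": np.array([ 0.8, 0.7, 0.2]),   # leans toward "man" in the toy data
}

print(cosine(vectors["nurse"], vectors["woman"]),
      cosine(vectors["nurse"], vectors["man"]))   # biased: closer to "woman"

# Make the bias explicit and remove it: identify a "gender direction" and
# project it out of the profession words (a simplified debiasing step).
gender_dir = vectors["man"] - vectors["woman"]
gender_dir /= np.linalg.norm(gender_dir)
for word in ("nurse", "surgeon"):
    v = vectors[word]
    vectors[word] = v - (v @ gender_dir) * gender_dir

print(cosine(vectors["nurse"], vectors["woman"]),
      cosine(vectors["nurse"], vectors["man"]))   # now equidistant
```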

Algorithmic Audits
LL: Cathy O’Neil, in her TED talk, referred to algorithmic audits, where companies check the integrity of their data. Do you think that's feasible, and do you think companies are doing that?

MvG: I think it's feasible, yet very difficult. It's very important, and I don't think companies are doing this: they make use of large data sets, but they are not scrutinising them in the manner we'd want them to be scrutinised. The same holds for governments. The Dutch government is making use of the Systeem voor Risico Indicatie, a system for assessing the risk that people will commit fraud; these systems are also fed with lots of data, but I don't think – actually, I know – that not a lot of time and effort has been spent in making sure that these data are not biased themselves.

LL: Should the data integrity check be embedded in law, or at least in a strong moral, or ethical code?

MvG: I fully agree, and I think the big difficulty we have now is that we have these very advanced systems which can detect complicated patterns in the data, but non-experts are lagging behind in terms of legislation. I know the faculty of law at this university is really interested in figuring out what's going on in the field of AI, how we make sure that these biases are not there, and how we define the right legislation for the use of these data sets.

Exponential Growth
LL: I would like to know your opinion about exponential growth. Is it a good thing, or should we be concerned that technology is taking over?

MvG: If I look at the development of AI and the role that Moore's Law has played there, it has been hugely important. AI has existed since the 1950s, over 60 years ago, and in the first couple of decades we built some learning systems and could solve some 'toy' problems, but we were not there yet. If you look at the last couple of years, people now know about AI because of the increase in computational power (Moore's Law in action) and the exponential increase in the availability of data: we can now build systems that require a lot of data, extract patterns from those data, and train them very rapidly using these sophisticated advances in computing machinery. For AI, this has been the driving force that made it possible to go from solving simple toy problems to building self-driving cars. What's going to happen next? Of course, Moore's Law will also play a role in the future, but there are signs that it is slowing down: we are reaching the limits of the miniaturisation that allows us to create faster chips, so at some point this exponential increase will start to level off. People are working on different computing paradigms, like quantum computing, that could speed things up again; the question is where we end up. People are scared of robots taking over the world and super-intelligent machines, but I think the current systems we have can be seen as very sophisticated statistical procedures ('statistics 2.0' or 'statistics on steroids'): they are able to detect very complicated patterns. That's a form of 'Weak AI', where we solve a specific problem using these algorithms. But if you look at the original objective of AI, it was building 'Strong AI': truly intelligent machines that might even display such things as consciousness – and we are not there yet. So we need to think about building smarter algorithms. Of course, if we get there, there are potential dangers, but there are also potential uses that could either help or be detrimental to society, like any technological advance.

Superintelligence
LL: I am interested in the term ‘superintelligence’ – what does it mean?

MvG: Superintelligence is a term introduced by Nick Bostrom, one of the researchers at the Future of Humanity Institute in Oxford, which investigates the long-term implications of AI. The notion of superintelligence is that these AI systems are becoming smarter and smarter all the time, and at some point they might reach the singularity. The idea of the singularity is that at that point in time the systems become so smart that they are smarter than humans – 'superhuman' – and once they achieve this level they can start to improve themselves, their intelligence will increase exponentially, we will be very simple beings compared to them, and then we won't be needed any more.

LL: Is this really going to happen?

MvG: Well, of course I work in AI and I focus on AI making predictions, but I am not an oracle, so I don't know what's going to happen in the long term! I do know that we have to be aware of these potential dangers, and I also think that currently we are facing much more pressing ones. We are not there yet in building these types of very smart algorithms, but we do have algorithms that are able to solve very specific problems, and they can be used in all kinds of ways – in good ways, such as improving healthcare, but also to monitor the population or to help in automated warfare. Those are the dangers I see as more pressing or urgent currently.

LL: And without a set of worldwide principles? Is that something we need to be concerned about?

MvG: Yes, of course, but it's hard because you need to align all the governments and all the companies across the globe that are using this technology. At the same time there are examples of how to do this – in genetics, for instance, where we also have legislation to make sure things are kept in check. But there too there are particular excesses: people go rogue and do something else.

LL: You are hoping, intrinsically, that 'humanity' will come to the fore and make sure these rules will be in place?

MvG: Definitely, and I am also hoping that AI can really be of help in current society, because there are very pressing problems – think of sustainability, society, safety – and I think AI can help to build a better society. That's also the ambition of my department.