From robots to AI: can you build a human?
The notion that we could create a human being by some means other than normal reproduction has been a facet of popular culture and scientific debate since Mary Shelley’s Frankenstein, 200 years ago. But with developments in artificial intelligence, robotics and biotech, could we now seriously consider the idea? If so, where would we start?
For robots to function in human society, they would need to move around as humans do. So a human-robot would have to have some fairly intricate physical abilities and spatial awareness – something only clunkily replicated to date.
We might also need to build robots that can predict and understand human behaviour, and show intelligent – and ethical – responses. To do this, we need to understand what intelligence is, and whether ethical behaviour is something that can be programmed into robots.
But to build the equivalent of a human, we need to go further to develop something else: consciousness. Human conscious experience involves complex phenomena such as being able to reflect on the workings of one’s own mind, to represent the thoughts and feelings of other people, and having a sense of free will. Human intelligence is profoundly social: we learn from and with other people throughout our lives and inherit thousands of years of intellectual gains.
Human intelligence is also shackled to emotional experience, which lays the foundations of moral behaviour; feelings like guilt and shame can put the brakes on undesirable behaviour. Could these phenomena be replicated in machines, or would machine intelligence and consciousness take other forms? How would we recognise consciousness in a robot?
If we succeeded in building a human, we would face some thorny ethical dilemmas. Most importantly, should we ascribe personhood to robots? Some have argued that if AI is sufficiently human-like and we respond to it as if it were a person, then it should be considered a person. In humans, the category of personhood involves concepts like reason and consciousness, but also moral responsibility. Would it be wrong for a robot to kill – or to kill a robot?
Will it ever be possible to build a human? Can we even agree on what makes us human? When humans already exist and we have machines to assist us with specific tasks, what would be the purpose of building an artificial human at all?