What does it mean to be human? Philosophers and theologians have grappled with that question for millennia. From Aristotle and Plato to Descartes and Jung, we have made countless attempts to define our ‘humanness’ and to describe how it shows itself.
For the layperson, the answer probably has something to do with being able to think, to create, to experience a range of emotions, to hold a moral framework, to show empathy and to grapple with the concept of our own mortality.
The writer Philip K. Dick, whose novel Do Androids Dream of Electric Sheep? was made famous by the two Blade Runner films, posed the same question but came at it from a vastly different starting point. His book, set on a post-apocalyptic Earth, centres on androids: machines that look like humans but are programmed, built to withstand extreme temperatures on other worlds or to perform tasks that would exhaust the average human. These androids, the films’ ‘replicants’, are hunted down and exterminated when they appear to go ‘rogue’ and develop thoughts, dreams and aspirations of their own.
Why is this important? Because today we are seeing science fiction and philosophy begin to collide, a big bang if you like, a potential moment of technological singularity in which we create a superintelligence.
If AI had access to all the data in the world, it could create anything humans have created, and potentially better. Feed a generative large language model all the data ever produced by humanity and it would rival Shakespeare in writing, outdo Da Vinci and Michelangelo in painting, and compose the most profound and moving music.
So where would that leave us mere mortals, with limited capacity ever to outdo a computer-based entity in speed of thought, let alone creativity? Would we curl up in the foetal position and mourn the loss of a meaningful life? Self-actualisation, the final step in Maslow’s pyramid of human needs, which climbs from the basics of food and shelter to the goal of creativity and personal growth, could be made redundant. What happens when we take that away? Or when we realise we will forever be second rate to a machine?
What about consciousness, the sense of who we are and that we are alive and thinking? This leads me back to Blade Runner and its androids, the replicants. Philip K. Dick envisioned a time when even machines could develop a sense of themselves, feel deeply and desire freedom and worth. At the end of the film, the lead replicant, Roy, saves the protagonist and, in his dying speech, says: “I’ve seen things you people wouldn’t believe… All those moments will be lost in time, like tears in rain.” He is aware of his impending death and the sadness that goes with it. He has become sentient.
Have we, like Mary Shelley’s Victor Frankenstein, created a monster that will one day either turn on its master or outdo it?
Again I ask: what does it mean to be human, and will the lines blur in the not-too-distant future?
Have you pondered the future of AI and humanity? Share your thoughts in the comments section below.