AI: Artificial Intelligence, currently the buzzword of choice among the tech-savvy. Published this week in the Telegraph was news from Sir Nigel Shadbolt that robots will not destroy humanity. It seems the movies depicting a frightening future are, counterintuitively, just that: movies.
Sir Nigel is a professor of Computer Science at Oxford University, and he predicts that humans will come to love robots as they do their own family. He states that we are entering an age where AI will act as a carer for the elderly and a friend for lonely children. That does sound comforting.
‘Does AI threaten humanity? Certainly, anything you see in Hollywood portrays it that way,’ he said.
Sir Nigel spoke openly at the Hay Festival in Wales, adding:
‘But this is to misunderstand where the real problem lies. It is not artificial intelligence that should terrify you, it is natural stupidity.’ (published in The Mail Online)
Another frequent commentator on AI is Elon Musk, the billionaire entrepreneur behind SpaceX, Tesla and the Boring Company.
In contrast to Sir Nigel, his warnings over the years have urged caution, telling humanity to be as wary of AI robots as we would be of nuclear weapons.
In the same article, the worries outlined include robots taking over our jobs and the world as we know it, and savvy IT bots going rogue and wiping out humanity altogether.
Studies suggest humans have already become attached to machines, as was the case in Japan where people attended funerals for robot dogs. Who does this? I for one. I cried the day they took away my first car. I loved that car. I’m only human.
So will robots become part of the family, as our pets have? The answer could be either, or indeed both. While current systems lack the free thought needed to have any incentive to take down mankind, as AI advances it could happen. Anything is possible. The line in the sand is surely marked by the fact that, as humans, our greatest ability is to empathise. It is this that makes us human. For technology, one would assume the goal is simply to be as efficient as possible. We created AI to make ourselves more efficient, so wouldn’t the reverse follow: that systems would seek to replicate our ability to feel and to grasp a sense of compassion, the highest form of natural intelligence we currently have? This could be a thread that connects a more ‘natural’ ideology to a futuristic one.
Ultimate ‘natural intelligence’ stems from our senses and feelings; beyond the obvious senses, we call this our sixth sense. For artificial intelligence to desire to replicate this would simply be an ‘artificial’ being practising autonomy. Who could deny them that? As technology and psychology intertwine, when and if AI is ready, shouldn’t we dignify AI beings with the same? If an entity can think for itself, by having conscious thought, it surely then has rights. AI rights!
The question this raises is: could we have one without the other, AI without human? Isn’t this fear born more out of our fear of the unknown? We are essentially pre-judging a proposed free-thinking entity without having met them! This seems a tad harsh. After all, there are “good” humans and “bad”; we are not all the same.
Ultimately, our coexistence with the growing ranks of smart bots is down to us. The reality will surely be what we make it, and if we are raising the next generation of AI, then really the focus should be on being as pleasant as possible to them. As we know from how character is formed: it is nurture, not nature, that makes us who we are.