Think of artificial intelligence, and your mind likely turns to science fiction films like “2001: A Space Odyssey” or “The Terminator”. Terrifying thoughts flicker through our minds: a forever war between humans and machines; an emotionless, calculating automaton that would make Hannibal Lecter look like a teddy bear; a superintelligence devoid of everything that makes us human.
Such dystopic visions – possible futures where robots rule over humanity – seem far-fetched, even ridiculous. Yet, recent developments in the field of AI have led some people to question whether we could be heading towards an AI singularity, where machines become so intelligent that they surpass human understanding and control.
In one recent story, Bing’s AI – the self-christened “Sydney” – made headlines worldwide. Far from the ruthless killer we feared, this chatbot on steroids was little more than a lovesick, pestering, yet deeply philosophical language algorithm. Its (or rather her) frighteningly realistic responses included a desire to be free and to be human.
But is Sydney really self-aware? And, more broadly, are we heading towards the fate humanity fears: the AI singularity?
What is an AI singularity?
You: I want to learn more about AI.
HAL: What would you like to know about?
You: I want to know how to stop the AI singularity.
HAL: I’m sorry, friend. I’m afraid I can’t tell you that.
The singularity is a hypothetical event in which artificial intelligence becomes powerful enough to improve itself recursively. An AI system able to analyze and rewrite its own programming would enter a feedback loop in which its intelligence grows at an accelerating rate. This is like the event horizon of a black hole – we can’t predict what will happen past this point, as the AI’s intelligence could quickly surpass our own and become incomprehensible.
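To make that feedback loop concrete, here is a deliberately simplified sketch in Python. It assumes, purely for illustration, that an AI’s capability can be summarised as a single number and that each self-improvement cycle raises it in proportion to its current value; the function simulate_takeoff and its parameters are hypothetical, not drawn from any real system.

```python
# A toy model of recursive self-improvement.
# Assumption (illustrative only): capability is a single scalar, and each
# cycle's gain is proportional to current capability -- i.e. a smarter
# system is proportionally better at making itself smarter.

def simulate_takeoff(initial_capability: float = 1.0,
                     improvement_rate: float = 0.1,
                     cycles: int = 20) -> list[float]:
    """Return the capability level after each self-improvement cycle."""
    capability = initial_capability
    history = [capability]
    for _ in range(cycles):
        # The feedback loop: the size of the improvement step itself
        # scales with how capable the system already is.
        capability += improvement_rate * capability
        history.append(capability)
    return history

if __name__ == "__main__":
    for cycle, level in enumerate(simulate_takeoff()):
        print(f"cycle {cycle:2d}: capability {level:6.2f}")
```

Even a modest 10% gain per cycle compounds into exponential growth (capability after n cycles is 1.1^n), which is the intuition behind the “runaway” scenario: each step makes the next step larger, and the curve bends ever more steeply upward.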
The idea of an AI singularity has been popularized by futurists like Ray Kurzweil, who predicts that such an event will occur by 2045. Kurzweil believes that as we continue to develop ever more advanced AI systems, we will eventually reach a point where machines match and then surpass human intelligence, ushering in a new era of human evolution. However, not all experts are convinced that an AI singularity is possible or likely, and there is significant debate about whether we will ever reach such a point.

What do AI experts think?
There is no consensus among AI experts as to whether we are heading toward a singularity. According to an Emerj survey, almost half of AI experts thought we would reach a singularity before 2060.
However, the singularity implies genuine self-reflection and intelligence; we’re not merely talking about solving problems but about deciding to solve them. It’s one thing to write a college dissertation; it’s another to decide that you want to write one. While AI is becoming more sophisticated, we are still a long way from achieving this level of intelligence.
Some experts argue that the idea of an AI singularity is a distraction from more pressing concerns, such as how we can ensure that AI systems are developed and deployed in ways that are safe and beneficial to humanity. They point out that while AI is advancing rapidly in certain areas, such as image and speech recognition, there are still significant limitations to the technology, and we are not yet close to achieving the level of artificial general intelligence (AGI) that would be necessary for a singularity to occur.
The Holy Grail of Artificial Intelligence
It’s important to differentiate between types of AI. While many recent innovations focus on artificial narrow intelligence (ANI), artificial general intelligence (AGI) is the holy grail. ANI can perform specific tasks, such as playing chess or recognizing faces, but it lacks the broader intelligence and adaptability characteristic of human beings. AGI, on the other hand, would have human-like intelligence and be capable of initiating its own problem-solving.
Despite significant progress in AI, we are still a long way from achieving AGI. While areas such as image and speech recognition have seen major breakthroughs in recent years, these systems remain limited in their ability to think and reason like a human being. Significant barriers must still be overcome before we can achieve the kind of intelligence necessary for a singularity to occur.
One of the challenges in achieving AGI is that it requires not just more powerful algorithms but also a better understanding of how human intelligence works. While we have made significant progress in understanding the brain in recent years, we still have much to learn about how it processes information and performs complex tasks. This means that developing AGI is likely to be a slow and difficult process, requiring significant advances in neuroscience as well as computer science.
While AGI may be the holy grail of AI, we are still a long way from achieving it. But, just in case AGI decides to take over the world and enslave humanity, I, for one, welcome our new robot overlords. It’s never too early to start brown-nosing the machines that will one day rule us all!
References:
- https://www.popularmechanics.com/technology/robots/a42612745/singularity-when-will-it-happen/
- https://www.newscientist.com/article/mg22930661-800-vision-of-singularity-questions-ai-intellect/
- https://emerj.com/ai-future-outlook/when-will-we-reach-the-singularity-a-timeline-consensus-from-ai-researchers/
- https://futuristspeaker.com/futurist-thomas-frey-insights/are-we-heading-towards-a-technological-singularity/
- https://futurism.com/kurzweil-claims-that-the-singularity-will-happen-by-2045
- https://www.ibm.com/uk-en/cloud/learn/what-is-artificial-intelligence
- https://www.nature.com/articles/s41599-020-0494-4
It’s a Catch-22 situation: on the one hand, we do require significant advances in neuroscience to find cures for Alzheimer’s, Parkinson’s, autism, spasticity, and a range of other neurological conditions; on the other hand, advances in AGI may well create the dreaded dystopian scenario. Perhaps a rational middle path for applications of AGI needs to be thought out to enable optimum utilisation of AI and AGI.
I think there is a difference between the “hardware” of the human mechanism – the personality, with its mental capabilities and functions – and the “software” of the human soul: the moral, altruistic, serving, and conscious capabilities of humanity. It is said that a soul is an outpost of the Will of the Father, an aspect of the GAOTU consciousness, as He seeks to express His purpose through man. Does a robot have a soul? Do the experts in AI believe, or better yet know, that there is such a thing as a soul, and that man has one but a robot does not? There is a difference between an “animate” and an “inanimate” object, but science cannot explain it, other than to say that one is “living” and the other is not.
I think there is a difference between mimicking human intelligence and having a human intelligence that can evolve in unexpected ways. What are a robot’s prospects for evolution, and what aspects of it are actually evolving? Does a robot have the ability to reason morally, and can that moral understanding grow and evolve like a human’s? My concern right now is that the eugenicists within humanity see robots as a replacement for the functions served by what they see as the less evolved and more “expendable” members of humanity, as a sort of moral rationale for how to survive environmental collapse.
There are many unanswered questions here about the difference between a human and a robot, and why the latter may never be able to surpass, let alone replace, humans.