In this article, I will share the key points of The Singularity Principle by David Wood, explain why some of the concepts it raises are important and need to be considered widely, and why the book is a good place to start that conversation.
My thoughts on The Singularity Principle by David Wood
I read this book a few months ago, at the height of the hype around Artificial Intelligence, which awakened a lot of excitement in me. It reminded me of how I felt as a child when I was first exposed to computers. But I also worry that, alongside the upsides, there are potential downsides.
While I found the book a hard read, it raises some important concepts, so I recommend it to anyone interested in AI.
What is a Technology Singularity?
At the heart of this book is the concept of the technology singularity, so I should define it based on my notes from the book: a technology singularity is a technological breakthrough which brings massive upheaval to human civilization.
It got me thinking: has humanity lived through other technological singularities? As I considered that question, two examples of past technological singularities quickly came to mind: farming and writing.
The discovery of farming allowed people to live together in large settlements, enabling us to build our first cities. Reading and writing allowed humans to share information and knowledge with others across distance and time. How different would our lives be today without those two discoveries?
In The Singularity Principle, David Wood argues that Artificial General Intelligence will be a technology singularity event, and he is not alone in making this point.
How can we know what impact Artificial General Intelligence will have on humanity, or even if the benefits outweigh the negatives?
The technology singularity raises important questions about what we want to get from this technology. Are we prepared for life with a technology that we have created and that has the same level of intelligence as us? Will it be sentient? What do we do if it is?
What is Artificial General Intelligence?
Artificial General Intelligence is Artificial Intelligence (AI) with general human-level intelligence. Amir Husain, in his book The Sentient Machine, argued that Artificial General Intelligence will likely have the ability to set its own goals and could even have the capability for “self-directed thought”.
No one knows when Artificial General Intelligence will arrive, or even whether it will. One prediction is that if Moore’s law continues to hold true, Artificial General Intelligence could emerge by 2045. That won’t be easy, but Intel has predicted a trillion-transistor processor by 2030. If Intel’s prediction holds true, it would mean Moore’s law survives until at least 2030.
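To make the Moore’s law arithmetic concrete, here is a quick back-of-envelope sketch in Python. The starting year, the trillion-transistor starting point, and the two-year doubling period are illustrative assumptions on my part (the trillion-transistor figure comes from Intel’s public prediction), not numbers taken from the book.

```python
# Back-of-envelope projection: if a processor has a trillion transistors in
# 2030 and Moore's law (a doubling roughly every two years) keeps holding,
# how many transistors would a 2045 processor have? The inputs below are
# illustrative assumptions, not figures from The Singularity Principle.

START_YEAR = 2030
START_TRANSISTORS = 1_000_000_000_000  # one trillion, per Intel's prediction
DOUBLING_PERIOD_YEARS = 2              # the classic Moore's law cadence


def projected_transistors(year: int) -> int:
    """Project the transistor count for a given year under continued doubling."""
    doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
    return int(START_TRANSISTORS * 2 ** doublings)


for year in (2030, 2035, 2040, 2045):
    print(f"{year}: roughly {projected_transistors(year):,} transistors")
```

Under those assumptions, the 2045 figure works out to roughly 180 trillion transistors per chip, which gives a feel for the scale of hardware the 2045 prediction quietly relies on.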
How can we prepare for a technology singularity event?
The book aims to start a discussion on the likely problems of a technology singularity event and how we should go about managing them. In the book, David considers 21 principles. It is a discussion that needs to start now.
Here are some things that need to be considered in these discussions.
International treaties on Artificial Intelligence
We have international treaties on nuclear technology, and we need to start negotiating similar treaties on Artificial Intelligence. This is the difficult bit: any treaty must be agreed upon by everyone, and its goal must be the development of Artificial Intelligence for the good of humanity. This will be challenging, but it is something we must try.
Fully understand how Artificial Intelligence works
One of the biggest concerns with the current development of Artificial Intelligence is that it is a black-box technology, meaning that no one fully understands how it works.
The Artificial Intelligence technology at the forefront of research and available for public use is called Generative Artificial Intelligence. At its core, it is designed to predict the next word in a sequence, one word at a time, from a single instruction or prompt. The technology can be used in multiple ways, from helping me write the layout for this blog post to writing computer code.
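To illustrate what “predict the next word” means in practice, here is a deliberately tiny toy sketch in Python. The word table and its probabilities are invented purely for illustration; real generative models learn these patterns from enormous amounts of text rather than from a hand-written dictionary.

```python
import random

# Toy next-word predictor: for each word, store made-up probabilities for
# which word might follow it, then extend a prompt one word at a time.
# The table below is invented for illustration only.
NEXT_WORD_PROBABILITIES = {
    "the": {"cat": 0.5, "dog": 0.3, "singularity": 0.2},
    "cat": {"sat": 0.7, "slept": 0.3},
    "dog": {"barked": 0.6, "slept": 0.4},
}


def generate(prompt: str, max_new_words: int = 3) -> str:
    """Extend the prompt by repeatedly predicting the next word."""
    words = prompt.split()
    for _ in range(max_new_words):
        options = NEXT_WORD_PROBABILITIES.get(words[-1])
        if not options:
            break  # no prediction available for this word, so stop
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)


print(generate("the"))  # e.g. "the cat sat" or "the dog barked"
```

The sketch only captures the “one word at a time” idea; everything that makes modern generative systems useful, and hard to interpret, lives in how those probabilities are learned inside the model, which is exactly the black-box problem discussed above.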
One of these Generative Artificial Intelligence applications is Google Bard, which earlier this year surprised Google by teaching itself Bengali (Link to Wiki page), the official language of Bangladesh, with only a little help from Google engineers. Google can’t explain how Bard did this because it is a black-box application.
We must only continue future development of black-box Artificial Intelligence systems once we better understand what is happening inside the models. That will help us steer the development of Artificial Intelligence in the right direction.
Artificial Intelligence Prime Directive
I’m planning to explore this further in a future article, so I will only cover it briefly here. The basic idea is that you have a set of rules or philosophies that an Artificial Intelligence can’t break, similar to Isaac Asimov’s Three Laws of Robotics.
One of the criticisms of this idea is that it might not be possible to stop an Artificial General Intelligence from breaking the rules, which may be a valid point. Still, such a directive would act as guidance for those developing Artificial General Intelligence.
Answer fundamental questions about ourselves
This one is not directly mentioned in The Singularity Principle. It has come to me from other books and articles I have read, especially Amir Husain’s The Sentient Machine, which I plan to write a summary of later.
To help us manage the process, we must determine what we want to get out of it and consider the possible outcomes of an Artificial General Intelligence singularity. One obvious question is: what happens if we no longer have any jobs?
But one of the most important questions to ask ourselves is: what is sentience, and how can we measure it? It is a question philosophers have been asking themselves for thousands of years, and we are not really any closer to answering it. This question is important because, if there is a link between sentience and intelligence, wouldn’t an Artificial General Intelligence be sentient?
Conclusion
While I didn’t enjoy reading David Wood’s Singularity Principle, I’m glad I did. I’m not recommending it for everyone, but if you are genuinely interested in Artificial Intelligence and its potential impact, you should read it.
I will be writing more about Artificial Intelligence and how we manage its development. I recommend our summary and review of Manna, a science fiction book which explores two potential futures.
Last Edited 21/09/2024.