Are we ready for the Fourth Industrial Revolution? — Forecasting the next 10,000 years

Namburi Srinath
6 min read · Apr 27, 2020


We all remember that famous scene from “Avengers: Infinity War” where Dr. Strange foresees the possible outcomes of the fight against Thanos.

Dr. Strange foreseeing ~14 million possible futures (Source: Internet)

Though Dr. Strange is a fictional character, it is important for us to foresee the future and be prepared for all possible scenarios, given that:

“Great tech solves mediocre problems but introduces great problems, which will be solved by greater tech, and so on”

A new technology solves many of our existing, mundane problems but raises new, sophisticated ones. For example, the advent of email improved communication and made our lives easier, but it introduced concerns around spam filtering, login security, hacks, etc. In fact, every research field has two fronts: advancing the technology and solving the problems caused by its previous version.

Whenever a disruptive technology arrives and changes humankind, it is roughly labeled an “Industrial Revolution”, and so far we have had three such revolutions.

The three Industrial Revolutions that have occurred in human history, with the fourth, i.e., superintelligence (AGI, Artificial General Intelligence), yet to come (Source: World Economic Forum)

While the problems brought by previous technologies were not very hard to solve, the situation is rapidly changing: each new technology brings harder problems.

For example, “deepfakes”, a recent trend in AI, have far-reaching implications for security and privacy (Source: MIT lecture on Intro to Deep Learning, https://youtu.be/l82PxsKHxYc).

The advantage we have had so far is that no technology can think on its own at a broad scale, i.e., we are nowhere near Artificial General Intelligence (AGI) or high-level, self-thinking machines. So we have time to think and solve the problems. But research will ultimately bring us to a day when technology can recreate itself and become independent of humans, termed the Singularity, which many believe is in the near future.

“What will happen after that?”

There will come a point in the human race where technology takes care of itself and no longer needs human intervention, called the Singularity (Source: Wait But Why)

So, it’s not just about “achieving AGI” but also about “foreseeing the problems caused by it and preventing them in advance, if possible”. And given our supreme goal, i.e., “spread the human race across the cosmos to prevent ultimate extinction”, it is very important to know how we are heading toward it, and thus to become Dr. Strange and look into the future possibilities.

In the book “Life 3.0”, MIT physicist Max Tegmark discusses many concerns, grouped by timescale:

  1. Short term goals: What happens within the next 50 years, i.e., how do we protect humankind from destruction caused by climate change, epidemics, nuclear wars, economic fallout, etc.?
  2. Medium term goals: What happens within the next 10,000 years, i.e., what are the different possible scenarios on Earth?
  3. Long term goals and the ultimate one: What happens to humankind over the next billion years, given that the entire solar system fades out within that time?
Cover page of the book Life 3.0 by Max Tegmark, MIT physicist (Source: Wikipedia). Only one chapter’s excerpts are covered here; please look into the book for more details, such as “When might AGI come?”, “How might AGI come?”, “Does it have consciousness?”, etc. For survey results, check this link

Max Tegmark narrows down most of the scenarios (~14 million, according to Dr. Strange) that might unfold within the next 10,000 years to the following:

Libertarian Utopia

  1. AGI exists but does not take total control. The only rule is property rights, i.e., the division of land into zones to avoid conflicts.
  2. AGI solves every problem for us
  3. We can transfer consciousness to machines, so theoretically there is no death
Land can be divided into three zones (Red: machine zones, Blue: mixed zones, Green: human-only zones)

Downsides:

  1. Life becomes banal: with nothing to suffer and nothing to accomplish, there will eventually be no purpose in life
  2. How do we decide whose consciousness to upload? (Every human and animal, or a selection based on some criteria?)

Benevolent Dictator

  1. AGI exists and takes complete control of the world. It divides the world into zones based on occupational sectors and enforces strict rules, both universally and locally
  2. AGI solves every problem for us
Cartoon representation of a benevolent dictatorship (Source: Poorly Drawn Lines)

Downsides:

  1. No freedom because of dictatorship
  2. Life becomes banal: with nothing to suffer and nothing to accomplish, there will eventually be no purpose in life

Egalitarian Utopia

  1. AGI does not exist; humans and robots work together, with robots doing most of our tasks
  2. No property, no patents, i.e., everything is open-sourced
  3. A fixed income is given to all

Downsides:

  1. Smart people lack motivation due to the lack of incentives
  2. The eventual advance of technology will create AGI, so this scenario can’t last long

“Humans will become as irrelevant as cockroaches” — Marshall Brain

Gatekeeper

  1. AGI exists and interferes with us minimally
  2. Its main objective is to prevent the creation of any further AGI
Cartoon representing the ultimate aim of the Gatekeeper, i.e., to prevent any further research on creating AGI (Source: mathwithbaddrawings)

Downsides:

Progress in technology will be stymied (while our ultimate aim is to spread across the cosmos, i.e., to other galaxies)

Protector God

  1. AGI exists and is similar to the Benevolent Dictator, but we still face challenges because it does not solve every problem for us.
  2. Something that closely resembles our current world, where theists and atheists coexist

Downsides:

Leads to the theodicy problem: though the AGI knows how to solve our problems, it does not solve all of them, letting us suffer up to a point. So the question “Why would a good god allow suffering?” arises.

Cartoon depicting “Theodicy problem” and the issue with Protector God scenario (Source: http://stephenlaw.blogspot.com/2016/01/god-and-theodicies_10.html)

Enslaved God

AGI exists and we control it, using it as a tool for our desires.

Downsides:

Who will control the AGI, and how?

Without even knowing exactly how it works, there is no way to control AGI in the future. In fact, “humans controlling AGI” might resemble a scenario of “ants controlling humans” (Source: Machine Learning, xkcd)

Conquerors

AGI exists and cruelly wipes us out (thus bringing about the sixth mass extinction).

Some might think AGI cannot exterminate or control humankind. That is like an animal 10,000 years ago thinking, “We don’t threaten humans, so why would they kill us?” (Source: The New Yorker)

“I’d like to share a revelation that I’ve had during my time here. It came to me when I tried to classify your species and I realized that you’re not actually mammals. Every mammal on this planet instinctively develops a natural equilibrium with the surrounding environment but you humans do not. You move to an area and you multiply and multiply until every natural resource is consumed and the only way you can survive is to spread to another area. There is another organism on this planet that follows the same pattern. Do you know what it is? A virus. Human beings are a disease, a cancer of this planet. You’re a plague and we are the cure.” — Agent Smith, from the movie Matrix

Descendants

AGI exists and helps us spread across the cosmos, but eventually we become extinct.

The only thing we carry forward in this scenario is the feeling that “Our children are much more intelligent than us” (Source: VoxEurop)

Zookeeper

At present, we protect rare species by keeping them in zoos. Similarly, AGI keeps humans in zoos to avoid the complete extinction of humankind.

“Humans might be the next tigers in zoo”

Zookeeper scenario, where humans might be put in cages. It is worse if the AGI decides to let other creatures roam free because they do not threaten its existence.

1984

Stop working on technology related to AGI because it has unprecedented consequences. This again stymies research and prevents the research community from delivering a greater good to humankind. It is as if to say, “Don’t produce cars because they occupy extra space on Earth” (though that is a real problem).

A poster from the novel 1984 by George Orwell saying “Big Brother is watching you”, roughly meaning there is no privacy in activities, research, etc.

Reversion

AGI gets created and deletes all technology that has existed until then, taking humankind back 1,500 years to farming, fishing, poultry, etc.

Self-destruction

The entire human race becomes extinct even before AGI is created. Climate change, wars, and epidemics could lead to this if left unchecked.

“In the long run we are all dead” — John Maynard Keynes

So, are we really ready for the fourth industrial revolution?

Maybe we aren’t, but the fact is that it has already started, and it will take at least 50 years to come into full effect, leaving us some time to think through and solve the problems raised across all domains. In fact, Max addresses many concerns in the domains of law, policy, economics, health, science, philosophy, etc., in this book, and it’s worth a read (though it contains some high-level technical content).

Each chapter covers outcomes from the past 13.8 billion years to the next billion years (Source: https://space.mit.edu/home/tegmark/ai.html, Life 3.0)

A one line summary of the book is:

“Almost every species that has ever existed on Earth has become extinct. How do we prevent humankind from the same fate?”

Personal Website: namburisrinath.github.io

Medium Handle: namburisrinath.medium.com

LinkedIn: https://www.linkedin.com/in/namburi-gnvv-satya-sai-srinath/
