Saturday 27 July 2024

Nick Bostrom’s book — Superintelligence: Paths, Dangers, Strategies

“Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization—a niche we filled because we got there first, not because we are in any sense optimally adapted to it.” ~ Nick Bostrom, Superintelligence: Paths, Dangers, Strategies

The argument that Bostrom makes in the above statement can be extended to civilizations: certain civilizations succeeded in certain ages not because they were better or optimally adapted, but because chance factors and historical accidents enabled their vanguard to reach a certain place first.

Western philosophers and strategists have created the myth that humans originated from apelike ancestors and evolved over a period of approximately six million years, and that man is superior to every other creature on earth. But history tells us that civilization and culture are largely a game of chance, driven by random events. Human intelligence, reason, and planning have played a negligible role in history. In fact, in most civilizational conflicts (which raged for decades, and in some cases for centuries), it was the side that was more barbaric, dictatorial, poor, and driven by superstition and ignorance that prevailed over the side that was scientific, philosophical, and prosperous.

Human beings are not adapted to technology; we are not adapted to democracy, to political stability, or to philosophy or to religion, which is a form of philosophy. That is why democratic societies that are politically stable and have access to modern technologies and philosophies face declining populations, while primitive tribal societies see a massive population boom.

Bostrom argues in his book that the ultimate questions of philosophy remain unanswered despite thousands of years of relentless philosophizing by various civilizations because the human brain is unsuitable for philosophical work. He writes: “One can speculate that the tardiness and wobbliness of humanity's progress on many of the 'eternal problems' of philosophy are due to the unsuitability of the human cortex for philosophical work. On this view, our most celebrated philosophers are like dogs walking on their hind legs - just barely attaining the threshold level of performance required for engaging in the activity at all.”

Quoting the mathematician I. J. Good, he notes that an ultraintelligent machine will probably be the last machine that humans will ever need to invent. This is because if the machine is ultraintelligent, then it can design other machines that are even more intelligent. This will lead to an intelligence explosion, and human beings will be left far behind in the race for intelligence.

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man, however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”