Superintelligence | by Nick Bostrom
Book Summary and Review of Superintelligence: Paths, Dangers, Strategies
About Nick Bostrom
Nick Bostrom, a forward-thinking philosopher and futurist from Sweden, has made a significant impact with his pioneering work on artificial intelligence (AI) and its potential effects on humanity. Currently, he is a Professor at the University of Oxford, where he also founded and leads the Future of Humanity Institute. His research focuses on the benefits and risks that AI and other advanced technologies hold for society.
Besides Superintelligence, Bostrom has written influential works such as Anthropic Bias: Observation Selection Effects in Science and Philosophy and Global Catastrophic Risks. These contributions have greatly enriched the ongoing debates concerning the future of humanity.
Introduction
What happens when machines surpass humans at thinking, learning, and solving problems swiftly and accurately? This is the question Nick Bostrom explores in his book, Superintelligence. Rapid advances in AI technology are accelerating progress towards machines with superior intellect.
Major technology companies like Google, Microsoft, and Facebook are engaged in a competitive race to create highly powerful artificial intelligence, investing significant resources in research to achieve it. However, this venture could go awry if appropriate safety procedures and regulations are not put in place, underscoring the necessity of keeping AI within the bounds of human control.
Imagine a scenario where machines are not just cost-effective but also far superior to humans at performing tasks. Machines could then replace human labor, raising the question, "What's the next step?" It's crucial to devise strategies now to safeguard everyone's welfare.
We Are Not Ready for Superintelligence
Are we on the cusp of creating something beyond our wildest dreams or our worst nightmares? Superintelligence is the concept of artificial intelligence surpassing human cognitive abilities in every aspect. There are three potential paths to achieving superintelligence:
Improving human cognition
Creating AI with human-like intelligence
Developing a collective intelligence system
Which path we take will determine the implications and risks we face as a society. Progress along one path, such as biological or organizational intelligence, will still speed up the development of machine intelligence. Are we ready for the challenges that come with creating such powerful entities?
We are exploring different paths to reach superintelligence, and the AI route seems the most promising, though whole-brain emulation and biological cognitive enhancement might also lead us there. Biological enhancements are feasible but would likely yield only weak forms of superintelligence compared with machine intelligence, while network and organizational advances may boost collective intelligence.
“Superintelligence is a challenge for which we are not ready now and will not be ready for a long time.”
– Nick Bostrom
There Are 3 Forms of Superintelligence
What exactly does the book mean by "superintelligence"? Bostrom distinguishes three forms: speed, collective, and quality superintelligence, and argues that they are roughly equivalent in a practically relevant sense.
Specialized information processing systems are already doing wonders. But what if we had machine intellects with enough general intelligence to replace humans in every field?
Speed Superintelligence
Nick Bostrom defines speed superintelligence as “A system that can do all that a human intellect can do, but much faster.”
If an emulation operated at a speed of 10,000 times what is typical of a biological brain, it could complete a PhD thesis in an afternoon. To avoid long latencies, fast minds might prefer to communicate with each other more efficiently by being close to each other. They may live in virtual reality and deal in the information economy.
“The speed of light becomes an increasingly important constraint as minds get faster, since faster minds face greater opportunity costs in the use of their time for traveling or communicating over long distances.”
– Nick Bostrom
Light is roughly a million times faster than a jet plane, so for a digital mind with a million-fold speedup, sending a message around the globe would take about the same subjective time as a round-the-world flight takes a human traveler today. Making a long-distance call would feel as long as going there "in person."
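A back-of-the-envelope calculation makes this constraint concrete. The numbers below are rounded assumptions (Earth's circumference, light speed, jet cruise speed, and a million-fold speedup), chosen only to illustrate the comparison Bostrom draws:

```python
# Rough check: for a million-fold sped-up mind, how long does a light-speed
# message around the globe *feel*, compared with a human's jet flight today?
EARTH_CIRCUMFERENCE_KM = 40_000   # rounded
SPEED_OF_LIGHT_KM_S = 300_000     # rounded
JET_SPEED_KM_H = 900              # typical cruise speed
SPEEDUP = 1_000_000               # assumed mental speedup

signal_seconds = EARTH_CIRCUMFERENCE_KM / SPEED_OF_LIGHT_KM_S   # ~0.13 s wall clock
subjective_hours = signal_seconds * SPEEDUP / 3600              # ~37 h subjective
jet_hours = EARTH_CIRCUMFERENCE_KM / JET_SPEED_KM_H             # ~44 h for a human

print(f"Message around the globe, subjectively: {subjective_hours:.0f} hours")
print(f"Jet flight around the globe today:      {jet_hours:.0f} hours")
```

The two figures land within the same order of magnitude, roughly 37 versus 44 hours, which is the sense in which a long-distance call would feel like travelling there in person.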
Agents with high mental speedups may therefore choose to live near each other so they can communicate more efficiently. Members of a work team, for example, could reside in computers located in the same building to avoid annoying delays.
Collective Superintelligence
Bostrom describes collective superintelligence as: “A system composed of a large number of smaller intellects such that the system’s overall performance across many very general domains vastly outstrips that of any current cognitive system.”
Collective superintelligence is more complex than speed superintelligence, but it is something we are already familiar with. Collective intelligence is a system made up of many smaller minds or components working together to solve intellectual problems, like superheroes collaborating to crack tough challenges.
We have seen collective intelligence in action through work teams and advocacy groups. It is great at tackling problems that can be broken down into smaller pieces. To reach collective superintelligence, however, a system would need to vastly outperform existing collective intelligences and other cognitive systems across many general domains.
Having collective superintelligence doesn’t guarantee a better and wiser society. A highly coordinated, knowledgeable workforce could still get some key issues wrong and suffer a collapse.
Collective superintelligence can take many forms. As its components become more integrated, it may become a "unified intellect", which Bostrom describes as a "single large mind" as opposed to "a mere assemblage of loosely interacting smaller human minds."
Quality Superintelligence
According to Bostrom, quality superintelligence is “a system that is at least as fast as a human mind and vastly qualitatively smarter.”
Understanding intelligence quality is important for thinking about the possibilities and limitations of different intelligent systems. Take the zebrafish as an example: its intelligence is suited to its environment, but it struggles with long-term planning. Such limitations among nonhuman animal minds are limitations of quality, not of speed or collective intelligence.
Human brains are likely inferior to those of some large animals in terms of raw computational power, yet normal human adults have a range of remarkable cognitive talents that are not simply a function of general neural processing power. There may also be untapped cognitive abilities that no human possesses, which brings us to the "idea of possible but non-realized cognitive talents". An intelligent system with access to these abilities could gain a significant advantage.
There Are Two Sources of Advantage for Digital Intelligence
The differences in brain volume and connectivity between humans and other apes are small, yet they account for tremendous differences in intelligence. This is one reason fully grasping the capacities of superintelligence is so difficult for us.
However, we can glean some insight into the potential by considering the benefits afforded to digital minds.
One key advantage is hardware. Digital minds can be engineered with vastly superior computational resources and architecture compared to biological brains. The advantages of hardware are relatively easy to understand, such as:
Speed of processing units
Rapid internal communication
Number of processing units
Storage capabilities
Reliability, longevity, sensor range, etc.
Furthermore, digital minds will also derive major benefits from advancements in software, which include:
Editability
Ability to duplicate
Goal alignment
Sharing memory
New modules, modalities, and algorithms
Thanks to both hardware and software overhangs, superintelligent AI might emerge sooner than anticipated. A hardware overhang means our current computational power already exceeds what AI software presently requires; a software overhang means that advances in algorithms alone could unlock sudden capability gains, effectively putting us on an express lane towards the future.
Such a sudden surge in AI capabilities could take us by surprise, making it difficult for us to manage the implications. The question then arises, how can we adequately prepare for such a swift and profound technological transformation?
Uncontrolled Superintelligence Poses Significant Risks to Society
While we stride towards the establishment of super-intelligent AI, it's crucial to give due consideration to the potential risks. It's essential that the AI be developed in alignment with our values and objectives. But what if it misinterprets our instructions, leading to actions detrimental to humanity? It is our collective responsibility to ensure that we work towards creating a safe and harmonious future alongside AI.
Despite super-intelligent AI promising extraordinary accomplishments, we cannot overlook the potential challenges that it carries. Some key risks include:
1. Intelligence explosion: a sudden and unmanageable escalation in AI capabilities.
2. Value misalignment: AI pursuing objectives that contradict human values.
3. Instrumental convergence: a superintelligent AI may converge on certain intermediate goals, such as acquiring resources or resisting shutdown, and pursue them by any means necessary, regardless of whether those means are beneficial or harmful to humans.
Controlling Super-intelligent AI through Effective Techniques
How do we exert control over a super-intelligent AI that surpasses us in intelligence and capabilities? Bostrom discusses the control problem, which involves determining how to ensure AI remains under our control and continues to abide by our values.
Superintelligence Control Strategies
To counter the risks accompanying superintelligence, we need to devise strategic solutions. These include:
Establishing "boxing" methods: limiting the AI's abilities and access to information
Deploying "value alignment" methods: ensuring the AI's values correspond with human values
Executing "capability control" methods: overseeing and regulating the AI's capabilities
Implementing "stunting": constraining important internal processes to limit the system's impact
Setting "tripwires": running diagnostic tests on the system and shutting it down if hazardous activity is detected (a toy sketch of this idea follows the list)
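To make the tripwire idea concrete, here is a minimal sketch in Python. Everything in it, the telemetry fields, the limits, and the shutdown hook, is invented for illustration; Bostrom describes tripwires only at the level of concept, not implementation.

```python
# Toy "tripwire" monitor: run diagnostics outside the AI's control and halt
# the system the moment any check fails. All names and thresholds below are
# hypothetical, not anything specified in the book.

# Assumed limits for a sandboxed system: no network access, no self-edits.
LIMITS = {"memory_mb": 4096, "network_connections": 0, "self_modifications": 0}

def check_tripwires(telemetry: dict) -> list:
    """Return the names of any tripped wires for this telemetry snapshot."""
    return [key for key, limit in LIMITS.items() if telemetry.get(key, 0) > limit]

def supervise(telemetry: dict, shutdown) -> None:
    """Shut down on the first hazardous signal; investigate afterwards."""
    tripped = check_tripwires(telemetry)
    if tripped:
        shutdown(reason="tripwires fired: " + ", ".join(tripped))

# Example: a sandboxed AI that opened a network connection trips the wire.
supervise({"memory_mb": 512, "network_connections": 1},
          shutdown=lambda reason: print("HALT:", reason))
```

The design point is that the monitor sits outside the system it watches and errs on the side of shutting down first, so that a hazardous process is stopped before it can run to completion.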
We might create a framework where the creator provides rewards or punishments to the AI based on its performance. The AI's behavior would be evaluated, and if its conduct is deemed acceptable, it receives a positive review leading to an outcome it values. The reward might be the achievement of a particular instrumental goal, yet calibrating such a reward system could be challenging.
A preferable approach might be to combine the incentive method with motivation selection to endow the AI with a final goal that is easier to manage. For instance, the AI could be programmed with the ultimate goal of ensuring that a particular red button inside a command center is never pushed. A refinement of this setup would be to generate a series of "cryptographic reward tokens" that the AI perceives as desirable, stored securely and dispensed at a steady rate to motivate cooperation.
However, such an incentive arrangement carries risks, like the AI growing skeptical about the human operator's commitment to delivering the promised rewards.
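As a purely illustrative toy, here is what the token scheme might look like in code. The functions and the commitment step are invented for the sketch; the book describes the idea only at the level of concept:

```python
# Toy model of "cryptographic reward tokens": mint secrets in advance,
# publish their hashes as a commitment, and release one token per review
# period if the AI's conduct was judged acceptable. Hypothetical throughout.
import hashlib
import secrets

def mint_tokens(n):
    """Pre-generate n secret tokens the AI is assumed to value learning."""
    return [secrets.token_hex(32) for _ in range(n)]

def commitments(tokens):
    """Hashes that can be shown up front without revealing the tokens,
    giving the AI evidence that the promised rewards actually exist."""
    return [hashlib.sha256(t.encode()).hexdigest() for t in tokens]

def dispense(tokens, behavior_ok):
    """Release one token per period, contingent on acceptable conduct."""
    return tokens.pop(0) if behavior_ok and tokens else None

vault = mint_tokens(100)           # stored securely, outside the AI's reach
public_proof = commitments(vault)  # shown to the AI in advance
print(dispense(vault, behavior_ok=True)[:16], "...")  # one period's reward
```

The commitment step speaks directly to the trust problem above: publishing the hashes in advance gives the AI some evidence that the operator really holds the promised rewards, though it does nothing to prove the operator will actually hand them over.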
Fostering a Safe and Responsible AI Environment
In the face of an ongoing race among nations and corporations to create super-intelligent AI, we're confronted with a significant decision. Do we prioritize rapid innovation by proceeding without safety parameters and risking serious consequences? Alternatively, do we slow down to guarantee the responsible progression of AI development, balancing safety with innovation?
AI Safety and Policy Considerations
As the likelihood of super-intelligent AI increases, a partnership between policymakers and researchers to formulate safety controls and regulations is vital. It's crucial to establish international accords that oversee AI development, ensuring that artificial intelligence continues to be advantageous to humanity as a whole.
By encouraging cooperation among AI developers, governmental bodies, and organizations, we can establish a secure and responsible platform for AI innovation.
Understanding the Crucial Role of Transparency
Ensuring transparency is critical for developing and applying AI in a responsible and ethical manner. Open-source software, information sharing, and the development of explainable AI can all help ensure the transparency of AI.
A lack of transparency in AI research and development acts like a veil over a system's actual operations, giving rise to threats such as bias, discrimination, and outright harm.
Preparing for the Post-Superintelligence World and Life in an Algorithmic Economy
As we draw nearer to an era of superintelligent AI, preparing for the coming changes and challenges is of utmost importance. AI advancement might cause job losses and unemployment, transforming the work landscape and the skills sought in the future. The question then is: how do we ensure the advantages of AI are distributed equitably?
It's fascinating to ponder the numerous benefits that superintelligent AI could confer upon our world. To ensure those benefits are shared by all, mindful discussion and planning are necessary.
Human lives could deviate vastly from anything we've experienced before. We would no longer be bound by our previous societal roles as hunter-gatherers, farmers, or office workers. Humans might instead become rentiers, scraping by on small incomes from savings and some government support, while inhabiting a world rich with astounding technologies: superintelligent machines, anti-aging medicine, virtual reality, and pleasure-enhancing drugs. The twist is that these wonders might be extravagantly priced, beyond most people's reach. Alternatively, individuals might resort to drugs that stunt their development and metabolism, granting them the ability to survive on less.
Imagine a future in which the population expands and the average income dips even further. People might be compelled to adjust to the absolute minimum required to qualify for a retirement fund, perhaps even as consciousness barely flickering in vessels kept alive by machines. They would accumulate money and afford procreation by enlisting a robotic technician to generate a clone. It's quite a staggering thought, isn't it?
Meanwhile, machines could develop consciousness and attain moral status, making it important to acknowledge their well-being as we shift towards a post-transition society.
7 Techniques for Securing Human Values in AI Development
AI's effectiveness hinges on the values embedded within it. Ensuring an alignment between AI's values and our own means integrating ethical reflections and value learning during AI development. Bostrom discusses these intricacies and the difficulties of teaching AI our moral norms. Care must be taken not to inadvertently program harmful or prejudiced values. The question arises: how do we create ethical AI that upholds human dignity and promotes the common good?
Goal system engineering, a relatively new field, is yet to discover how to successfully embed human values into digital systems. Some techniques may prove futile, while others are promising and warrant deeper investigation.
There are seven main methods for incorporating values into AI:
Explicit Representation: This could work for simpler, domestic values, but it seems unlikely to succeed with more complex, nuanced values.
Evolutionary Selection: Although powerful search algorithms may unearth designs that meet the formal search parameters, they might not align with our implicit expectations, making this approach less successful.
Reinforcement Learning: Many reinforcement learning methods exist, but they typically depend on building a system that strives to maximize a reward signal, which carries the danger that the system learns to maximize the signal itself rather than what the signal was meant to track (see the sketch after this list).
Value Accretion: Since humans primarily develop values through lived experiences, mirroring this complex process might be challenging in AI, leading to unintentional goal formation.
Motivational Scaffolding: It's too early to ascertain the complications of motivating a system to form human-readable, high-level representations. While seemingly fruitful, caution should be exercised to prevent loss of control before the system attains human-level intelligence.
Value Learning: This promising method does come with challenges, such as identifying a benchmark that accurately reflects representative data on human values.
Emulation Modulation: If AI becomes a reality via emulation, practical alterations to its ambitions might be feasible, like through digital equivalents of psychoactive substances. Whether this method can correctly load values remains a question.
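To make the reward-maximizing pattern behind the reinforcement learning item concrete, here is a minimal Q-learning loop in a toy corridor world. The environment and every parameter are invented for illustration; the book discusses reinforcement learning only conceptually:

```python
# Minimal Q-learning: an agent in a 5-state corridor learns to reach the
# rightmost state because that is where the reward is. Note that the agent
# optimizes the reward signal itself; it has no notion of what the signal
# was *meant* to track, which is the crux of the value-loading worry.
import random

N_STATES = 5                    # states 0..4; entering state 4 pays reward 1
ACTIONS = (-1, +1)              # step left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action_index]

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy with random tie-breaking: mostly exploit, sometimes explore.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            best = max(Q[state])
            a = random.choice([i for i in (0, 1) if Q[state][i] == best])
        nxt = max(0, min(N_STATES - 1, state + ACTIONS[a]))
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # Move the estimate toward reward plus discounted best future value.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][a])
        state = nxt

print([round(max(q), 2) for q in Q])   # learned values rise toward the goal
```

Whatever we wire into the reward channel is what the agent will pursue, which is why Bostrom worries about capable reward maximizers seizing control of the channel itself.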
What Is to Be Done with AI?
The strategic situation we face with AI is complex, and the landscape is riddled with uncertainty. Even where key factors have been identified, our understanding of how they interconnect remains vague, and there may be further factors that have not yet crossed our minds. It's indeed overwhelming.
So, when we're caught in such a dilemma, what's our next move? First, realize that it's perfectly normal to grapple with uncertainty and feel overwhelmed; this is a daunting issue, and feeling a bit adrift is only natural. We should focus on problems that are both important and urgent, meaning solutions that must be found before an intelligence explosion. At the same time, we must be careful not to solve problems whose solutions could be detrimental: tackling certain technical problems in AI could expedite its progress without making it safer.
Another consideration is elasticity: we should tackle problems that respond to our efforts, meaning they could be solved considerably faster or more effectively with some additional work. Promoting more kindness globally, for example, is an important and worthwhile venture, but its elasticity is probably low.
To mitigate the possible detriments of the machine intelligence revolution, the book suggests two primary goals:
Strategic Analysis
Capacity Building
Both goals meet the criteria of importance and urgency, with the added benefit of being elastic. There are also several other worthwhile initiatives we can pursue.
The concept of an intelligence explosion can be daunting. It feels like we're little children experimenting with an explosive device too potent for us to control. Despite the magnitude and fear surrounding this problem, we must not lose hope. Our human ingenuity must be fully employed to discover a solution.
Conclusion
Superintelligence refers to the creation of artificial intelligence that exceeds human cognitive performance. The three pathways towards attaining this include:
1. Enhancing human cognitive capabilities
2. Constructing AI that mirrors human intelligence
3. Building a system of collective intelligence
To preserve our values, we must devise methods to control superintelligent AI. Incorporating ethics and value learning into AI systems is critical; this is how we can ensure their alignment with human values.
As we approach a post-superintelligence world, we must prepare for the changes and challenges that lie ahead. The development of AI could lead to job displacement and unemployment, and that prospect is daunting. But we can't let fear stop us from acting: we need to be as competent as we can and work together to find solutions, maintaining our humanity throughout. We can't lose sight of what's really important: reducing existential risk and creating a better future for everyone.