our tribe against the evil, alien-looking outsiders. We see similar dynamics at play, to a lesser
degree, when people react negatively to "foreigners stealing our jobs" or "Asians who are
outcompeting us." People don't want their kind to be replaced by another kind that has an
advantage.
But when we think about the situation from the AI's perspective, we might feel
differently. Anthropomorphizing an AI's thoughts is a recipe for trouble, but regardless of the
specific cognitive operations, we can see at a high level that the AI "feels" that what it's trying
to accomplish is the most important thing in the world, and it's trying to figure out how it can
do that in the face of obstacles. Isn't this just what we do ourselves?
This is one reason it helps to really internalize the fact that we are robots too. We have
a variety of reward signals that drive us in various directions, and we execute behavior aiming
to increase those rewards. Many modern-day robots have much simpler reward structures and
so may seem duller and less important than humans, but it's not clear this will remain true
forever, since navigating in a complex world probably requires a lot of special-case heuristics
and intermediate rewards, at least until enough computing power becomes available for more
systematic and thorough model-based planning and action selection.
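To make the reward-signal framing concrete, here is a minimal sketch in Python. Everything in it (the toy grid world, the `heuristic_agent` and `planning_agent` names, the horizon of 4) is a hypothetical illustration rather than anything from the text; it contrasts an agent that greedily follows an intermediate reward heuristic with one that spends far more computation on systematic model-based planning.

```python
# Hypothetical sketch: two reward-driven agents in a 4x4 grid world.
# Neither is "the" algorithm from the text; both just illustrate the
# contrast between heuristic reward-following and model-based planning.
import itertools

GOAL = (3, 3)  # the state the agents are "driven" toward
ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def step(state, action):
    """Deterministic world model: apply an action, clamped to the grid."""
    dx, dy = ACTIONS[action]
    return (min(max(state[0] + dx, 0), 3), min(max(state[1] + dy, 0), 3))

def reward(state):
    """Intermediate reward signal: negative Manhattan distance to GOAL."""
    return -(abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1]))

def heuristic_agent(state):
    """Greedy: pick whichever single action most raises immediate reward."""
    return max(ACTIONS, key=lambda a: reward(step(state, a)))

def planning_agent(state, horizon=4):
    """Model-based: simulate every action sequence up to `horizon` steps
    and return the first action of the highest-reward sequence."""
    def rollout(s, seq):
        total = 0
        for a in seq:
            s = step(s, a)
            total += reward(s)
        return total
    best = max(itertools.product(ACTIONS, repeat=horizon),
               key=lambda seq: rollout(state, seq))
    return best[0]

if __name__ == "__main__":
    s = (0, 0)
    for _ in range(6):
        a = planning_agent(s)  # try heuristic_agent(s) for the cheap version
        s = step(s, a)
        print(a, "->", s, "reward:", reward(s))
```

In this toy world the two agents behave identically, since the heuristic happens to be a perfect guide to the goal; the point of the contrast is cost. The planner evaluates 4^4 = 256 simulated futures per move, which is exactly the kind of computational expense the paragraph above suggests becomes worthwhile only as more computing power becomes available.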
Suppose an AI hypothetically eliminated humans and took over the world. It would
develop an array of robot assistants of various shapes and sizes to help it optimize the planet.
These would perform simple and complex tasks, would interact with each other, and would
share information with the central AI command. From an abstract perspective, some of these
dynamics might look like ecosystems in the present day, except that they would lack inter-organism competition. Other parts of the AI's infrastructure might look more industrial.
Depending on the AI's goals, perhaps it would be more effective to employ nanotechnology
and programmable matter rather than macro-scale robots. The AI would develop virtual
scientists to learn more about physics, chemistry, computer hardware, and so on. They would
use experimental laboratory and measurement techniques but could also probe depths of
structure that are only accessible via large-scale computation. Digital engineers would plan
how to begin colonizing the solar system. They would develop designs for optimizing matter
to create more computing power, and for ensuring that those helper computing systems
remained under control. The AI would explore the depths of mathematics and AI theory,
proving beautiful theorems that it would value highly, at least instrumentally. The AI and its
helpers would proceed to optimize the galaxy and beyond, fulfilling their grandest hopes and
dreams.
When phrased this way, we might think that a "rogue" AI would not be so bad. Yes, it
would kill humans, but compared with the AI's vast future intelligence, humans would be
like ants in a field that get crushed when an art gallery is built on that land.
Some might object that sufficiently mathematical AIs would not "feel" the happiness
of accomplishing their "dreams." They wouldn't be conscious because they wouldn't have the
high degree of network connectivity that human brains embody. Whether we agree with this
assessment depends on how broadly we define consciousness and feelings.
In any event, it's possible that the first super-human intelligence will be a brain
upload rather than a bottom-up AI, and most of us would regard such an upload as conscious.