Artificial intelligence and superintelligence
Inari Listenmaa
2015-04-27

I. J. Good, 1965

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make…

I. J. Good, 1965

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

Irving John Good

Later on Good changed his mind:

[The paper] “Speculations Concerning the First Ultra-intelligent Machine” (1965) . . . began: “The survival of man depends on the early construction of an ultra-intelligent machine.” Those were his [Good’s] words during the Cold War, and he now suspects that “survival” should be replaced by “extinction.” He thinks that, because of international competition, we cannot prevent the machines from taking over. He thinks we are lemmings. He said also that “probably Man will construct the deus ex machina in his own image.”

Outline

1. History & terminology

2. Ethical questions of today's AI systems

3. Moral status of an intelligent machine

4. Superintelligence

Presenter Notes

e.g. insurance companies -- who pays if a driverless car gets into an accident? Also: machine learning algorithms.

1. History & terminology

Rough AI history

1960s–1980s: high-level symbolic manipulation (GOFAI)
  • rule-based
  • limited domains
  • not robust
1990s onwards: organic, learning approaches
  • neural networks, evolutionary algorithms
  • probability theory, expected utility maximisation (see the sketch after this list)
  • graceful degradation
  • reasoning under uncertainty
Modern AI systems
  • good at specific tasks
  • “As soon as it works, no one calls it AI anymore” (John McCarthy)
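
Many of these systems boil down to expected utility maximisation. As a toy illustration (the actions, probabilities, and utilities below are invented, not from the slides), an agent scores each action by the probability-weighted utility of its outcomes and picks the best:

```python
# Toy expected-utility maximiser. All actions, probabilities, and
# utilities below are invented purely for illustration.

# Each action maps to a list of (probability, utility) pairs,
# one pair per possible outcome.
actions = {
    "take_umbrella":  [(0.3, 8), (0.7, 6)],    # fine in rain, mildly annoying if dry
    "leave_umbrella": [(0.3, 0), (0.7, 10)],   # bad in rain, great if dry
}

def expected_utility(outcomes):
    # Expected utility: sum over outcomes of P(outcome) * utility(outcome).
    return sum(p * u for p, u in outcomes)

for action, outcomes in actions.items():
    print(action, "->", expected_utility(outcomes))   # 6.6 and 7.0

# The agent simply picks the action with the highest expected utility.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print("chosen:", best)   # leave_umbrella
```

The decision rule really is this simple; the hard part in real systems is estimating the probabilities and utilities.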

Presenter Notes

Drive metros, beat world champions at chess, perform surgery, suggest ads, deduce that we live in Gothenburg and work at Chalmers, detect credit card fraud, translate websites, recognise speech, make jokes: “What do you get when you cross an optic with a mental object? An eye-dea”

Some terminology

Weak AI/Narrow intelligence
  • aid human thought; solve a specific problem (e.g. play chess)
Strong AI/General intelligence
  • attempt to mechanise human-level intelligence

Presenter Notes

What is intelligence?

Hence, an AI is a machine that is capable of matching or exceeding human performance in most areas, whatever its metaphysical status. So a true AI would be able to converse with us about the sex lives of Hollywood stars, compose passable poetry or prose, design an improved doorknob, guilt trip its friends into coming to visit it more often, create popular cat videos for YouTube, do original AI research, and so on.
And yet there have been many domains that seemed to require common sense that have been taken over by computer programs that demonstrate no such ability: playing chess, answering tricky Jeopardy! questions, translating from one language to another, etc. In the past, it seemed impossible that such feats could be accomplished without showing “true understanding,” and yet algorithms have emerged which succeed at these tasks, all without any glimmer of human-like thought processes.

Link: https://www.youtube.com/watch?v=zVLA8fvBczg

2. Ethical questions of today's AI systems

Desirable properties of today's AIs

Transparency and predictability
  • Machine learning algorithms that decide bank loan applications (see the sketch after this list)
  • What will insurance companies do in case of an accident involving a self-driving car?
  • Who will be convicted if a robot weapon kills a bystander?
Robustness
  • Machine vision detecting whether passengers have bombs on a plane
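
To make the transparency worry concrete, here is a toy contrast (the rules, weights, and applicant data are all invented): a hand-written rule set can explain every decision, while a fitted model produces only a number.

```python
# Hypothetical loan decision, two ways. All rules, weights, and applicant
# data below are invented for illustration.

applicant = {"income": 32_000, "debt": 15_000, "years_employed": 1}

def decide_by_rules(a):
    # Transparent: every rejection carries a human-readable reason.
    reasons = []
    if a["income"] < 25_000:
        reasons.append("income below 25 000")
    if a["debt"] > 0.4 * a["income"]:
        reasons.append("debt over 40% of income")
    if a["years_employed"] < 2:
        reasons.append("employed for less than 2 years")
    return ("reject", reasons) if reasons else ("approve", [])

def decide_by_model(a):
    # Opaque: weights fitted to past data (here just made up). The output
    # is a score with no built-in explanation of why it is what it is.
    weights = {"income": 0.00004, "debt": -0.00006, "years_employed": 0.2}
    score = sum(weights[k] * a[k] for k in weights)
    return ("approve" if score > 0.5 else "reject", round(score, 2))

print(decide_by_rules(applicant))  # ('reject', ['debt over 40% of income', ...])
print(decide_by_model(applicant))  # ('approve', 0.58) -- approved, but why?
```

The two procedures even disagree on this applicant, and only the first one can say why.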

Presenter Notes

http://www.defmin.fi/files/3138/MATINE_public_Laakasuo.pdf

Toaster safety vs. AI safety

Safety concerns for a toaster (or a nuclear reactor):
  • Envision the specific events that might go wrong and prepare for them
An AI, even a task-specific AI, does things its creators don't envision
  • Deep Blue plays chess better than its creators, and its individual moves aren't predictable (see the sketch below)
  • To act safely across many domains, an AI needs to extrapolate the consequences of its actions
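
A minimal sketch of why a designer can write every line of a game-playing program and still not know what move it will make (the game and its scoring are invented stand-ins; a real chess engine is vastly more elaborate):

```python
# Toy stand-in for Deep Blue's situation: the programmer supplies only an
# evaluation function and a search rule; the concrete move emerges from
# the search. The game and scoring below are invented.

def moves(state):
    # A move adds 1, 2, or 3 to the state (a made-up game).
    return [1, 2, 3]

def evaluate(state):
    # A deliberately non-obvious scoring rule.
    return (state * 7) % 11

def minimax(state, depth, maximising):
    if depth == 0:
        return evaluate(state), None
    best_val = float("-inf") if maximising else float("inf")
    best_move = None
    for m in moves(state):
        val, _ = minimax(state + m, depth - 1, not maximising)
        if (maximising and val > best_val) or (not maximising and val < best_val):
            best_val, best_move = val, m
    return best_val, best_move

# The programmer wrote every line above, yet has to run the search to
# find out which move the program actually picks:
print(minimax(0, depth=4, maximising=True))
```

Scale the state space up to chess and the gap between “wrote the program” and “can predict its moves” becomes the designers' everyday experience.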

3. Moral status of an intelligent machine

Principles

A being has moral status if it is sentient and sapient

  • Uploading a mind vs. creating one from scratch

Substrate and ontogeny non-discrimination

  • It doesn't matter whether a mind is biological or digital, nor whether it was born through random genetic recombination or deliberately generated

Minds with exotic properties

Unlimited copying / massive desire for reproduction
  • Questions of personal identity
  • Social welfare to support AIs' reproductive rights?
Subjective rate of time
  • Should punishment be measured in subjective or objective time?
  • Should we alleviate pain first for a fast AI, which suffers subjectively longer than a human?
Enhanced cognition; enhanced moral status?
  • If a human and an AI both want something and their interests conflict, does the AI's greater desire outweigh the human's?
  • e.g. an animal threatens a human → the human may kill it

Rethinking ethical principles

The ethical principles of today's societies might need to be modified if those societies come to include persons with such properties.

4. Superintelligence

Superintelligence as powerful optimisation


Intelligence explosion: a self-modifying AI won't stop once it has reached human-level intelligence.
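
A deliberately cartoonish sketch of that claim (the growth factor and all numbers are invented): nothing in the loop singles out the human level, so the process shoots straight past it.

```python
# Toy intelligence explosion: each generation designs a slightly smarter
# successor. The growth factor and thresholds are invented; the point is
# only that "human level" is not a stopping condition anywhere below.

HUMAN_LEVEL = 100.0
intelligence = 80.0   # the first self-modifying system, below human level
generation = 0

while intelligence < 10 * HUMAN_LEVEL:
    previous = intelligence
    # Crude assumption: design ability scales with current intelligence,
    # so each generation improves on itself by a constant factor.
    intelligence = previous * 1.25
    generation += 1
    note = "  <- passes human level" if previous < HUMAN_LEVEL <= intelligence else ""
    print(f"generation {generation}: {intelligence:7.1f}{note}")
```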

Intelligence and values that are meaningful to humans are orthogonal:

A superintelligence whose super-goal is to calculate the decimal expansion of pi will never reason itself into benevolence. It would be quite happy to convert all the free matter and energy in the universe (including humans and our habitat) into specialized computers capable only of calculating the digits of pi.
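
The quote can be turned into a crude sketch (every resource name and number is invented): the optimiser below is perfectly competent at its job, but since humans appear nowhere in its objective, more optimisation power only converts more of their habitat.

```python
# Toy "pi maximiser": allocate all available resources to computing digits
# of pi. All resources and yields are invented for illustration. Note what
# is *absent* from the objective: any term for human welfare.

resources = {"factories": 10, "power_plants": 5, "habitats": 3}

def pi_digits_per_unit(resource):
    # Digits of pi per time step gained by converting one unit into a
    # pi-computing machine. Habitats convert just fine.
    return {"factories": 100, "power_plants": 250, "habitats": 80}[resource]

def plan(resources):
    # A pure maximiser: convert everything, highest-yield first. Nothing
    # here distinguishes a habitat from a factory.
    order = sorted(resources, key=pi_digits_per_unit, reverse=True)
    return [(r, resources[r]) for r in order]

print(plan(resources))   # habitats are converted like everything else
print("pi digits per step:",
      sum(pi_digits_per_unit(r) * n for r, n in plan(resources)))
```

Making the optimiser smarter improves the plan, not the goal; benevolence has to be written into the objective, it doesn't emerge from competence.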

Presenter Notes

Being highly intelligent doesn't mean that an AI would converge on goals that are meaningful to humans

Great power, great responsibility

“Chimpanzees are intelligent, but it's still humans whose actions decide what will happen to them”

  • Once we have a superintelligent AI, it will likely be in power; those who refused to use it would be at a disadvantage

A superintelligent AI in power doesn't just let things happen

  • It will influence things much more than humans
  • It will calculate the consequences of its actions much better than humans
  • Singleton scenario

→ There is no difference for an AI between a sin of commission (doing something bad) and a sin of omission (not doing something good).

Presenter Notes

But it gets worse: it seems we need to solve nearly all of moral philosophy in order to program a safe AI.

The AIs would be pulling on so many levers of influence all the time that there would be no such thing as a neutral act for them. If they buy a share of stock, they end up helping or hindering sex trafficking in Europe—and they can calculate this effect. In the same way, there is no difference for an AI between a sin of commission (doing something bad) and a sin of omission (not doing something good). For example, imagine someone is getting mugged and murdered on a dark street corner. Why is the mugger there? Because their usual “turf” has been planted with streetlights, at the AI’s instigation. If the streetlights hadn’t been put up, the murder wouldn’t have happened—or maybe a different one would have happened instead. After a very short time in operation, the AI bears personal responsibility for most bad things that happen in the world. Hence, if someone finds themselves in a deadly situation, it will be because of a decision the AI made at some point. For such an active AI, there is no such thing as “letting events just happen.” So we don’t need the AI to be as moral as a human; we need it to be much, much more moral than us, since it’s being put in such an unprecedented position of power.

Potential solutions

Try to limit the AI's capability ("AI box")

  • AI as an instrument, with humans overriding its decisions
  • A superintelligent agent would find ways to escape
  • Humans would grow dependent on the AI

Try to make it friendly from the start (value selection)

  • We can't count on a fixed list of today's moral principles
  • A superintelligent AI needs to be adaptively moral, recognising the moral processes of societies

Presenter Notes

For instance, one popular suggestion that reappears periodically is to confine the AI to only answering questions—no manipulators, no robot arms or legs. This suggestion has some merit, but often those who trot it out are trapped in the “Terminator” mode of thinking—if the AI doesn’t have a robot body bristling with guns, then it can’t harm us. This completely fails to protect against socially manipulative AIs, against patient AIs with long time horizons, or against AIs that simply become so essential to human societies and economies that we dare not turn them off.

Another common idea is to have the AI designed as a mere instrument, with no volition of its own, simply providing options to its human controller. But that image of a safe, inert instrument doesn’t scale well: as we’ve seen, humans will be compelled by our slow thinking to put more and more trust in the AI’s decisions. So as the AI’s power grows, we will still need to code safety precautions.

Other approaches, slightly more sophisticated, acknowledge the complexity of human values and attempt to instil them into the AI indirectly. The key features of these designs are social interactions and feedback with humans. Through conversations, the AIs develop their initial morality and eventually converge on something filled with happiness and light and ponies. These approaches should not be dismissed out of hand, but the proposers typically underestimate the difficulty of the problem and project too many human characteristics onto the AI. This kind of intense feedback is likely to produce moral humans, not necessarily moral AIs.