
Slide 1- Artificial Intelligence

So we're going to talk about what artificial intelligence actually is.

 What can AI do?
 What can AI not do?
 What's the difference between what we think is possible and what actually is possible?

So one good way to see what people are thinking and dreaming about technology, and in
particular artificial intelligence, is to look at things like science fiction.

C3PO
 Who's the big gold one? C3PO (humanoid robot from Star Wars).
o So no, we haven't completely been able to build C3PO.
o How about the other guy?

What does R2-D2 do?

o R2-D2 is a fictional robot character in the Star Wars franchise created by George Lucas. He has appeared in nine of the ten Star Wars films to date.

we don't actually know how to build AI systems that do a little bit of everything.

 We know how to build systems that do one thing very well if they have enough data, if
they have enough compute.

AI Generations
But fundamentally, this is the '70s, and in the '70s, this is what we think about AI and what that kind of technology could mean for the future.

What was going on in the movies in the '80s?

Killer robots from the future. It's getting a little darker, if you actually look at these Terminators.

The difference here is the software

o Anyway, in the '80s, we start worrying about hardware. Maybe hardware could be
scary.
o Maybe this technology that we're building could turn against us.

In the '90s, we realized that software can be scary. Software can be very scary.

o And in fact, that's the difference: the software, not the hardware. And so you can see here where you're moving from hope that this technology can make our lives better to, what if it doesn't?

Well, we're shifting now from worrying that AIs will defeat us or rise up, to a slightly different worry:

 What if they're indistinguishable from us?
 What if we get replaced?
 The fear here is not so much about a war, but a replacement.

Reality Check
So let's do a reality check on where we are and what we can and can't do.

So let's switch from science fiction and how we're thinking about the future to what's in the
news.

 So it used to be that science fiction writers were the ones who got to think about, what
is AI going to do and how could it change the world?
 But now everyday reporters get to watch developments and think about, what's
happening right now and how can we communicate that to society? So you've probably
seen a lot of these things in the news.

What's this? This is IBM's Watson.

 IBM Watson is a cognitive computing platform originally developed by IBM to answer questions on the quiz show Jeopardy.
 It was inconceivable 30 years ago that we would have an AI system that could beat humans at Jeopardy.
 It takes data, accessing all of these things that we have written down out there on the web and in databases, and computation to connect that up and do reasoning to figure out how to answer these questions, reason about confidences, and so on.
 Really cool thing.
 GO Game:You probably saw the headlines about Go. Go is an abstract strategy board
game for two players, in which the aim is to surround more territory than the
opponent. The game was invented in China more than 2,500 years ago and is believed
to be the oldest board game continuously played to the present day.
 For a long time, things like chess and checkers were games that we could play very well and we understood very well how to play. But Go was something that was very hard to make progress on with AI methods.

There's all these things that you see on the news that we can do. But there's a lot of
things we can't do.

Autonomous Cars

 So for example, automated driving, autonomous driving.


 It's amazing what we can do with autonomous vehicles right now, but we're not
there yet.
 To go back to science fiction, we do not yet have KITT, if you remember Knight Rider from the '80s. We can't do that yet.

So famously, Elon Musk is worried

 that the biggest risk we face as a civilization is actually artificial intelligence.


 What if this stuff works?
 What if it works really well?
 What if it doesn't do what we expect or what we want?
 How do we define
 what we want and how to keep these things safe?

And you can start to see in these forward-looking worries starting to reflect what you see in
the movies.

What if AI ruins our civilization, either in a spectacular or subtle way?

What's the difference between what we can do, what we can't do?

What are the big ideas that are going to keep recurring as we go through generation after
generation of solving things, hitting walls, breaking through those walls, and advancing this
technology?

So what is AI?
Actually, famously, AI is a self-defeating definition.

Because if you say, well, AI is all of those things that require human intelligence, then as soon as we figure out how to get a machine to do one of them, apparently they didn't require human intelligence after all.

One way to answer is to look at all the different fields that orbit each other in the AI sphere and how they relate to each other.

One definition: it's the science of making machines that think rationally.
Let's take that to mean, think correctly.

And this is the logicist tradition.


 This goes back to Plato and Aristotle and things like modus ponens, figuring out,
o the rule of logic which states that if a conditional statement (‘if p then q ’)
is accepted, and the antecedent ( p ) holds, then the consequent ( q ) may
be inferred.

 Formally: from P and P → Q, infer Q (written P, P → Q ⊢ Q), where P, Q and P → Q are statements (or propositions) in a formal language and ⊢ is a metalogical symbol meaning that Q is a syntactic consequence of P and P → Q in some logical system.
 An example of an argument that fits the form modus ponens:
o If today is Tuesday, then John will go to work.
o Today is Tuesday.
o Therefore, John will go to work.
 What are the rules that govern correct thought?
 If we have rules that govern correct thought, we can automate that and turn those logical rules into theorem provers and deduction systems.
And thereby turn computation into reasoning and, thereby, behavior.
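To make that idea concrete, here is a minimal sketch, my own illustration rather than anything from the lecture, of forward chaining with single-premise rules; the function and proposition names are made up.

```python
# A toy deduction engine: apply modus ponens over "if P then Q" rules
# until no new facts can be derived.

def forward_chain(facts, rules):
    """facts: set of known propositions; rules: list of (premise, conclusion) pairs."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            # modus ponens: the premise holds and "premise -> conclusion" is a rule
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Example from above: "If today is Tuesday, then John will go to work."
rules = [("today_is_tuesday", "john_goes_to_work")]
print(forward_chain({"today_is_tuesday"}, rules))
# {'today_is_tuesday', 'john_goes_to_work'}
```

Real theorem provers handle multi-premise rules, quantifiers, and search, but the basic move is the same: mechanically applying rules of correct thought.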
So this is the oldest approach.
This is basically what drove AI early on.
In, for example, the '80s, most of the methods were based on this idea of thinking in a
correct way. And this didn't scale.
Another approach that has come up a lot historically is building machines that think like
people.
 This really isn't so much AI anymore.
 This is actually more connected to cognitive science.
o This is trying to figure out, what's going on in our heads?
Cognitive Science
What are these thought processes that our brains do?
 This is actually really important to AI, because we have exactly one general purpose
intelligence system.
o we don't really know yet what goes on inside our minds.
o Over time, it became less and less important what exactly the process of reasoning was, compared to the outcome and the action.
o For example, when we talk about chess playing systems.

 They work very differently than humans, but they come up with similar moves.
So people decided that maybe we should be building systems not based on how they think, but how they act.
This is actually a very important change: from the underlying thought processes to the resulting actions.
Turing Test
And who's heard of the Turing test?

Very famous initial AI idea from Alan Turing, which says, one way to tell intelligence is to put a
computer in one room and a human in another room, and have a human talk, presumably by
typewriter, to both entities and see if you can tell them apart.
o And if they're functionally equivalent, well, you've achieved something.
o Maybe you've achieved intelligence, or maybe you've just achieved human bluffability.
o Maybe that's like intelligence, maybe it's not.
Rational
When we say rational in everyday language, we maybe mean something like, unemotional.

But when we talk about rational in this course, we mean something very specific and technical.
It's not about what those goals are.

o You can have a robot that's designed to clean up dirt as well as possible.
o You might call it a vacuum cleaner.
o You can have another robot that's designed to make things as messy as possible.
o And those two things might both be rational if they're optimally achieving the goals that have been set for them.
o So rationality only concerns the decisions that are being made, not the thought process
behind them.
o There can be many ways to get to a correct decision.
o You can get that through computation, through looking at data, past experiences.
o And those goals are always going to be expressed in terms of the utility of different
outcomes.
o You're in a situation, you have some actions you can take, it'll affect the world in some
possibly unknown way.
o And then you have utilities over the results.
o We'll unpack all of this during the rest of the lecture.
o But being rational, in a nutshell, is maximizing your expected utility.
o So this is an introduction to artificial intelligence.
o It would probably be better if we titled this course Computational Rationality.

Maximize your expected utility.

What does maximize mean?

o It's a good lesson for life.
o But it's also really going to be the rest of this course; we're going to take our time unpacking these words.
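As a first pass at what maximizing expected utility means computationally, here is a minimal sketch; it is my own illustration, and the actions, outcomes, and numbers are invented.

```python
# For each available action, average the utility of its possible outcomes
# weighted by their probabilities, then pick the action with the best average.

def expected_utility(action, outcome_probs, utility):
    """outcome_probs[action] maps each outcome to its probability."""
    return sum(p * utility[outcome] for outcome, p in outcome_probs[action].items())

def rational_choice(actions, outcome_probs, utility):
    return max(actions, key=lambda a: expected_utility(a, outcome_probs, utility))

# Hypothetical example: touch the fire or stay away.
outcome_probs = {
    "touch_fire": {"burned": 0.9, "fine": 0.1},
    "stay_away":  {"fine": 1.0},
}
utility = {"burned": -100, "fine": 0}
print(rational_choice(["touch_fire", "stay_away"], outcome_probs, utility))  # stay_away
```

The rest of the course is about where those probabilities and utilities come from, and how to make this choice when the world is too big to enumerate.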

What about the brain


So like I said, AI's really hard, but we have an existence proof of an intelligent system.

There are 700 of them in this room right now.

So why aren't we done?

Why haven't we reverse engineered this?

Well, one, brains are very good at making rational decisions, but they're not perfect and
you can get distracted by their limitations.

But really, the main issue is that brains aren't modular like software.

We can't pull bits out and look at them, and see what they do, and put them back in,
and replicate them, and follow
that kind of modularity.

To the extent that brains are modular, it doesn't look like software modularity.

Not in a way that we've been able to exploit.

There's a famous saying that brains are to intelligence as wings are to flight.

People spend a long time trying to get mechanical flight to work.

And we had existence proofs, we had birds.

And one of the key things that made it possible to start getting automated flight was
when we stopped trying to make them flap their wings.

So sometimes you want to follow the existence proof and sometimes you want to build
things differently, because there's something about the engineering context
which changes those assumptions.

But that doesn't mean we haven't learned anything from the brain. And we certainly
have a ton more to learn from the brain.
There are a couple of things we learn from the brain.
One of the main ones is that there are really two components to making good decisions.
Remember, that's what AI is going to be about, making good decisions in a context.
One is memory.
Data.
You can make a good decision because you remember your experiences in the past.
Or an advantage humans have is remembering reading about other people's
experiences, so you don't actually have to make all the mistakes yourself.
So for example, one reason I might not touch that fire is, I've done that before and it
didn't go well, so I'm not going to do it again.
Another way to make good decisions is simulation, which is basically computation.
Unrolling the consequences of your actions according to a model.
What's going to happen next?
And playing what if in your head so that you can think through the consequence of
things without actually trying them.
So maybe I don't touch that fire because I can like play it forward in my head and realize
this is going to end poorly based on my model of how things work.
And of course for humans, those things are all intermeshed.
That model came from data and experiences.
And so in this class, one of the things we're going to do is talk a lot about both how
these two ways of making decisions are different, and also about how to interleave
them as
we get further into the course.
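Here is a tiny sketch of that contrast, my own illustration rather than course code; the situations, the model, and the numbers are all invented.

```python
# Two routes to the same decision about touching a fire.

# Memory / data: remembered experiences, stored as (situation, action) -> how it went.
experience = {("near_fire", "touch"): -100, ("near_fire", "dont_touch"): 0}

def decide_from_memory(situation, actions):
    return max(actions, key=lambda a: experience.get((situation, a), 0))

# Simulation / computation: a model predicts what happens next,
# and a utility function scores the predicted outcome.
def model(situation, action):
    return "burned" if (situation, action) == ("near_fire", "touch") else "fine"

def decide_by_simulation(situation, actions, utility):
    return max(actions, key=lambda a: utility[model(situation, a)])

actions = ["touch", "dont_touch"]
print(decide_from_memory("near_fire", actions))                                 # dont_touch
print(decide_by_simulation("near_fire", actions, {"burned": -100, "fine": 0}))  # dont_touch
```

The interesting cases, which come later in the course, are when the model itself has to be learned from data, so the two routes blend together.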
So to look at this course broadly and think about, what are you guys going to go through
for the next semester?
Well, the first part is really about getting intelligence or smart behavior emerging from
computation.

So we're going to think about search, satisfying constraints, thinking about uncertainty
and adversariality in the world.
And this is going to be about algorithms that, through computation, take a situation and
figure out something smart to do.
So the smart behavior comes from algorithms, from computation.
The second part of the course is going to be about making good decisions and having
intelligence on the basis of data and statistics.
And this is where machine learning comes in.
Here are all of your experiences.

Here's a new situation.

How should you act on the basis of what you've seen previously?

And of course then, we'll be able to interleave these things as we get further into the
course.

And as we go throughout this course, we're going to talk about applications.

What are you actually going to do with all of these methods and all of this intelligent
behavior?

Think about things like natural language, and vision, and robotics, and games. So I think Pieter's going to come and tell you a little bit about what's happened, the story so far in AI.

History of AI
[VIDEO PLAYBACK]

[MUSIC PLAYING]

- The thinking machine.

- Hello again.

With me tonight is Jerome B. Wiesner, Director of the Research Laboratory of Electronics at MIT.

Dr. Wiesner, what really worries me today is, what's going to happen to us if machines
can think?

And what interests me specifically is, can they?

- Oh, that's a very fine question.

If you had asked me that question just a few years ago, I would've said it was very far-
fetched.

Today, I just have to admit, I don't really know.

I suspect if I come back four or five years, I'll say sure.

But it is confusing.

- Well, if you're confused, Doctor,

how do you think I feel?

- We're just really beginning to understand the capabilities of computers.

I've got some film that will illustrate this point, which I think will amaze you.

- That man isn't playing checkers against a computer, is he?

- Sure, and it's playing pretty well.

- Now, which color--

- While most computer scientists saw it as a mere number-cruncher, a small group thought that the digital computer had a much grander destiny.

Being a general-purpose machine, it could be programmed to do things which in humans required intelligence.

Playing games like checkers and chess, and solving brain-teasers.

The field became known as artificial intelligence.

- Can machines really think?

Even the scientists argue that one.

- I'm convinced that machines can and will think.

I don't mean that machines will behave like men.

I don't think for a very long time we're going to have a difficult problem distinguishing a
man from a robot.

And I don't think my daughter will ever marry a computer.

But I think the computer will be doing the things that men do when we say they're thinking.

I'm convinced that machines can and will think in our lifetime.

- I confidently expect that within a matter of 10 or 15 years, something will emerge from a laboratory which is not too far from the robot of science fiction fame.

- They had to reckon with ambiguity when they set out to use computers to translate languages.

- A $500,000 super calculator, most versatile electronic brain known, translates Russian
into English.

Instead of mathematical wizardry, a sentence in Russian is to be fed--

- One of the first non-numerical applications of computers, it was hyped as the solution to the Cold War obsession of keeping tabs on what the Russians were doing.

Claims were made that the computer would replace most human translators.

- At present, of course, you're just in the experimental stage.

When you go in for full scale production, what will the capacity be?

- We should be able to do, with a modern commercial computer, about 1 to 2 million words an hour.

And this will be quite an adequate speed to cope with the whole output of the Soviet Union in just a few hours of computer time a week.

- When do you hope to be able to achieve this peak?

- If our experiments go well, then perhaps within five years or so.

- And finally, Mr. McDaniel, does this mean the end of human translators?

- I would say yes for translators of scientific and technical material.

But as regards poetry and novels, no, I don't think we'll ever replace the translators for
that type of material.

- Mr. McDaniel, thank you very much.

[END PLAYBACK]

PIETER ABBEEL: All right.

So this is dated 1961.

Let's take a look at the history of things that were being worked on at the time.

'40s and '50s were the very early days.

One of the first things people did was build a Boolean circuit model inspired by the
brain.

Turing wrote his Computing Machinery and Intelligence paper, and the Turing test came into existence.

Then a lot of excitement.

Early AI programs played checkers, did some theorem proving, and could do reasoning about geometry.

The name artificial intelligence was coined in 1956.

And then in 1965, a complete algorithm came about for logical reasoning.

So a lot of excitement, a lot of progress is being made.

And people thought this is going to move very fast.

Just like you saw in this video, they said, oh, maybe in five years we'll have machine
translation solved.

We're 55 years later, and we're maybe starting to get there.

So very optimistic views.

From there, transition to knowledge-based approaches.

So people thought, OK, we have these engines that can do logical theorem proving,
reasoning, and so forth.

If we can just put enough knowledge, enough facts together-- fact one implies fact two, those kinds of propositions-- then maybe that allows us to reason about anything. So people started putting more and more information into these systems.

But it didn't lead where people hoped for.

In fact, things didn't work as well as hoped for at all.

Not much money was made with AI.

Not much progress was made anymore along these lines.

And this settled into an AI winter, where very little work happened on AI in industry, and very little funding from government would go to research labs that tried to work on AI.

Then in the '90s, there was a resurgence through statistical approaches, where there was a lot of combining of new statistical ideas with sub-field expertise, leading to an ability to reason about uncertainty and maybe an AI spring.

And then from 2012 onwards, deep learning started to come on the scene quite a lot.

And people got just as excited again, it seems, as they were back in the '50s and '70s.

And I guess we'll see if the cycle repeats again or not.

But one fundamental difference right now is that AI is used in many, many industries.

And we'll give you some examples in the remainder of this lecture of many application
domains where AI is already useful and can be expected to be even more useful

in the foreseeable future.

What AI Can Do Today


So let's take a look at what AI can do, and let's do it through a quiz.

So we'll do a raise of hands for each one of these questions and see what you think.

And then we'll see what we think and go from there.

So which of the following can be done at present by an AI?

Play a decent game of table tennis.

Who thinks that's possible?

Raise your hand.

OK.

Yeah, about half of you. Well, let's reveal.

Indeed.

It might depend on what you call decent.

You're not going to have a robot running around the table.

But a robot that quietly plays back and forth with you is definitely available at this point.

DAN KLEIN: We already talked about this, but how about play a decent game of
Jeopardy?

Yup. Absolutely. So you might worry here that the computers are going to take all of our game shows from us. So far, that hasn't happened.
PIETER ABBEEL: How about driving safely up a mountain road? Raise of hands.

So this actually happened in 2011. So this is a done thing. Under certain conditions and
so forth, but it's happened.

DAN KLEIN: What about, drive safely along Telegraph Avenue?

I don't know if I can drive safely on Telegraph Avenue.

But this is actually really important. We'll give it a question mark.
Autonomous driving is getting better and better.

But this goes to show that just because you can do an initial step of a technology doesn't mean that once you get to real-world conditions, complex environments, and high safety standards, you can still do it in an autonomous way.

PIETER ABBEEL: How about buying a week's worth of groceries on the web?

This is easy now.

The computer just uses Instacart and everything will show up.

DAN KLEIN: What about buy a week's worth of groceries at the Berkeley Bowl?

That's a lot harder, right?

It's packed, you've got to make sure you don't bump your cart into other people.

And then you've got to distinguish 73 varieties of apple. And it's tricky, right?

I give this one a no, I can't send a robot to go do my Berkeley Bowl shopping yet.

PIETER ABBEEL: How about discovering and proving a new mathematical theorem?

So a small number of people think this might be possible.

We're giving it a question mark. AI is becoming pretty decent at proving things.

If you give it a statement, it might figure out how to start from some axioms and get to whether that statement is true or not.

But the big question then still is, what should it try to prove? So many things to prove.

5 plus 7 is 12.

5 plus 8 is 13.

So many, so many things.

And so, how it's going to decide what is really worth proving and building abstractions around is a whole other question.

DAN KLEIN: How about converse successfully with another person for an hour? Can we
do this?

I would say it depends greatly on the person. [LAUGHTER]

But not in general. So there's a whole history of computers basically bluffing. And chatbots can bluff for a while.

And sooner or later, you figure out that you're either talking to a computer or this
conversation is deeply weird in some other way.

PIETER ABBEEL: How about performing a surgical operation? So keep your hands up if
you say yes.
Now, continue to keep your hand up if you're happy to have a robot do open-heart surgery on you?

Couple of you. We do surgical robotics research in my lab. I'd love to find a few more
participants.

The surgeries that do happen with a robot tend to happen through a human operator
still totally operating the robot to get the surgery done.

DAN KLEIN: What about, translate spoken Chinese into spoken English in real time?

All the pieces of this actually work pretty well, at least under limited context and
circumstances.

PIETER ABBEEL: How about folding laundry and putting away the dishes?

I think the mixed answer is pretty accurate in that there are some pretty cool initial results starting to do this.

But it's nowhere near the level that you can just get a robot at home that's going to do
this for you.

DAN KLEIN: What about writing an intentionally funny story?

One fun thing about this lecture is, we can more or less keep the same list and over
time, things just move from one column to the other as AI begins to be able to do
these things better and better.

That said, this is a case where we still can't write intentionally funny stories.

The keyword there is actually intentionally.

So let me give you some examples of what this means. And not just examples, but what
it says about how we need to approach AI and what's going to work and what's not.

So let's go back in time to 1984. This is a very famous system from Roger Schank called
the Tale-Spin system.

It would basically have a bunch of characters. They're all anthropomorphized animal characters and objects, and it would create stories by chaining things together.

And let's see the kinds of things that it comes up with.

So here's a Tale-Spin story.

"One day Joe Bear was hungry.

He asked his friend Irving


Bird where some honey was.

Irving told him there was a


beehive in the oak tree.

Joe walked to the oak tree.

He ate the beehive.

The end."

Natural Language

So natural language-- this is actually the area where I work-- is a bunch of kinds of technologies that deal with different aspects of how humans communicate with each other, and therefore with computers under appropriate circumstances.

So for example, speech recognition is taking the sounds that somebody says and mapping them onto, basically, text.

Not necessarily understanding that text, but transcribing it. Text-to-speech synthesis.

This is the opposite.

It's going from something you want to say to a WAV file that embodies that speech
verbally.

These are two areas which have undergone amazing progress in the past five to 10
years.

A lot of that's been neural nets. But actually, even before that, a lot of that was just
large data methods applied appropriately.

There's also dialog systems, which would fit in between. When you say something to a
system, what if it's going to actually formulate a response and say something back?

This technology is nowhere near as far along. This is something that we're still figuring
out how to do, how to build a system that you can have a meaningful sustained
conversation with.

OK.

But let me give you an example on the text to speech front of the kinds of things that go
into it, and how we are and aren't able to do a good job of it today.

So this is an automatic transcription system. There have been big advances. The state of
the art systems are actually better than this today, but I think it's a good example.

Machine Translation

Sometimes machine translation between similar languages, like English and French, can work amazingly well.

It depends on the amount of data, the difference between the languages, the amount of compute you put into it, a whole bunch of different kinds of contextual factors.

But machine translation works pretty well.

Your phone has more power, your toaster has more power than a supercomputer
from the '50s.
And a whole bunch of other stuff falls under the domain of natural language.

Web search, understanding how your query relates to what's on the page and mixing
that with things like link analysis.

Or the big thing now is click stream data, about what people do and don't end up using
and clicking on.

Text classification, spam filtering, there's a whole bunch of things that basically anchor
into natural language and analysis of natural language.

Mapping natural language from one thing to another works a whole lot better than
understanding it deeply today.
AlphaGo beating Lee Sedol. This is a huge advance.

Logic
So computers can play your video games for you, they can go to the gym for you. Turns
out, they can also probably do your math homework for you.

There's a long tradition, like I said earlier, about logical inference. That's one of the
earliest places that AI was applied.

And we still have a lot of progress in real systems in theorem proving.

Also, these logical methods are used for things like fault diagnosis, cases where you need to really be able to trace through what the computer is doing.

A big issue with machine learning overall is often the computer does well, but you can't
tell when it makes mistakes or what's gone wrong, or why it made the decision it made.

And this has lots of consequences. But people have made lots of progress in logical
systems taken very broadly.

One particular place where there's been a ton is in satisfiability solvers, which are used
for all kinds of things.
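To give a picture of the problem a satisfiability solver decides, here is a brute-force sketch; it is my own illustration with an invented formula, and real solvers are far smarter about search.

```python
# Is there a True/False assignment to the variables that makes every clause true?
from itertools import product

def brute_force_sat(variables, clauses):
    """clauses: CNF as a list of clauses; each clause is a list of (variable, is_positive)."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(any(assignment[v] == pos for v, pos in clause) for clause in clauses):
            return assignment
    return None  # unsatisfiable

# (A or not B) and (B or C)
clauses = [[("A", True), ("B", False)], [("B", True), ("C", True)]]
print(brute_force_sat(["A", "B", "C"], clauses))
```

Modern solvers replace the exhaustive loop with clever search and learning, which is what makes them usable for things like fault diagnosis, verification, and planning.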

Maybe, as Pieter said earlier, one of the biggest things that's going on is, AI is
everywhere now. It's not just chess and checkers.

Applied AI automates all kinds of things. In some ways, it's the glue or the electricity, as
people say, of a lot of modern business.

So of course search engines, but also all that route planning, maps, traffic, logistics, things like medical diagnosis, help desks being automated, spam and fraud detection, all the intelligence in your camera, in your thermostat.

All of that is increasingly being driven by AI to give you smarter devices, product recommendations, and increasingly much more.

So not only is AI doing more-- not only can we do more, but we can do more kinds of
things. We still can't build
one system that can do a little bit of everything. We build systems for each individual
task.

But there are a lot of tasks that we can improve with AI augmentation now.

Quick Review of Next Topic

Quick recap, and we'll give you a demonstration to end the day here.

We're going to be talking about designing rational agents.

That's an agent that perceives and acts. So think about this little guy here trying to get that apple down.

The abstraction, and it's a very important abstraction, is that you have an agent and you have an environment.

You control the agent, you don't control the environment.

So what comes into the agent are percepts from the sensors.

What goes out of the agent are actions through the actuators.

And the question mark in the middle, that's the agent's behavior.

It's the agent function. That's what we write in this class.

We write the decision making procedures that map from what you perceive to what you
do.

The environment responds, you might not know how.

We're going to have to build that into our algorithms.
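To make that abstraction concrete, here is a tiny sketch, my own illustration with made-up percepts and actions rather than the code you'll use in the projects.

```python
# The agent function maps what you perceive to what you do; the environment
# responds in a way the agent does not control and may not know in advance.

class ReflexAgent:
    def get_action(self, percept):
        # The "question mark in the middle": percept in, action out.
        return "forward" if percept == "clear" else "turn"

def toy_environment(percept, action):
    # A made-up environment response, just for illustration.
    return "blocked" if action == "forward" else "clear"

def run(agent, environment_step, percept, steps=4):
    for _ in range(steps):
        action = agent.get_action(percept)           # agent decides
        percept = environment_step(percept, action)  # environment responds
        print(action, "->", percept)

run(ReflexAgent(), toy_environment, "clear")
```

Everything in the course happens inside get_action: search, constraint satisfaction, probabilistic reasoning, or learning from data.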

So we talked about these characteristics.

And that's what this course is about. Lots of different techniques for different kinds of
conceptualization of the boundary between agent and environment.

We're going see a lot of examples of agents and environments and where that boundary
is set.

When you drive, it's your eyes and your hands that are basically the boundary.

But if you're an autonomous car, it's the camera and the control lines into the wheels.

Something you may not have thought of as an agent is Pac-Man.

So we're going to see a lot of Pac-Man in this course.


Hopefully you'll recognize Pac-Man.

Has anybody never played Pac-Man? Good.

OK. If you have never played Pac-Man, but you're too embarrassed to admit it, go play a
game of Pac-Man on 188.

Pac-Man is an agent.

Its sensors perceive the state of the world, which is the labeling of all the dots and ghost
positions and all of that.

Its actuators are like a little joystick-- up, down, left, right. And then the environment is
the ghosts, and who knows what they'll choose to do and how the world will evolve.
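Here is a toy sketch of such an agent function, my own simplified illustration rather than the course project's actual interfaces; it just greedily heads toward the nearest dot and ignores walls and ghosts entirely.

```python
# Percepts: Pac-Man's position and the remaining dot positions.
# Actions: the joystick moves up, down, left, right.

class GoNearestDotAgent:
    MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

    def get_action(self, pacman_pos, dot_positions):
        def nearest_dot_distance(pos):
            # Manhattan distance to the closest remaining dot.
            return min(abs(pos[0] - dx) + abs(pos[1] - dy) for dx, dy in dot_positions)
        x, y = pacman_pos
        return min(self.MOVES,
                   key=lambda m: nearest_dot_distance((x + self.MOVES[m][0],
                                                       y + self.MOVES[m][1])))

agent = GoNearestDotAgent()
print(agent.get_action((0, 0), [(3, 0), (0, -2)]))  # "down": the dot at (0, -2) is closest
```

A real Pac-Man agent has to cope with walls, ghosts, and the fact that greedy choices can trap you, which is exactly where the search and planning material comes in.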

So we're going to show you, to close out the day here-- you want to come join? You will build this soon.

OK.

So what I want you to do is, I want you to think about-- this is going to be Pac-Man. My hands are off the keyboard. This is being played by an algorithm.

So this is going to be behavior that appears to be intelligent deriving from-- who knows.

You're going to build it.

But start thinking today, what's going on here in the agent function?

What computations give rise to this kind of behavior?

Do you want to do the honors?

