So, we’d like to start with asking you to introduce yourself.

My name is Andrew Moore. I grew up in England, came to the United States in 1990 and I do
machine learning and artificial intelligence.

Great. So, Andrew could you tell me a little bit about how your work in AI began?

I was really inspired by Eliza when I first read about it, and I had one of these tiny little 8-bit
machines that people had in the 1980s and wrote my own Eliza. And that was really inspiring to
me. So, Eliza was one of these first programs where you could type random things into a
computer and the computer would pick up on words that you typed in and give apparently
sensible replies. And, so it felt like you were talking to an actual artificial intelligence system.
That was inspiring, and it was clear even to me as a young stupid kid that it was kind of a parlor
trick. It wasn’t anything like real intelligence, but it was still amazing to imagine that we might be
able to get computers to do some of the intelligent things that humans can.

Yeah, so that’s a really interesting social foundation—the benefits based on elements of trickery or a simple exchange process. I’m wondering, were you ever influenced by elements of science fiction or any element of popular culture?

Yeah, so I think I really liked science fiction growing up like many, many kids did. This is going
to sound incredibly childish, but at the age of about 14 or 15 a friend and I set up something
where we claimed to have invented a really good new AI system and I was hiding in his attic,
remotely controlling answers from a supposed AI system and we had a whole bunch of friends
who were like absolutely shocked, “Wow the CIA are going to come in and take this over”. So,
that was fun while it lasted. And it was really inspiring to imagine: what if there is some other kind of intelligence we can converse with?

I’m really struck by the social element actually in all of this. Whether you’re hoodwinking or not.
So, could you go a little bit into your past work beyond the parlor games in regards to AI and lead us up to some of your present work in that area?

Back in my PhD days I was particularly interested in whether robots could learn and this was
before a great deal of research that subsequently happened in the world of robotics. And, so we
were just playing with robot arms and having them do tasks and seeing if you could make them
improve themselves during that process. Now for me personally, I changed a lot during that
process of doing the PhD, first, I really did start the project with the idea of trying to simulate
how humans or animals maybe learned, and I was really fascinated by that. But something
happened to me during the PhD, and I got more fascinated by just the algorithmic question of
how can I make this happen as effectively as possible, I don’t care whether it’s human-like or
not? And I think that’s a split which you often see happening in the field of artificial intelligence.
Those are two completely valid, really important and interesting questions. How can we
understand the phenomenon of intelligence through simulating it or trying to implement it versus
how can we make something which is making really good practical decisions but is not
necessarily using any of the kind of processes that real creatures use?

So, can you talk a little bit more about that, that notion of optimization that goes really beyond so
many concepts associated with mimesis? You’re moving in a different direction away from the
replication processes that so many people seem to be beholden to when they’re thinking about
intelligence?

When I started teaching artificial intelligence here I asked a friend from Australia, a guy called
Wray Buntine how he taught AI. And his answer was don’t try to understand how the brain
behaves, try to understand how the brain should behave, which is an incredibly arrogant way of putting it: like, we humans, we can do a better job of trying to make intelligence work. The
interesting thing of course is in lots of pretty superficial ways we do see absolute superhuman
performance of computers. Most computers are much better at arithmetic than most humans
are. And there’s many of these other things where we thought that there was some sort of
brilliant thought going on where in fact as we understand it computationally we realize nope, you
actually didn’t need a notion of natural intelligence to accomplish it.

Yeah, so on some level it seems almost like a false comparison to make the comparison to the
human brain in terms of the superhuman notion. Because on some level the processes ideally are a departure, at least from our conventional understandings of learning and thinking within the human constraint. Is that fair to say?

Yes, I don’t think it would ever make sense to complain that the people trying to understand
biological intelligence are doing something stupid, nor that the engineers who are just trying to build systems to make autonomous or intelligent decisions are misguided; they’re just two separate
efforts which hopefully can build from each other. But I don’t think we should be confused into
thinking that that should all be lumped into one scientific discipline.

Yeah, so I’m really struck by a couple of these threads: the social element that started your
work, the ways in which it really seemed to move into elements of machine learning, you know,
small M, small L, but I’m really interested now in talking a little bit about the social contract of
communicating on these systems. It seems that oftentimes our lexicon falls into either metaphor
or readily available language, oftentimes pulled from science fiction to describe the systems that
are being developed or optimized presently in regards to AI. I’m wondering if you can offer an
example of a particularly accurate or particularly useful communication on AI to broad
mainstream publics. Not necessarily practitioners in the discipline, but those who would not
necessarily have the language available to them to understand the particularities of these
systems.

So your question is really asking if I know of places to go to get an understanding of how AI systems, the way they really work, is so different from the way that we might imagine that they work based on science fiction.

Yeah, or I guess more simply, is there an example recently or in the past that you can think of
that does a particularly good job in communicating the veracity of these systems to a public?

I’m going to give you one example which continues to this day to really inspire me and it comes
I believe from Herb Simon [...]. This is the example which I use when folks at the moment ask me, well, it looks like computers have caught up with humans now. They’re just as smart.
And of course, I don’t believe that at all. I actually do think that we, in engineering artificial
intelligence, are just paddling around in the very shallowest waters when it comes to actually
simulating anything which we might remotely describe as the intelligence of an animal or a
human. So, here’s the big example. And you guys might already know it in which case you can
tell me to fast forward. It’s called the chessboard problem. [...]

I have but I think since so many people will be looking at this, I think your interpretation of it
would be great.

[...] I’m holding in my hands an imaginary chess board. As you know, that’s 8 columns and 8
rows, 64 squares in all. And they’re diagonally patterned with black, white, black, white, in this
nice diagonal pattern. So, if I have a bunch of 32 dominoes where each domino covers two
squares, I can give you a puzzle which is can you cover the whole chess board with these 32
dominoes. And if I take a typical chess board of course it’s pretty straightforward. You can do it
neatly, or you can do it in all kinds of weird and interesting patterns, but most folks, I’m sure even many 3- or 4-year-old kids, would be able to do that. Now, the special problem is when you say,
I’m going to cut off two of the corners. Top left-hand corner and bottom right-hand corner. And
ask the question, can you now cover the chessboard with 31 dominoes? Because I took away
two squares, so now it’s a more interesting problem. Many people will spend a while trying to
figure that out. Now, humans, after a while, as they start to get frustrated with this, usually realize something about taking off those two diagonal corners. It’s hard to see this in
a word picture, but in reality, you’ve taken off two corners which are the same color. So, let’s
say we just chopped off two white pieces. And a human might think, you know what, every
domino that I have placed on this chessboard, has to cover one white and one black square on
the chessboard. And so, I’ve now got more black squares than white squares. It cannot be
done. So, the human will just stop at that point. Here’s what I find inspiring, and I’m not sure if all
my colleagues in the world of artificial intelligence would agree, but for me, I believe that we as
a field have completely forgotten to even look at a question like that. We do not have
technology, nor to my knowledge, is anyone working on the technology to implement that kind of
thinking outside the system type of reasoning. We are so good at within the box reasoning. If
instead of a chessboard and dominoes, you’re looking at optimizing the movements of Uber
drivers throughout a city to stop wasting fuel or something, computers are far far better than
humans at coming up with near optimal solutions, but when it comes to anything where you’re
jumping outside your current reasoning and doing something maybe like making an analogy
with something else, we just don’t know how to do that. So that, to me, I think, partly
answers the question you sort of initially brought up of what are the examples of things which
really sort of show the difference between the sort of biologically plausible ways of thinking and
the ways that we engineers are actually doing it?
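
[For readers who want to see the parity argument concretely: a minimal sketch in Python, assuming a standard 8x8 board colored by (row + col) parity; this is an illustration added here, not code from the interview. It counts the white and black squares before and after removing two opposite corners, which is why 31 dominoes, each covering one white and one black square, can never tile the mutilated board.]

def color_counts(removed):
    """Count (white, black) squares on an 8x8 board after removing the given squares."""
    white = black = 0
    for row in range(8):
        for col in range(8):
            if (row, col) in removed:
                continue  # this square has been cut off the board
            if (row + col) % 2 == 0:
                white += 1
            else:
                black += 1
    return white, black

# Full board: 32 white and 32 black squares, so a tiling with 32 dominoes is possible.
print(color_counts(set()))             # (32, 32)

# Remove the top-left and bottom-right corners: both have the same color, leaving an
# imbalance (30 vs 32), so 31 dominoes, each covering one of each color, cannot tile it.
print(color_counts({(0, 0), (7, 7)}))  # (30, 32)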

And, along that line, my observation as a cultural critic paying a lot of attention to language is
that that precision or the willingness to take the time to actually articulate the complexity of the
problem instead of rushing to describe possible solutions, is frequently missing presently in media descriptions, perhaps, or even sometimes in the ways in which engineers are struggling to articulate the problems they’re attending to.

I think.

Do you think that’s the case?

I think it’s true, because I think when we’re talking about machines or other things, we all anthropomorphize them. Even clearly non-smart machines like an umbrella blowing away in the
wind or something. We just cannot stop ourselves from anthropomorphizing, and so it’s very
easy if you’re having to talk about something which is actually sort of acting like it’s got intention
to just make that assumption while you’re talking about it, and I think in this particular case, as
you’re indicating it confuses everyone because we aren’t actually talking about something which
has got intention, we are just talking about a bunch of rules firing automatically.

So, you are in a unique position as a practitioner as well as an educator and an administrator. Can you talk a little bit about the responsibility that you feel in communicating with the public on AI?

It’s a good question because I do think we’re at a very critical time for discussions about artificial
intelligence. The responsibilities I feel are first, it is probably completely rational that many folks
in the public are concerned about what AI means. And first thing I would say is I feel a real
responsibility not to minimize those concerns. Like all technologies, AI can easily cause
problems or actually cause disasters. And someone in my position, we see so many optimistic
things all the time, just amazing uses of technologies to help save lives or keep people out of danger or solving injustices and this sort of thing. Of course, we’re naturally optimistic, but I
cannot just bring that in front of the camera when I’m discussing it. Because every other major
technological revolution, although we’re glad that they happened, caused absolute sort of
brutally horrific things to happen to certain segments of the population as they were going on.
And this one could be the same but based on our knowledge of history and hopefully a better
understanding of how social cohesion works, we could do a better job right now if we’re realistic
about the impacts. There’s a discussion topic which I think is really confusing everyone which I
don’t like, and this is the fear of super intelligent machines. And my answer to that is not to
dismiss that fear, it’s just that is such a minor concern compared with other things which are
much more likely to happen. And in particular the malicious use of artificial intelligence. Not an
AI itself mysteriously turning into a psychopath that wants to kill humans, but just some state
actor or terrorist actor deciding to use very advanced technology to dramatically, if I can put it
this way, improve the efficiency of their killing. That’s a super serious danger and that is
something I would like to see us getting ready for and meaning to address. And I feel like the
somewhat more attractive and appealing science fiction discussions about the nature of
consciousness and self-directed intelligence are so much more attractive to talk about that we’re
kind of not discussing these very mundane but actually quite dangerous possibilities.

And so, as Dean of one of the influential educational institutions that are graduating the next generations of technologists (a new AI major has come out at Carnegie Mellon, launching this fall for undergraduate students), how do you communicate the weight of this generation’s technologists’ responsibilities in regards to communicating?

I might not spend enough time discussing the importance of clear communication. What I do
really focus on is the responsibility that comes with being one of the builders of these very
advanced pieces of technology. And so, it is to some extent a double-edged sword and I’ll
explain what I mean here. One important way to inspire folks to come into this discipline and to
be sort of open-armed and inviting, not to make it seem like something that’s only for a
small subset of the population, is to talk about the fact that within the world and society we have
at the moment, there are only a few things you can do to really influence the future and how
things are going to be. One of those is technology, and so, the importance, if you like the
fragility of the current state of the planet and human society is a really good reason to get
involved in activities which can keep the world safer. And so, folks in my position are able to
really articulate that in a way that means that we’re speaking to 13-year-olds and 16-year-olds
about that sort of inspiration. And I think a lesson that’s being learnt in education for maybe over
the last decade or so is that this is how you begin the communication about getting folks into
science and technology, in particular, computer science. As opposed to beginning with saying,
like do you like computer games, would you know how to build them? Which is a whole different
way, a much less inspiring way of bringing folks in. Now I called it a double-edged sword to use
this motivation to bring folks in because we can’t just inspire by saying do this because it’s an
awesome responsibility. We do still then have to follow up. And when I talk to graduating
classes, that’s the main focus for that class, is you’ve now developed a huge amount of skill
which you can use to be some of the probably most leveraged agents of change in the history of
the human race, but you have a big responsibility that goes with that. So, my discussions with
students at sort of that level, they’re usually about the level of responsibility, and frankly until this
conversation, I hadn’t specifically thought about the issues of helping make sure that they can
have useful ability to present and discuss the issues with the rest of the world.

Yeah, cause I think we’re at an interesting moment, not too different than some of the
revolutionary artists, where artists at certain moments could say let my work speak for
itself, it seems to me that the generation of technologists that will be graduating in the coming
years will not be in a position to just allow the tools or the systems to speak for themselves, but
they’ll be considered authors and makers of the systems as well that may have intended but
also unintended consequences. So that ability to communicate on that could be, could be an
important role.

That’s true, now I want to push a little bit on that, the analogy with artists who produce
something and then can say I want to let it speak for itself, that is a creative act where the
artist usually goes away and based on their own experiences constructs something to then
present. Now one of the big lessons of product management and design through research over
the last couple of decades has been, you cannot develop useful software, and certainly
not artificial intelligence systems by going away into a dark room, producing something, and
then presenting it. So, I think equally important to talking about something after you’ve produced it is the communication that goes into the design of a system. So, this is something which comes up, for example, as I’m working with the Department of Defense at the moment on how to
help introduce artificial intelligence responsibly there. The number one discussion topic is how
to make sure that you’re working with the people who are going to be using the technology or
relying on the technology and discussing that with them. You know, every day almost, certainly
every week in these sorts of agile development methodologies, where you are all discussing how things are going to be when the product is complete. So, I guess I would
passionately say the communication responsibility is stronger for the engineers because they
have to be communicating during the conception and initial implementation of something rather
than just explaining it afterwards.

So, all of them are a reason...

Yes.

...that this skill...

Yes.

And I guess it comes back to that, some of those social elements that were present even in
some of your first interests in these areas.

Yes.

So, there’s a lot of popular discussion in regards to AI systems and the way that people have
worked up until now. How do you think AI systems have influenced labor, let’s say over the last
10 to 20 years? And what are some of the anticipated changes that you might enumerate in
regards to the coming 20 years?

In some places, the introduction of automation. I’m not, I don’t really want to spend a lot of effort
trying to determine whether the automation is AI or something else.

Sure.

Has displaced jobs, but not caused the, shall we say, the loss overall of jobs. And of course the
introduction of ATMs, cash machines, is a great example, where the role of the bank teller really
did change into more discussions with the customers, but at no point during the introduction of these things, which completely replaced a large fraction of the bank teller’s time, did you actually see a reduction in employment in the banking industry; it just moved into
more services, more presumably higher-valued services, or at least, more marketing services to
work with customers. And that is the nicest case. This happens of course also in the introduction
of office technology, where people whose job was sort of to take dictation and write letters, that
particular role disappeared, but the number of administrative folks working in organizations,
doing what we might call more demanding cognitive tasks to do with how to present information and communicate it, is growing. So, these are the happy cases where having
automation has helped people perhaps get to do some of the more interesting things in their
jobs, rather than what might be described as grunge work. At the moment, another example of
this is a local robotics company called Bossa Nova Robotics, which has a robot which wanders around the aisles of supermarkets looking for things which are out of stock. Every time it’s been rolled out, the reception by the workers in these supermarkets has been very positive, because it was an aspect of their jobs which was causing great frustration, that everyone got stressed out and angry when things were out of stock on the shelves, and so this has just resulted in folks doing a better job at their work. The places where
it’s more frightening to me in terms of what it does to people are in things like self-driving or
other aspects where there’s a clear job role which is just going to be switched off. And I don’t
think there will be time for folks to adjust to the ability to do higher-valued activities instead of
the self-driving. So those are the ones where I expect to see a much more painful and actually
much angrier response to the introduction of advanced technology.

So in regards to the prospect of an example like, driverless vehicles having an impact on that
labor market, what are some of the tensions that you expect in regards to human dignity in a
market like that, when these systems are introduced?

It’s a very interesting question about what this kind of introduction of artificial intelligence does to
human dignity. And the immediate question that was in my mind is, is this different from when
owners of stables started to lose their jobs as horses were replaced with cars?

That’s a good comparison.

Yes, and of course there’s songs and poems about when humans were replaced by machinery
during the 19th century and the effect on dignity, so you’ve got me wondering. Is this one going
to be different? From a sort of class-system, social-based view, I think the interesting
thing is that what has traditionally been regarded as white-collar work for the educated classes
is now as much under attack as what might have been regarded as blue-collar or low-skilled
work. And so, the sort of closet socialist in me thinks, well, at least that’s fair. I’m not sure if I
want that to actually go out in that way, so I’m going to be careful about how I say it. Of course you can use it if you want to, and there’s no good way of saying this. Given that much of the history of
technology seems to have involved the unskilled lowest paid workers getting the rawest end of
the deal as automation comes in, I think this one is different in the sense that you will see all
levels of either status or pay getting attacked equally. There will be jobs which currently might be regarded as blue-collar jobs which will remain safely there for the next 50 or 100 years. Like a community police officer: there’s absolutely no way that you’re going to have just autonomous robots patrolling an area; there really are human interactions
needed there. Whereas someone who is actually really good at reading legal contracts looking
for potential dangers which other folks won’t have spotted, that really apparently impressive skill
might just get suddenly automated out of existence. And then that would be a big hit on
someone’s feeling of self-worth, if the skill that they might have trained many years for has
suddenly been automated away.

Yeah, we could probably talk all day about it. It would be interesting to see the ways in which
different AI learning tools could be leveraged for that pivot in terms of skill development for
retraining, and how adept certain populations would be versus others in working in those ways, how adept governments are at actually facilitating the development of such programs to anticipate these shifts, or if we will actually repeat those atrocities of sorts from the past.

Yes, one thing which was really impressive about the United States at the start of the previous
century, as mass manufacturing came into existence, that was when the government said
“Alright, we’re going to have to make sure everyone gets educated up to age 16. Because the
whole space of work is going to be much more highly skilled.” And they really saw this
happening and I don’t quite know how they did it, but it was absolutely critical because at the
same time the automation was coming in, they upskilled the nation. Previous revolutions, the
agricultural revolution, had not done that. And the thing which I find sad is I’m not seeing us
doing something equivalent this time around either. We’re obviously paying lip service to the
importance of education, but I’m not seeing anything near the investment or relative investment
that we did in the previous major revolution.

Yeah and some might argue that that was actually facilitated through technological advances
like the printing press and its effect on the development of literacy, right. And by virtue of that
then being able to scale on public schooling, beyond elementary primary level education.

Yes.

So, we’ve been really interested in power negotiations. So, on some level, that’s where some of
the questions in regards to human dignity have emerged, but I’m wondering if you can talk, from your own research or your observations of your colleagues’ work, about a particular AI tool in which you see the power being transferred from the human user to that actual system?

So, one common place where power is transferred is something which I actually don’t think a lot
of people really like which is in sales and negotiations. So, in the advertising industry there was
quite a strong profession and career to be had to be someone who is on the phone all day
talking to clients who want to advertise products and services, and venues that can put them
there. And really building up a social network of interactions and trust where you help take the
right piece of marketing to the right audience through understanding what’s going on. And it was
an incredibly social and complex thing based on like years of building up trust between groups,
really trying to sort of convince people that the marketing campaign you put in place was having
an effect based on something or another. And that is something which has been really changed
by the sort of advent of online advertising and the fact that such a large fraction of people’s
exposure to advertising is coming online at this point, because folks have been able to bring in
automated systems to automatically figure out where it’s going to be most effective to show
what to whom. So, there you’ve seen I think a generation of folks who are really good and
actually very creative about putting together campaigns of various kinds, really being replaced
by linear algebra and logistic regression and things like that. So that has been a big one. [...]
There are now dashboards, quite complicated dashboards for big advertising engines like if you
happened to be someone who’s really good at reading the graphs of the performance of a
Google campaign or Facebook analytics. And you can start tweaking some of the knobs on your campaign; often there are knobs to do with telling the AI what high-level goals to shoot for.
That now is a good living and a pretty well-paid job. [...] One of the people on the School of
Computer Science advisory board is Bruce Cleveland, a venture capitalist who set up a training school in Oregon for folks who have been displaced from some of the manufacturing industries there to manage these kinds of knobs on these advertising campaigns, and is able to actually give them interesting jobs now in managing the AIs that are doing the marketing, jobs which were themselves taken away from the earlier generation of human social marketers. So, I guess that’s overall a story where something which seems to be a profession based on relationships and trust and persuasion has now turned into a much more numerically optimized industry. There are still people employed within it; they’re just doing a very different kind of job.

Yeah, that’s a very interesting circuitous journey. On the employment aspect, I’m wondering if you can talk a little bit more on the influence or the implications on the consumer side of that too. Because you used the word trust in terms of the sales and the marketing individual, the practitioner building that relationship with potential customers who would purchase. Can you talk a little bit about the ways in which those dynamics have shifted by virtue of this interface now with the AI system, even if it is still mechanized or manned, so to speak, by a new generation of people?

For many years, when it came to marketing, you had clients who were, say, a manufacturing company or a retail chain who wanted folks to be aware of what’s going on. And the way that the marketers tried to demonstrate that they were doing a good job was meetings with cups of
coffee or glasses of wine where they’d talk about all the good that their campaign had been
doing, often with anecdotes and board meeting type discussions. And suddenly that’s been
swept away and this thing which was based on folks on either side of those relationships
knowing each other and trusting each other, they don’t have to trust each other anymore
because they can just see the metrics of how many click throughs they get or how many sales
as a result of how many click throughs they got. So, in fact the automation in this industry has
taken away some of the need for this kind of persuasion of the quality of your service. Now, in
some ways, you can view this as sort of sad. Something which was exactly what we humans
are good at, communicating and trading ideas, has been taken away. On the other hand, it’s
probably a good thing that it’s now a much more transparent industry and if you are doing a bad
job but you’re able to talk your way out of your underperformance it’s actually harder to do that.
So, a typical tradeoff there, I guess: as the world gets more metriced and measured and
transparent, you are displacing folks whose job was previously to sort of smooth out the fact that
there wasn’t the clear data around to deal with.

So perhaps more transparent but certainly also more transactional.

Yes, exactly.

So, what about examples of how these systems might undermine human decision-making
power? So, you’d spoken a little bit earlier about the ways in which this generation of engineers
will continually have to build communication lines with designers, with the potential users of
these systems, but I’m wondering if there’s an example that you can think of that really moves
towards undermining human decision-making power?

I think I’d like to go for two different places where this is a concern for me. One of them, so the
first place where I think we might conceivably make all our lives crappier would be if we are so
monitored that we feel we have to take actions that we don’t really want to in order to be
approved of, if you like. So, to give you an example from the old days: maybe someone who
bought a subscription to the Economist magazine would’ve got ranked in some marketing
database as a likely high earner, and so they would be given more special offers to get
opportunities to sort of partake in nice vacations or something. So that example, which is more from the old days, is something where a person who perhaps didn’t even care about the Economist might conceivably have said, I’d better subscribe to this thing just so that I look
good. [...] You could argue that some higher education certificates or degree programs might
have some of the same flavor, where even if you’re not quite sure that you’re going to be
learning a useful skill you better get it anyway because getting that label will help you. So,
imagine what would happen if that really did get down to the microlevel where if I tend to be
someone who bounces on my chair a lot, there’s an actual indication that maybe in 10 years
time I’m more likely to be in a lawsuit regarding interior decoration or something. I then have it
on my app, it warns me about this, “Hey Andrew, you’re bouncing around on your chair a lot. It
doesn’t really matter except that we think there’s these other predictive models out there which
are going to give you a negative score, so stop behaving that way.” Wouldn’t that be awful if we
actually find that because we’re being tracked we actually need to change our behavior so that
we look good to the algorithms that are tracking us?

I was just listening to a report yesterday on NPR in regards to the introduction of credit, private credit as well as business credit, in China, and algorithms that are being used in terms of the state’s pulling of data, with that then being used as potential criteria for evaluating whether or not individuals or businesses might be able to take out credit.

If I happen to like dropping in at a junk food burger bar once or twice a week, but then I hear that some algorithms have associated that with unlikeliness of paying back debt,
bingo, I will have to stop having my awesome cheap burgers and start to just eat alfalfa sprouts
or whatever the high credit people eat.

You said you had a second?

There are some areas of technology where it is getting really complicated to go back to first
principles to explain something. And obviously particle physics is one which really annoys me in
that I would love to understand it but no matter how much I watch Nova specials on it or
something I’m never really going to get it. I’m just sitting there trying to sort of get a sense of
what’s going on. What’s been going on with computer science is pretty similar, right, there are
so many stacked layers of technologies on top of each other that almost none of us can really explain how everything works when Siri answers a question and tells you what the weather’s going to be like tomorrow, because there are so many different layers. So, as a result of
this, we have definitely moved to a world where some of us can handle it when people talk
about technology and know kind of enough that we don’t freak out that we can’t follow every
step. But I would worry that, for many people who aren’t trained in computer science, they will work this out the same way that I work out particle physics. It’s just some alien random thing
going on which means we can’t really expect to invest time in understanding it, which means
that when we’re asked to help make policy decisions like voting for a politician who has a
particular stand on privacy or net neutrality, we don’t feel qualified to make that sort of decision.
So that’s the other concern, and I would not really blame anyone necessarily but there’s such
specialization and expertise in these things which affect all of society. It’s much harder to have a
typical sort of democratic discussion where most of the folks do kind of roughly understand the
forces in play.

So maybe it’s time to start really thinking about technological literacy or fluency in terms of
ensuring that we have responsible citizens, right, who can still participate in weighing in on the development of some of the regulation that I imagine will need to be developed in the coming 10 to 20 years as these systems continue to become more sophisticated and intertwined in our lives,
so that those of us who don’t understand even specific layers of the systems can also make
informed decisions, not just as consumers but also potentially as individuals who are affected by
legislation and a political system that increasingly is also infiltrated by these systems.

Yes.

Perhaps that’s one of the next levels of responsibility in regards to communicating and
educating.

Yes, so at the moment I think it is very important that people like students at Carnegie Mellon
and faculty or our alumni really do help out in briefing, like, aides on Capitol Hill or politicians who are trying to figure out what stand they should take on something. Help them understand that
stuff. What I would like to believe, maybe it’ll happen somewhere that’s annoyingly well-adjusted
like Finland or something, is that policy makers and law makers will actually have so much
training that, when they’re in the middle of debating whether or not a certain kind of technology should be allowed on traffic lights to detect pedestrians, they are able to say something like, well, no, there’s no way you can do that detection in real time, because the latency is going to be too large from a round trip to a data center. It would be really good to hear
a legislator with an opinion like that in the middle of a discussion.

Yeah, teach them on the Hill.

That’s right.

So, I just have a few last questions in regards to the prospects of machine autonomy and these
might fall pretty squarely within your work. As a practitioner but also as an educator, what do
you perceive as valuable in the prospect of machine autonomy?

I really like the idea of machines helping out when human nervous systems are not fast enough
to make decisions. So, the notion of sudden last-minute interventions to prevent disaster seems
like a clear win. That by itself is enough for me to say we should be working on this. I really do
want to see, in transportation or buildings or disaster response, systems working out, as a beam is falling, what they are going to do to save a person. And so that’s a place where
there is no chance of having a human decision-maker there in the moment, so you have to build
in autonomy. There are a few others such as exploration in network-denied environments like
searching in a collapsed mine for miners and things involving space exploration and so forth. So
that’s the first place where I take a view which might not be popular that artificial intelligence
research should be looking at building fully autonomous systems as well as the more traditional,
but I think actually more reassuring aspect of artificial intelligence which is to build advisors for
humans to make better decisions. The other place for autonomy is when you have much too much information for humans. So imagine after a tsunami you have 25,000 drones launched over a region, each sort of sweeping a 20-foot by 20-foot area looking for people who are
struggling in the water. Those things among themselves are going to have to quickly coordinate
what information to get back to a rescuer. You’re not going to be able to have 25,000 people
manually piloting each of those 25,000 drones, and so again it helps to have things where we
humans can concentrate on the important stuff and let the machines take care of the routine
stuff.

So scaled and aggregated sharing?

Yes.

So, if we’re thinking of potential pitfalls for machine autonomy, how do you think responsibility
should be ascribed if a system does cause harm?

So, I mentioned earlier I am genuinely worried about intentional malicious harmful robots being
used. My understanding of international law is that it’s not legal to make systems for instance
which will seek out an individual to kill them. But if a rogue state or even one of the big powers
starts to make those things I want to make sure that we have technology to counter them. So,
this is a very different kind of answer to the ones where we’ve also got to be making sure that
we’re educating our scientists and our engineers to like be well enough informed not to produce
those kinds of devastating pieces of technology. But I still want to make sure that we’re ready
for that kind of behavior. We saw, in the last month, the examples of drones being used in an
attempted assassination. There will be more of that.

So, our last question, and I think you commented on this: what are your thoughts on the
development of general artificial intelligence?

For me, the idea of trying to do general artificial intelligence is kind of interesting but it is like
talking about interplanetary or interstellar travel, it’s a whole other area that folks can explore.
[...] For me, the idea of fully general artificial intelligence which somehow begins to actually
equal what we humans are capable of is an interesting long-term goal, but it is not what 99.99%
of all the AI engineers and scientists in the world are doing. And the ones who are doing it, I
think they’ve got a lot of work ahead of them. And so, it would probably be a mistake to assume
we’re going to see fruition from that work in the next few decades.
