Could you start just by saying your name and your position?

My name is Martial Hebert. I’m the Director of the Robotics Institute and also a Professor here.

Very good. So, Martial, can you tell us a little bit about how your work in regards to artificial
intelligence began?

I started my work doing object recognition. In other words, being able to recognize automatically
the location and orientation of objects. This was in fact, at the time, now we're talking 1982, for
industrial robots, a task that is called bin picking: the idea of being able to grasp objects out of a
bin. Which is still, by the way, an important problem in manufacturing. And then a couple of years
after that I moved here and started working with a whole team here, of course, on autonomous
driving. At the time we weren't calling that self-driving; it was autonomous ground robots. It was in
fact the first program, I think, at least on that scale, on self-driving or autonomous robots: the
Autonomous Land Vehicle, it was a DARPA program. And the first, you know, demonstrations at the
time of what we now call self-driving.

And was your work in any way ever influenced by popular culture, science fiction, anything like
that?

Was I influenced, you mean?

Influenced you, or captured your imagination in any way early on.

Well, let's see, the robotics aspects of course were always influenced by that. And in particular
the idea of, you know, devices, machines being able to reason and make decisions, you know,
about the environment and their actions. So that reasoning component, that's always been,
you know, fascinating, right.

Yeah, and was there anything in particular, a particular film or novel...

Oh, oh.

Anything along those lines ever influence you?

I cannot remember a specific one that influenced me. It's more, you know, this body of
popular culture on robots and intelligent systems, as you can imagine.

And can you talk a little bit about your present work in regards to AI?

Well, I've worked on, and I'm still working on, many aspects of computer vision and, basically,
machine perception. Both for different kinds of robots, ground robots, flying robots, as well as
computer vision in general. Two of the things I'm particularly interested in now: one has to do
with being able to learn how to see, in computer vision, with minimal supervision. We've all
heard of the progress due to deep learning, you know, deep learning techniques that are
revolutionizing the field. The problem with many of those learning techniques, all of them
actually, is that they require very large amounts of data. Somebody has to tell the system, this is
what I want you to learn, and has to give many, many examples; in the case of computer vision,
images or videos that are annotated. And it's fine if you have, you know, all the time in the
world and all the manpower in the world, you know, maybe if you're Google or Facebook or
something. But in practice that's not something that can scale. And in particular for robotics,
one has to be able to adapt very quickly, more or less, to be able to retrain, to relearn new
concepts very, very quickly. And for that one has to be able to do it with minimal supervision.
With little data. The second class of problems that is of interest to me these days is this issue of,
there are different names for it, introspection, performance prediction, etc. It's the idea of systems,
in particular vision systems, but not just vision systems, being able to understand their own
performance, and be able to predict their own performance, and be able to act on that, okay.
This is a capability that we humans have to a very high degree, you know. We know our own
capabilities, and not only do we know our own capabilities in absolute terms, right, we know how
fast we can run or things like that, but we also know them in relative terms. In other words,
something happens in the environment, and we know how we should adjust our behavior. So, to
take a very stupid example, you know, if you're driving and you're suddenly surrounded by fog,
right, you know, you don't have to reason about it; you know your visual system is impaired and
you know intuitively in what ways it's impaired. Basically visibility, the distance that you can see.
And you're going to immediately adjust your behavior accordingly, right. It's very important to be
able to endow autonomous systems with that kind of capability. One reason is that it will never be
the case that vision systems, for example, will be a hundred percent correct. That will never
happen. Nothing is ever a hundred percent, you know, correct, or performant, right. Instead, we
need to have systems that recognize: hey, I'm now in a situation where maybe, for example, my
vision system is not performing very well, and maybe it's not performing very well in that
particular way. And so, I now need to use a different type of behavior, or maybe I need to do this
action to try to improve the situation and see better, right. Like I might turn on my anti-fog lights
or something, okay. So, that's something that I think is critical for robots to be deployed in the real
world, you know, to even be able to carry out tasks over very long periods of time and reliably,
okay. This is very different from the offline applications of AI, the Alexa-like systems, etc., which
are a different, completely different set of very tough challenges. But they don't have that one;
they can be wrong once in a while. They can, you know, be in a situation that they cannot handle
and, you know, just be wrong. You cannot do that if you have, you know, a self-driving car or
any kind of system that needs to make decisions continuously, all the time, and in a way that
can lead to critical, you know, catastrophic decisions in some cases when humans are involved.
So that's the second broad area, basically, that I'm interested in. And the two are linked, of
course. The two have to do with how to take some of those AI techniques, in this case computer
vision, and adapt them so that they can work on actual systems, actual robotic systems that
operate in the real world continuously and interact with humans, in particular in ways that can be,
you know, critical.
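
[Editor's note: to make the introspection idea above concrete, here is a minimal illustrative sketch in Python. It is not a description of any system mentioned in the interview; the confidence signal, thresholds, and behavior names are all assumptions for illustration.]

```python
# Illustrative sketch: a perception wrapper that predicts its own reliability
# and switches behavior accordingly, in the spirit of the "introspection /
# performance prediction" idea described above. All values are hypothetical.

def estimate_visibility(image_stats):
    """Crude self-assessment signal: contrast collapses in fog, so the ratio
    of measured to nominal contrast is used as a visibility score in [0, 1]."""
    return min(1.0, image_stats["contrast"] / image_stats["nominal_contrast"])

def plan_behavior(detection_confidence, visibility):
    """Choose a behavior from the system's own predicted perception quality."""
    if visibility < 0.3:
        return "stop_and_request_assistance"  # perception known to be impaired
    if detection_confidence < 0.6 or visibility < 0.6:
        return "slow_down"                    # degraded: widen safety margins
    return "normal_operation"

# Example: fog halves the measured contrast, so the system slows down
# before any detection error has actually occurred.
stats = {"contrast": 22.0, "nominal_contrast": 50.0}
print(plan_behavior(detection_confidence=0.9,
                    visibility=estimate_visibility(stats)))  # -> "slow_down"
```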

That's actually a really cool segue, this notion of the direct application and application within
specific systems. What you just described is a highly complex system, and one of our concerns
in these interviews is hearing a little bit about how technologists communicate the sophistication
of these systems to a public in a way that's still accessible. So, I'm wondering if you can think of
a particular example of an accurate or particularly useful communication on AI to broad publics
in the US or in another context?

Well, yeah, I can think of many inaccurate ones.

That's what we've been operating under as well. So, I'm wondering if there are any models of
accuracy, or perhaps we can use the inaccurate as an object lesson.

Let me tell you why it's difficult to answer the question. It is because, when it comes to the type
of complex systems that we're talking about, the robotics ones, the ones that interact with
people, the self-driving and all that, there are no such systems yet really, right, that are truly, by
that I mean truly, operational with the general public, you know, on a day-to-day basis, right. So,
in that sense, it's a little bit difficult to find an example of such a thing where the communication
was [missed @ 9:17, 1st video].

I think that makes very good sense. Because they're partial systems, in that they have very
particular functions.

You know, another thing that people ask quite often, and maybe that's one of your questions
later, is: when will a truly intelligent system be deployed with people, or when will we have true
home robots, for example. And my answer to that is that we already do, but we have to be
careful as to what is the definition of complex systems, you know. So, one example is, you
know, vacuum cleaners, right. The new generation of vacuum cleaners from various companies,
I'm not naming any company in particular: the level of intelligence that is in those machines is
amazing, okay. And I think, in a way, to me it has not been quite communicated right. But in a
different way than what we think sometimes; I think we should, we meaning the field, should
brag more about that. The level of intelligence that they have in terms of mapping, sensing,
people detection, people understanding, you know, and I'm talking about the new generation
that's either in development or recently developed. And that's an example of a system that has
a very high degree of AI in it that actually interacts with the environment. It's not just, you know,
a question-and-answer system. It actually does stuff. It moves stuff. It interacts with people. The
distinction here is that, because of the nature of the task, it's known it's not going to hurt
anybody. As a result, there is a level of performance that's acceptable, okay. And I find that
interesting. In other words, instead of going, you know, from the general research all the way to
self-driving cars deployed everywhere, with everybody around, moving at high speeds, there are
all kinds of intermediate stages, right, that, by the nature of the task, slowly evolve from the
completely safe tasks, because they just do question answering and can be wrong sometimes
and so forth, to things that are more and more challenging in terms of interaction with people
and safety issues and trust issues and all that. But you see, they slowly evolve that way. And
yet, even if you look at those early things, like, I think it was [missed @ 12:05, 1st video] being a
very, you know, intelligent thing, the fact is they already have a very high degree of intelligence,
if you look at the nature of the AI techniques that are used in them.

Yeah, this boundary space is exactly what we're interested in capturing through these
interviews, actually, because of the ways in which we have variance in terms of the
sophistication of some of the tools. But we also understand that they're not threaded through
every facet of our life presently. I'm wondering what you see as your responsibility in
communicating with the public on AI.

We need to be very careful with that. We need to state clearly what each component is capable
of doing and not capable of doing, okay. As well, there is a concern about the, how can I say,
bias that sometimes goes into those things, or the designer bias. So, it's not that we do those
kinds of explanations very well, but we have to be careful to explain what comes from choices,
design choices, right, and what comes from the data itself, right. There's a lot of discussion
about data bias and all that. And, in fact, there's a very difficult explanation, I was talking to
colleagues actually last week about this, we had a long discussion about this. There is
something that I don't know exactly how to explain, so maybe I shouldn't. But there is a
distinction. There is data bias in the sense that I, or the group designing the data, introduced our
own bias in designing the data, and therefore the system is biased and so forth. That, to my
mind, is not really an AI problem. This is something that exists in all areas of life where people
use data to draw conclusions, okay. So, it's the same problem basically, and there are tools to
deal with this. The fact that it's AI, to me, does not necessarily make it worse. Well, it might
make it worse because it might make more decisions based on that, or more decisions [missed
@ 14:18 1st video], but fundamentally the problem is the same: whoever designed the data did
not follow the proper ethical rules in sampling the data.
But there is a problem that is much harder, which is that we don't understand necessarily the
effect of the data distribution on the behavior of the system, when it comes to the newest AI
systems, you know, deep learning, black-box kinds of techniques. So, it might be that a data
distribution, from normal statistical tools, is fine. It's not biased. It passes all the tests that
conventional, so to speak, you know, ethical and statistical people have, you know, designed.
Yet, it does exactly the wrong thing, because for whatever reason this black-box architecture
that uses it is sensitive to some strange configuration of the data. That is extremely hard to
characterize and, you know, that's something that can be very problematic. And it goes to this
more general thing I was talking about, this performance characterization, this more general
point of trying to understand exactly what those learning techniques, those AI techniques, do.
Those black-box techniques, right. You know, there's some terminology floating around, like,
you know, explainable AI and things like that. So it's not just explaining how the system gets to
a decision; it's also explaining, you know, the influence of variation in the sampling of the data
on the behavior of the system, which again can be very different from what we think of in terms
of classical statistics.
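
[Editor's note: a small sketch of the last point, assuming scikit-learn and a synthetic dataset. It measures how variation in the sampling of the training data moves a black-box model's behavior on a subgroup, the kind of sensitivity that classical tests on the data distribution alone would not reveal. The model, features, and subgroup definition are illustrative assumptions.]

```python
# Retrain a black-box model on bootstrap resamples of the same training set
# and measure how much its per-subgroup accuracy varies across resamples.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))
group = X[:, 0] > 1.0                        # a small, hypothetical subgroup
y = (X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)

accs = {False: [], True: []}
for seed in range(20):
    idx = rng.integers(0, n, size=n)         # one bootstrap resample
    clf = RandomForestClassifier(n_estimators=50, random_state=seed)
    clf.fit(X[idx], y[idx])
    pred = clf.predict(X)
    for g in (False, True):
        mask = group == g
        accs[g].append((pred[mask] == y[mask]).mean())

for g in (False, True):
    a = np.array(accs[g])
    print(f"subgroup={g}: accuracy {a.mean():.3f} +/- {a.std():.3f}")
# A much larger spread on the small subgroup is exactly the data-sampling
# sensitivity that is hard to see from the data distribution itself.
```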

Yeah, we also see the potential of the scaled implications for decision making. It's gone from the
individual, you know, to now, with the advancements that we have, these scaled implications,
whether it's coming out of the black box or a single person's system.

Yeah.

They're developing. So, the rapidity of that implication...

Yeah.

Has changed.

And to [missed @ 16:36, 1st video] your question. So, there are two problems with that thought.
And sorry, I was long-winded in explaining; I have to find a way to make that more concise. But
there are two difficulties. One is a communication difficulty: how to explain that, and the
distinction between that and making sure that, for example, there's the right percentage of
people of different types in a sampling population, the classical statistical thing. So, there's
communicating that. And then there's the fundamental research problem of how do you even
get a handle on how to characterize those effects, right.

Yeah. So, with your work in terms of object recognition, even in the earliest phases in the 80s
and thereafter, I’m curious about how you see AI systems changing the way that people work up
until this moment?

There are two ways that AI systems can impact how people work, right. One way is to take
existing processes or existing tasks, existing ways of doing things, and completely transform
them, by taking, perhaps, sometimes actually surprisingly, one very small part of that process,
and just by changing that very small part of that process it actually transforms everything. Right,
and it enables people to do it in a completely different way, perhaps much more efficiently, etc.
So, we work, for example, with a number of companies on manufacturing processes in that way,
where we look at things that have been in place frankly for decades, okay, the same way of
doing things. And then you inject a little bit, just a little bit, of a new, you know, AI or data-driven
technique, and you can truly affect in a big way how things are done. Either for efficiency, but
also for safety, okay, and things like this. That's one category of tasks. The second one is
enabling completely new things that were never possible before, okay. So, for example, we
have, you know, work in surgery for, you know, being able to do surgery without incision, and
things like this. That can be coupled even with other things where you get vital data continuously
and can, you know, anticipate problems like the patient going into shock and that kind of thing.
That can be also combined with other research on sensing that allows continuous monitoring of
blood and things like this in a non-contact way. All those things together enable completely new
ways of doing medicine and surgery, right. In manufacturing we're looking at, you know, modular
robotics types of things: completely reconfigurable systems and rapidly trained systems that
eventually will enable, for example, individuals, small enterprises, you know, to be able to put
together a state-of-the-art robotics facility. Something that is absolutely impossible now, right.
So, it's not just, you know, improving things a little bit; it's making things possible. In education,
and in computer vision, a lot of the progress has to do with people understanding: being able to
understand people's facial expressions, you know, body posture and motion, intention, right,
internal state. If you have all those things together, you can now redefine what it means to have
virtual classes or an intelligent tutor, because now it's not just a matter of the student watching a
video online or even having some textual interaction and all that. You can imagine a system
actually truly understanding the reaction of the student, just like a real teacher would do, right,
and truly adapting on the fly, right. So that's another place where it's creating completely new
ways of thinking about education and things like this. So, I think there are a lot of places where
it's going to create completely new opportunities and completely new skills that, frankly, in many
cases we don't yet imagine really.

Yeah, so speaking to the future: in your position as Director of RI, you have a particularly
poignant position. We're wondering a little bit about what you might predict in terms of the
changes in the ways that people work in about twenty years from now? You've touched on it a
little bit, but if we're really thinking in the near future rather than the far.

Yeah, I think, so this is the optimistic view, right. If we parallel it a little bit with the changes that
came about with basically the laptop, you know, and the, you know, digital revolution as it's
called sometimes, right. What it did is, first, give more independence to people, okay. They can
do things independently, by themselves. And empowerment, you know. So, to again go to the
digital revolution analogy: you know, I'm old enough to remember when, you know, a lot of the
work that was being done had to do with pushing paper one way or another, okay. Now, you
have people being able to do with an Excel spreadsheet things that, you know, would have
taken a highly paid engineer with an IBM mainframe forty years ago. And if you think of it in this
way, it's incredible empowerment, right. And so, the same thing will happen with AI now. So, for
example, you know, think of being able to do some kind of document research, either because
you are an employee somewhere or because you want to create your own business, and you
need to search through some patent things or some documents, some literature and all this.
Now you cannot do it, or you can do it but it's going to be very expensive. Then you will be able
to do it, because you will have AI agents that will be able to answer any question you have
about any massive amount of data you get. So that's the same analogy, basically, with the
spreadsheet kind of thing, right. So that's the idea of, you know, independence and
empowerment, okay. Now, I know it's a little bit generic; I'm not giving you a specific scenario
here, and part of the reason, frankly, is that, again, I don't think we have completely worked
through, you know, all the possibilities, you know. Just like in the digital case, when those tools
were introduced, some of the things that are obvious now certainly were not thought of at the
time, right.

Yeah. So, you speak a little bit about empowerment in this near future. I'm wondering what
tensions you might expect in regards to human dignity when introducing some of these AI
systems to particular labor markets. We spoke primarily about what I imagine to be a
professional labor market, especially with the example of the Excel sheet and information
gathering for a potential entrepreneur, things along those lines; I'm wondering about other parts
of the market.

Well, but that's the interesting thing, you see. It's this notion of, I don't know what the right word
is, but of lowering the barrier to entry. You see, when you say it's professionals only, that's not
true in the case of the digital revolution. In other words, people who were office clerks, office
workers, now are people who can have access directly and are able to, you know, use all those
tools directly, right. So, in other words, because it lowers the barrier to entry, I think of it actually
the other way: you have a whole segment of the population who was not considered, you know,
professional, or able to do the kinds of things you mentioned, now able to do that, right. So, you
can think of it as lowering the barrier to entry, as technology being less and less elitist.

Yes.

Because now more and more people can use it, okay. Now, I understand where the question
comes from, right, because when we talk about AI, we talk about things that can make decisions
and things like this, and perhaps there's this flavor that it is only for people who have some kind
of training in those fields. But you could turn that around completely, again following the digital
example, right: it's going to basically democratize technology, potentially, and make tools
accessible, and tasks accessible, to segments of the population for whom they were not
accessible before. And by the way, in terms of human dignity, there's another thing that I'm
hopeful about now. There are some political aspects to it, so maybe it won't work, but I'm
hopeful that it will make a big difference. There is the issue of the isolation of certain
populations, right, the discrepancy between, you know, the large urban centers and the rest,
basically, that's isolated. Now, AI can make an enormous difference here by providing
technologies that help both in terms of logistics and in terms of transportation, right. In terms
even of medicine, because now you can lower the barrier to entry for some advanced medical
interventions, surgery or otherwise, you know, remote diagnostics, things like that. Education:
the example I gave about education, right. Now you could imagine having, you know,
top-of-the-line, you know, high schools and things like this available everywhere, because now
you could have those virtual teachers that are true virtual teachers, not just watching, you know,
a class on video, right. So, in that sense, again, the optimistic view is that it will provide tools
that will reduce the discrepancy between those two sides of the population, and essentially,
again, lower the barrier to entry, because now you have this other side of the population that
has access.

So, continuing with this motif of elements of dignity we’re also really interested in negotiations of
power. Certainly, between human populations but as well between humans and machine
systems. So, I’m wondering if you can describe an AI tool in which the power’s been transferred
from the human user to that system?

Yes, some aspects of automation in the context of field robotics, for example, would fit the bill,
where you have basically completely automated systems.

Yeah.

Executing some task. So, in mining and other applications. So that would be one.

So, within that example, how do you think that configuration has affected human power
relationships? Maybe the group who's working around that system or...

It has certainly changed the workflow and how the people involved interact with the system. But
in all of those systems, and this is sometimes a little bit of a misconception, they remain fully in
control of the system, right. That is the case in any system I can think of at the moment. So, the
organization of course changes, and the roles are going to change, but in terms of, you know,
control, that still remains with the human team, right.

Do you imagine that we’d relinquish that control?

No, I truly don't.

Not in any system?

No, at least.

Not even a vacuum?

No, not even a vacuum. There is an issue, another issue, the issue of trust of course.

Yes.

Alright. So, of course it can happen that a system makes a decision or has a behavior that is not
acceptable to the controlling team. But that can always be corrected, or that can always be
controlled. In other words, to me this issue of relinquishing control is not, you know, to me it's
not quite the difficult question, because we always have control: we can always stop, and we
design the system and whatever. The more difficult question is the one I had at the beginning,
which is to have systems that can properly anticipate when they're going to be in situations that
they cannot handle, when they're going to be in situations where they know their behavior is not
going to be understood, or not going to be the right type of behavior that would be expected by
the human team. So, having the system recognize that is the hard question, I think.

So, you're arguably one of the most seasoned roboticists and developers of AI systems, so I
can imagine the ways in which your position affords you that notion of control. But I'm
wondering if you can imagine the ways in which other individuals, who might be configured
around a system that is making decisions, may have to be educated...

Yes, so...

In ways to think about those dynamics.

Yeah, that is correct. And that has to do also with this issue of communication. By that I don't
mean, you know, wireless communication or something; I mean communication in the sense of
the system acting in ways that are legible and understandable by the team, yes. And on the
other side, by the team being properly trained, and having explained to them what is expected
of the system, what the system can do, what it cannot do, including what kinds of mistakes the
system may make.

Yes.

After all, as humans, when we work together, we do have a mental model of each other, and we
know, roughly speaking, our limitations and the kinds of mistakes that we can make. And the
same should occur there. I think one mistake sometimes is to think that, you know, because it's
an AI system or something, it's going to, you know, be [missed @ 5:26, 2nd video] or something.
It's going to work perfectly. But that's certainly not the case, okay. So, that's where this
communication is important. Coming back to my earlier comment about, you know, performance
characterization: it's this idea of being able to formally document and explain the limitations of
the system so that, you know, like you were saying, there is proper communication basically
between the person and the system. And by the way, I don't mean to minimize the problem;
those things are still very, very challenging things to do, okay. It's not like those are solved
problems at all. But coming back to the issue of power and things, I think those are the key
issues, more than worrying about somehow not being in control at all.

So more on that concept, can you describe an existing or even a conceptual AI system that you
think empowers an individual or a group of people?

Certainly, some of the manufacturing systems that are around, especially some of the ones that
are emerging with more collaborative manufacturing, can be used as examples like this. On the
non-robotics side, any of the emerging question-answering systems, not just the question
answering, you know, Siri and all that, but also the ones that allow you to analyze large amounts
of data to find information quickly, which you could not do before. You could do it at enormous
human cost.

Extra time and labor.

Not human cost, but labor, time, yes.

So, you've talked a little bit about ways in which communication about the fallibility of certain
systems, to allow for human intervention, can basically bolster those human power negotiations.
I'm wondering if you can think of an AI system, either one you've seen or one you can imagine,
that would actually, in one way or another, undermine human decision-making power?

What do you mean by undermine?

Basically, put a human in a position where they are deferring to the machine or deferring to the
system to make a better decision than they can.

Predictive maintenance, for example, right. So, those are systems that can look at such massive
amounts of data, and have such elaborate techniques to predict, you know, failures or the need
for maintenance from that very large amount of data. That's something that, you know, no
human or team could possibly do, for example.
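
[Editor's note: a minimal sketch of the predictive-maintenance idea, assuming scikit-learn; the sensor features, numbers, and threshold convention are illustrative, not from the interview.]

```python
# Learn the "normal" envelope of machine sensor readings, then flag units
# whose signatures drift outside it so maintenance can be scheduled early.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Hypothetical columns: vibration RMS, bearing temperature (C), motor current (A).
healthy = rng.normal(loc=[1.0, 60.0, 5.0], scale=[0.1, 2.0, 0.3], size=(5000, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

# A unit whose vibration and temperature have crept upward over time:
drifting_unit = np.array([[1.6, 71.0, 5.2]])
if model.predict(drifting_unit)[0] == -1:     # -1 means "anomalous"
    print("schedule maintenance: signature outside the normal envelope")
```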

I want to dig in a little further to the concept of autonomy, and it sounds like on some level we're
working within the constraint that these systems are not necessarily autonomous and that a
human can always intervene.

Well, that's the thing: even in the autonomous systems, okay, I know there's a debate about
self-driving, and we can debate that, but in many of those systems there is still human control.

Yeah. But within that configuration, can you talk a little bit about what you perceive as valuable
in the prospect of machine autonomy?

The ability to, you know, be able to carry out tasks in complex environments. And one thing that
potentially machines can do better than humans is to make decisions quickly in complex
situations. You know, we were talking about the World Cup not long ago, and if you think about
it, one of the capabilities that those players have is the ability to quickly understand a lot of data.

Yes. It’s a creative sport.

In terms of situational awareness and things like this. Now, potentially, autonomous systems,
you know, AI systems, can have that capability to a much, you know, higher degree than we can
have. So, that goes to the aspects of safety and performance and all that: they can achieve
potentially a higher degree of those things because of those higher capabilities.

Yeah. Potential pitfalls that you would focus on in regards to machine autonomy? Are there any
chief concerns that you have?

You know, in traditional engineering, when you build a complex system, an airplane or a nuclear
power plant, you're not afraid of going in an airplane. Why? Because you know basically that
behind this airplane there are two hundred years of engineering practice, even before airplanes
existed, right, with best practices, formal tools, formal methods that have evolved over, I don't
know, two hundred years, maybe approximately, such that engineers know how, given a
complex system, to say: if the system operates under those conditions, the performance of the
system will be within that envelope. Okay, that kind of thing. And there's an entire body of
engineering science, basically, on how to do that. The problem with AI systems or robotic
systems is: how do you do that now with systems that use machine learning components, such
that the performance of the system depends on the data, not just on the correctness of the code
and the machine? And worse than that, how do you do that with systems that evolve over time,
so now it doesn't just depend on the data you used initially, but it evolves? How do you do that
with systems that have complex interaction with people? Complex meaning that the system
itself guesses what the intent of the person is, right, which is what is going on typically in recent
AI and robotic systems, right. How do you do that with systems that have very large amounts of
sensor input from which they [missed @ 12:31, 2nd video] that information? Large amounts of
sensor inputs, the failures of which can no longer be characterized explicitly, you know, in the
purely, you know, traditional engineering way. So how do you do all those things? We don't
know, right. I mean, we don't know; there's a lot of work on this. Of course, there are some tools
for this. But we're far from having the level of practice that exists for, let's call them, standard
engineering systems, right.
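
[Editor's note: a sketch of what an empirical "performance envelope" for a learned component could look like, in the sense the speaker describes: bin held-out test cases by an operating condition and bound the error rate per bin. The condition variable, error model, and numbers are illustrative assumptions.]

```python
# Empirical performance envelope: error rate of a (simulated) detector as a
# function of an operating condition such as fog density, with a simple
# normal-approximation 95% confidence interval per bin.
import numpy as np

rng = np.random.default_rng(2)
fog = rng.uniform(0, 1, size=5000)                 # hypothetical condition
errors = rng.random(5000) < (0.02 + 0.3 * fog)     # errors grow with fog

edges = np.linspace(0, 1, 6)
for lo, hi in zip(edges[:-1], edges[1:]):
    in_bin = (fog >= lo) & (fog < hi)
    n, k = in_bin.sum(), errors[in_bin].sum()
    p = k / n
    ci = 1.96 * np.sqrt(p * (1 - p) / n)
    print(f"fog in [{lo:.1f},{hi:.1f}): error {p:.3f} +/- {ci:.3f} (n={n})")
# The printed table is a statement of the form "under those conditions, the
# performance of the system is within that envelope."
```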

Yeah.

So that's the biggest challenge, okay. The biggest challenge to me is not improving this
particular vision system so it works, you know, at 95% instead of 90%. I mean, that's a big
challenge, but that's something we kind of know how to do research on and know how to make
progress on. The kind of thing that I described needs new tools, new ways of thinking maybe,
new practices.

Yeah, and pushing on that a little bit, how is responsibility ascribed if autonomous systems like
this cause harm?

That question comes up, of course, all the time, and I wonder, and this may be the wrong thing
to say in this kind of AI forum, but I wonder if it is so different from a classical, let's call it,
classical complex engineering system.

In what ways?

Well, in the following way: just like a complex engineering system, a plane for example, can kill a
lot of people, it is the responsibility of the designer to have the kind of formal performance
characterization that will make sure, that will guarantee, at a ten to the minus nine, or one minus
ten to the minus nine, probability, that it will not crash when you get in it, okay. That's what they
do, right. So the responsibility is the same in that sense.

So, in that regard the more traditional engineering systems offer us a paradigm that we might be
able to construct in regards to responsibilities.

Yeah. And so, to me the question is not really whose responsibility it is. It is the responsibility of
whoever designed the system, in a way, right. Just like for an engineered system, right. The real
question is, the same thing I said earlier, how to develop the tools such that that responsibility
can be effected, right. So that when I put a robot that's going to operate next to you or
something, I can tell you, you the general public: those are the procedures and the processes
that we followed, and this is the expected performance, and those are the limitations of the
expected performance. I mean, when we get on an airplane there are limitations, right. I mean,
the constructor, the manufacturer of the airplane can tell you, you know, if there are birds hitting
the airplane in this particular way, the engine will seize, right.

And there are maintenance expectations and protocols in regards to maintenance.

There are maintenance procedures and things like that, right.

And limitations in terms of stressors and use over time.

Yeah, right. And so, that's well understood in traditional engineering. We need to develop all of
that in the AI and robotics field, right. And that's going to take a while. It's going to be faster than
in traditional engineering, because things happen faster and we have all that experience from
before, but that's what really needs to happen. That to me is the big question, you know: how to
do that, how to push that, and also how to make sure that some systems are not deployed
before they are properly tested like this. But that is no different, that's what I meant: at a high
level, it's no different from any other engineered system being deployed too quickly, before it's
properly characterized, right, in terms of the responsibility question, right.

Yeah, so would something like clinical trials in drug approval be another paradigm that would be
useful...

That’s another one, yeah.

Useful comparison.

And in fact, there have been a lot of comparisons between self-driving testing and some of the
protocols, particularly in terms of the risk involved, and the protocols used for drug testing, for
example.

So, that's something that you would advocate for, you think, as a paradigm.

Yeah, we should learn from those things. Yeah. But basically, what I'm saying is we should look
at those questions, and maybe that's too strong a statement, but it's more important to me to
look at those questions from this kind of technical viewpoint, rather than speculating as to, you
know, whose responsibility it is, and especially starting to talk about the responsibility of the AI
system itself and things like that, okay.

Yeah, so it allows us to work within the paradigms that have worked for quite some time. Rather
than going into the...

Yeah.

Hysteria of science fiction.

Yeah, exactly. And keeping a very down-to-earth, maybe boring, approach: that, you know, there
are systems that can make decisions at an unprecedented level, but they are still, you know, in
a way they are still engineered systems, you know. Just of a different nature.

So, can you share some of your thoughts on the development of general artificial intelligence?

Do I have to?

No, you don’t. It’s the last question.

Yeah, let me, let me skip that.

That’s fine. Is there anything else that we’ve missed that you think we should discuss?

No, no, this was interesting. I hope I gave you useful answers.

Certainly, thank you.
