
ENGLISH 3

ESSAY: ARTIFICIAL INTELLIGENCE, A RISK OR A BENEFIT?

Alejandro Castaño Rojas. CC: 1020484140

Universidad de Antioquia

Medellín
2017
Introduction

Artificial intelligence is the field of study in which humanity has concentrated its hope of understanding, through simulation, the capabilities and limitations of the human mind. From sets of ideas, projects with good or bad purposes have emerged; carried into a machine, these ideas can open a whole new world of solutions marked by strategic decisions that at the same time lack any feeling, pushing the ideal of objectivity in decision making to its limit. Is this the decision mechanism that a human being wants for their survival and/or progress?
In a world dominated by the most intelligent species, it is essential to think about progress in order to keep this supremacy intact in any situation. Throughout history we have seen how a great number of decisions have written the course of humanity: decisions based on needs, requirements, and passions, in short, decisions based on feelings. Perhaps these are not the most appropriate decisions for the species; perhaps they are simply the decisions that an individual prefers over the needs of the collective. However, with the arrival of cutting-edge technology, computers, and automatic machines, we have been able to imprint our experience on machines so that they can simulate many human capacities and make strategic and accurate decisions, ignoring the method but achieving the proposed goal. Whether or not this is a problem depends on the scenario. For this purpose, a basic case will be presented.

Suppose a politician must decide whether to keep his people happy at the cost of letting their city stagnate in time, or, on the other hand, to lead them to progress while ceding some autonomy to a large multinational. Decisions like this can be entirely shaped by the ethics and morals of the individual in question, who assesses the situation not only from his position as a person but also from the role he has assumed, in this case that of a politician. With supervised learning, an artificial intelligence could acquire enough experience to make this type of decision in less time, but this is where the issue becomes difficult to solve: if someone supervises its learning, no matter how objective that person is, the decisions of the new artificial intelligence will be impregnated with the subjectivity of the supervisor; if, on the other hand, its learning is not supervised, the artificial intelligence one wants to put into operation will have an initial error rate too high for its decisions to be taken into account and applied in reality. We must then dismiss both cases: the one where a human stands in between, and the one where no human ever guides it.
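The bias-inheritance argument can be sketched in code. The following is a minimal, hypothetical illustration, not part of the case itself: a supervised model is "trained" on the judgments of a biased labeler, and its later decisions simply reproduce that labeler's subjectivity. The labeler's preference rule and the option fields (`happiness`, `progress`) are invented for the illustration.

```python
# Toy sketch: a supervised learner can only reproduce the judgments
# of whoever labeled its training data. The "supervisor" here is a
# hypothetical labeler with a built-in bias toward short-term happiness.

def biased_supervisor(option):
    # The supervisor's subjective rule: happiness always outweighs progress.
    return "approve" if option["happiness"] >= option["progress"] else "reject"

def train(examples):
    # "Learning" reduced to memorizing the supervisor's labels: the model
    # has no notion of good or bad, only of what it was shown.
    return {tuple(sorted(ex.items())): biased_supervisor(ex) for ex in examples}

def predict(model, option):
    return model.get(tuple(sorted(option.items())), "unknown")

training = [
    {"happiness": 8, "progress": 2},
    {"happiness": 3, "progress": 9},
]
model = train(training)

# The trained system inherits the supervisor's subjectivity wholesale:
print(predict(model, {"happiness": 8, "progress": 2}))  # approve
print(predict(model, {"happiness": 3, "progress": 9}))  # reject
```

However objective the data look, the decision boundary is the supervisor's, which is the point the essay makes about supervised learning.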

We must then address the dilemma from another position. Statistical data offer us a new solution: assuming a high degree of objectivity and neutrality in the data entered, so that the artificial intelligence has a basis from which to make decisions in the present with a view to the future, we can expect its next decision to bring us a benefit. But that happens only in the best of cases, when the data allow the machine to infer that we need to take a positive direction; in the worst cases, it can make decisions that are harmful to the very group it is trying to guide. However, the notion of what is good and what is bad is a subjective construction, individual or collective; a machine does not understand good or bad, it simply adjusts itself to reach the objective it is given. And this is how we arrive at a Machiavellian goal. Is the phrase "The end justifies the means" true?
Conclusion

For now, it is impossible to determine whether artificial intelligence represents a risk or a benefit. That will remain so until a rigorous and globally accepted definition (a standard) of what is good and what is bad, in terms of human well-being, is reached, and one that in turn does not conflict with the particular interests of each nation.
