These notes on the Democratizing AI Session of SOCML 2016 were provided by Jack

Crawford. If you attended the session and would like to add some notes, request edit
permission from Ian.

The democratization of AI session ran for just under three hours, and most of the
participants remained throughout.

Early in the discussion, the moderator asked each person for their perspective on
the desired outcomes of AI democratization. The views presented fell into a few
categories:

1) It means that AI tools will become easier to use, allowing many new developers
to join in the creation of AIs that benefit others. This would allow for scale and
acceleration of the achievements we seek.

2) It means that the beneficiaries of AI would expand to all levels of the economic
spectrum, rather than being limited to a few holders of capital. This would counteract possible
control of AI by entities with large resources. The balance of humanity would have
the freedom to attain AI benefits in accordance with their individual desires and
needs.

3) It means that AIs, and the data used to create them, would be transparent and
subjected to inspection to verify their intent and potential for operating outside of
desirable actions and outcomes. In particular, this would include present risks of
bias due to race, gender, sexual orientation, economic status, etc. Examples
include human resource practices and profiling for other undesirable purposes.

After a discussion among the participants of what AI democratization means to
them, the moderator guided the group to spend more time in each area and to
identify potential approaches and solutions to the challenges in each category.

Some proposed solutions were technically based, along with research that would
consider the limitations present today and how to remediate them in the future.
Other proposals were for various forms of change in society, governance, and
education.

Here are a few of them.

- Democratize data gathering via crowdsourcing or other approaches

- Closely investigate profiling for law enforcement, human resource purposes,
and/or commercial advantage. Transparency is sought to expose bias and allow for
societal change to reduce the use of AIs for the unfair profiling of humans.

- Consider how data is labeled. We want to see what an AI calls the data, so that
we can correct the AI and improve the meaning that may arise within it.

- Consider "user-ship" vs. "consumption" in AI. A focus on the individual could
improve the value from AI. An example would be how video games are now
gaining training data from users to improve their individual experience, albeit with
some possible unintended consequences.

- A team at one of the leading companies (not mentioned here to maintain some
privacy of the views shared) is working on a "new" journal to make research more
accessible. Today, the skill needed to understand and apply academic papers
requires years to develop well. This new approach may extend the access of AI to a
much broader part of humanity.

Finally, the discussion spent much time on the disruption that AI may have on our
society. The speed of change is a big concern. An example was that a large number
of commercial drivers may suddenly be put out of work and create a crisis for our
society. Some predict that such employment effects may arise precipitously in the
very near future. Some participants suggested the institution of a fair base income
so that the unemployed or financially impacted would be able to pursue other
personal interests and avoid depression and other undesirable personal impacts.

These notes are from the moderator's personal recollection and perspective, and
may not accurately represent other participants' views on how the session took
place. Comments and corrections are welcomed.
