A global study finds several new
categories of human jobs emerging, requiring skills and training that
will take many companies by surprise.
The threat that automation will eliminate a broad swath of jobs
across the world economy is now well established. As artificial
intelligence (AI) systems become ever more sophisticated, another wave
of job displacement will almost certainly occur. It is a distressing picture.
But here’s what we’ve been overlooking: Many new jobs will also be created — jobs that look nothing like those that exist today.
In Accenture PLC’s global study of more than 1,000 large companies
already using or testing AI and machine-learning systems, we identified
the emergence of entire categories of new, uniquely human jobs. These
roles are not replacing old ones. They are novel, requiring skills and
training that have no precedents. (Accenture’s study, “How Companies Are
Reimagining Business Processes With IT,” will be published this
summer.)
More specifically, our research reveals three new categories of
AI-driven business and technology jobs. We label them trainers,
explainers, and sustainers. Humans in these roles will complement the
tasks performed by cognitive technology, ensuring that the work of
machines is both effective and responsible — that it is fair,
transparent, and auditable.
Trainers
This first category of new jobs will need human workers to teach AI
systems how they should perform — and it is emerging rapidly. At one end
of the spectrum, trainers help natural-language processors and language
translators make fewer errors. At the other end, they teach AI
algorithms how to mimic human behaviors.
Customer service chatbots, for example, need to be trained to detect
the complexities and subtleties of human communication. Yahoo Inc. is
trying to teach its language processing system that people do not always
literally mean what they say. Thus far, Yahoo engineers have developed
an algorithm that can detect sarcasm on social media and websites with
an accuracy of at least 80%.
Consider, then, the job of “empathy trainer” — individuals who will
teach AI systems to show compassion. The New York-based startup Kemoko
Inc., d/b/a Koko, which sprang from the MIT Media Lab, has developed a
machine-learning system that can help digital assistants such as Apple’s
Siri and Amazon’s Alexa address people’s questions with sympathy and
depth. Humans are now training the Koko algorithm to respond more
empathetically to people who, for example, are frustrated that their
luggage has been lost, that a product they’ve bought is defective, or
that their cable service keeps going on the blink even after repeated
attempts to fix it. The goal is for the system to be able to talk people
through a problem or difficult situation using the appropriate amount
of understanding, compassion, and maybe even humor. Whenever Koko
responds inappropriately, a human trainer helps correct that action —
and over time, the machine-learning algorithm gets better at determining
the best response.
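To make that correction loop concrete, here is a minimal sketch in Python of the kind of trainer-in-the-loop cycle described above. It is not Koko's actual system; the scoring model, the seed replies, and the trainer_correction helper are hypothetical stand-ins.

```python
# A minimal sketch of the trainer feedback loop described above -- not Koko's
# actual system. A simple classifier scores candidate replies for empathy;
# whenever a human trainer flags a reply and supplies a better one, both
# examples are added to the labeled data and the model is refit.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Seed data: (reply, 1 = empathetic, 0 = flat/canned). Entirely hypothetical.
replies = [
    "I'm sorry to hear that.",
    "Sometimes talking to a friend can help.",
    "Losing your luggage is so frustrating -- let's sort this out together.",
    "That sounds stressful; I'm here to help you fix it.",
]
labels = [0, 0, 1, 1]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(replies, labels)

def trainer_correction(bad_reply: str, better_reply: str) -> None:
    """Record a human trainer's correction and refit the scoring model."""
    replies.extend([bad_reply, better_reply])
    labels.extend([0, 1])
    model.fit(replies, labels)

# The trainer flags a canned response and offers a more empathetic one.
trainer_correction(
    "Your issue has been noted.",
    "Exams are really stressful, but a little anxiety can sharpen your focus.",
)

# Over repeated corrections, the model ranks warmer replies higher when
# choosing which candidate response to surface.
candidates = [
    "Your issue has been noted.",
    "That sounds really tough; let's work through it together.",
]
scores = model.predict_proba(candidates)[:, 1]
print(max(zip(scores, candidates)))
```

Each correction is just another labeled example here; a production system would be far more sophisticated, but the division of labor is the same: the machine proposes, the trainer judges, and the judgments become training data.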
Without an empathy trainer, Alexa might respond to a user’s anxieties
with canned, repetitive responses such as “I’m sorry to hear that” or
“Sometimes talking to a friend can help.” With the right training, Alexa
becomes much more helpful. The following is a verbatim transcription of
how Alexa with Koko responds
to a person who says he’s worried that he’ll fail an upcoming exam:
“Exams are really stressful, but a little anxiety can help us succeed.
It sharpens our minds. … It’s your body’s way to prepare itself for
action. It’s actually a really good thing. I wonder if you could think
of your nerves as your secret weapon. Easier said than done, I know. But
I think you will do much better than you think.”

Explainers
The second category of new jobs — explainers — will bridge the gap between technologists and business leaders. Explainers will help provide clarity, which is becoming all the more important as AI systems’ opaqueness increases. Many executives are uneasy with the “black box” nature of sophisticated machine-learning algorithms, especially when the systems they power recommend actions that go against the grain of conventional wisdom. Indeed, governments have already been considering regulations in this area. For example, the European Union’s new General Data Protection Regulation, which is slated to take effect in 2018, will effectively create a “right to explanation,” allowing consumers to question and fight any decision made purely on an algorithmic basis that affects them.
Companies that deploy advanced AI systems will need a cadre of
employees who can explain the inner workings of complex algorithms to
nontechnical professionals. For example, algorithm forensics analysts
would be responsible for holding any algorithm accountable for its
results. When a system makes a mistake or when its decisions lead to
unintended negative consequences, the forensics analyst would be
expected to conduct an “autopsy” on the event to understand the causes
of that behavior, allowing it to be corrected. Certain types of
algorithms, like decision trees, are relatively straightforward to
explain. Others, like machine-learning bots, are more complicated.
Nevertheless, the forensics analyst needs to have the proper training
and skills to perform detailed autopsies and explain their results.
Here, techniques like Local Interpretable Model-Agnostic Explanations
(LIME), which explains the underlying rationale and trustworthiness of a
machine prediction, can be extremely useful. LIME doesn’t care about
the actual AI algorithms used. In fact, it doesn’t need to know anything
about the inner workings. To perform an autopsy of any result, it makes
slight changes to the input variables and observes how they alter that
decision. With that information, the algorithm forensics analyst can
pinpoint the data that led to a particular result.
So, for instance, if an expert recruiting system has identified the
best candidate for a research and development job, the analyst using
LIME could identify the variables that led to that conclusion (such as
education and deep expertise in a particular, narrow field) as well as
the evidence against it (such as inexperience in working on
collaborative teams). Using such techniques, the forensics analyst can
explain why someone was hired or passed over for promotion. In other
situations, the analyst can help demystify why an AI-driven
manufacturing process was halted or why a marketing campaign targeted
only a subset of consumers.
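To ground this, here is a minimal sketch of how an analyst might run such an autopsy on the recruiting example using the open-source lime package. The recruiting model, feature names, and data below are hypothetical placeholders.

```python
# A minimal sketch of an algorithm forensics "autopsy" of one hiring
# recommendation, using LIME. The black-box model and the candidate data
# are hypothetical; only the LIME workflow is the point.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)

# Hypothetical historical candidates: years of education, depth of expertise
# in the target field (0-10), and years spent on collaborative teams.
feature_names = ["years_education", "field_expertise", "team_experience"]
X_train = rng.uniform([12, 0, 0], [22, 10, 15], size=(500, 3))
y_train = (
    0.3 * X_train[:, 0] + 0.8 * X_train[:, 1] + 0.2 * X_train[:, 2]
    + rng.normal(0, 1, 500) > 12
).astype(int)  # 1 = recommended for the R&D role

black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# LIME never inspects the model's internals; it only calls predict_proba on
# slightly perturbed copies of the candidate and fits a simple local model.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["passed over", "recommended"],
    mode="classification",
)

candidate = np.array([21.0, 9.5, 1.0])  # deep niche expertise, little teamwork
explanation = explainer.explain_instance(
    candidate, black_box.predict_proba, num_features=3
)

# Each tuple pairs a feature condition with its weight for or against the
# recommendation -- the evidence the analyst reports back to the business.
for condition, weight in explanation.as_list():
    print(f"{condition:>35s}  weight={weight:+.3f}")
```

The output is exactly the kind of evidence described above: which variables pushed the recommendation one way, and which cut against it.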
Sustainers
The final category of new jobs our research identified — sustainers —
will help ensure that AI systems are operating as designed and that
unintended consequences are addressed with the appropriate urgency. In
our survey, we found that less than one-third of companies have a high
degree of confidence in the fairness and auditability of their AI
systems, and less than half have similar confidence in the safety of
those systems. Those statistics point to fundamental issues that need to be
resolved for the continued use of AI technologies, and
that’s where sustainers will play a crucial role.
One of the most important functions will be the ethics compliance
manager. Individuals in this role will act as a kind of watchdog and
ombudsman for upholding norms of human values and morals — intervening
if, for example, an AI system for credit approval was discriminating
against people in certain professions or specific geographic areas.
Other biases might be subtler — for example, a search algorithm that
responds with images of only white women when someone queries “loving
grandmother.” The ethics compliance manager could work with an algorithm
forensics analyst to uncover the underlying reasons for such results
and then implement the appropriate fixes.
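As a concrete illustration, here is a minimal sketch of one check an ethics compliance manager might run on a credit-approval system: a simple comparison of approval rates across applicant groups. The data, group labels, and four-fifths threshold policy are hypothetical assumptions, not a prescribed standard.

```python
# A minimal sketch of a disparate-impact check on a credit-approval model:
# compare approval rates across groups and flag any group falling below the
# commonly used four-fifths (80%) rule of thumb. Data and groups are made up.
from collections import defaultdict

# (applicant_group, model_approved) pairs, grouped e.g. by profession or region.
decisions = [
    ("region_A", True), ("region_A", True), ("region_A", False), ("region_A", True),
    ("region_B", False), ("region_B", False), ("region_B", True), ("region_B", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, approved in decisions:
    counts[group][0] += int(approved)
    counts[group][1] += 1

rates = {group: approved / total for group, (approved, total) in counts.items()}
best = max(rates.values())

for group, rate in rates.items():
    flag = "REVIEW" if rate < 0.8 * best else "ok"
    print(f"{group}: approval rate {rate:.0%} ({flag})")
```

A flagged group would then be handed to the algorithm forensics analyst to uncover why the model treats it differently, as described above.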
In the future, AI may become more self-governing. Mark O. Riedl and
Brent Harrison, researchers at the School of Interactive Computing at
Georgia Institute of Technology, have developed an AI prototype named
Quixote, which can learn about ethics
by reading simple stories. According to Riedl and Harrison, the system
is able to reverse engineer human values through stories about how
humans interact with one another. Quixote has learned, for instance, why
stealing is not a good idea and that striving for efficiency is fine
except when it conflicts with other important considerations. But even
given such innovations, human ethics compliance managers will play a
critical role in monitoring and helping to ensure the proper operation
of advanced systems.
The types of jobs we describe here are unprecedented and will be
required at scale across industries. (For additional examples, see
“Representative Roles Created by AI.”) This shift will put a huge amount
of pressure on organizations’ training and development operations. It
may also lead us to question many assumptions we have made about
traditional educational requirements for professional roles.
Many of these new roles will not require a college degree; they could be filled by workers moving over from manufacturing and other professions. But where and how these workers will be trained remain open questions. In our view, the answers need to begin with an organization’s own learning and development operations.
On the other hand, a number of new jobs — ethics compliance manager, for example — are likely to require advanced degrees and highly specialized skill sets. So, just as organizations must address the need to train one part of the workforce for emerging no-collar roles, they must reimagine their human resources processes to better attract, train, and retain highly educated professionals whose talents will be in very high demand. As with so many technology transformations, the challenges are often more human than technical.
About the Authors
H. James Wilson is managing director of IT and business research at Accenture Research. Paul R. Daugherty is Accenture’s chief technology and innovation officer. Nicola Morini-Bianzino is global lead of artificial intelligence at Accenture.
This article was originally published on March 27, 2017. It has been updated to reflect edits made for its inclusion in our Summer 2017 print edition.