But AI research and development is already carried out by so many actors, in academia and in the business sector, and in so many countries around the world, that putting a lid on it seems highly impractical. Moreover, one must weigh the great benefits of AI.
Consider the benefits: by alerting drivers when they are getting too close to other cars, AI is already saving tens of thousands of lives, with many more to come. AI assists doctors through robotic surgery, and it helps pilots reach their destinations on many thousands of flights every day.
Indeed, we should ask ourselves why certain AI programs were not available when we badly needed them. When the reactors in Fukushima, Japan started to melt down in the aftermath of the March 2011 earthquake and tsunami, the staff had to evacuate before they could shut down the reactors. Had an AI robot been in place at the time, it could have taken over and prevented the calamity that followed.
Let AI supervise itself
If we really want to keep AI from straying into nefarious territory, we need more of it to supervise the technology we already have. After all, AI may be autonomous, but it has no intentions or motivations of its own unless humans program those intentions in. So long as we ensure that programming for smart machines is subject to accountability and oversight, there is no reason to fear they will choose evil goals on their own.
We are calling on the AI community to develop a new class of AI oversight programs that can hold operational AI programs accountable. We call this effort AI Guardians.
Operational AI systems need a great degree of latitude in order to act on what they learn from additional data mining and experience, and to render at least semi-autonomous decisions. However, all operational systems need some boundaries, both so that they do not violate the law and so that they heed ethical guidelines. Oversight here can be relatively flat and flexible, but it cannot be avoided.
Such oversight can help determine who or what was at fault when AI is involved in a situation that causes harm to humans, say, when a driverless car crashes into another. Was the crash attributable to the programmer's mistakes or ill intent, or to decisions made by the car's autonomous AI operational system?
Enforcement mechanisms are also needed to ensure that operational AI systems adhere to legal and ethical guidelines, for example by checking that search engines do not discriminate against minorities in how they display job, credit, and housing information.
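To make the idea of an enforcement mechanism concrete, here is a minimal sketch in Python of an audit that checks whether a system shows job listings at comparable rates across demographic groups. Everything in it, the function name, the log format, and the four-fifths-style threshold, is a hypothetical illustration rather than the method of any existing system.

```python
from collections import defaultdict

def audit_display_rates(impressions, threshold=0.8):
    """Illustrative audit: flag groups that are shown job listings at a
    rate far below the best-served group (a simple four-fifths-style check).

    impressions: list of (group, was_shown) pairs, e.g. drawn from system logs.
    """
    shown = defaultdict(int)
    total = defaultdict(int)
    for group, was_shown in impressions:
        total[group] += 1
        shown[group] += int(was_shown)

    rates = {g: shown[g] / total[g] for g in total}
    best = max(rates.values())
    # Flag any group whose display rate falls below threshold * best rate.
    return {g: rate for g, rate in rates.items() if rate < threshold * best}

# Hypothetical log data: group label and whether a job listing was displayed.
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
print(audit_display_rates(log))  # group "B" is flagged at a rate of ~0.33
```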
Another tool is the ethics bot, a program that informs operational AI systems of the values their owners and operators want honored. Ethics bots can instruct cars whether to drive at whatever speed the law allows or in ways that conserve fuel, or to stay in the slower lanes when children are in the car. They can also signal when it is time to alert humans to a problem, such as waking a sleeping passenger if the car passes a traffic accident.
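As a rough sketch of how such a bot might sit between an owner's values and a driving system, consider the following Python illustration. The class names, rules, and numbers are assumptions made for the example, not a description of any deployed system.

```python
from dataclasses import dataclass

@dataclass
class DrivingContext:
    """A hypothetical snapshot of the situation the car is in."""
    speed_limit_mph: int
    children_aboard: bool
    passing_accident: bool

class EthicsBot:
    """Illustrative sketch: translates owner-chosen values into
    instructions for an operational driving system."""

    def __init__(self, conserve_fuel: bool, cautious_with_children: bool):
        self.conserve_fuel = conserve_fuel
        self.cautious_with_children = cautious_with_children

    def target_speed(self, ctx: DrivingContext) -> int:
        # Default: drive at whatever speed the law allows.
        speed = ctx.speed_limit_mph
        if self.conserve_fuel:
            speed = min(speed, 55)  # fuel-saving cap; value chosen for illustration
        return speed

    def prefer_slow_lane(self, ctx: DrivingContext) -> bool:
        # Stay in the slower lanes when children are in the car.
        return self.cautious_with_children and ctx.children_aboard

    def should_wake_passenger(self, ctx: DrivingContext) -> bool:
        # Signal when it is time to alert a human to a problem.
        return ctx.passing_accident

# Example: an owner who values fuel economy and extra caution with children.
bot = EthicsBot(conserve_fuel=True, cautious_with_children=True)
ctx = DrivingContext(speed_limit_mph=65, children_aboard=True, passing_accident=False)
print(bot.target_speed(ctx), bot.prefer_slow_lane(ctx))  # 55 True
```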
In short, there is no need to impose heavy-handed, restrictive oversight on the AI world. However, there is plenty of room for guidance. The time has come for the industry to receive guidance that will ensure AI operational systems adhere to our legal and moral values, and that robots don't come after us while we sleep.