Robots could soon say “no” to humans, as engineers develop functionality that would let artificial intelligence disobey its operator when an instructed action would endanger a person, others or the robot itself. The technology could significantly help car manufacturers develop self-driving cars that protect their owners from unintended, dangerous decisions on the road.

Researchers from the Human-Robot Interaction Lab at Tufts University are teaching robots to decide whether to follow or reject instructions based on the authority of the person giving them and the potential result of the instructed action. The idea is that robots could override their default programming when the person giving instructions is not authorised to do so, or to avoid actions that are dangerous to themselves or others.

The technology could help improve self-driving cars. The robot would ordinarily follow its owner's instructions, but if a human were at risk of harm on the road, the machine would be able to stop its current activity and take the necessary action.

Tufts researchers based the new technology on the “felicity conditions” that commonly operate in the human brain. These conditions come into play when people are asked to do something: the brain runs through a number of considerations before carrying out the action.

People subconsciously ask themselves questions like: do I know how to do this? Can I physically do it, and do it right now? Am I obligated to do it based on my social role? And does it violate any normative principle to do it?

Robots programmed to ask the same questions could adapt to unexpected circumstances, according to ScienceAlert.
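To illustrate the idea, here is a minimal sketch in Python of how a robot might run through those questions in order before accepting or rejecting a command. The class, method and directive names are hypothetical assumptions for this example, not the Tufts architecture.

```python
# Illustrative sketch only: the Robot checks and Directive fields below are
# assumptions for this example, not the actual Tufts system.

from dataclasses import dataclass


@dataclass
class Directive:
    speaker: str   # who gave the instruction
    action: str    # what the robot is asked to do


class Robot:
    def knows_how(self, action): return True        # "Do I know how to do this?"
    def is_capable(self, action): return True       # "Can I physically do it?"
    def can_do_now(self, action): return True       # "Can I do it right now?"
    def is_authorised(self, speaker): return True   # "Am I obligated, given our social roles?"
    def violates_norms(self, action):               # "Does it violate any normative principle?"
        return action == "keep driving towards the pedestrians"


def evaluate(robot: Robot, d: Directive):
    """Check each felicity condition in turn; reject with an explanation at the
    first failure, otherwise accept the directive."""
    if not robot.knows_how(d.action):
        return False, "I don't know how to do that."
    if not robot.is_capable(d.action):
        return False, "I'm physically unable to do that."
    if not robot.can_do_now(d.action):
        return False, "I can't do that right now."
    if not robot.is_authorised(d.speaker):
        return False, "You're not authorised to ask me to do that."
    if robot.violates_norms(d.action):
        return False, "Doing that would be unsafe."
    return True, "OK."


print(evaluate(Robot(), Directive("operator", "keep driving towards the pedestrians")))
# (False, 'Doing that would be unsafe.')
```

The ordering mirrors the questions above, and each rejection carries a human-readable reason rather than a silent refusal.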

Other researchers developing self-driving cars are also trying to solve the dilemma of a driverless car that is about to hit several people on the road: should it intentionally crash into a wall, killing the person inside, or keep driving towards a crowd of pedestrians and potentially kill many more?

"Surveys suggested people generally agreed that the car should minimise the overall death toll, even if that meant killing the occupant - and perhaps the owner - of the vehicle in question," Pete Dockrill said in an earlier report by ScienceAlert.

The Tufts researchers have published a new paper online discussing similar ideas. The paper notes that "humans reject directives for a wide range of reasons: from inability all the way to moral qualms."

"What is still missing... is a general, integrated set of architectural mechanisms in cognitive robotic architectures that are able to determine whether a directive should be accepted or rejected over the space of all possible excuse categories (and generate the appropriate rejection explanation)," it added.
