
As artificial intelligence continues to burrow its way into a variety of industries, the impact it will have on humans has become a growing topic of discussion. Although we are only beginning to use A.I. in recreation, transportation, and other facets of day-to-day life, the implications need to be considered seriously long before any out-of-control humanoid robots get to decide our fate.

While science-fiction films are at least partially responsible for fueling the paranoia surrounding the threat that A.I. poses to mankind, the risk of such technology displacing us from our jobs, and perhaps even jeopardizing our safety, is very real. For example, the rise of self-driving cars is forcing researchers to make tough decisions about how to build standards of morality and safety into these autonomous vehicles in a way that serves the greatest good. The problem is that doing so requires programming the cars to decide, in real time and in hazardous situations, whether to prioritize the safety of the passengers over that of pedestrians or other drivers on the road.
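To make the dilemma concrete, here is a deliberately simplified, purely hypothetical sketch of what such a prioritization rule might look like in code. The function names, weights, and scenario model are all invented for illustration and do not reflect any real autonomous-driving system; real systems reason over far richer probabilistic models.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible maneuver and the people it puts at risk (hypothetical model)."""
    maneuver: str
    passengers_at_risk: int
    pedestrians_at_risk: int

def choose_maneuver(outcomes, passenger_weight=1.0, pedestrian_weight=1.0):
    """Pick the maneuver with the lowest weighted harm.

    The weights encode the moral trade-off discussed above: raising
    passenger_weight above pedestrian_weight biases the car toward
    protecting its own occupants.
    """
    def weighted_harm(o):
        return (passenger_weight * o.passengers_at_risk
                + pedestrian_weight * o.pedestrians_at_risk)
    return min(outcomes, key=weighted_harm)

# A hazardous situation with only bad options:
options = [
    Outcome("swerve into barrier", passengers_at_risk=1, pedestrians_at_risk=0),
    Outcome("stay on course",      passengers_at_risk=0, pedestrians_at_risk=2),
]

# Utilitarian weighting ("greatest good"): everyone counts equally.
print(choose_maneuver(options).maneuver)                        # swerve into barrier
# Passenger-protective weighting: occupants count three times as much.
print(choose_maneuver(options, passenger_weight=3.0).maneuver)  # stay on course
```

The point of the sketch is that the entire moral question collapses into a single parameter: the same code sacrifices the passenger or the pedestrians depending on one number someone has to choose.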

Any car manufacturer looking to stay competitive over the next decade has already begun research and development on self-driving cars and is surely grappling with this moral dilemma. While it would seem intuitive to program these cars to make decisions that benefit the greatest number of people, research has shown that, unsurprisingly, passengers would rather prioritize their own safety, and that of any loved ones riding with them, over anyone else's.

Unless a government-mandated standard of morality is imposed, it's possible that manufacturers will let owners customize these preferences themselves. The option of buying a car that will save your life at the expense of others may be highly desirable, but when an accident inevitably occurs, it is not clear how liability would be split between the car owner and the manufacturer.
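One way such customization could coexist with regulation, again sketched purely hypothetically, is to treat the owner's preference as a bounded parameter: the owner dials how strongly the car favors its occupants, while a regulator fixes the legal ceiling. The constant and limits below are invented for illustration.

```python
# Hypothetical: a regulator caps how far an owner may bias the car
# toward protecting its own occupants (1.0 = treat everyone equally).
REGULATORY_MAX_PASSENGER_WEIGHT = 1.5  # invented value for illustration

def effective_passenger_weight(owner_setting: float) -> float:
    """Clamp the owner's chosen bias into the legally permitted range."""
    return max(1.0, min(owner_setting, REGULATORY_MAX_PASSENGER_WEIGHT))

print(effective_passenger_weight(3.0))  # 1.5 -- the mandated ceiling applies
print(effective_passenger_weight(0.5))  # 1.0 -- nor may occupants be undervalued
```

A design like this would leave the liability question exactly where the paragraph above puts it: the owner chose the setting, but the manufacturer wrote the clamp.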

Self-driving cars are not the only A.I. that poses a threat to humans. Equipping armed U.S. military drones with this technology would enable them to decide on their own whether or not to take a life. Autonomous weapons are legal, but their use still requires a degree of human judgment, as should be the case with any technology that has the capacity to kill.
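As a purely illustrative sketch of what "a degree of human judgment" can mean in software, the hypothetical gate below refuses to act without an explicit, recorded human authorization. Every name in it is invented; it describes no real weapons system.

```python
from datetime import datetime, timezone
from typing import Optional

def request_engagement(target_id: str, human_operator: Optional[str]) -> bool:
    """Hypothetical human-in-the-loop gate: no lethal action proceeds
    without an identified human operator explicitly signing off."""
    if human_operator is None:
        print(f"{target_id}: engagement refused, no human authorization")
        return False
    # Record who authorized the action and when, for accountability.
    timestamp = datetime.now(timezone.utc).isoformat()
    print(f"{target_id}: authorized by {human_operator} at {timestamp}")
    return True

request_engagement("track-042", human_operator=None)         # refused
request_engagement("track-042", human_operator="operator-1") # logged and allowed
```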

A team of researchers from five of the world's most prominent tech companies has agreed to try to create a standard of ethics around A.I., with the goal of ensuring it is always beneficial to humans and never unintentionally harmful. Although tech companies may resist having new rules and regulations imposed on their products, safety absolutely needs to be the top concern going forward. Regulating A.I. technology, however, is easier said than done.

A group from Stanford University, committed to evaluating the impact of A.I. on society every five years, reports that regulation may be impossible given the sheer number of applications and the inherent risks involved. Instead, the report recommends building awareness and expertise about A.I. among government officials at all levels, so they can make more informed decisions about the technology's trajectory and usefulness as the field moves quickly.

Although A.I. is an extremely exciting technology that will no doubt prove immeasurably valuable in the future, the importance of proceeding with caution where human safety is concerned cannot be overstated. Implementing a code of values into today's technology is no small feat, but with a delicate human touch and careful consideration over time, the possibilities are endless.
