The technological advances made by artificial intelligence, especially over the past 15 years, have rekindled the longstanding question of whether such intelligent machines ultimately put humanity at risk. While the entertainment industry has played out such scenarios in films such as “The Terminator,” the real debate continues in government, academia and the technology industry over how to regulate AI so society can benefit without undue risk.
What is Artificial Intelligence?
Computer scientist John McCarthy coined the term “Artificial Intelligence” at a conference at Dartmouth College in 1956. McCarthy defined artificial intelligence, or AI, as the science and engineering of making intelligent machines that could reason, learn, perceive, solve problems and understand language. Research in AI grew slowly over the following decades. It wasn’t until improvements in computer hardware and software at the turn of the century, along with the explosion of the Internet and its big-data capability, that AI technology began to make more noticeable gains.
Today, AI technology has made big strides in some industries. Companies such as Google are testing automated vehicles, while car brand Tesla is building electric cars in a state-of-the-art automated plant. The automotive industry, in general, has reached a point where new car technology warns you of collisions, alerts you to cars in your blind spots, automatically brakes to avoid front-end crashes and parallel parks your car. Connected to these vehicles are our smartphones and smart devices, which now boast digital virtual assistants such as Siri that can perform various tasks through voice-activated prompts.
What are the Dangers?
With exciting new technology coming online using AI, are there risks for humanity? According to educators, one of the major fears is that the speed of development of AI technology might pass a point where humans can no longer control these machines. If an intelligent machine becomes self-aware and knows it is smarter than its human creator, it may decide not to allow people to change its programming.
Along the same lines, there’s concern that intelligent machines might determine they are not bound by the same ethical, legal and social rules that govern humanity and serve as the foundation for civilization. If AI machines lack such common sense, they may execute their programmed goals differently than expected. This raises concerns about unintended actions and negative consequences for us.
Such concerns may be premature, but a few examples already exist. The military has developed lethal autonomous weapons, including automatic defense systems, armed flying drones and other defense-related equipment. The idea that such AI-driven machines can select and engage targets with limited or no human intervention raises concerns.
Because of this, some international groups have demanded a ban on the further development of autonomous weapons. This proposed ban is being discussed at the United Nations Convention on Certain Conventional Weapons.
Who Says Humanity is at Risk?
Scientists and educators are not the only ones sounding the alarm about the growth of intelligent machines. Members of Congress and leaders of prominent private-sector companies say something must be done now. Recently, Elon Musk, Chief Executive Officer of Tesla and SpaceX, reiterated his call for proactive regulation of artificial intelligence.
Musk believes AI technology poses a fundamental risk “to the existence of civilization” and that its potential dangers are more science fact than science fiction. He fears humanity will one day become second-class citizens in a future dominated by artificial intelligence. Said Musk: “I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.”
The AI Caucus
Echoing Musk’s concerns is Congressman John K. Delaney of Maryland. In July 2017, Delaney wrote an opinion piece in The Hill on artificial intelligence as it relates to the future economy and regulation. He contends that Congress needs to look very thoughtfully at the challenges associated with AI to ensure the economy remains strong and society remains safe as these changes unfold.
To this end, he founded the Artificial Intelligence Caucus, a bipartisan initiative to help Congress become better informed about the social, technological and economic impacts of AI. The group will bring together experts from the private sector, government and academia to learn about and debate new AI technology, the opportunities it may bring and the implications that come with it.
Delaney says the use of powerful cognitive computing could help find new cancer treatments, improve crop yields and make oil rigs and other structures safer. AI helps robots perform tasks deemed too dangerous for humans, he said, adding that AI can augment fraud protection programs to combat identity theft.
Lawmakers need to start such conversations now, take a hard look at AI issues, including the potential for job losses, and start preparing the country for this next wave of innovation. “The emergence of AI is also another reminder of making sure that our social safety net programs will be able to meet the needs of the future,” he said. “AI will also create new ethical and privacy concerns, and these are issues that need to be worked out.”
A More Optimistic View
Another prominent private-sector leader has taken a more optimistic view of the potential threat of artificial intelligence. Mark Zuckerberg, chairman and chief executive officer of Facebook, said he had some strong opinions about what Musk had said. Zuckerberg said he doesn’t understand why naysayers try to drum up these “doomsday scenarios,” and he calls such actions irresponsible.
He believes that AI will end up having far less dystopian applications and will help save lives through technology such as disease diagnosis and self-driving cars. AI has the potential to reduce deaths by lowering the number of car accidents through automated vehicles, he noted.