The idea of humankind creating an intelligent machine that could think for itself has been a topic of discussion for centuries, but the technology to do so arrived only in the mid-20th century. Today’s 21st-century supercomputers, smartphones and smart cars are just the next phases of what some call the fourth industrial revolution: artificial intelligence.
What is Artificial Intelligence?
Computer scientist John McCarthy is credited with coining the term “Artificial Intelligence” at a conference at Dartmouth College in 1956. McCarthy defined artificial intelligence, or AI, as the science and engineering of making intelligent machines that could reason, learn, perceive, solve problems and understand language.
McCarthy and other scientists and researchers wanted to know whether machines could truly be made to think for themselves, beyond the logic they were programmed to process. Although McCarthy came up with the name for the new discipline, research on it had already been progressing for decades.
Why is AI Important?
Artificial intelligence draws on several disciplines, including philosophy, mathematics, economics, computer engineering and statistics, making it a rich field for study. After decades of slow but steady progress, the expanding focus on artificial intelligence by the research community and industry has raised expectations and excitement about how advances in AI could benefit fields ranging from health care and education to transportation and the space industry.
The move toward this greater use of AI, however, comes with risks and challenges for government, the private sector, the scientific community and the general public. Issues such as the loss of jobs, safety, regulations, law and security are just some concerns that will need to be considered as we push the AI envelope.
One of the first names tied to the history of artificial intelligence is English mathematician Alan Turing, often called the father of modern computer science. Born in 1912, Turing graduated from university in the 1930s and soon after came up with the idea of a machine capable of computing any computable function. His paper proposed what would later be called the “Turing machine,” a device that could execute any algorithm, making it appear that the machine could think.
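The abstract device Turing described can be illustrated with a short simulator. This is a minimal sketch of the general idea, not Turing’s original formulation; the example transition table (a hypothetical machine that flips every bit on the tape) is purely illustrative.

```python
# Minimal Turing machine simulator: a state, a tape, a read/write head,
# and a transition table mapping (state, symbol) -> (new state, write, move).
def run_turing_machine(tape, transitions, state="start", accept="halt"):
    tape = list(tape)
    head = 0
    while state != accept:
        symbol = tape[head] if head < len(tape) else "_"  # "_" = blank cell
        state, write, move = transitions[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip("_")

# Illustrative machine: scan right, flipping each bit, halt at the blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine("10110", flip))  # -> 01001
```

Everything the machine “knows” lives in the transition table; the simulator itself is fixed, which is the sense in which one device can execute any algorithm.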
Turing’s real opportunity to test his theories came during World War II, when he focused his efforts on breaking the cryptography the Germans used in their communications with the other Axis powers. He helped design and build an electromechanical machine that could break the Enigma codes, decrypting German messages and aiding the war effort.
After the war, in 1950, Turing proposed what is now known as the Turing Test. His goal was not to ask whether computers can think but whether an intelligent machine could pass a behavioral test for intelligence. In the test, a person acting as a judge conducts a five-minute written conversation with an unseen partner, then must decide whether that partner is a computer or a human.
Also in the 1950s, Allen Newell and Herbert Simon created what many consider the first artificial intelligence program. Their “Logic Theorist” program tried to solve problems by searching a tree of possibilities, which became a stepping stone for AI development.
Artificial intelligence took another step forward in the 1950s and 1960s with the board game of chess. Claude Shannon, an American electrical engineer, mathematician and cryptographer, proposed in a paper that it was possible to create computer programs that could play chess and beat human players. Scientists went on to create two distinct kinds of programs: one used brute force, examining thousands of chess moves with a search algorithm, while the other used specialized heuristics and strategy to win matches.
In the 1970s, brute-force programs became the dominant chess programs available. In the mid-1980s, the growth of artificial intelligence faced some challenges, but the field regained public interest in 1991 when the U.S. military successfully used AI in systems deployed during Operation Desert Storm.
In 1997, AI was back in the news with chess when IBM’s Deep Blue computer challenged and defeated then-World Chess Champion Garry Kasparov. Deep Blue used a brute-force program of the kind Shannon classified as Type A, analyzing vast numbers of possible chess moves during the match.
Other notable dates in the history of artificial intelligence, according to the University of Denver Sturm College of Law, include:
- 1998 – Dave Hampton and Caleb Chung create Furby, the first domestic or pet robot
- 2000 – Cynthia Breazeal of MIT develops Kismet, a robot that could recognize and simulate emotions
- 2000 – Honda’s ASIMO artificially intelligent humanoid robot walks as fast as a human, delivering trays to customers in a restaurant setting
- 2009 – Google starts secretly developing a driverless car. In 2014, this car passed a self-driving test in the State of Nevada
- 2009 – Computer scientists at the Intelligent Information Laboratory at Northwestern University develop Stats Monkey, a program that writes sports news stories without human intervention
- 2011 – Watson, a natural language question-answering computer, competes on the game show Jeopardy! and defeats two former show champions.
In 2016, the White House released a report, Artificial Intelligence, Automation, and the Economy, suggesting that policymakers should prepare for five economic areas that could be affected by the growth of artificial intelligence. The report also suggests three broad strategies for addressing the impacts of AI-driven automation across the entire U.S. economy.
The document describes how continued engagement among government, industry, technical and policy experts, and the public can help move the nation toward policies that unlock the creative potential of artificial intelligence.
Said the report: “AI-driven automation will transform the economy over the coming years and decades. The challenge for policymakers will be to update, strengthen, and adapt policies to respond to the economic effects of AI.”