The human brain is the most sophisticated and powerful organ in any organism. At a mere three pounds, it contains roughly 100 billion nerve cells, along with other cell types that support its everyday functions. Modeling human reasoning has presented science with a virtually insurmountable challenge: can a computer replicate thinking?

What Is Computational Intelligence?

Computational intelligence (CI) is the study of how intelligent behavior works. It seeks to understand the flexibility and logic behind behavior, starting from the assumption that reasoning is a form of computation. After all, the brain is often described as an integrated computer, so the assumption is a natural one.

The concept of recreating this thinking ability in a machine has intrigued scientists for decades. Philosophers debated it first, in questions of epistemology, the theory of knowledge. In modern times, researchers have sought to harness its power in a more concrete way.

The History of Computational Intelligence

We can trace the contemporary history of CI to the work of the English mathematician and inventor Charles Babbage. Known as the father of the computer, he first proposed the concept of an Analytical Engine: a machine that would apply the principles of human reasoning to problems ranging from simple arithmetic to more complicated calculations. Babbage’s work opened the door to the modern field of CI.

Fast forward to the 1950s and the work of Alan Turing. The ambitious computer scientist explored more complex questions about algorithms, culminating in his Turing Test as a measure of machine intelligence. During World War II, he had worked with the British government on techniques for breaking German codes.

Turing brought a human-reasoning component to his work through his interest in mathematical biology. His main focus was morphogenesis, the development of shapes in living things. He theorized that an analytical process drove biology and morphology. Later, in the 1960s and 1970s, scientists began focusing on the sophistication of intelligence to create more robust models.

The Move Toward Human Reasoning

Early work in this field centered more on artificial intelligence (AI). It went hand in hand with the development of computers as machines that were hard-coded to produce a fixed output from a given input. CI recognizes that human reasoning often follows a different path, one that includes flexibility, adaptation, and learning from past experience.

The goal of CI then became to replicate how humans use their intelligence to answer questions and solve problems. Research began in earnest with the formation of the IEEE Computational Intelligence Society. Since then, CI has moved from a perplexing problem in theoretical biology to the technological development of practical applications.

How Computational Intelligence Works

One of the stumbling blocks that keeps AI from mirroring human reasoning is its rigidity. It uses standard Boolean true-or-false answers for problem-solving, when real answers are often more nuanced, with an array of gray areas in between. It also misses out on the potential for adaptation and learning. CI can overcome these barriers with new approaches.

That is where CI offers more possibilities than AI. To address these issues, it draws on five main technologies:

  • Neural Networks
  • Fuzzy Systems
  • Evolutionary Computation
  • Learning Theory
  • Probabilistic Methods

Neural Networks

The concept of creating neural networks is itself human-like. The brain can form new neural connections throughout life, a process known as neuroplasticity. It can also respond to injury and disease by reorganizing itself. CI strives to replicate this with the development of artificial neural networks for more intelligent analysis and learning.
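
To make the idea concrete, here is a minimal sketch, not the method of any particular CI system, of a tiny feedforward network that learns the XOR function by gradually adjusting its connection weights, loosely echoing how the brain strengthens and weakens connections. The layer sizes, learning rate, and use of NumPy are illustrative assumptions.

```python
# A minimal sketch: a tiny two-layer network that learns XOR by
# gradient descent, using only NumPy.
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and targets: a problem a single linear unit cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 sigmoid units, then one sigmoid output unit.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Weight updates: "learning" is adjusting connection strengths.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # approaches [[0], [1], [1], [0]]
```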

Fuzzy Systems

Fuzzy logic is a stark departure from traditional AI because it recognizes that some problems don’t have black-or-white solutions. It embraces the concept of partial truth to better mirror human reasoning. Philosophy has long identified the flawed thinking it avoids in a concept called the false dichotomy: the logical fallacy of considering only two possible answers to a question.
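
As a simple illustration, the sketch below models "warm" as a fuzzy set: membership is a degree between 0 and 1 rather than a Boolean. The temperature breakpoints are invented for illustration, not taken from any standard.

```python
# A minimal sketch of fuzzy (partial) truth: membership in the set
# "warm" is a degree between 0 and 1 rather than true or false.
def warm_membership(temp_c: float) -> float:
    """Triangular membership function for 'warm', peaking at 22 C."""
    if temp_c <= 10 or temp_c >= 34:
        return 0.0
    if temp_c <= 22:
        return (temp_c - 10) / 12   # rising edge: 10 C -> 22 C
    return (34 - temp_c) / 12       # falling edge: 22 C -> 34 C

for t in (5, 16, 22, 28, 40):
    print(f"{t:>2} C is warm to degree {warm_membership(t):.2f}")
```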

Evolutionary Computation

Evolutionary computation capitalizes on processes of adaptation observed in nature to refine a system’s responses: the behavior of the system evolves over time. In some ways, it might even supersede human reasoning by giving CI the ability to avoid repeating past mistakes.
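
A minimal sketch of the idea, under invented assumptions (the objective function and mutation size are illustrative), is a (1+1) evolution strategy: mutate a candidate solution and keep the mutation only when it improves fitness, so behavior refines over generations.

```python
# A (1+1) evolution strategy minimizing a simple quadratic: mutate the
# current candidate and select the child only if it is at least as fit.
import random

def fitness(x: float) -> float:
    return (x - 3.0) ** 2  # minimum at x = 3

random.seed(0)
parent = 10.0
for generation in range(200):
    child = parent + random.gauss(0, 0.5)   # mutation
    if fitness(child) <= fitness(parent):   # selection keeps improvements
        parent = child

print(f"best candidate after 200 generations: {parent:.3f}")
```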

Learning Theory

Learning theory replicates human reasoning in that it adapts behavior based on past experience. The sources of these influences include social, environmental, and cognitive stimuli. It fosters the development of new knowledge for better decision-making, and it has the potential to avoid the fallibilities of human logic: fallacies, heuristics, and groupthink.
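
One simple way to formalize learning from experience, offered here only as an illustrative sketch (the two actions and their payoff rates are invented), is an epsilon-greedy agent that updates its value estimates from observed rewards and comes to favor the better action.

```python
# An epsilon-greedy agent: estimate the value of two actions from
# observed rewards, mostly exploiting the best estimate so far while
# occasionally exploring the alternative.
import random

random.seed(1)
true_reward_prob = [0.3, 0.7]   # hidden payoff rates of two actions
estimates = [0.0, 0.0]
counts = [0, 0]
epsilon = 0.1                   # exploration rate

for trial in range(1000):
    if random.random() < epsilon:
        action = random.randrange(2)              # explore
    else:
        action = estimates.index(max(estimates))  # exploit
    reward = 1.0 if random.random() < true_reward_prob[action] else 0.0
    counts[action] += 1
    # Incremental mean: nudge the estimate toward the new observation.
    estimates[action] += (reward - estimates[action]) / counts[action]

print("learned value estimates:", [round(e, 2) for e in estimates])
```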

Probabilistic Methods

Probabilistic methods recognize the roles of uncertainty and randomness in both inputs and models. They aim to identify these sources of uncertainty and find ways to reduce them, producing more accurate predictions grounded in prior knowledge. In many ways, this mirrors Bayesian statistical analysis. Even the act of acknowledging randomness can be a superior trait.
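
Since these methods mirror Bayesian analysis, here is a minimal sketch of a Bayesian update: a prior belief about a coin's bias is revised after observing flips. The prior and the observed counts are invented for illustration.

```python
# Bayesian updating with a Beta prior over a coin's heads probability:
# the posterior mean blends prior belief with observed evidence.
def posterior_heads_prob(prior_heads: float, prior_tails: float,
                         heads_seen: int, tails_seen: int) -> float:
    """Posterior mean of a Beta prior after Bernoulli observations."""
    return (prior_heads + heads_seen) / (
        prior_heads + prior_tails + heads_seen + tails_seen)

# Start with a uniform Beta(1, 1) prior, then observe 8 heads, 2 tails.
print(posterior_heads_prob(1, 1, 8, 2))  # 0.75: belief shifts with data
```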

Pulling It All Together

Each of these approaches seeks to bring the flexibility of human intelligence to CI. Learning is crucial to survival. Humans adapt to changes in their environment by gaining new knowledge and skills, the same process that drives neuroplasticity. CI attempts to harness these same methods through the development of new technologies. But it remains a vast undertaking.

Reproducing the activity of a human brain would take the energy of a small hydroelectric plant producing 10 megawatts of power, explains Stanford University computer scientist Kwabena Boahen. But through research into CI methods, science has gained a greater understanding of how the human brain operates.

The mechanisms of learning and adaptation are better understood today. And with a more complete knowledge base come possibilities for applications in other fields such as medicine and disease prevention. Often, the biggest obstacle researchers face is identifying the mechanisms behind health risks. CI can help provide this vital data.

There are more than 1,000 identified brain and neurological disorders, affecting more than 50 million Americans. The potential of CI to fully understand how the brain works and learns offers much more to the global community. It all begins with opening the doors of knowledge. From there come understanding and hope.
