On Intelligence

8/7/2005 1:03:22 AM

Over my lifetime, I have developed a set of unintuitive tenets from studying disciplines such as psychology, economics, biology, statistics, and computer science. These tenets hold that efficient, complex, and seemingly intelligent behavior can emerge from simple, unintelligent forces. Obvious examples include the “invisible hand” in economics and natural selection in biology. In AI, there are the classic examples of neural networks and genetic programming.

I also believe that the notion of human intelligence is similarly derivable from simple phenomena. From personal experience, I developed through practice the ability to obtain perfect scores on some common standardized tests, yet when I do so, I surprisingly do not feel intelligent, because I am simply, explicitly, and mechanically recognizing and applying fairly simple rules.

Recently, I skimmed through two books that have something to say about the origin of human intelligence…

I looked briefly at On Intelligence by Jeff Hawkins, which I will probably purchase in the future… The book deals with artificial intelligence and the human brain. One section on the human brain, on page 43, caught my eye. Jeff notes that the neuroscientist Vernon Mountcastle observed that the cells in different areas of the brain devoted to different activities, like vision and motion, are fundamentally similar.

… the neocortex is remarkably uniform. The same layers, cell types and connections exist throughout. It looks like the six business cards everywhere. The differences are often so subtle that trained anatomists can’t agree on them. Therefore … all regions of the cortex are performing the same operations. The thing that makes the vision area visual and the motor area motoric is how the regions of the cortex are connected to each other and to other parts of the central nervous system.

In fact, … the reason one region of the cortex looks slightly different from another is because of what it is connected to, and not because its basic function is different. He concludes that there is a common function, a common algorithm, that is performed by all the cortical regions. Vision is no different from hearing, which is no different from motor output. He allows that our genes specify how the regions of cortex are connected, which is very specific to functions and species, but the cortical tissue itself is doing the same thing.

Jeff found that observation surprising, given that sight, hearing, and touch seemed very different, with fundamentally different qualities. He concludes that the human brain is fundamentally a memory-driven machine using pattern recognition techniques, essentially a rules-based machine.

Another related book that I looked at is A New Kind of Science by Stephen Wolfram, who developed Mathematica and spent ten years of his life writing this tome. I had put off reading the book earlier because of its size, its singular focus on cellular automata, and some lukewarm or harsh critical reviews. Wolfram came across as arrogant, while the content was often deemed narrow and insignificant. Kinder critics agreed that, while the content was intellectually impressive, Wolfram appropriates ideas discovered by others, such as the Church-Turing thesis, and claims them as his own.

Wolfram spends most of the book on simple cellular automata in order to push transformation rules, not traditional equations, as the basis for a new kind of science. This is not surprising, since Mathematica is founded on rules. Before the book was even released, I had anticipated his focus on rules, as I was similarly mesmerized by their power and simplicity in his product, so much so that my own AI product partly relies on a similar system.
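As an aside for readers who have not seen these automata: the sketch below is my own code, not anything from the book. It runs one of Wolfram’s favorite examples, the elementary cellular automaton known as Rule 30, where a single three-cell lookup rule, applied row after row, produces surprisingly intricate patterns from a single starting cell.

```csharp
using System;

// A minimal sketch of an elementary cellular automaton. A single three-cell
// lookup rule, applied row after row, generates the intricate triangular
// patterns Wolfram dwells on; Rule 30 is one of his favorite examples.
class ElementaryAutomaton
{
    static void Main()
    {
        const int width = 64;
        const int steps = 32;
        const int rule = 30;              // the 8-bit lookup table, written as a number

        var cells = new bool[width];
        cells[width / 2] = true;          // start from a single "on" cell

        for (int step = 0; step < steps; step++)
        {
            Console.WriteLine(Render(cells));
            var next = new bool[width];
            for (int i = 0; i < width; i++)
            {
                // The cell and its two neighbors (wrapping at the edges) form
                // a 3-bit index into the rule's lookup table.
                int index = (cells[(i - 1 + width) % width] ? 4 : 0)
                          | (cells[i] ? 2 : 0)
                          | (cells[(i + 1) % width] ? 1 : 0);
                next[i] = ((rule >> index) & 1) == 1;
            }
            cells = next;
        }
    }

    static string Render(bool[] cells)
    {
        var chars = new char[cells.Length];
        for (int i = 0; i < cells.Length; i++)
            chars[i] = cells[i] ? '#' : '.';
        return new string(chars);
    }
}
```

Each printed row is one time step; the bits of the rule number are nothing more than the answers for the eight possible neighborhoods, which is the entire “program.”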

Mathematica is a sophisticated computer algebra system that can manipulate mathematical expressions containing symbols (e.g., variables, functions, symbolic constants, …) just as easily as those containing numbers. Despite objections from one of my readers (optionsScalper), these systems are generally considered an area of AI. Mathematica uses a declarative style of programming in which one adds new rules. Mathematica evaluates expressions repeatedly by applying transformations from a dictionary of rules using symbolic pattern matching. One can create a new function, and Mathematica can automatically deduce its derivative in a number of ways, ranging from simple algebraic rules to the complicated procedure of applying the limit definition to the function. I have been very impressed by the language. I especially like the way that subexpressions can remain unevaluated because of undefined symbols, yet the whole expression can still be evaluated and transformed.
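To make that rule-driven style concrete, here is a toy sketch in C# of symbolic differentiation expressed purely as one rewrite rule per expression shape. It is my own illustration, nothing like Mathematica’s actual engine, and the expression types are invented for the example.

```csharp
using System;

// A toy sketch of rule-driven symbolic differentiation: expressions are small
// trees, and the derivative is defined purely as one rewrite rule per shape.
abstract record Expr;
record Num(double Value) : Expr;
record Var(string Name) : Expr;
record Add(Expr Left, Expr Right) : Expr;
record Mul(Expr Left, Expr Right) : Expr;

static class DerivativeRules
{
    public static Expr D(Expr e, string x) => e switch
    {
        Num _             => new Num(0),                      // d/dx c = 0
        Var v             => new Num(v.Name == x ? 1 : 0),    // d/dx x = 1
        Add(var a, var b) => new Add(D(a, x), D(b, x)),       // sum rule
        Mul(var a, var b) => new Add(new Mul(D(a, x), b),     // product rule
                                     new Mul(a, D(b, x))),
        _ => throw new ArgumentException("no rule matches this expression")
    };
}

class Demo
{
    static void Main()
    {
        // f(x) = x * x + 3
        Expr f = new Add(new Mul(new Var("x"), new Var("x")), new Num(3));
        // Prints an unsimplified tree equivalent to 1*x + x*1 + 0.
        Console.WriteLine(DerivativeRules.D(f, "x"));
    }
}
```

The appeal is that each rule reads almost exactly the way a calculus textbook states it; the engine is just matching shapes and substituting.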

The proper way to view this book is as his extrapolations about the world based on his experience designing Mathematica. One of these extrapolations involves human intelligence… (pgs 626–629)

But what about the whole process of human thinking? What does it ultimately involve? My strong suspicion is that the use of memory is what in fact underlies almost every major aspect of human thinking… Capabilities like generalization, analogy, and intuition immediately seem very closely related to the ability to retrieve data from memory on the basis of similarity.
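As a deliberately crude illustration of that suspicion, retrieval by similarity can be sketched as nothing more than picking the stored case whose wording overlaps most with the current situation. The remembered cases and the word-overlap measure below are my own invention, not anything proposed in the book.

```csharp
using System;
using System.Linq;

// A deliberately crude sketch of "thinking" as retrieval from memory on the
// basis of similarity, with no inference engine at all.
class SimilarityMemory
{
    static readonly (string Situation, string Response)[] Memory =
    {
        ("the sky is dark and cloudy",     "carry an umbrella"),
        ("the road is icy and slippery",   "drive slowly"),
        ("the kettle is whistling loudly", "turn off the stove"),
    };

    // Similarity here is just the fraction of shared words (a Jaccard index).
    static double Similarity(string a, string b)
    {
        var wordsA = a.Split(' ').ToHashSet();
        var wordsB = b.Split(' ').ToHashSet();
        return (double)wordsA.Intersect(wordsB).Count() / wordsA.Union(wordsB).Count();
    }

    static void Main()
    {
        string current = "tonight the sky is cloudy and dark";
        var best = Memory.OrderByDescending(m => Similarity(m.Situation, current)).First();
        Console.WriteLine($"closest memory: \"{best.Situation}\" -> {best.Response}");
    }
}
```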

Mathematica manipulates mathematical expressions without using any Prolog-like logical inferencing, which suggests that symbolic pattern matching is a more general approach; yet extensive pattern matching is very rare in software and in commercial programming languages (a small sketch follows Wolfram’s remarks below). My own software does include inferencing, which I believe is valuable, but not as valuable as the pattern-matching approach. About logical reasoning, Wolfram remarks…

But what about capabilities like logical reasoning? Do these perhaps correspond to a higher level of human thinking?

In the past it was often thought that logic might be an appropriate idealization for all of human thinking. And largely as a result of this, practical computer systems have always treated logic as something quite fundamental. But it is my strong suspicion that in fact logic is very far from fundamental, particularly in human thinking.

For among other things, whereas in the process of thinking we routinely manage to retrieve remarkable connections almost instantaneously from memory, we tend to be able to carry out logical reasoning only by laboriously going from one step to the next. And my strong suspicion is that when we do this we are in effect again just using memory and retrieving patterns of logical argument that we have learned from experience.
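Coming back to the pattern-matching point: below is a compact sketch of rule-style simplification written with a modern language’s built-in pattern matching (recent C#). The expression shapes and the handful of rules are my own toy choices, not Mathematica’s.

```csharp
using System;

// A compact sketch of rule-style simplification using built-in pattern matching.
abstract record Expr;
record Num(double Value) : Expr;
record Sym(string Name) : Expr;
record Add(Expr Left, Expr Right) : Expr;
record Mul(Expr Left, Expr Right) : Expr;

static class Simplifier
{
    // Simplify the children first, then try the rewrite rules at this node.
    public static Expr Simplify(Expr e) => e switch
    {
        Add(var a, var b) => ApplyRules(new Add(Simplify(a), Simplify(b))),
        Mul(var a, var b) => ApplyRules(new Mul(Simplify(a), Simplify(b))),
        _                 => e
    };

    // Each arm reads almost exactly the way a person would state the rule.
    static Expr ApplyRules(Expr e) => e switch
    {
        Add(var x, Num(0.0))  => x,                            // x + 0 -> x
        Add(Num(0.0), var x)  => x,                            // 0 + x -> x
        Mul(var x, Num(1.0))  => x,                            // x * 1 -> x
        Mul(_,      Num(0.0)) => new Num(0),                   // x * 0 -> 0
        Add(Num a, Num b)     => new Num(a.Value + b.Value),   // fold constants
        _                     => e
    };
}

class Demo
{
    static void Main()
    {
        // (a * 1) + (2 + 3) simplifies to a + 5.
        Expr e = new Add(new Mul(new Sym("a"), new Num(1)),
                         new Add(new Num(2), new Num(3)));
        Console.WriteLine(Simplifier.Simplify(e));
    }
}
```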

In addition, the way rules are specified mirrors closely how humans would articulate such rules. My own insight is that the same system of rules can also work just as well with natural language, not just mathematical expressions (a sketch follows the quote below). Indeed, this is what he mentions in his book…

In modern times, computer languages have often been thought of as providing precise ways to represent processes that might otherwise be carried out by human thinking. But it turns out that almost all of the major languages in use today are based on setting up procedures that are in essence direct analogs of step-by-step logical arguments…

As it happens, however, one notable exception is Mathematica. And indeed, in designing Mathematica, I specifically tried to imitate the way that humans seem to think about many kinds of computations. And the structure that I ended up with for Mathematica can be viewed as being not unlike a precise idealization of the operation of human memory.

For the core of Mathematica is the notion of storing collections of rules in which each rule specifies how to transform all pieces of data that are similar enough to match a single Mathematica pattern. And the success of Mathematica provides considerable evidence for the power of that kind of approach.
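My earlier point about natural language can be sketched in the same spirit: a small dictionary of phrase patterns with capture groups, applied to sentences as rewrite rules. The patterns below are invented for illustration; real language obviously needs far richer rules than regular expressions over word shapes.

```csharp
using System;
using System.Text.RegularExpressions;

// A small sketch of the rule-and-pattern style turned on natural language.
class PhraseRewriter
{
    // Each rule pairs a pattern (with capture groups) and a rewrite template.
    static readonly (string Pattern, string Replacement)[] Rules =
    {
        (@"^(\w+) is taller than (\w+)$",    "$2 is shorter than $1"),
        (@"^(\w+) bought (\w+) from (\w+)$", "$3 sold $2 to $1"),
    };

    static string Rewrite(string sentence)
    {
        foreach (var (pattern, replacement) in Rules)
            if (Regex.IsMatch(sentence, pattern))
                return Regex.Replace(sentence, pattern, replacement);
        return sentence;   // no rule matched; leave the sentence unchanged
    }

    static void Main()
    {
        Console.WriteLine(Rewrite("alice is taller than bob"));
        // prints: bob is shorter than alice
    }
}
```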

He also makes the following conclusions on the nature of human intelligence …

There has been in the past a great tendency to assume that given all its apparent complexity, human thinking must somehow be an altogether fundamentally complex process, not amenable at any level to simple explanation or meaningful theory.

But from the discoveries in this book we now know that highly complex behavior can in fact arise even from very simple basic rules. And from this it immediately becomes conceivable that there could in reality be quite simple mechanisms that underlie human thinking. … And it is in the end my strong suspicion that most of the core processes needed for general human-like thinking will be able to be implemented with rather simple rules.
