Will Machines Become Conscious?

1/29/2004 12:40:37 PM

I came across this link, "Will Machines Become Conscious?", on KurzweilAI.net.

Ray Kurzweil is a leading expert on artificial intelligence and the author of the book "The Age of Intelligent Machines." One of the companies Kurzweil founded is ScanSoft, which develops optical character recognition (OCR) software for converting scanned images into text. It is a very strong piece of technology.

Kurzweil believes that computers will eventually reach human intelligence ("strong AI"), and he debunks many of the existing criticisms levied against strong AI. He also draws an important distinction between AI programs like Deep Blue, the chess-playing program, which rely on exhaustive brute-force searches, and AI programs that more closely simulate the way humans think. Critics tend to point to the former when discussing the limitations of AI, when they should really be concerned with the latter. Newer chess-playing programs with only a fraction of Deep Blue's horsepower, but which rely on human-like techniques instead of brute force, are better chess players.
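To make the distinction concrete, here is a minimal sketch of the same game tree searched two ways. The IPosition interface and its Evaluate heuristic are hypothetical stand-ins I made up for illustration; this is not code from Deep Blue or any real chess engine, and real "human-like" programs use far more selectivity than simple alpha-beta pruning.

    using System;
    using System.Collections.Generic;

    // Hypothetical position type: not from any real engine.
    interface IPosition
    {
        IEnumerable<IPosition> LegalMoves();   // successor positions
        int Evaluate();                        // heuristic score from the mover's point of view
    }

    static class Search
    {
        // Brute force: expand every branch to a fixed depth.
        public static int Exhaustive(IPosition p, int depth)
        {
            if (depth == 0) return p.Evaluate();
            int best = int.MinValue;
            foreach (IPosition next in p.LegalMoves())
                best = Math.Max(best, -Exhaustive(next, depth - 1));
            return best == int.MinValue ? p.Evaluate() : best;   // no legal moves
        }

        // Heuristic-guided: alpha-beta pruning skips branches the evaluation
        // says cannot matter, so far fewer positions are ever examined.
        public static int AlphaBeta(IPosition p, int depth, int alpha, int beta)
        {
            if (depth == 0) return p.Evaluate();
            int best = int.MinValue;
            foreach (IPosition next in p.LegalMoves())
            {
                best = Math.Max(best, -AlphaBeta(next, depth - 1, -beta, -alpha));
                alpha = Math.Max(alpha, best);
                if (alpha >= beta) break;   // prune: the opponent would never allow this line
            }
            return best == int.MinValue ? p.Evaluate() : best;
        }
    }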

I do believe that machines will someday reach and surpass the intelligence of humans, a belief that I did not originally hold but developed after studying how humans think and how computers operate.

As for developing consciousness, I am not quite as sure, since we are moving into nebulous territory. Although it is a provocative question, would this even be desirable or useful? For me, a "conscious" machine would be one with sensory capabilities such as sight and hearing; some emotion-like capacity to infer threats and opportunities ("fear" and "lust," "pain" and "pleasure") from the environment; an ability to learn from the environment and to communicate with or react to it; and an awareness of itself within that environment. Even if that definition doesn't quite equate with human "consciousness," it would be hard to tell the difference. I have a hunch it will happen, if humans don't destroy themselves first, but probably not in my lifetime.

I looked at an AI book a few days ago that estimated the neural capacity of the human brain by multiplying the number of neurons by the speed of each individual neuron, arriving at the equivalent of roughly 10^17 operations per second, a level that would take computers a few decades to reach if Moore's law maintains its current rate. Of course, computers can potentially be more efficient than humans at utilizing their processing speed, so they might never need to reach that level of raw power. (I don't necessarily think that this line of reasoning is valid.)
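As a back-of-the-envelope check, here is roughly how that kind of estimate is put together. The constants below are commonly cited ballpark figures rather than the book's exact numbers, and the starting CPU speed and doubling period are my own assumptions.

    using System;

    class BrainEstimate
    {
        static void Main()
        {
            double neurons       = 1e11;   // ~100 billion neurons (ballpark)
            double synapses      = 1e3;    // ~1,000 connections per neuron (ballpark)
            double firingsPerSec = 1e3;    // up to ~1,000 signals per second (ballpark)

            double brainOps = neurons * synapses * firingsPerSec;   // ~1e17 ops/sec

            double cpuOps = 1e10;          // a desktop CPU circa 2004, a few GHz
            double doublings = Math.Log(brainOps / cpuOps, 2);
            double years = doublings * 2;  // assume one Moore's-law doubling every 2 years

            Console.WriteLine("Brain estimate: {0:e1} ops/sec", brainOps);
            Console.WriteLine("Doublings needed: {0:f0}, roughly {1:f0} years", doublings, years);
            // Prints about 23 doublings, i.e. a few decades -- the "catch up" horizon.
        }
    }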

As you probably know if you have read my blog in the past, I am developing a software company that produces common commercial desktop applications employing artificial intelligence. My software won't be ready for another year.

Part of the reason we don't see smart applications today is that there are no AI libraries available on popular platforms like Windows and the Mac. Few applications used gradients or alpha blending before Microsoft offered direct API support in Windows; it's no different for even more complex technology like AI. (Longhorn will ship with a Natural Language API, but it will be limited to tasks such as spell-checking; there will be no ability to parse natural language text or convert it into semantic forms.) And natural language processing is only one of many types of AI.
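To illustrate what I mean by a semantic form, the toy sketch below reduces a trivially simple sentence to a subject-verb-object triple. It is a made-up example, not the Longhorn API or any shipping library; a real parser needs a grammar, a lexicon, and word-sense disambiguation.

    using System;

    class SemanticSketch
    {
        static void Main()
        {
            // Only handles trivially simple "X verb Y" sentences.
            string sentence = "Kurzweil founded ScanSoft.";
            string[] words = sentence.TrimEnd('.').Split(' ');

            // The crudest possible "semantic form": a subject-verb-object triple.
            string subject = words[0];
            string verb    = words[1];
            string obj     = String.Join(" ", words, 2, words.Length - 2);

            Console.WriteLine("({0}, {1}, {2})", subject, verb, obj);
            // Prints: (Kurzweil, founded, ScanSoft)
        }
    }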

Another reason is that many of the professional languages and tools we use today don't work well with AI. Garbage collection, for example, is essential, I believe, since much of AI relies on complex cyclic graphs and search algorithms, which make manual memory management difficult. Hierarchical list data structures also map more closely to the way we think than class objects do.
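For example, here is a hypothetical concept graph of the kind I have in mind (a sketch for illustration, not my actual data structures). The nodes reference each other freely and cycles appear immediately, so there is no natural "owner" for any node; a garbage collector reclaims the whole tangle when the last outside reference goes away, whereas manual memory management or naive reference counting would have to break the cycles by hand.

    using System;
    using System.Collections.Generic;

    // Hypothetical concept graph: nodes link to other nodes, cycles included.
    class Concept
    {
        public string Name;
        public List<Concept> Links = new List<Concept>();
        public Concept(string name) { Name = name; }
    }

    class Demo
    {
        static void Main()
        {
            Concept dog    = new Concept("dog");
            Concept animal = new Concept("animal");
            Concept pet    = new Concept("pet");

            dog.Links.Add(animal);   // dog is-a animal
            dog.Links.Add(pet);      // dog is-a pet
            pet.Links.Add(dog);      // a typical pet: dog -- a cycle

            Console.WriteLine("{0} links to {1} concepts", dog.Name, dog.Links.Count);
            // When these variables go out of scope, the GC collects the cycle;
            // nothing in the program had to track ownership or reference counts.
        }
    }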

A final reason is memory requirements. In addition to libraries, operating systems would have to ship with databases of world knowledge, such as the semantic relationships between words. My current software has an in-memory database of over 10 megabytes, which several years ago would have been unthinkable because that was more memory than most computers had.
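To give a sense of scale, here is a sketch of what such a store might look like in memory, along with a rough size estimate. The shape of the data and all of the numbers are illustrative assumptions, not my actual database or schema.

    using System;
    using System.Collections.Generic;

    class SemanticStore
    {
        static void Main()
        {
            // word -> list of (relation, related word) pairs
            Dictionary<string, List<string[]>> relations = new Dictionary<string, List<string[]>>();
            relations["dog"] = new List<string[]>
            {
                new string[] { "is-a", "canine" },
                new string[] { "part", "tail" }
            };

            foreach (string[] pair in relations["dog"])
                Console.WriteLine("dog --{0}--> {1}", pair[0], pair[1]);

            // Back-of-the-envelope size: 100,000 words x 3 relations x ~40 bytes
            // each is ~11 MB before any indexes -- hence the memory requirement.
            Console.WriteLine("~{0:f0} MB", 100000.0 * 3 * 40 / (1024 * 1024));
        }
    }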

Some of my MBA friends reminded me of the AI debacle of the 1980s. AI is a broad term, and I don't believe the bad investments of the past relate to anything I am doing today. I am providing a radical improvement on existing applications. The field is much more mature and advanced than it was then, and the hardware on today's desktop is more powerful than the mainframes of that era.
