12/25/2006 7:38:26 AM


Source code analysis is not really the focus of my software company. I am just trying to build a fast and intelligent reasoning system to support the development of my other applications. I decided to develop NStatic as a testbed for my core AI technology. It was actually supposed to be a short-term diversion, but I sensed substantial interest in this tool, the problem itself was intriguing, and my AI was improving substantially.

Verification is a strong word. I am trying to introduce a source code analysis tool as an alternative to unit testing. Verification as used by Floyd, Hoare, Dijkstra, and others implies total correctness: the software terminates and is consistent with its annotated specifications, a form of double accounting in which the same program is essentially written twice in a different declarative language (typically, first-order predicate logic). Some industries with heavy reliability needs require such verification, so much so that even the verifier program needs to be verified.
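To make that "double accounting" concrete, here is a minimal, illustrative sketch (mine, not from any verification tool): the imperative body computes an integer square root, while the assertions restate the same computation declaratively as a pre- and postcondition, which is the flavor of annotation that Floyd–Hoare verification checks.

```python
# Illustrative only: a Hoare-style specification written alongside the code.
# The assertions restate, declaratively, what the imperative loop computes.

def integer_sqrt(n: int) -> int:
    assert n >= 0                       # precondition
    r = 0
    while (r + 1) * (r + 1) <= n:       # loop invariant: r*r <= n
        r += 1
    assert r * r <= n < (r + 1) ** 2    # postcondition: r is floor(sqrt(n))
    return r

print(integer_sqrt(10))  # 3
```

A verifier would prove the postcondition holds for every input satisfying the precondition; here the assertions merely check it at runtime for one input.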

I am still learning quite a bit about this field. A few days ago, I learned about model checking, a fascinating approach where one can statically track the execution of a program over time and, using temporal logic, find errors like "after the dialog comes up, the user cannot hit the button three times." Not having any background in this area, I also rediscovered a number of techniques. For example, in translating imperative code into its functional representation, I reinvented denotational semantics. I uncovered a free online book from the 1980s that saved me the trouble of working out the remaining loose ends. Canonicalized if-expressions and binary decision diagrams, another of my reinventions, have also been used by ACL2 and possibly other theorem provers for over a decade.
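The denotational idea can be sketched in a few lines. This is an illustrative toy of my own (the names and representation are not NStatic's internals): each statement denotes a pure function from state to state, and a whole program becomes a single composed function.

```python
# A toy denotational semantics: [[stmt]] : state -> state, where a state
# is a dict of variable bindings. Composition of statements is function
# composition, so an imperative program becomes one pure function.

def denote_assign(var, expr_fn):
    """[[var := e]] -- evaluate e in the old state, bind the result."""
    def run(state):
        new = dict(state)
        new[var] = expr_fn(state)
        return new
    return run

def denote_seq(*stmts):
    """[[s1; s2; ...]] -- left-to-right function composition."""
    def run(state):
        for s in stmts:
            state = s(state)
        return state
    return run

def denote_if(cond_fn, then_stmt, else_stmt):
    """[[if c then s1 else s2]] -- choose a branch by evaluating c."""
    def run(state):
        return then_stmt(state) if cond_fn(state) else else_stmt(state)
    return run

# Denotation of:  x := x + 1; if x > 10 then y := 0 else y := x
prog = denote_seq(
    denote_assign("x", lambda s: s["x"] + 1),
    denote_if(lambda s: s["x"] > 10,
              denote_assign("y", lambda s: 0),
              denote_assign("y", lambda s: s["x"])),
)

print(prog({"x": 3, "y": 99}))  # {'x': 4, 'y': 4}
```

Running the denotation on symbolic rather than concrete values is one way such a functional representation becomes useful for static analysis.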

Until recently, my limited knowledge of this area was a purposeful attempt to come up with original algorithms and avoid being influenced by other software technology. This has paid off well: when I searched issued patents this past week, I only had to make minor modifications to my software.

I previously made the erroneous assumption that the popular commercial software packages currently on the market represented the state of the art. I was a bit surprised at the lack of advancement in the technology. I saw a pyramid that fell into several categories. On the bottom rung were a large number of style checkers and syntax-based tools, which didn't seem to find substantial errors. The next few rungs, in order of increasing sophistication, were the data-flow analysis tools, simulators, and limited theorem provers. The more sophisticated tools find serious bugs, but still have a sizable false-positive rate, a modest find rate, and/or non-interactive analysis times. The errors found also seemed restricted to a limited class, such as pointer errors and bounds checking. Specification tools like ESC/Java and Spec# are new and interesting; however, they only perform modular analysis, a weak form of interprocedural analysis. I wondered: could all of these issues be addressed simultaneously?

The state of the art is actually far more advanced. There apparently are substantial proprietary efforts in various industries (military, hardware, medical, etc.) to guarantee ultra-reliable software, built on top of free industrial-strength theorem provers like HOL, reFLect, and ACL2. In fact, there is a field called formal methods, which includes writing specifications to establish the correctness of programs in conjunction with such theorem provers. Verification might actually be a reality for those companies that absolutely must have reliability and are able to afford it. (By the way, these provers are quite similar to my own AI system in their heavy use of computer algebra and other functional techniques. My previous critique was leveled at traditional provers that were primarily based on predicate logic.)

My software can’t make guarantees of correctness. Its underlying use of computer algebra may even be unsound. However, my software genuinely attempts to understand the source code and to detect a wider class of errors using a fresh, more symbolic approach.






Net Undocumented is a blog about the internals of .NET including Xamarin implementations. Other topics include managed and web languages (C#, C++, Javascript), computer science theory, software engineering and software entrepreneurship.
