High Level Languages & Performance
I suspect that the performance advantage of procedural languages over newer, more exotic languages does not necessarily exist, and over time the performance gap should lessen. An ecosystem of processor features, developer tools, and algorithms has evolved over the past quarter century that favors Algol-based languages like C and C++. Fortran once regularly outperformed C, probably in part because it was developed and popularized much earlier.
A large part of a language's performance advantage comes from the quality of its compiler and tools, which is usually dictated by the language's popularity, its commercialization, and the resulting positive feedback of revenue into the compiler's continued development. New languages are perpetually at a disadvantage as a result.
Logic programming (primarily Prolog) has always been regarded as slower, but there have been a few efforts to make improvements in this area. The Aquarius project aimed to implement a fast compiler for Prolog; the researchers presented their results in Can Logic Programming Execute as Fast as Imperative Programming? Researchers behind the Mercury language attempted an even faster, industrial-strength implementation with a more general logic programming language that supports full first-order predicate logic, not just the Horn subset, and recently added constraint logic programming. Developers can declare determinism modes for each predicate they define, and the compiler will reorder terms appropriately. The compiler automatically recognizes loops and performs aggressive inlining.
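To give a flavor of what nondeterministic predicates look like, here is a rough analogy sketched in Haskell's list monad rather than Mercury or Prolog: a predicate with multiple solutions becomes a list of answers, and a singleton list corresponds to what Mercury would call a deterministic (`det`) mode. The `parents` facts and all names below are my own invented example, not drawn from the cited papers.

```haskell
import Control.Monad (guard)

-- parent(Parent, Child) as a relation: a list of (parent, child) facts.
parents :: [(String, String)]
parents = [("tom", "bob"), ("bob", "ann"), ("bob", "pat")]

-- grandparent(G, C) :- parent(G, P), parent(P, C).
-- The list monad enumerates all combinations; guard prunes the
-- ones where the intermediate parent does not match, much as a
-- Prolog engine backtracks over failed unifications.
grandparents :: [(String, String)]
grandparents = do
  (g, p)  <- parents
  (p', c) <- parents
  guard (p == p')
  return (g, c)

main :: IO ()
main = print grandparents  -- [("tom","ann"),("tom","pat")]
```

The difference, of course, is that Mercury's compiler knows each predicate's determinism statically and can compile away the search entirely, whereas this sketch always materializes the solution list.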
New chips are typically targeted at the prevailing language of the day. One article, Next Generation Stack Computing, describes how stack-based chip architectures are superior in performance to their conventional register-based cousins in a number of important areas, such as faster procedure calls and reduced complexity (a short pipeline and simpler compilation), but were passed over in an expensive detour by the computer industry. The approach reminds me of Intel's original design for its 64-bit chip, which Intel forwent for greater 32-bit compatibility. I wonder whether such an approach might actually have favored functional languages.
David Chisnall of InformIT debunks the myth that high-level languages are slower. Nate criticizes Dave's assertion that high-level languages like Java provide more information for the compiler to do its magic. I think Dave might have been closer to the truth had he expanded his discussion to declarative languages, which offer far more optimization opportunities.
Even with the advantages of being first, low-overhead, low-level languages like C and C++ are still being challenged by languages that ought to be slower (see the Computer Language Shootout). C# and Java combine a dynamic JIT compiler, which exploits runtime information, with a very efficient garbage collector whose superfast allocator is optimized for locality. Strict functional compilers, like OCaml's, sometimes outperform C++ in benchmarks. Haskell has recently been able to use its laziness to avoid wasted work. New multicore processors tend to favor declarative languages, which parallelize more easily.
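The laziness point is worth a small illustration. In Haskell one can define a conceptually infinite computation and pay only for the portion actually demanded; the classic sieve below (trial division, chosen for brevity rather than speed) is my own example of the idiom, not a benchmark from the Shootout.

```haskell
-- An infinite list of primes. Nothing is computed until demanded.
primes :: [Int]
primes = sieve [2..]
  where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

-- take 5 forces exactly five elements; no work is wasted computing
-- primes past the fifth, even though `primes` has no end.
firstFivePrimes :: [Int]
firstFivePrimes = take 5 primes

main :: IO ()
main = print firstFivePrimes  -- [2,3,5,7,11]
```

In a strict language the same program shape would either diverge or require an explicit generator abstraction; laziness gives the compiler and runtime license to skip work that the rest of the program never observes.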
I haven't even touched on the debate over whether relative language performance matters any more. Dynamic languages like Ruby and Python are proving especially popular, and their performance acceptable, for delivering commercial GUI applications.