Decimal Performance

3/13/2004 3:57:17 AM

I did a quick test to compare the performance of basic mathematical operations on the different numeric types provided by the runtime. My machine uses an Intel Pentium 4. The numbers below are expressed as multiples of the time taken by a single 32-bit integer addition, so Int32 addition is 1.0.

| Type    | Division | Multiply | Addition | Bitwise Or |
|---------|----------|----------|----------|------------|
| Double  | 40.6     | 2.4      | 1.6      | N/A        |
| Int32   | 59.3     | 7.5      | 1.0      | 1.0        |
| Int64   | 73.7     | 41.3     | 9.3      | 2.2        |
| Decimal | 438.2    | 128.7    | 147.9    | N/A        |

Some conclusions...

  1. Results for single-precision and double-precision were essentially the same.

  2. Although addition is faster for 32-bit integers, multiplication and division are faster for floating-point numbers.

  3. Floating-point numbers provide superior performance to 64-bit integers.

  4. Long multiplication is slow on 32-bit machines, where a 64-bit multiply must be synthesized from several 32-bit multiplies and adds.

  5. Division is essentially an order of magnitude slower than multiplication for both integers and doubles. If possible, it is much faster to multiply by the reciprocal (see the sketch after this list).

  6. Decimal arithmetic is implemented in software, so it is roughly two orders of magnitude slower than the hardware-backed native types. The lopsided ratio is similar to that of using reflection instead of direct virtual method calls.

  7. Decimal multiplication is actually faster than decimal addition (128.7 vs. 147.9).
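
To make conclusion 5 concrete, here is a small example of my own (not from the original benchmark) that replaces a per-element divide with a multiply by a hoisted reciprocal; the divisor is parsed so the compiler cannot fold it away:

```csharp
using System;

class ReciprocalDemo
{
    static void Main()
    {
        double[] data = { 3.0, 6.0, 9.0 };
        double divisor = double.Parse("3.0"); // parsed so it is not a compile-time constant

        // Slow form: one floating-point divide per element.
        double[] slow = new double[data.Length];
        for (int i = 0; i < data.Length; i++)
            slow[i] = data[i] / divisor;

        // Fast form: pay for the divide once, then multiply each element.
        double inv = 1.0 / divisor;
        double[] fast = new double[data.Length];
        for (int i = 0; i < data.Length; i++)
            fast[i] = data[i] * inv;

        Console.WriteLine(string.Join(", ", slow));
        Console.WriteLine(string.Join(", ", fast));
    }
}
```

Note that the two forms are not always bit-identical, since the reciprocal is itself rounded; the trick is appropriate when that last-bit difference does not matter.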

Here is the simple profiling code I used with the Snippet Compiler. I simply replaced T with the numeric type I was interested in, as well as the mathematical operators used in the inner loop. Parse statements were used to prevent constant propagation. Prior to scaling the results, I subtracted the time used by the empty loop.
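
The original listing is not reproduced above, so the following is only a minimal sketch of a benchmark in that style, with double and multiplication standing in for T and the operator; the Stopwatch timing and the iteration count are my assumptions, not the original code:

```csharp
using System;
using System.Diagnostics;

class MathBench
{
    const int Iterations = 100000000;

    static void Main()
    {
        // Parse the operands instead of using literals so the JIT
        // cannot constant-fold the expression in the inner loop.
        double a = double.Parse("1.5");
        double b = double.Parse("2.5");
        double r = 0;

        // Time an empty loop first; its cost is subtracted below so the
        // measurement reflects only the operation itself.
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++) { }
        long emptyMs = sw.ElapsedMilliseconds;

        // Time the same loop with the operation under test. Swap out
        // 'double' and '*' to measure other types and operators.
        sw.Restart();
        for (int i = 0; i < Iterations; i++)
            r = a * b;
        long opMs = sw.ElapsedMilliseconds;

        // Print r so the loop body cannot be eliminated as dead code.
        Console.WriteLine("r = " + r + ", net time = " + (opMs - emptyMs) + " ms");
    }
}
```

Dividing each operation's net time by the net time of the Int32 addition loop yields normalized figures like those in the table above.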
