## Decimal Performance

I did a quick test to compare the performance of basic mathematical operations on the different numeric types provided by the runtime. My machine uses an Intel Pentium 4. The numbers below are normalized to the time it takes to perform one 32-bit integer addition.

| Type    | Division | Multiply | Addition | Bitwise Or |
|---------|----------|----------|----------|------------|
| Double  | 40.6     | 2.4      | 1.6      | N/A        |
| Int32   | 59.3     | 7.5      | 1.0      | 1.0        |
| Int64   | 73.7     | 41.3     | 9.3      | 2.2        |
| Decimal | 438.2    | 128.7    | 147.9    | N/A        |

Some conclusions...

- Results for single-precision and double-precision floats were essentially the same.
- Although addition is faster for 32-bit integers, multiplication and division are faster for floating-point numbers.
- Floating-point numbers outperform 64-bit integers across the board.
- Long (64-bit) multiplication is slow on 32-bit machines.
- Division is roughly an order of magnitude slower than multiplication for both integers and doubles. Where possible, it is much faster to multiply by the reciprocal.
- Decimal arithmetic is implemented in software, so it is about two orders of magnitude slower than the hardware-backed native types. The ratio is similar to that of reflection versus virtual method calls.
- Decimal multiplication is actually faster than decimal addition.
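To illustrate the reciprocal trick mentioned in the division bullet: when many values are divided by the same divisor, the division can be hoisted out of the loop as a single reciprocal and replaced with cheap multiplies. A minimal sketch in Java (class and method names are mine, not from the article):

```java
public class Reciprocal {
    // Naive version: one division per element.
    static double[] scaleByDivision(double[] xs, double divisor) {
        double[] out = new double[xs.length];
        for (int i = 0; i < xs.length; i++) {
            out[i] = xs[i] / divisor;
        }
        return out;
    }

    // Faster version: one division up front, then multiplies in the loop.
    static double[] scaleByReciprocal(double[] xs, double divisor) {
        double inv = 1.0 / divisor; // single division, hoisted out of the loop
        double[] out = new double[xs.length];
        for (int i = 0; i < xs.length; i++) {
            out[i] = xs[i] * inv;
        }
        return out;
    }
}
```

Note that `x * (1.0 / d)` can differ from `x / d` in the last bit, because the reciprocal is itself rounded; the trade-off is speed for a tiny loss of precision.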

Here is the simple profiling code I used with the Snippet Compiler. I replaced T with the numeric type of interest, along with the mathematical operators in the inner loop. Parse statements were used to prevent constant propagation, and the time taken by an empty loop was subtracted before scaling the results.
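The original C# listing does not appear in this copy of the post, but the technique it describes (parse the operands at run time to defeat constant folding, time the inner loop, subtract the empty-loop baseline) can be sketched as follows. This is an illustrative Java analogue, with `BigDecimal` standing in for .NET's `Decimal`; the names are mine, not from the article.

```java
import java.math.BigDecimal;

public class NumericTiming {
    static final int N = 1_000_000;

    // Baseline: the cost of the loop itself, subtracted from each measurement.
    static long emptyLoopNanos() {
        long start = System.nanoTime();
        for (int i = 0; i < N; i++) { /* empty */ }
        return System.nanoTime() - start;
    }

    // Operands are parsed at run time so the JIT cannot constant-fold the loop body.
    static long doubleAddNanos() {
        double a = Double.parseDouble("1.5");
        double b = Double.parseDouble("2.5");
        double acc = 0;
        long start = System.nanoTime();
        for (int i = 0; i < N; i++) {
            acc += a + b;
        }
        long elapsed = System.nanoTime() - start;
        if (acc < 0) throw new AssertionError(); // keep acc observable
        return elapsed;
    }

    // Same loop with a software decimal type (BigDecimal here, Decimal in .NET).
    static long decimalAddNanos() {
        BigDecimal a = new BigDecimal("1.5");
        BigDecimal b = new BigDecimal("2.5");
        BigDecimal acc = BigDecimal.ZERO;
        long start = System.nanoTime();
        for (int i = 0; i < N; i++) {
            acc = acc.add(a.add(b));
        }
        long elapsed = System.nanoTime() - start;
        if (acc.signum() < 0) throw new AssertionError(); // keep acc observable
        return elapsed;
    }

    public static void main(String[] args) {
        long empty = emptyLoopNanos();
        // Subtract the empty-loop time before comparing, as the article describes.
        System.out.printf("double add:  %d ns%n", doubleAddNanos() - empty);
        System.out.printf("decimal add: %d ns%n", decimalAddNanos() - empty);
    }
}
```

A single-shot loop like this ignores JIT warm-up, so absolute numbers are rough; running each measurement several times and taking the minimum gives more stable ratios.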