A Fast, Compact Approximation of the Exponential Function
N. N. Schraudolph. A Fast, Compact Approximation of the Exponential Function. Neural Computation, 11(4):853–862, 1999.
Abstract
Neural network simulations often spend a large proportion of their time computing exponential functions. Since the exponentiation routines of typical math libraries are rather slow, their replacement with a fast approximation can greatly reduce the overall computation time. This paper describes how exponentiation can be approximated by manipulating the components of a standard (IEEE-754) floating-point representation. This models the exponential function as well as a lookup table with linear interpolation, but is significantly faster and more compact.
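The idea can be sketched in a few lines of C. The constants and the function name below are the values commonly quoted for this method, not a verbatim listing from the paper: a linear function of x is written into the upper 32 bits of an IEEE-754 double (sign, exponent, and top mantissa bits), and the resulting bit pattern is reinterpreted as a double, which is approximately exp(x).

#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Sketch of the approximation: place a*x + b in the high 32 bits of an
 * IEEE-754 double and reinterpret the bits.  The constants are the ones
 * commonly cited for this trick, not a verbatim listing from the paper. */
static double fast_exp(double x)
{
    const double a = 1048576.0 / 0.6931471805599453;  /* 2^20 / ln 2 scales x into the exponent field */
    const double b = 1072693248.0 - 60801.0;          /* 1023 * 2^20 minus an error-tuning offset */

    int64_t hi   = (int64_t)(a * x + b);  /* integer destined for the high word */
    int64_t bits = hi << 32;              /* low word (remaining mantissa bits) left at zero */

    double y;
    memcpy(&y, &bits, sizeof y);          /* reinterpret the bit pattern as a double */
    return y;
}

int main(void)
{
    for (double x = -2.0; x <= 2.0; x += 1.0)
        printf("x = %+4.1f  fast_exp = %9.6f  exp = %9.6f\n", x, fast_exp(x), exp(x));
    return 0;
}

Building the full 64-bit word and copying it with memcpy sidesteps the endianness handling that a two-integer union would need; compile with something like cc fast_exp.c -lm and expect relative errors of a few percent, in line with the accuracy the paper reports for this class of approximation.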
BibTeX Entry
@article{Schraudolph99,
  author  = {Nicol N. Schraudolph},
  title   = {\href{http://nic.schraudolph.org/pubs/Schraudolph99.pdf}{A Fast, Compact Approximation of the Exponential Function}},
  journal = {\href{http://neco.mitpress.org/}{Neural Computation}},
  volume  = 11,
  number  = 4,
  pages   = {853--862},
  year    = 1999
}