where x is real and k is a positive constant. Its graph for k = 4 is shown below.
This function can be used, for example, as an activation function in neural networks. When used that way, the code evaluating it is called very frequently, so it is worth optimizing.
For simplicity, we will assume k = 4 for the rest of this post. A classical implementation in C# may look like this simple code:
private static float Sigmoid(float signal)
{
    return 1f / (1f + (float)Math.Exp(-4f * signal));
}

Unfortunately, there is no single-precision Math class in .NET. If we need only single-precision values, it is inefficient to compute everything in double precision. The first idea for solving this problem could be to use the C language, which does provide single-precision mathematical functions. But as was shown in the last post about calling unmanaged code from .NET, that would not be the best solution because of how frequently the function would be called across the managed/unmanaged boundary.
The slowest part of this small piece of code is, of course, the evaluation of the exponential function. In many situations we do not need very precise results and can use some kind of approximation. This is also the case for the neural networks mentioned before. We will base our approximation on the well-known formula
Since e^x is the limit of (1 + x/n)^n as n goes to infinity, we can take (1 + x/n)^n for some finite n as an approximation of e^x; the larger the n we select, the closer the approximation. On the graph below we can see the sigmoid function with the exponential approximated for n = 16.
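Choosing n = 16 (a power of two) means the 16th power can be computed with just four successive squarings instead of a full exponential. A sketch in C (the name exp_approx16 is mine):

```c
#include <math.h>

/* Approximate e^x as (1 + x/16)^16 via four successive squarings. */
static float exp_approx16(float x)
{
    float tmp = 1.0f + x / 16.0f;
    tmp *= tmp;   /* ^2  */
    tmp *= tmp;   /* ^4  */
    tmp *= tmp;   /* ^8  */
    tmp *= tmp;   /* ^16 */
    return tmp;
}
```

For moderate x the result is reasonably close to expf(x): at x = 1 it gives roughly 2.64 against e ≈ 2.718, and the error shrinks as x approaches zero.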
private static float Sigmoid(float signal)
{
    if (signal >= 4f)
        return 1f;

    float tmp = 1f - 0.25f * signal;
    tmp *= tmp;
    tmp *= tmp;
    tmp *= tmp;
    tmp *= tmp;
    return 1f / (1f + tmp);
}

This code looks more complex than the original version, but it is approximately 78% faster. I tested the speed of both methods with numbers from the interval (-0.5, 0.5). On an interval containing numbers larger than our boundary value 4.0, it should be even faster, since the early return skips the multiplications entirely.