Project 10


Completed on 16-Nov-2016 (24 days)

The first thing to highlight is that I came up with the ideas in this and the next section completely on my own; I noticed certain patterns while performing tests meant to improve the fractional exponentiation algorithm. I didn't do any research on this front and am not aware of any existing theory along these lines.


By "exponential proportionality", I refer to the common trends underlying all the results generated by raising different bases to a certain real exponent (e.g., 2^3.2, 5.5^3.2 and 100000^3.2 being somehow related). Logically, these trends never behave linearly and, technically speaking, these values aren't proportional; on the other hand, "proportionality" intuitively conveys a very clear picture of this behaviour.

Validating the aforementioned ideas with NumberParser is quite straightforward. For instance, consider the following C# code:

```csharp
decimal exponent = 3m;

NumberD res = Math2.ApplyPolynomialFit
(
    Math2.GetPolynomialFit
    (
        new NumberD[] { 3m, 4m, 6m, 7m },
        new NumberD[]
        {
            Math2.PowDecimal(3m, exponent), Math2.PowDecimal(4m, exponent),
            Math2.PowDecimal(6m, exponent), Math2.PowDecimal(7m, exponent)
        }
    ),
    5m
);

Number res2 = Math2.PowDecimal(5m, exponent);

NumberD diff = Math2.Abs(res - (NumberD)res2);
```

This code calculates a certain power (3) of a given value (5) by analysing (via a second-degree polynomial fit) the way in which the surrounding values (3, 4, 6 and 7) behave. This specific calculation is very accurate (i.e., `diff` is virtually zero), which isn't the case in quite a few other scenarios. In fact, the aforementioned implementation only works acceptably well with small values and exponents. Nevertheless, the restricted applicability of this implementation is exclusively provoked by the simplistic trend-finding methodology and doesn't affect the validity of the proposed ideas.

In any case, and even assuming that reliable trends could easily be found for any possible scenario, the resulting outputs would be unacceptably inaccurate. Bear in mind that even the fastest exponentiation approach wouldn't be acceptable if it systematically delivered errors (whether 0.1% or 0.0001%); much less here, where accuracy is the top priority. But there is one situation which can benefit from these not-too-accurate results: the important determination of the initial guess in the Newton-Raphson method.
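For readers who want to reproduce the experiment without NumberParser, the same idea can be sketched in plain Python. The `quadratic_fit` helper below is a generic second-degree least-squares fit standing in for `Math2.GetPolynomialFit`/`Math2.ApplyPolynomialFit`; it is an illustration of the technique, not the library's actual implementation:

```python
def quadratic_fit(xs, ys):
    """Least-squares fit of y = a*x^2 + b*x + c via the normal equations."""
    # Power sums S0..S4 and moments T0..T2 for the 3x3 normal-equation system.
    s = [sum(x ** k for x in xs) for k in range(5)]
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[s[4], s[3], s[2], t[2]],
         [s[3], s[2], s[1], t[1]],
         [s[2], s[1], s[0], t[0]]]
    # Gaussian elimination with partial pivoting on the augmented matrix.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 4):
                A[r][c] -= f * A[i][c]
    coeffs = [0.0, 0.0, 0.0]
    for i in reversed(range(3)):
        known = sum(A[i][c] * coeffs[c] for c in range(i + 1, 3))
        coeffs[i] = (A[i][3] - known) / A[i][i]
    return coeffs  # (a, b, c)

exponent = 3
xs = [3, 4, 6, 7]                        # surrounding values
ys = [x ** exponent for x in xs]         # their known powers
a, b, c = quadratic_fit(xs, ys)
estimate = a * 5 ** 2 + b * 5 + c        # 5^3 estimated from the trend alone
print(estimate, 5 ** exponent)           # the two are virtually identical here
```

As in the C# listing, the estimate at 5 is virtually indistinguishable from the exact 125; repeating the experiment with larger values or exponents shows the quickly growing errors described above.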

That initial guess expects a good enough estimate for the n-th root calculation (i.e., the inverse of the power, which also shows the described behaviour) of any positive number. In principle, these requirements seem quite far from the aforementioned ideal conditions, but what if the number of potential n values could be greatly reduced? In that case, wouldn't it be possible to create a limited number of trends accounting for all the input scenarios? The answers to these and similar questions can be found in the next section.
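To see why a not-too-accurate estimate is still valuable here, consider a generic Newton-Raphson n-th root iteration (a plain-Python sketch, not NumberParser's routine): even a coarse, trend-derived starting point converges to full precision in a handful of steps.

```python
def nth_root(a, n, x0, tol=1e-12, max_iter=100):
    """Newton-Raphson on f(x) = x^n - a, starting from the guess x0."""
    x = x0
    for _ in range(max_iter):
        step = (x ** n - a) / (n * x ** (n - 1))
        x -= step
        if abs(step) < tol * max(1.0, abs(x)):
            break
    return x

# A deliberately rough initial guess (as a trend-based estimate would be)
# still reaches the true cube root of 30 (about 3.1072) very quickly.
root = nth_root(30.0, 3, x0=5.0)
print(root)
```

The better the initial guess, the fewer iterations are needed, which is precisely where the trend-based estimates of this section pay off.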