
NO NEW PROJECTS:
Project 10 is expected to be the last formal project of varocarbas.com. I will continue using this site as my main self-promotional, R&D-focused online resource, but through other, more suitable formats like domain ranking.
Note that the latest versions of all the successfully completed projects (5 to 10) will always remain available.
PROJECT 10
Non-floating-point fractional exponentiation approach
Completed on 16-Nov-2016 (24 days)

Exponential proportionality

The first thing to highlight is that I came up with the ideas in this and the next section completely on my own: I noticed certain patterns while performing tests meant to improve the fractional exponentiation algorithm. I haven't done any research on this front and am not aware of any existing theory along these lines.

With "exponential proportionality", I refer to the common trends underlying all the results generated by the power of certain real number (e.g., 2^3.2, 5.5^3.2 and 100000^3.2 being somehow related). Logically, these trends will never have a linear behaviour and, technically speaking, these values aren't proportional; on the other hand, "proportionality" seems to intuitively provide a very clear picture about this behaviour.

Validating the aforementioned ideas with NumberParser is quite straightforward. For instance, consider the following C# code:
decimal exponent = 3m;

// Trend-based estimate: fit a second-degree polynomial to the powers of the
// surrounding bases (3, 4, 6 and 7) and evaluate that fit at 5.
NumberD res = Math2.ApplyPolynomialFit
(
   Math2.GetPolynomialFit
   (
      new NumberD[] { 3m, 4m, 6m, 7m },
      new NumberD[]
      {
         Math2.PowDecimal(3m, exponent), Math2.PowDecimal(4m, exponent),
         Math2.PowDecimal(6m, exponent), Math2.PowDecimal(7m, exponent)
      }
   ),
   5m
);

// Direct calculation of 5^exponent and difference between both approaches.
Number res2 = Math2.PowDecimal(5m, exponent);
NumberD diff = Math2.Abs(res - (NumberD)res2);

This code calculates a certain power (3) of a given value (5) by analysing, via a second-degree polynomial fit, the way in which the surrounding values (3, 4, 6 and 7) behave. This specific calculation is very accurate (i.e., diff is virtually zero), which isn't the case in quite a few other scenarios. In fact, the aforementioned implementation only works acceptably well with small values and exponents. Nevertheless, the restricted applicability of this implementation is caused exclusively by the simplistic trend-finding methodology and doesn't affect the validity of the proposed ideas.

In any case, and even assuming that reliable trends could easily be found for any possible scenario, the resulting outputs would be unacceptably inaccurate. Bear in mind that even the fastest exponentiation approach wouldn't be acceptable if it systematically delivered errors (whether 0.1% or 0.0001%); much less here, where accuracy is the top priority. But there is one situation that can benefit from these not-too-accurate results: the important determination of the initial guess in the Newton-Raphson method.
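To put the role of that initial guess in context, the following sketch (again plain doubles rather than the NumberParser implementation) runs the standard Newton-Raphson iteration for the n-th root of a positive number, i.e., it solves x^n - a = 0; the only thing it intends to show is that the closer the starting point is to the true root, the fewer iterations are needed.

using System;

class NewtonRaphsonNthRoot
{
   // Newton-Raphson for f(x) = x^n - a: x_{k+1} = ((n - 1) * x_k + a / x_k^(n - 1)) / n.
   static double NthRoot(double a, int n, double initialGuess, out int iterations)
   {
      double x = initialGuess;
      iterations = 0;

      while (iterations < 200)
      {
         double next = ((n - 1) * x + a / Math.Pow(x, n - 1)) / n;
         iterations++;

         // Stop once consecutive iterations agree to within the requested relative tolerance.
         if (Math.Abs(next - x) <= 1e-14 * Math.Abs(next))
         {
            x = next;
            break;
         }
         x = next;
      }

      return x;
   }

   static void Main()
   {
      // 5th root of 170, once from a crude starting point and once from a better one.
      double fromCrudeGuess = NthRoot(170.0, 5, 1.0, out int crudeIterations);
      double fromGoodGuess = NthRoot(170.0, 5, 2.7, out int goodIterations);

      Console.WriteLine("Crude guess (1.0): " + fromCrudeGuess + " in " + crudeIterations + " iterations");
      Console.WriteLine("Better guess (2.7): " + fromGoodGuess + " in " + goodIterations + " iterations");
   }
}

This is precisely why a not-too-accurate, trend-based estimate of the n-th root is still valuable: it only has to be good enough to shorten the path of the subsequent iterations, not to replace them.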

That initial guess requires a good enough estimate of the n-th root (i.e., the inverse of the power, which also shows the described behaviour) of any positive number. In principle, these requirements seem quite far from the aforementioned ideal conditions, but what if the number of potential n values could be greatly reduced? In that case, wouldn't it be possible to create a limited number of trends to account for all the input scenarios? The answers to these and similar questions can be found in the next section.