PROJECT 7
First contact with open .NET
Completed on 13-Feb-2016 (47 days) -- Updated on 19-Nov-2016

The option of measuring both methods together (i.e., ParseNumber_Test.exe) quickly proved too unreliable, mainly with big sets of inputs. That's why I relied on ParseNumber_Test2.exe during most of the testing process. Nevertheless, this program went through various relevant modifications:
  • My initial intention in trying two different programs (i.e., one for New.ParseNumber and another one for Old.ParseNumber) was to get more insights into the unstable behaviour of ParseNumber_Test.exe. Additionally, I wanted to know whether a more efficient testing program (i.e., running ParseNumber under more demanding conditions) might favour one of the versions.
  • While testing this new version with PerfView.exe (i.e., the profiler expressly recommended in the CoreCLR documentation for measuring performance variations), I realised that the comparisons might be based directly upon the outputs of this profiler (e.g., process or CPU time). And this is where the second stage of ParseNumber_Test2.exe started: I removed all its internal time measurements and relied exclusively on the PerfView.exe outputs. ParseNumber_Test2.exe became much more efficient and I could confirm that New.ParseNumber performs better under more demanding conditions.
    Curiously, this change occurred at the same time as a tiny-but-influential bug appeared in the New class (i.e., a '\0' in one of the MatchChars overloads was replaced with a '0'; see the first sketch after this list). This bug made the new version notably faster, a variation which I assumed was caused by the relevant modifications in ParseNumber_Test2.exe. As a consequence of this curious episode, I relied on the PerfView.exe-based approach for some days (i.e., longer than would have happened otherwise) and published wrong information on social media (i.e., on my Twitter and GitHub accounts).
  • After the aforementioned bug was fixed and the new-old gap dropped drastically, I tried to further optimise the testing program. The first decision was removing PerfView.exe from the picture; I also replaced the old time measurements with simple end-minus-start times (see the second sketch below). This is precisely the version which I used in the final tests referred to below.
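The following minimal sketch illustrates why such a one-character change matters. The helper's signature and logic are simplified assumptions for illustration only; the real MatchChars overloads in CoreCLR work with unsafe character pointers:

    // A minimal, hypothetical sketch of a MatchChars-style helper (simplified,
    // not the actual CoreCLR implementation). It tries to match a
    // '\0'-terminated pattern against 'text' starting at 'pos'.
    static int MatchChars(string text, int pos, char[] pattern)
    {
        int i = 0;
        // Correct sentinel: stop when the pattern's NUL terminator is reached.
        // The bug replaced '\0' with '0', so the loop would also stop at any
        // digit '0' inside the pattern, or keep reading past the real
        // terminator otherwise; either way, the matching behaviour (and
        // therefore the measured speed) silently changes.
        while (pattern[i] != '\0')        // buggy version: pattern[i] != '0'
        {
            if (pos >= text.Length || text[pos] != pattern[i])
                return -1;                // no match
            pos++;
            i++;
        }
        return pos;                       // position right after the match
    }

A bug of this kind is easy to miss precisely because the program keeps running and most inputs still parse correctly; as described above, only the performance comparison ended up exposing it.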
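The end-minus-start measurement mentioned in the last point could look roughly as follows. This is a sketch under assumptions: the real structure of ParseNumber_Test2.exe isn't reproduced here and double.TryParse merely stands in for the New/Old.ParseNumber calls; the finalMax and totInputs figures match the conditions listed in the next section:

    using System;
    using System.Diagnostics;
    using System.IO;

    class ParseNumberTimingSketch
    {
        static void Main()
        {
            // 20000 records generated by ParseNumber_Gen (totInputs = 20000).
            string[] inputs = File.ReadAllLines("inputs.txt");

            Stopwatch sw = Stopwatch.StartNew();

            // 10000 iterations of the main loop (finalMax = 10000).
            for (int i = 0; i < 10000; i++)
            {
                foreach (string input in inputs)
                {
                    // Stand-in for the method under test (New.ParseNumber in
                    // new.exe, Old.ParseNumber in old.exe).
                    double parsed;
                    double.TryParse(input, out parsed);
                }
            }

            sw.Stop();
            // The only value stored per run: total elapsed milliseconds.
            Console.WriteLine(sw.ElapsedMilliseconds);
        }
    }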
For all the final performance tests, I used the conditions described in the previous section. Main ideas:
  • 10000 iterations of the main loop in ParseNumber_Test2.exe (i.e., finalMax = 10000); and 20000 records in inputs.txt, generated by ParseNumber_Gen (i.e., totInputs = 20000).
  • Both programs new.exe (i.e., accounting for New.ParseNumber) and old.exe (i.e., accounting for Old.ParseNumber) were run three times, one after the other, and all the final values (i.e., sw.ElapsedMilliseconds) were stored.
  • The aforementioned measurements were input into ParseNumber_TestCalcs.exe to determine the final results (i.e., averDiff, the difference between the averages of both sets of values, expressed as a percentage), after confirming that the measuring process was valid (i.e., averageGapNew and averageGapOld below 1%); a sketch of these calculations follows this list. Note that these minimum validity conditions have always been met with the aforementioned inputs.
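The calculations performed by ParseNumber_TestCalcs.exe can be sketched like this. The exact formulas are assumptions (notably: the gap is taken here as the maximum relative deviation of each set's runs from their own average, and averDiff is computed relative to the old average), and the sample figures are illustrative, not actual measurements:

    using System;
    using System.Linq;

    class ParseNumberCalcsSketch
    {
        // Maximum relative deviation (as a %) of the runs from their own
        // average; assumed meaning of averageGapNew/averageGapOld.
        static double AverageGap(double[] runs)
        {
            double avg = runs.Average();
            return runs.Max(r => Math.Abs(r - avg)) / avg * 100.0;
        }

        static void Main()
        {
            // Three sw.ElapsedMilliseconds values per program (illustrative
            // figures, not actual measurements).
            double[] newRuns = { 41520.0, 41610.0, 41575.0 };
            double[] oldRuns = { 44390.0, 44310.0, 44450.0 };

            // Validity check: both sets must be stable (gaps below 1%).
            if (AverageGap(newRuns) >= 1.0 || AverageGap(oldRuns) >= 1.0)
            {
                Console.WriteLine("Measurements too unstable; repeat the runs.");
                return;
            }

            // averDiff: difference between both averages, as a percentage
            // (assumed to be relative to the old average).
            double averDiff =
                (oldRuns.Average() - newRuns.Average()) / oldRuns.Average() * 100.0;
            Console.WriteLine("averDiff = " + averDiff.ToString("F2") + "%");
        }
    }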
Despite the numerous attempts and relevant testing efforts, I am still not in a position to deliver an absolutely valid (i.e., easily reproducible anywhere else) figure for the new-old difference, other than this: the new version is certainly quicker. If I could set my own computers as an absolute reference of validity, I would say the following:
  • On the less powerful computer (computer 2), you can easily (i.e., under the proposed conditions, but even under less strict ones, like 5000 iterations & 10000 inputs) get a 6.3-6.5% difference.
  • The most powerful computer (computer 1) used to deliver 7.0-7.5% with a previous version. Now it should be able to reach 8% and above, although a problem with one of the latest Windows 10 updates has made this computer too unstable to confirm this assumption (bearing in mind the aforementioned caveat: the exact value isn't too relevant).
On the other hand, a notable increase in the input conditions (e.g., 50000 inputs), or even a different approach (i.e., the old ParseNumber_Test.exe), might make the aforementioned values notably bigger. The conclusions are the same when using PerfView.exe: the new version is always notably better in all aspects (i.e., lower CPU/process time and lower CPU usage), but the exact values change depending upon the computer and the input conditions.