© 2015-2017 Alvaro Carballo Garcia


First contact with open .NET
Completed on 13-Feb-2016 (47 days) -- Updated on 19-Nov-2016


As already explained, finding the best testing environment wasn't easy, which is why I tried different approaches. Roughly speaking, I used (and created) two programs: ParseNumber_Test.exe and ParseNumber_Test2.exe. The first one accounts for both the old (i.e., original ParseNumber code) and the new (i.e., modified ParseNumber, expected to be notably quicker) versions; the second one deals with only one version at a time (i.e., two different executables have to be generated) and is much more efficient (e.g., minimalist time tracking).

Regarding ParseNumber and its related methods (i.e., MatchChars and IsWhite), there have always been two different classes: New for the improved versions and Old for the original ones. I added slight modifications, not affecting performance, to both of them in order to minimise the code size; note that the original version of ParseNumber (i.e., the one used inside mscorlib.dll) relies on internal classes which cannot be accessed via Visual Studio.

The basic structure of the two aforementioned programs consists of the following three nested loops:
  • The main and most external loop performs finalMax iterations. The version considered by all the inner loops is also defined here (i.e., the specific type of method to consider: 0 for New.ParseNumber and 1 for Old.ParseNumber).
  • The second loop iterates through the contents of inputs.txt. This file includes randomly-varying numbers (one per line) and is generated by ParseNumber_Gen.exe. Four different types of inputs can be generated: decimal, int, long and double; all the tests focused on decimal, which is why this feature is only required by ParseNumber_Validation.exe. Additionally, bear in mind that the values from inputs.txt might be altered on account of the NumberStyles value under consideration, as explained below.
  • The third and most internal loop iterates through a list of NumberStyles values and calls the corresponding ParseNumber version. Initially, it accounted for all the possible values (i.e., all the NumberStyles enum members) with no further modification; for example, calling Old.ParseNumber (or New.ParseNumber) with the inputs "5.5" & NumberStyles.AllowDecimalPoint, "5.5" & NumberStyles.AllowParentheses and so on. Later, I started to account for special cases, where the given NumberStyles value is associated with certain modifications of the input string; for example, "5" & NumberStyles.AllowDecimalPoint, but "(5)" & NumberStyles.AllowParentheses. Note that ParseNumber is used under many different input conditions, like the Parse/TryParse methods of all the numeric types (e.g., decimal, int, double, etc.), and that not all the NumberStyles values are always effective; for example, NumberStyles.AllowHexSpecifier and NumberStyles.HexNumber only make sense*** with integer types (e.g., int or long). On the other hand, after confirming the similarities among the different types, the performance tests focused completely on decimal; the remaining main numeric types were exclusively considered during the final validation via CoreRun.exe (i.e., by using ParseNumber_Validation.exe). Nevertheless, I performed enough tests to conclude that the input conditions don't have a relevant effect on the observed new-old performance differences; anyone is welcome to confirm this point by taking advantage of the easily-modifiable structure of all the involved codes.

    *** NOTE: despite not being supported, the situation is valid in appearance (i.e., Visual Studio doesn't show any kind of warning or error for these erroneous inputs). For example, decimal cannot deal with hexadecimal inputs and, consequently, decimal.Parse("FFFFFFFF", NumberStyles.AllowHexSpecifier) triggers an error, which doesn't happen with supported alternatives like long.Parse("FFFFFFFF", NumberStyles.AllowHexSpecifier). These situations represent, in my opinion, an additional reason for creating the new decimal.TryParseExact (I am planning to work on it over the coming months and, most likely, make it part of the new Project 9); even for redefining the parsing behaviour of all the numeric types (at least, the NumberStyles usage).
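To make the three-loop structure described above easier to follow, here is a minimal, language-agnostic sketch (written in Python; the actual programs are C# executables not reproduced here). parse_number_new and parse_number_old are hypothetical stand-ins for New.ParseNumber and Old.ParseNumber, and only two NumberStyles values are included, with the input modifications mentioned above.

```python
# Sketch of the three nested loops; names like finalMax and inputs.txt come
# from the text, everything else is an illustrative stand-in.

NUMBER_STYLES = ["AllowDecimalPoint", "AllowParentheses"]  # subset, for illustration

def adapt_input(value, style):
    """Modify the raw input so that it exercises the given NumberStyles value."""
    if style == "AllowParentheses":
        return "(" + value + ")"           # e.g., "5" -> "(5)"
    return value                           # e.g., "5.5" stays as-is

def parse_number_new(text, style):         # stand-in for New.ParseNumber
    return float(text.strip("()"))

def parse_number_old(text, style):         # stand-in for Old.ParseNumber
    return float(text.strip("()"))

def run_tests(inputs, final_max, version):
    """version: 0 -> new code, 1 -> old code (as in the original programs)."""
    parse = parse_number_new if version == 0 else parse_number_old
    results = []
    for _ in range(final_max):             # 1st (most external) loop
        for raw in inputs:                 # 2nd loop: contents of inputs.txt
            for style in NUMBER_STYLES:    # 3rd loop: NumberStyles values
                results.append(parse(adapt_input(raw, style), style))
    return results

out = run_tests(["5", "5.5"], final_max=2, version=0)
```

In the real programs the innermost call is timed; the sketch only shows how the version flag, the input file and the NumberStyles list interact.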
The conditions under which the tests were performed also varied in a quite relevant way. The configuration which proved to deliver the most stable measurements is defined by the following:
  • Closing all the running applications which might affect the measuring process.
  • Running the given program three (or even five) times, one after the other, and storing the final values (e.g., sw.ElapsedMilliseconds in the last version of ParseNumber_Test2.exe). With ParseNumber_Test2.exe, this process needs to be repeated with the two executables (i.e., the ones considering New.ParseNumber and Old.ParseNumber respectively).
  • Inputting all the aforementioned values into ParseNumber_TestCalcs.exe to get the final measurements. Note that this program outputs three main variables: averageGapNew and averageGapOld, which indicate the % variation of each new (old) value with respect to all the remaining new (old) values; and averDiff, which is the % difference between the averages of all the new and old values. If the measurements are stable enough (i.e., averageGapNew and averageGapOld below 1%), the final result is given by averDiff; otherwise, the process has to be repeated under different conditions (i.e., a higher finalMax and/or more elements in inputs.txt).
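The calculations performed by ParseNumber_TestCalcs.exe can be sketched as follows (in Python; the real tool is a C# executable). This is one plausible interpretation of the description above: each "gap" is taken as the average % deviation of every measurement with respect to the mean of the remaining ones, and the sample elapsed-ms readings are hypothetical.

```python
# Hedged sketch of the stability check and final-result calculation.

def average_gap(values):
    """Average % variation of each value vs. the mean of the other values."""
    gaps = []
    for i, v in enumerate(values):
        others = values[:i] + values[i + 1:]
        mean_others = sum(others) / len(others)
        gaps.append(abs(v - mean_others) / mean_others * 100.0)
    return sum(gaps) / len(gaps)

def aver_diff(new_values, old_values):
    """% difference between the averages of the new and old measurements."""
    avg_new = sum(new_values) / len(new_values)
    avg_old = sum(old_values) / len(old_values)
    return (avg_old - avg_new) / avg_old * 100.0   # > 0 -> new version is faster

new_ms = [950, 955, 948]       # hypothetical elapsed-ms readings (new code)
old_ms = [1200, 1195, 1205]    # hypothetical elapsed-ms readings (old code)

# Only if both gaps are below 1% is averDiff taken as the final result.
stable = average_gap(new_ms) < 1.0 and average_gap(old_ms) < 1.0
result = aver_diff(new_ms, old_ms) if stable else None
```

With these sample readings both gaps stay well below 1%, so the run would be accepted and averDiff reported as the new-old performance difference.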
I performed all the tests on two different computers with the following configurations:
  • Computer 1: 3.4 GHz and 12 GB of RAM. Windows 10 Pro 64-bit.
  • Computer 2: 2.27 GHz and 3 GB of RAM. Windows 8.1 N 64-bit.
The results were always different depending on the computer under consideration, although they were very consistent on each machine. I spent a relevant amount of time trying, without success, to find a set of conditions minimising the differences between both machines; my conclusion is that reaching this goal might be possible, but only under extreme conditions (i.e., a very high number of inputs/iterations, which would also provoke disproportionately high old-new differences). As explained in the next section, the most important result of the tests has been getting very consistent and easily-reproducible measurements (where the new version has always been faster), rather than specific values.