
© 2015-2017 Alvaro Carballo Garcia


NO NEW PROJECTS:
Project 10 is expected to be the last formal project of varocarbas.com. I will continue using this site as my main self-promotional, R&D-focused online resource, but through other, more suitable formats such as domain ranking.
Note that the last versions of all the successfully completed projects (5 to 10) will always remain available.
PROJECT 9
Critical thoughts about big data analysis
Completed on 02-Jul-2016 (57 days)


This project is the main output of my recent effort to gain insight into the differences between my numerical-modelling expertise and big-data conditions. I took part in various open challenges, although I only spent a significant amount of time and effort on the one described below.

In this appendix, I include my impressions about my participation in Kaggle's Expedia Hotel Recommendations challenge. Most of the information associated with this challenge isn't public, which is why I can only share certain bits (e.g., the data description).

Note that, from the very first moment, I took this challenge as the ideal benchmark for understanding the aforementioned differences, and eventually for building a reliable set of applications, or even a whole procedure, to help me face future big-data problems quickly and efficiently. Focusing on the test dataset (i.e., making many submissions) seemed the best way to accomplish that goal. Hence my numerous submissions in this challenge, something that shouldn't happen under normal conditions.
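The Expedia Hotel Recommendations challenge scored submissions with mean average precision at 5 (MAP@5) over the predicted hotel clusters. As an illustration of why relying on leaderboard submissions is normally unnecessary, here is a minimal sketch of a local MAP@k scorer that could stand in for the leaderboard during development (this is not code from the project; the function names are mine):

```python
def average_precision_at_k(actual, predicted, k=5):
    """Average precision at k for one query; `actual` is the set of correct
    labels (a single hotel cluster in the Expedia case), `predicted` is an
    ordered list of guesses."""
    score = 0.0
    hits = 0
    for i, p in enumerate(predicted[:k]):
        # Count a hit only the first time a correct label appears.
        if p in actual and p not in predicted[:i]:
            hits += 1
            score += hits / (i + 1)
    return score / min(len(actual), k) if actual else 0.0

def map_at_k(actual_list, predicted_list, k=5):
    """Mean of the per-query average precisions, i.e. MAP@k."""
    return sum(average_precision_at_k(a, p, k)
               for a, p in zip(actual_list, predicted_list)) / len(actual_list)
```

For example, a correct label ranked first scores 1.0, while the same label ranked second scores 0.5; validating candidate models locally with such a scorer avoids spending limited daily submissions on the public leaderboard.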