Critical analysis of the main premises of special relativity: Lorentz & Minkowski
Completed on 14-Jun-2015 (14 days)
The validity of the Lorentz transformation might be questioned even before analysing the multiple problems in its mathematical derivation, just by looking at one of its basic assumptions: the constancy of the speed of light.
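For reference (this block is a reminder added for context, not part of the original argument), the standard form of the Lorentz transformation under discussion, for relative motion along the x axis, is derived on the assumption that c is the same in every inertial frame:

```latex
x' = \gamma \left( x - v t \right), \qquad
t' = \gamma \left( t - \frac{v x}{c^{2}} \right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
```

Here v is the relative velocity between the two frames and γ is the Lorentz factor; the constancy of c enters directly through the c² term and the factor γ.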
When this theory was proposed, the available technical means were much more limited than today's. Even now, we are still not in a position to accurately measure what happens with objects moving at speeds approaching the speed of light (i.e., light itself). Such a statement can easily be confirmed by analysing the currently accepted theory of the nature of light: sometimes it behaves as a wave and sometimes as a particle, where "sometimes" should be read as "we are still unable to adequately understand what exactly is happening"; we can only approximately measure some of the outputs usually generated within a given range. That is: the empirical confirmation or dismissal of the proposed idea was impossible when this theory was created, and it remains impossible even 100 years later, when we are still not in a position to perform the required actions (i.e., a set of reliable measurements under a wide variety of conditions, allowing us to conclude beyond doubt whether the speed of light should be considered constant or not).
Bearing in mind the aforementioned limitations in our understanding of this phenomenon, the proposed constancy of the speed of light should never have been considered a reliable assumption, for the following reasons:
It was almost a "blind shot", insofar as the two possible results (i.e., being constant or not being constant) were equally probable given the limited available information. For example: after measuring the average speed of the runners at some random point of a marathon, you shouldn't try to deduce the final positions; or, at least, you shouldn't hand out the medals right away on account of such assumptions. On the other hand, why assume constancy precisely at the boundaries of our understanding (i.e., we would have trouble seeing anything travelling faster than light)? For example: if we have an object which might travel at 10, but our tracking device can only measure up to 5, shouldn't we plainly accept the fact that we cannot know the speed of that object (at least, not above 5)? What would be the point of assuming that the object cannot travel faster than 5?
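The tracking-device analogy above can be sketched in a few lines of code. All the numbers and the `measure` helper are purely illustrative, invented for this sketch; they do not come from the original text:

```python
def measure(true_speed, sensor_limit=5.0):
    """Illustrative sensor whose readings saturate at its limit.

    Any true speed above the limit is reported as the limit itself,
    so the reading carries no information beyond that point.
    """
    return min(true_speed, sensor_limit)

# Objects moving at 10 and at 6 produce identical readings,
# so the device cannot distinguish them:
print(measure(10.0))  # 5.0
print(measure(6.0))   # 5.0
# A speed within range remains informative:
print(measure(3.0))   # 3.0
```

The point of the sketch is that a saturated reading of 5 says nothing about whether the true value is 5, 6, or 10; the only honest conclusion is "at least 5".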
Even after fully accepting this reality (i.e., a theory built on pure intuition), why assume conditions which are very unlikely to be true according to the rest of our experience? For example: in nature, a large proportion of shapes, and even structures of somehow-coordinated elements, are circular (or spherical); consequently, assuming that the shape of a not-clearly-visible object is circular would be quite reasonable. But why assume that a not-properly-understood natural phenomenon is constant, when pure constancy virtually never occurs in nature? In fact, constancy is a commonly-used theoretical "trick" whose precise aim is to help us understand real, much more complex, and not-actually-constant natural phenomena. On the other hand, treating a given behaviour as constant might be a valid (practical) way to model a certain reality under very specific conditions (e.g., assuming that the values of a given property are constant from the 10th decimal place onwards, because the maximum sensitivity of our technology lies below the 5th decimal place). But the situation here was completely different: the constancy evolved into an ultimate truth, which was later used as the starting point to derive generally-applicable conclusions.
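The decimal-sensitivity example above can also be sketched in code. The tolerance value and the `indistinguishable` helper are hypothetical, chosen only to mirror the example's numbers:

```python
SENSITIVITY = 1e-5  # assumed maximum resolution of the instrument

def indistinguishable(a, b, tolerance=SENSITIVITY):
    """Two readings differing by less than the instrument's resolution
    are, for practical purposes, the same value ("constant")."""
    return abs(a - b) < tolerance

# Values differing only at the 10th decimal place look constant:
print(indistinguishable(1.0000000001, 1.0000000002))  # True
# Values differing at the 3rd decimal place do not:
print(indistinguishable(1.001, 1.002))  # False
```

This is the legitimate, bounded use of constancy the paragraph describes: an engineering convenience valid only within a stated tolerance, not a universal truth.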
In summary, no theory with real applicability may be built on top of an assumption like the constancy of the speed of light: a very exceptional behaviour which cannot be validated.