**A non-Pythagorean interpretation of physical laws**

Yvon Provençal

*Département de Philosophie, Cégep de Granby – Haute-Yamaska, Granby, Québec, Canada*

This paper criticizes the current interpretation according to which certain basic physical laws are perfectly exact until proof to the contrary is given. This interpretation is called here “Pythagorean” (PI). A “non-Pythagorean” interpretation (NPI) is defined, according to which every current physical law is approximate until proof to the contrary is given. It is indicated how experimental tests could be made in order to check the Pythagorean or non-Pythagorean character of theories. It is shown that this new interpretation implies that such laws’ capacity for representing physical reality is only partial and, likely, even very partial. Next, it is shown how the ancient or recent history of scientific research can be used to uphold the NPI. Finally, certain experiments in progress are shown to be capable of establishing shortly the non-validity of the PI. A few consequences of the NPI are considered, especially that of clearly strengthening the credibility of a hypothesis made in order to solve the famous problem of hidden mass in the Universe, and that of reconsidering on new theoretical bases the problem of the unification of the four basic interactions.

I. Introduction

As the early Pythagoreans did, physicists today readily believe that certain symmetry principles underlie the structure of matter and of the Universe. This means, in their view, that the basic physical laws must be considered perfectly exact. Thus, the great conservation principles, such as those of energy or electric charge conservation, are considered perfectly exact until proof to the contrary is given. And physicists believe that the three types of non-gravitational forces exist because nature fundamentally complies with the corresponding gauge symmetries. The same holds for the gravitational force, even though the modalities of its formalism are different.

An interpretation of basic laws is called *Pythagorean* (PI) if it holds that:

— basic physical principles or laws must be seen as perfectly exact until proof to the contrary is given,

— basic mathematical models in physics must be seen as faithfully representing physical reality until proof to the contrary is given.

Any other interpretation of basic models and principles would be seen, according to several physicists, if not all of them, as tantamount to the breakdown of science itself.

I will try to show, in this paper, that the present state of physical science does not support the PI and that it would be more useful, and more consistent with the reality of research, to consider, in addition to the PI and in opposition to it, what will be called the *non-Pythagorean* interpretation, or NPI, of physical laws and principles. According to this new interpretation:

— the existing physical laws and principles must be seen as not being perfectly exact, but only approximate, until proof to the contrary is given,

— the existing physical models must be seen, in their acknowledged fields, as being only a partial representation of physical reality, until proof to the contrary is given.

I propose that principles, laws, and theories consistent with both statements of the former interpretation be called “Pythagorean”, and those consistent with both statements of the latter interpretation be called “non-Pythagorean”.

It may be remarked that, since scientific research began, researchers have continually taken for granted that their best current theories were Pythagorean and, as soon as they discovered better ones, they have persisted in taking for granted that the new theories must, in their turn, be Pythagorean, conceding only then that the previous ones were not Pythagorean.

It is essential to note that experimental tests can be contemplated that would allow one to eliminate one of the two interpretations. I will try to show here that one can settle the Pythagorean or non-Pythagorean character of any particular theory (or law, or principle) according to the following logical modalities:

— if the theory (or law, or principle) is Pythagorean, one could know it only in the long term of research, that is, when the means of observation are much more powerful or accurate than they are at present and, more precisely, when they have reached a certain threshold which will be identified in a mathematically exact way in what follows (see the theorem of localization of the potential refutability line, in section II, C);

— if the theory (or law, or principle) is not Pythagorean, one could know it in the short, medium or long term of research, that is, as soon as the means of observation are powerful or accurate enough to ascertain it.

The NPI involves very notable changes in physicists’ understanding of their own research activity and of the results they obtain from it. As theories and laws are considered only approximately and partially valid until proof to the contrary is given, scientific research should be viewed as a long historical process which is not ended and may still last a long time. Research should be seen as an outstanding process that continually provides a richer and more complex representation of reality. This process has likely not yet produced a mature knowledge. I will try to show, in this paper (sections II and III), that taking the NPI into account will have important consequences for present and future research orientations.

Some of the most important inferences from the NPI are summarized below and will be explained thereafter:

**A. Concerning the degree of precision of theories in general:**

i) All theories are approximate, including basic physical theories; even the so-called “fundamental” theories, including quantum mechanics, are approximate.

ii) Basic physical theories, well known as being “exact”, are better described as *outstripping theories*, that is, theories whose predictions exceed in precision those of the most precise theories up to that point; likewise, theories known as “approximate” are *outstripped theories*, that is, theories whose level of precision has been exceeded by other theories.

iii) When an experimental test applied to a theory gives a negative outcome, this does not mean that the theory must be rejected; rather, it means that one knows more about its refutability points and its exact degree of precision.

iv) No present-day basic theory can be considered definitively valid; this is not tantamount to the breakdown of science, but it implies that scientific research will continue in an important way in the future.

v) The future developments of scientific research comprise potential theoretical developments which will surpass present-day capacities of theorization.

**B. Concerning more particularly the present-day theories:**

vi) The exact degree of precision of existing theories could be determined in future research if the means of observation continue to develop as they did in the past.

vii) The present-day physical theories give descriptions of theoretical objects such as the big bang, black holes and point particles which are only *theoretical extrapolations* beyond the limit of observability linked to the present state of research.

viii) Quantum mechanics is not qualified to set definitive limits on the validity of scientific theories in general (including future ones); therefore, Heisenberg’s uncertainty relations set only provisional limits on the validity of theories.

ix) It is likely that the standard model of particle physics comprises sets of theoretical artefacts that do not provide a valid representation of physical reality.

x) Basic theories – essentially, general relativity and quantum mechanics – are not mutually inconsistent in the observable domain, but only in their extrapolations beyond the present observability limit.

**C. Concerning the theoretical models efficiency:**

xi) The efficiency of mathematics in physics does not mean that basic theories describe physical reality in an absolutely exact way, but rather that *a*) the refutability threshold of these theories is temporarily beyond the present observability limit, and *b*) new theories are apt to appear whose refutability threshold will exceed that of previous theories.

xii) All correspondence rules – not only that of quantum mechanics (that is, the principle of reduction of the wave packet), but those of physical theories in general – are in fact indeterministic rules, which is to say that theoretical predictions are only approximate and uncertain.

xiii) A new avenue of research will consist of systematically exploring the refutability line of basic theories; this exploration will aim especially at a better understanding of the mechanism of theories’ efficiency in explaining or predicting the observations that are carried out.

We must consider all these statements true or probable until proof to the contrary is given. Here are a few clarifications and comments about them.

Statements (i) and (ii) directly follow from the NPI. They straightway mean that the so-called fundamental theories are only major or central theories, valid as such at a certain time of research. Thus classical dynamics was a major theory until the modern era. The present-day basic theories will remain such until better basic theories are discovered. In particular, we must consider that quantum mechanics is approximate, because the onus of proof lies upon those who claim it is absolutely exact. It is inconsistent to say that a theory is exact and that it becomes approximate as soon as it is outstripped by another theory. Rather, we must say that a theory is to be considered approximate from the outset and that it may thereafter be outstripped by another theory whose predictions prove more precise. Statement (iii) can be illustrated by the Newtonian theory of gravitation. The latter has never been rejected by physicists, who still use it in several ways. However, since the Newtonian theory has been outstripped by general relativity, we have not only a proof that the former was approximate; we also know something more about its domain of application and conditions of validity. Statement (iii) will be explained in section II; statements (iv) and (v) will be exemplified and explained in section III. The term “potential” used in (v) will be clarified below, in section II, A.

Statements (vi) and (vii) rest upon the idea that the means of observation (instruments, methods, or experimental apparatus) will probably continue to develop. The concept of theoretical extrapolation, mentioned in statement (vii), will be defined in section II, B, *1*. Statement (viii) refers to quantum indeterminism. We will see below (section II, D) that another formal type of indeterminism is likely to notably affect future research. Statement (ix) concerns the capacity of basic theories to accurately represent physical reality (see section II, E). Even though this capacity is generally assumed by researchers, it is to be questioned according to the NPI. As for statement (x), it concerns the famous problem of theoretical unification, which consists of explaining the four basic interactions by a unique theory. According to the NPI, this problem should be understood and formulated in a new way. This will be explained in section III, D, *2*.

Parts A and B above, that is, statements (i) to (x), imply the general idea that scientific research at a certain epoch is limited by the conditions existing at the experimental level (available experimental methods and instruments) as well as at the theoretical level (available mathematical and conceptual tools). A few principles of limitation of research (at a given moment of the general development of science), which are tightly linked to the NPI, will be formulated in section II.

The last three statements, (xi) to (xiii), concern the understanding of mathematical efficiency in physics. The NPI sheds some light on it. This will be explained in section III. Roughly speaking, the efficiency of mathematical models does not consist in their capacity to describe physical reality in a completely exact way, but rather in their capacity to progressively increase the precision of predictions. Therefore, it is possible that one will at last find, in the future, a theory in accordance with the PI. However, nothing justifies believing that present-day models already conform to it. Moreover, the NPI implies a better and better understanding, as theoretical research progresses, of what makes a given mathematical model effective. Thus modern theories have contributed to determining and understanding the exact degree of precision of the predictions made on the basis of the classical model. In order to better illustrate the general efficiency of mathematics in physics, a historical survey will be made of past research, beginning with certain formal characteristics (in addition to the PI itself) that were already present in ancient Greek models and that subsist in the most recent theoretical elaborations of modern science. Finally, we will see how certain particular approaches of today’s research can be regarded as moving towards an exploration of the precision levels of present theories as described here, that is, a systematic exploration assuming that theories have thresholds of refutability (see III, C, *1* and *2*).

II. The evaluation of the precision level of theories

A distinction was made above between theories that can be validated in the short term and in the long term. It will therefore be necessary to consider a short and a long time of future research. The *short time* will be defined, first in a rough manner, as that which is required to produce an experiment with experimental means that already exist or are on the point of existing. This can be a few years or decades. We will sometimes distinguish, for convenience, a short time of a few years and a medium time of a few decades. The *long time* will be defined as the delay needed to innovate or produce wholly new instruments or methods, whether experimental or theoretical, when the degree of newness or originality is such that one is not yet able to foresee their use and, in certain cases, the very possibility of their use. It can be a relatively long period of time, up to one or more centuries. The distinction between short and long time is set here in an intuitive and informal way. It will be stated more precisely, and its usefulness will appear progressively, in what follows.

We will admit that a *moment* of historical research is tantamount to an epoch, this being itself defined as a medium or long period of time.

**A. The actual or potential capacities of researchers in general**

The concern here is to distinguish, at any given moment of science’s development, between that which is *actual*, that is, realizable with experimental or theoretical means already available or likely to be available in a short or medium period of time, and that which is *potential*, that is, realizable only with experimental or theoretical means not yet existing, but which could become available in the long term.

For instance, length and time are considered in physics as fundamental quantities, and standards are defined in order to be able to measure them. In this connection, ideal standards must be perfectly invariable and they must also be accessible. Physicists know what these terms signify. They know in particular that an accessible standard is one which is actually utilizable, and not only potentially. They could also easily admit, furthermore, that some future standards are not yet actually utilizable but may become so and are, therefore, potentially utilizable.

In order to conceptualize the potential of scientific development to come, the two following definitions will play an important part. We will distinguish between *actual* and *potential* capacities in what concerns the production of theoretical concepts and experimental tools.

*Actual* capacities are defined as those which, at any epoch of research history, are in a position to be actualized in the short or medium term. *Potential* capacities are defined as those which will supposedly become actual in the short, medium, or long term from this time of research. These capacities are those of individual researchers or groups of researchers, who can in principle be identified with humanity as considered at this moment of the history of scientific research.

For instance, Newton – like some of his contemporaries – had the *actual* capacity of creating and developing differential and integral calculus. It is well known that this was the mathematical tool that allowed researchers to conceive classical dynamics and celestial mechanics. Doubtless, however, neither he nor contemporaneous researchers had the *actual* capacity of conceiving either the theory of general relativity or modern cosmology. They lacked several essential mathematical bases (for example, group theory and tensor calculus) and several experimental results (for example, those brought by great telescopes and radio astronomy). On the other hand, it would be right to say that the humanity of the XVIIth century had the *potential* capacity of developing in such a way that modern cosmology actually became realizable thereafter.

One can see, as a result, that the above distinction between short and long time corresponds to the distinction between actual and potential capacities. In a similar way, physicists already employ this type of distinction, for example, when distinguishing between *classical* and *modern* theories. Accordingly, quantum theory as it is known today existed only in a potential state at the time when classical conceptions of the continuity of energy or action predominated.

There is obviously not yet a mathematical theory of the historical complexification of mathematical and scientific research. However, since acquired attainments have accumulated over centuries and will likely continue to do so in the long term, we can acknowledge the relevance of a formal concept of the difference between that which, at a given epoch, is actually available and that which is only potentially available.

According to the PI, basic theories are called “fundamental theories” and are not seen clearly and openly as “potentially” refutable in the meaning given here, but only, let us say, as “refutable in principle”, this being ordinarily understood as meaning that they must be held perfectly exact until a contrary proof is given. The word “potentially” here does not mean subjectively and vaguely possible, but prospectively and likely. According to the NPI, basic theories are clearly and openly – even though potentially – refutable, in principle and in fact, and it is quite foreseeable that they will be refuted in the long run of research.

**B. Principles of limitation**

Scientific research globally appears as an ample historical development going on over centuries. If one considers the state of science at a certain moment of history, the course of research can be seen as that which makes science progress further. Research then represents a pace of progression which appears, at a given moment of its development, as limited in several ways. A few general so-called limitation principles can be formulated. They are likely to help guide research and appraise its results. One of these principles concerns the actual limitation of realizable observations at a given moment. This limitation bears on both the precision of measures and the variety of observations. Another principle concerns the limitation of theories as regards their potential refutability, that is, the possibility that they will be refuted in the short- or long-term future. In both cases, we consider the state of research at a given moment (that is, a given epoch) that may be past as well as present or future. The two limitation principles will be designated as follows:

*1.* the *principle of limitation of actual observability*;

*2.* the *principle of potential refutability*, valid for all particular scientific theories or principles, including symmetry principles.

The first of these principles is tightly linked with a particular moment of research history. This is indicated by the word “actual”. As regards the second principle, it concerns an intrinsic limitation in any theory, law or principle, for which the particular moment of research history matters little. However, if it concerns a theory considered “refuted” – for example, Newtonian dynamics – this means that the potential refutability of this theory has in part become (as we will see below) an actual refutability. It is important to notice that these limitation principles concern the pace of science’s development at one time; they do not mean that science itself, as a multi-secular enterprise of research and discovery in the long term, must be seen as limited in its most ambitious goals of understanding and knowledge.

*1. The principle of limitation of actual observability*

This principle is stated as follows:

*At every moment (or epoch) of research, the actual observability is limited by the existing observational means.*

The relevance of this principle comes from the NPI, because the latter consists of acknowledging that basic physical theories are approximate and, therefore, that any outstripping of the actual observability limit is capable of entailing the refutation of these theories. According to the PI, on the contrary, basic theories are presupposed to be exact even beyond this limit, and one acts as if this limit did not exist.

In general, the efficiency and diversity of observation instruments and methods tend to increase over historical time. Some observation instruments or methods become obsolete and are replaced by others which, generally, have greater efficiency. Furthermore, original means appear in relation to new fields of research and allow one to make new types of observation. There exists, at every given moment, a degree of precision of measures of a certain type which appears as the best possible at that moment.

It is noteworthy that the precision of observations is limited by that of the units of measure. In general, the precision of any standard is tantamount to the value of the actual observability limit for the corresponding type of measure.

The actual observability limit can be described as an *actual observability line* proper to a moment (or epoch) of scientific research. It can be seen as an interface, or a kind of frontier, that researchers have reached owing to their most improved tools of the moment and that they constantly seek to outstrip.

Therefore a theory which has always been validated is apt to be invalidated by new means of observation. However, as the PI generally prevails in an implicit way, one will tend to consider the predictions of the theory perfectly exact even when nothing proves this. The implicit PI is at the origin of the fact that one often presupposes that a theory continues to hold beyond the domain where it has been experimentally validated and, sometimes, even far beyond this domain. As a matter of fact, if a theory predicts phenomena that are in principle measurable, but not yet actually measurable, one will tend to presume that it is nonetheless valid. Therefore, a theory is generally considered exact even though it cannot in practice be validated beyond the limitation of actual observability. And, if an error is observed with respect to the value predicted by this theory, one will first try to attribute it to some disturbances. A noteworthy example is the very concept of “dark matter”; one can say that it originates from the PI [1].

*Theoretical extrapolations*

The expression “*theoretical extrapolation*” will be used here to designate the values of measures predicted by a theory when these values belong to a scale inaccessible to actual observation, or when these measures relate to phenomena that are not actually observable. For instance, astrophysical theories of stellar evolution allow one to calculate certain values of density and temperature at the centre of stars that outstrip all values actually observable in the laboratory. These are theoretical extrapolations that are usually considered normal. Another example, pertaining to another range, is the standard model of cosmology, which permits one to calculate values of density and temperature of the Universe at its first moments; these values outstrip by far everything one has possibly observed in the laboratory until now. In this case, we are dealing with what will be called here “*exceeding extrapolations*” from the basic theories of the model. One may admit that exceeding extrapolations in general are those which concern phenomena that are not actually observable, but that may be potentially observable, if one considers the long term of the future.

According to the NPI, we must consider theoretical extrapolations in general in a critical way and designate them as such and not, for example, as normal applications of the theories. It may happen, of course, that extrapolations at a certain moment become worth normalizing thanks to new observational data; this can then represent a significant scientific advancement.

*The non-limitation of potentially available experimental and theoretical means*

It is noteworthy that, logically, the principle of limitation of actual observability is tantamount to a **principle of non-limitation of potential observability**. This follows directly from the above definitions of what is actual or potential. For convenience, however, it has been preferred to lay stress first on the principle of limitation, in order to exploit the concept of observational limitation, which thereafter allows one to define theoretical extrapolation and to express more simply other statements that will be made below.

In the case of *theoretical* tools available at a given moment, there also exists a limitation of actual means, which is tantamount, as in the observational case, to a non-limitation of potential means. In both cases, observational and theoretical, one may happen to neglect the potential of future development by reducing it to what is actually available.

Like experimental instruments and methods, the theoretical means actually available at a certain moment of research history are limited. They are limited in quantity – mainly the actual diversity of mathematical sectors, varieties of mathematical concepts and models, and methods of calculation – and in quality – mainly the actual capacity to formulate concepts and theories in a rigorous manner and to conceive complex and powerful mathematical structures.

In the theoretical case, as in the experimental one, there will very probably again be developments of mathematical models and concepts, including important and even essentially original ones, in the future. It is then advisable, in this case, to formulate a **principle of theoretical non-limitation**:

*Scientific research is not forever limited to actually existing theoretical tools but, on the contrary, it is strongly apt to develop, in the long term, quite new ones that will be more and more effective in describing or explaining the observations that will be made.*

This principle leads us to markedly transform present and future prospects for research. For it follows that theories to come will durably or repeatedly be capable of providing more and more precise and testable predictions, and of deeply modifying the existing scientific representation of reality.

As in the case of observational means, theoretical means have grown remarkably more complex during history. Thus, for example, mathematicians of antiquity developed the concepts of Euclidean geometry and, later, modern mathematicians drew inspiration from them to develop those of non-Euclidean geometries. Of course, they did so after about two thousand years, but a specific continuity nevertheless exists in the form of references to previous mathematical concepts and theories. Likewise, classical conceptions of absolute space and time preceded those of relativity theory. And the elaboration of calculus, then of analysis, and that of group theory preceded the formulation of gauge theories.

Consequently, even though the set of actually available theoretical tools has grown considerably throughout history, from the very beginning of research until today, nothing justifies supposing that this historical development has ended, or is even on the point of ending. Mathematical research appears, at the present time, to be at least as productive and flourishing as in the past. It is therefore plausible to suppose that the principle of theoretical non-limitation will continue to apply in the future, in the short, medium and, likely, long term.

*2. The principle of potential refutability, applicable to all particular theories and principles, including symmetry principles*

The principle of potential refutability supposes the NPI and is stated as follows:

*All theories and principles in general, including symmetry principles, are potentially refutable, that is to say, they will become actually refutable as soon as the development of observational means allows it.*


This second principle of limitation of scientific research is, in some respects, the more important of the two. Physicists are in the habit of considering that physical theories are “exact” in the sense that they are apt to predict results that coincide “exactly” with experimental data. Now it is only according to the PI that this fact is understood as meaning that physical theories are perfectly exact, in a way similar to purely mathematical theories. According to the NPI, however, this exactness should be understood as *physical exactness* and not *mathematical exactness*, because there is in fact exact coincidence only up to a certain point, which is the degree of precision of a particular observation. The limitation of actual observability is therefore involved here. And, as this limitation depends on the moment of research (since the precision of measuring instruments and methods tends to increase), the potential refutability of existing theories, including the best ones, can always become actual refutability under certain future conditions of research.

*The potential refutability line of a theory*

The potential refutability of any theory (or law, or principle) takes the form of a *line of potential refutability*, several points of which are actual refutability points, or at least can become such at any moment. The word “line” is used here in the sense of an interface that can involve more than one dimension. For example, let us suppose that, at some epoch of research, a theory predicts the position of a heavenly body (in the coordinates of space and time admissible in this theory) and an observation is made by means of the best instruments and methods available at that moment. A line of actual observability, which is a function of this moment of research, is thus superposed on the predictions of this theory. The researchers concerned are in a position to ascertain, by means of their observational tools and methods, whether the prediction is validated or invalidated at this moment. This line of actual observability tends to move in the course of time, as the precision of tools increases. This is why a theory that passes tests at some time may fail them later. When this happens, it means that the line of actual observability has arrived at the level of the line of potential refutability of this theory and has outstripped it at one of its points.

We must notice that, when one says that a theory has been refuted by failing an experimental test, this is in most cases an inaccurate way of expressing what happened. Some might conclude, too hastily, that the theory must be rejected. In fact, the theory can remain valid in part, because its line of potential refutability has been overtaken at only one point (or perhaps one portion) of its length. This is why, for example, the Newtonian theory of gravitation was not abandoned after the tests that it failed. It is still considered valid in many situations, where classical or non-relativistic conditions are satisfied.

The NPI consists in recognizing that all existing theories possess a line of potential refutability and are therefore intrinsically limited, until proof to the contrary is given. Their limitation is called intrinsic not because of a contradiction or inconsistency in their formulation, but because their potential refutability follows from the partial incompatibility of their structure with physical reality. This situation entails that one cannot pronounce on the truth of any theory that has not failed an experimental test; in particular, one cannot affirm that such a theory constitutes a valid representation of physical reality. This does not, however, prevent it from being a theory of many uses in research or in diverse practical applications.

One could raise the objection that this intrinsic limitation of theories is only “potential” and that, accordingly, nothing substantiates its application to a theory already validated many times by experience. One must understand that, on the contrary, the onus of proof lies upon whoever claims that the theory is *definitively the right one* and that it is therefore useless to search for another. Such an attitude should be seen as lacking scientific rigor and as prejudicial to research.

*The exact degree of precision of basic theories is presently unknown*

This principle of limitation, however, does not at all lead to a complete overthrow of theories and concepts, or result in the breakdown of physics. We must understand it as indicating, in a constructive way, the possible or probable shortcomings of the theories and concepts that are outstanding in present-day science. First, it means that the *exact* degree of precision of basic theories remains unknown.

As a matter of fact, however, in the case of former basic theories that have been refuted, the exact degree of precision is known in part. For instance, in the case of the Newtonian theory of gravitation, the line of refutability is known, thanks to Einstein’s theory of gravitation, at one point, namely the point corresponding to speeds too high with respect to the speed of light [2]. Einstein’s theory therefore allows one to localize a part of the line of refutability of the Newtonian theory and helps one understand why there is refutability at this place. It is important to notice that the refutability line of Einstein’s theory has a portion in common with Newton’s, namely the portion corresponding to speeds small enough with respect to the speed of light. Accordingly, neither Einstein’s theory nor any other theory presently allows us to localize and understand the refutability line of the Newtonian theory of gravitation along its entire length. There is at present no *exact theory* allowing one to determine the threshold of intrinsic refutability of basic theories. Perhaps such a theory will never be conceived. We do not know.

*The case of quantum mechanics*

One could be tempted to raise the objection that, in the case of quantum mechanics, the limit owing to potential refutability has already been recognized and coincides, in a way, with the indeterminism inherent in the principle of reduction of the wave packet. For, one would say, this introduces a limit to the precision of measurements, and that limit is intrinsic. But quantum indeterminism is not in fact at all tantamount to the limitation owing to potential refutability. Quantum theory is itself regarded here as limited in its validity, just as general relativity or any other existing theory is.


**C. Endotheoretical limits of observability**

The endotheoretical limits of a theory or model are defined as observability limitations that arise from the basic principles of this theory or model. All the most fundamental present-day theories or models (*i.e.* those that reach farthest as regards the precision of predictions) lay down endotheoretical limits. Obviously, the presence of endotheoretical limits is independent of the particular interpretation, PI or NPI, under which one regards them.

Thus, the standard model of cosmology lays down a limit of validity at the Planck time (10^{-43} *s* after the moment zero). This time-limit, which is a consequence of the principles of quantum mechanics, is seen as the extreme limit of validity of present theories. It is an endotheoretical limit and, as such, has nothing to do with the limitation owing to the potential refutability of theories. The latter limitation is intrinsic to theories but, by definition, not predicted by them.
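For reference, the Planck time quoted above is the unique combination of the constants $\hbar$, $G$ and $c$ having the dimension of time:

```latex
t_{\mathrm{P}} \;=\; \sqrt{\frac{\hbar G}{c^{5}}} \;\approx\; 5.4 \times 10^{-44}\ \mathrm{s},
```

which is conventionally quoted at the order of 10^{-43} s; at scales shorter than this, the quantum principles underlying present theories no longer apply.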

Quantum mechanics lays down an endotheoretical limitation that takes the form of Heisenberg’s uncertainty relations. These involve theoretical limitations that are closely connected with one another. For instance, the law of energy conservation, one of the most fundamental principles in physics, may, according to quantum mechanics, be briefly contradicted when two particles interact.
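In their usual form, these relations read

```latex
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2},
\qquad
\Delta E\,\Delta t \;\gtrsim\; \frac{\hbar}{2},
```

so that an apparent violation of energy conservation of magnitude $\Delta E$ can persist only for a duration $\Delta t \lesssim \hbar/(2\,\Delta E)$.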

The two previous cases show that one can consider quantum mechanics a more fundamental theory (that is, a farther-reaching one) than the others, capable of laying down validity limits for other theories or principles in general [3]. An endotheoretical limit can in fact apply to a whole set of theories. Any theory can be endotheoretically limited by its own principles or by another, more fundamental, theory.

Notice that endotheoretical limits are not a kind of theoretical *incompleteness*. These limits are explained by the theories themselves, whereas theoretical incompleteness represents a deficiency of the theory.

The following proposition concerns any theory (or principle, or law) that has been tested in due form and lays down one or more endotheoretical limits; this can be any presently existing basic theory (or principle, or law). This proposition is called the **theorem of localization of the potential refutability line**:

The threshold (or line) of potential refutability of a theory (or principle, or law) is necessarily located in the area situated between the line of actual observability and the endotheoretical limit.

This proposition can be demonstrated as follows. On the one hand, if the theory has been tested in due form and never invalidated by observations, its refutability threshold is necessarily beyond the line of actual observability. On the other hand, if one tried to test the theory beyond one of its endotheoretical limits, this would suppose that an observation could be made beyond this limit; it would therefore mean that the endotheoretical limit itself was invalid and, thereby, that the refutability threshold was situated at this same limit or beneath it [4].

A **corollary** follows, which concerns the *localization of a real or factitious upper boundary of the line of potential observability*:

A boundary can be laid down to the future progress of actual observability, and therefore to potential observability; this boundary coincides with the endotheoretical limits of basic theories.

One should note that, if the theory were Pythagorean, this boundary could be seen as an absolute limit of experimental observability. In this case, when this boundary was reached (supposing that observational means progressed until then), and only then, one would have reached the threshold allowing one to know that the theory is Pythagorean.

Moreover, in the case of any Pythagorean theory, once this boundary was reached, the actually measurable values would have attained their absolute maximum degree of precision.

On the other hand, if the theory is non-Pythagorean, it follows from the theorem of localization that the theory will fail before the boundary can be reached. In this case, the boundary, which by the corollary is endotheoretical, should be considered factitious, that is, unreal, and the limit of actual observability could in principle continue to progress even beyond it. This holds, of course, provided that no other endotheoretical limitation beneath this boundary is laid down by another theory known later.

The known endotheoretical limits concern some of the variables involved in one existing theory or another. The so-called Heisenberg uncertainty relations play this part, in quantum mechanics, for variables such as position, time, linear momentum, or energy. In the theory of gravitation, an endotheoretical limit exists that coincides with the horizon of black holes.

Heisenberg’s uncertainty relations can be seen as the validity limit of classical concepts. They therefore indicate a part of the refutability line of classical dynamics at the very time that they represent an endotheoretical limit of quantum mechanics. We saw above that a part of the potential refutability line of the Newtonian theory of gravitation has been determined by general relativity. Likewise, a part of the potential refutability line of classical dynamics is here determined by quantum theory. And, as with the Newtonian theory of gravitation and general relativity (as regards speeds small enough with respect to the speed of light), a common portion of the refutability line exists for classical dynamics and quantum theory (as regards, for example, values of the action large enough with respect to Planck’s constant).

**D. The question of determinism**

Another consequence of the principle of potential refutability deeply modifies existing conceptions of determinism. One currently defines determinism as the characteristic of a theory that allows it to predict, in a firm and certain way, the evolution of a physical system from the data of its initial conditions [5]. In the particular case of classical determinism, one specifies that the state of the system (defined in the coordinates of position and speed of the particles), when it is known at any particular instant, is actually knowable in a unique way at every instant. In the case of quantum systems, this type of determinism is replaced by quantum determinism, according to which the state vector of the system, seen as an isolated system, is actually knowable at every instant if it is known at a prior instant.

The principle of potential refutability means that the state of a system, whether classical or quantum, is actually knowable only in an approximate way. This principle therefore implies, in fact, a kind of indeterminism. Moreover, it reveals that determinism as it is usually understood (classical or quantum), considered perfectly exact, is in fact a theoretical extrapolation. In other words, it is a theoretical extension that was never experimentally established as such; one should understand it as an approximate and partial representation of reality. In particular, the picture of determinism according to Laplace is a case of excessive extrapolation [6].

It is proposed here to make an essential distinction between *endotheoretical* and *exotheoretical determinism*. The former is defined as an internal characteristic of a theory, predicted by it. This determinism fails as soon as the theory itself fails. The latter is not predicted by the theory concerned, and it could be established (as a modality of knowledge of physical systems) only if one could prove that the theory is Pythagorean, that is, totally exact. Exotheoretical determinism is an ideal trait of the potentialities of future science. It is therefore compatible with the principle of potential refutability as applied to present-day theories. Endotheoretical determinism remains suitable as an approximate description of physical systems.

According to the non-Pythagorean interpretation of determinism, the present-day basic theories, general relativity as well as quantum mechanics, must be seen as normally indeterministic in the exotheoretical sense.

The NPI implies that all the correspondence rules in physical theories must be understood as non-deterministic, independently of quantum indeterminism or any existing indeterminism. As soon as a theoretical prediction involves a value of any variable, the corresponding measurement has a degree of precision limited by the intrinsic refutability line of the theory.

Because quantum theory and general relativity each have a potential refutability line, they are liable to fail when tested in the future. It is important, however, not to mistake the refutability lines of these two theories for the validity limitations that they predict. Yet theorists often make this confusion (see, for example, superstring theorists, section III, D, *1*).

**E. The partial validity of theories as representations of reality**


The principle of potential refutability does not merely amount to recognizing that theories are not “quite right yet” [7]. It implies not only that one acknowledges the approximate character of theories, but also their partial, and perhaps very partial, validity as representations of physical reality (again, until proof to the contrary is given). It means that present theories cannot be considered to give fully valid representations of reality even if they have been corroborated up to a considerable degree of precision.

The belief in the *definitive* reliability of models that have not yet failed as representations of physical reality is a general characteristic of the history of research up to the present time. This belief seems all the stronger the longer the model resists observational tests (or, more generally, observational experience). For instance, there is no doubt that belief in the geocentric model was very strong. Even though this type of model appears of little relevance to modern physicists, it is nonetheless one that was well confirmed by the available data for more than a millennium. Likewise, Newtonian physics withstood observational tests for a few centuries. Until the twentieth century, this physics was generally considered definitively true. Yet it has been contradicted by facts. It has failed tests, and better theories have been found.

There is little reason to suppose that, during the next centuries, no new mathematical theory will appear capable of providing a model that resists tests and criticisms better (including possible new types of tests or criticisms). If such is the case, the present-day theories will suffer the same fate as the theories they surpassed. One could then see them as theories which, though they were useful to the continuance of research, remained unsuited to providing a correct representation of physical reality. Some of their elements may prove plainly false, as did, for example, the geocentric assumption, the circular motion of the planets, action at a distance, the ether, and several concepts of classical physics, such as classical determinism or the classical particle.

Nobody can, at the present time, identify with certainty which elements of the present so-called fundamental theories will in the long run be seen as genuinely true and which as completely false.

It would, moreover, be misleading to think that everything in the famous geocentric model was wrong. One can still see in it, even today, certain elements of truth related in particular to appearances or to the usage of this representation. Nowadays, though, when one uses the geocentric representation, one takes care to point out that one is making a “supposition” for a useful purpose. The best present-day theories may one day suffer such a fate.

**F. Mathematical polymorphism in physics**

The expression “mathematical polymorphism” in physical theories is sometimes used to mean that several different mathematical formulations constitute distinct models predicting the same set of physical phenomena [8]. For example, the Newtonian formulation of the theory of gravitation, which uses the concept of instantaneous action at a distance, has been found equivalent to the Lagrangian formulation. However, the Lagrangian formulation, but not the Newtonian one, allowed a simple and in some sense obvious conversion toward the theory of relativity. Furthermore, thanks to mathematical polymorphism, a number of significant theoretical developments have been realized in modern physics. This highly productive feature of theoretical physics has not been scientifically explained in a general way.

When several different mathematical formulations are capable of describing the same set of phenomena, some of these formulations, but not necessarily all, may lend themselves to future transformations that accurately describe new data, obtained by means of new observational instruments or techniques. These formulations will not necessarily give absolutely exact theories but, according to the NPI, theories whose refutability threshold has markedly advanced with respect to that of prior theories. The mathematical formulations of physical theories therefore have hidden, peculiar, and fruitful qualities which distinguish them and reveal that they are equivalent only in appearance. No general conceptualization yet exists of these “types” of formulations, which seem able, as it were, to “predict” theoretical developments on a par with observational developments. This characteristic of formulations is closely linked to what is called the efficiency of mathematics in physics [9].

III. The efficiency of mathematics in physics


What is called the efficiency of mathematics in physics is an essential characteristic of scientific research. It concerns the way in which advancements of theoretical means, at a given epoch, are capable of describing, to a considerable extent, the observational improvements of the same epoch, and sometimes of stimulating them, thus preparing the following epoch. More specifically, there is a recurring efficiency of mathematics in physics, meaning that the line of potential refutability has a propensity to advance constantly beyond the limit of actual observability. Scientific progress can be described as a gradual improvement, by fits and starts and sometimes slackening, of the line of actual observability and, beyond it, of the line of potential refutability. A development is progressively effected that interactively involves the development of experimental and theoretical tools. This development will be called here the *joint development*.

For certain specific reasons, we will consider here that the joint development began with the mathematical models elaborated by the Greek mathematicians and astronomers, toward the fifth century BC. We can consider that the efficiency of mathematics in describing phenomena appeared at this moment. Even if these models are often seen nowadays by astronomers or physicists as having lost all relevance, they are significant for a better general understanding of what this efficiency of mathematics in physics consists of.

The Greek researchers elaborated a series of models in order to explain celestial phenomena from a basic symmetry, that of the perfect circle (or sphere). These models succeeded in describing and, in some sense, explaining more and more phenomena. For they were conceived in order to predict or describe the observed phenomena up to a certain degree of precision, and their authors were aware that some models could be more suitable than others for this purpose. The Greek scientists, however, were unable to explain scientifically the efficiency of their models and contented themselves with appealing to the “perfection” of the basic symmetry [10].

Now one can explain this efficiency of the Greek models, at least in part, by the fact that the planetary orbits are mostly, for complex dynamical reasons, fairly close to the circular form. So, even though the Greeks did not know the form of these orbits, which have been recognized as elliptical since Kepler, they were nonetheless able to elaborate relatively effective models for describing the available observations. Moreover, even if their models were only approximately and partially valid, and even of little validity, as representations of reality, it is undeniable that, during a period spanning a number of centuries, the Greek astronomers learned much about mathematics and celestial phenomena.
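The size of the departure from circularity can be illustrated numerically. The sketch below (a hypothetical illustration, not taken from the paper) solves Kepler’s equation for an eccentricity close to that of Mars, e ≈ 0.093, and computes the maximum angular gap between a uniform circular motion and the corresponding Keplerian ellipse; the gap is roughly 2e radians, which gives an idea of the discrepancies that epicyclic constructions had to absorb.

```python
import math

def circular_model_error(e, mean_anomaly, iters=50):
    """Angular error (radians) of a uniform circular model relative to a
    Kepler ellipse of eccentricity e, at the given mean anomaly."""
    # Solve Kepler's equation E - e*sin(E) = M by fixed-point iteration
    # (a contraction for e < 1, so it converges).
    E = mean_anomaly
    for _ in range(iters):
        E = mean_anomaly + e * math.sin(E)
    # True anomaly from the eccentric anomaly.
    nu = 2.0 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2.0),
                          math.sqrt(1 - e) * math.cos(E / 2.0))
    return nu - mean_anomaly

# Maximum deviation over one orbit for a Mars-like eccentricity.
e_mars = 0.093  # assumed value, close to the modern figure for Mars
max_dev = max(abs(circular_model_error(e_mars, 2 * math.pi * k / 1000.0))
              for k in range(1000))
print(math.degrees(max_dev))  # about 10.6 degrees
```

With Earth’s much smaller eccentricity (e ≈ 0.017) the same computation gives only about 2 degrees, which helps explain why circle-based models could remain serviceable for so long.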

One can show that the Greek astronomers in this way led the way to the Copernican model, then to the establishment of Kepler’s laws and, later on, to all of classical science. Several of the assumptions they considered most fundamental, for example, the principle of the perfect sphere or circle, or geocentrism, had to be discarded afterwards. Even so, they advanced theoretical concepts (for example, through their invention of number theory, geometry, and conic sections) and observational tools (for example, through their systematic use of the sundial, the armilla, and other unidentified instruments, which allowed them to compile the first catalogues of celestial bodies [11]), so that abstract models recurrently and effectively described the known observational data.

One could be tempted to raise the objection that no epistemic continuity can be established between the astronomical developments of the Greek researchers and those of modern scientists, because of everything that separates them in terms of world conception and, more specifically, types of mathematical models or organization of research [12]. However these points, as relevant as they may be to the study of the historical development of knowledge, do not contradict what is said above in terms of observability and refutability. The PI is presupposed by the Greeks of antiquity as well as by the most recent researchers, in spite of all the differences that separate them in other respects. Our approach shows that the non-Pythagorean interpretation of laws and principles in general, those of the Greeks as well as those of modern physicists, can have a very general meaning.

One of the common characteristics of scientific research in its most general sense is precisely the efficiency of mathematics in physics. The Greek astronomers used a symmetry which, in their view, was fundamental, and it proved surprisingly productive. The same holds, for example, for Kepler’s laws, which were established in a phenomenological way, that is, without any support from a validated basic theory. In spite of their deep dissimilarity, these models agreed on one point: they were able to describe, in an extraordinarily exact way, the best observations then available. In fact, as we know from classical dynamics, Kepler’s laws are not completely exact but only approximate.

And Newton’s laws repeat the same pattern of joint development, in terms of observability and refutability. These are approximate and partially valid laws. While not wholly exact (that is, exact only up to a certain degree of precision), these models were useful for describing actual observations. They enabled researchers to progress even though researchers falsely believed them absolutely exact.

In our time, the most successful theory is quantum electrodynamics. We can observe a formal similarity between the bases of this theory, namely gauge symmetries, and the laws and principles which in the past proved the most rewarding as regards the capacity to describe phenomena, namely the principle of the circle among the Greek astronomers and Kepler’s laws. In both of the latter cases, the efficiency was attributed to fundamental characteristics of Nature without anyone being able to explain it further. The same holds true of gauge symmetries. Now we know, in the cases of the Greeks’ and Kepler’s models, that their descriptions of Nature were approximate and that the representations of physical reality ensuing from them were only partially valid. In the case of gauge symmetries, we do not yet know [13].

**A. The advancements of the observability limit and refutability line**

When the limit of actual observability progresses, the result may be the refutation of theories but also, constructively, the promotion of a new hypothesis. The Greeks were already acquainted with this fact. Their observations of planetary motions, variations in brightness, and annular eclipses of the sun, which were initially less firmly established than others, such as ordinary eclipses or lunar phases, seemed to validate the theory of deferents and epicycles to the detriment of the theory of homocentrics. This situation shows that the discovery of new phenomena (or their being newly taken into consideration) may result in one hypothesis being preferred to the detriment of another without the favoured hypothesis itself being true.

The theories of classical physics at first accorded well with experimental discoveries, and even stimulated them, especially until about the end of the nineteenth century. This means that the potential refutability lines of the classical theories were at first situated far beyond the limit of actual observability. The latter, however, progressed so much that local overtakings were eventually discovered. One tried first of all to resolve them by means of the acquired theoretical tools, which had been so rewarding until then. This happened, for example, when one first wanted to explain the anomalous precession of Mercury’s orbit: one invoked perturbations caused by an unknown planet. It seems that the divergence between theory and observation could be considered a shortcoming of the Newtonian theory only once another, better theory was able to explain this precession more precisely [14]. One concluded that the better theory, general relativity, was not merely better but also completely exact. This seems altogether prompted by analogous or recurring causes, namely the mere fact that one does not yet have at one’s disposal a still better theory in the same field and neglects the potential development of new theoretical tools.
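For the record, general relativity predicts a perihelion advance per orbit of

```latex
\Delta\varphi \;=\; \frac{6\pi G M_{\odot}}{c^{2}\, a\,(1 - e^{2})},
```

where $a$ and $e$ are the semi-major axis and eccentricity of the orbit; for Mercury this accumulates to about $43''$ per century, matching the residual precession that Newtonian perturbation theory left unexplained.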

It results from the implicit PI of researchers that they have a strong tendency, at every epoch, to believe that the available theory must be completely exact. They are predisposed to believe that they have conceived a definitive theory in spite of the fact that several phenomena remain unexplained and theoretical explanations remain poorly integrated. For example, the best models of the Greek researchers did not manage to describe the observation of comets. Situations of this type have regularly recurred during the joint development of research. The classical theory managed to explain the periodic return of comets thanks to the concept of the eccentricity of elliptical orbits [15]. Of course, further inquiries remained to be made in order to understand the origin of comets. At the present time, models explain the origin of comets in part but, for instance, no model yet correctly explains the existence of cosmic rays (discovered in the early twentieth century), and especially the high-energy ones. And, in the case of phenomena theorized but remaining to be integrated into a single theory, Newton, then Maxwell, for example, made science progress significantly. However, present-day science lacks a common theoretical framework for the four so-called fundamental interactions. These facts illustrate that the situation of modern researchers presents several formal traits of similarity to those of the past, traits which follow significantly from the implicit Pythagorean supposition.

**B. The idea of theoretical simplicity**

The ancient Greek researchers had, in a way, the same credo as today’s physicists. Like the latter, they considered that basic laws should be perfectly exact and fundamental, in addition to being harmonious and simple. However, the very idea of simplicity has evolved greatly. The theoretical and observational tools of the earliest researchers now look most rudimentary. In the view of the early Greek researchers, the simple resided in the simplest geometric forms, such as the triangle or the circle. For example, the principle of the perfect circle, which was a purely geometric symmetry, would not be considered worthy of the name “symmetry principle” at the present time.

Nowadays, the simplest principles are those related to symmetry groups, for example Lie groups [16].

If one asked for a general characterization of simplicity, researchers would not easily agree. For instance, in what sense was Copernicus’ model simpler than Ptolemy’s? In fact, the former allowed one to simplify certain explanations and gave a more harmonious overall impression than the latter. Yet Copernicus’ model was itself very complicated in its details [17].

One would not be wrong in saying that today’s standard models are still very complicated in their details. In fact, there are many adjustments and contrivances made in order to describe phenomena. In antiquity, as in Copernicus’ model or in the models of the present time, the simplicity condition is hard to understand and formulate. And the historical development of the idea of simplicity seems itself far from simple.

What prompts researchers to call basic principles simple, from the outset of the joint development, seems connected to the ability of these principles to describe a good many phenomena by means of sufficiently simple computations, given the mathematical tools available at the time. An outline of the history of research up to today suggests the following conditions for the production of theoretical tools considered “simple”: a) theoretical tools should be comprehensible and, above all, usable by a sufficiently large number of practitioners schooled according to the standards of the time; b) these theoretical tools should lend themselves to computations feasible by means of the numerical methods of the time. These conditions are obviously not adequate to clarify the idea of simplicity itself, but they shed some light on its evolution.

The development of theoretical models is unavoidably done, in practice, by selecting relatively simple principles and theories. What is contentedly regarded as simple at one time has a good chance of being seen as too simple at a later time. The past manner of describing phenomena thereby comes to look artificial, *ad hoc* [18]. Thus the symmetry principles of the Greeks are not at all, as such, relevant to modern scientific research.

Still, if one day a new type of mathematical model is discovered that can clarify why the gauge symmetries, for example, are only approximate (though all the same relatively very precise by existing standards) and stand for an approximation of a still more precise theory that will explain their refutability threshold, then one might realize, perhaps, that they were theories arranged almost entirely in an *ad hoc* way. It is indeed likely that, if the development of theoretical tools continues long enough in the future, the present-day theories will later appear as simplistic representations of a physical reality much more complex, and at the same time much more “*simple*” in a still unknown sense of the word, than one is presently inclined to believe.

Therefore it seems that one of the "laws" of the joint development since antiquity is the production of relatively simple and effective models, which yield to other, progressively more complex models – though "*simpler*" in incessantly renewed senses – and more effective ones. As in a learning process, there would be very gradual steps which, each time, would be simple enough to be passed through and effective enough to sustain development and lead to the next step.

This is why scientific research would represent a discovery process, but it would be, first of all, a discovery by researchers of their own capabilities of theoretical and observational development. As regards the discovery of physical reality as such, independently of the reality and capabilities of researchers, no knowledge has yet been acquired in a sure way. During this process, theories may long remain effective and simple-looking, yet be only approximate and poorly grounded as representations of reality.

When one discovers new regularities and formulates them as symmetry principles or other types of mathematical relations, these may prove very productive, allowing one to discover new phenomena and, sometimes, prompting unexpected theoretical developments. It is nonetheless difficult, even impossible, to know with certainty whether something real has thus been found. The objects one thinks one has found – which are perhaps pseudo-objects – arise from *endotheoretical* developments. Objects predicted by a theory may, in certain cases, be physically real objects just as well as mere artefacts. In this connection, there exists a kind of productivity of theories which we can call *ad hoc*.

Thanks to their symmetry principle of the sphere, the Greeks found a method to compute the size of the earth, once it was supposed to be spherical (Eratosthenes). For their part, thanks to Lorentz symmetry, modern researchers have found relativistic effects, such as length contraction or time dilation. However, the Greeks in addition "found" the homocentrics, deferents and epicycles, all artefacts which they were inclined to regard as real. Modern researchers, as is well known, have found, thanks to symmetry groups (for example, SU(2) or SU(3)), several families of particles and other phenomena (for example, the "neutral current"). Are these objects physically real, or artefacts of different kinds? One could workably know this only through future experimental or theoretical developments, that is, once one has assessed the endotheoretical limits of the model or discovered a theory that allows one to localize and explain the refutability threshold of the model.
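Eratosthenes' computation is simple enough to sketch. The figures below are the traditionally reported ones (a 7.2° shadow angle at Alexandria and roughly 5,000 stadia between Syene and Alexandria), used here purely for illustration:

```python
# Eratosthenes' method: if an arc of known length subtends a known angle
# at the centre of a sphere, the full circumference follows by proportion.
# The figures are the traditionally reported ones, used for illustration.

def circumference_from_arc(angle_deg: float, arc_length: float) -> float:
    """Scale an arc up to the full 360-degree circumference."""
    return 360.0 / angle_deg * arc_length

# 7.2 degrees of shadow at Alexandria, ~5,000 stadia from Syene:
stadia = circumference_from_arc(7.2, 5_000)
print(stadia)  # 250000.0 stadia, the value usually attributed to Eratosthenes
```

The whole method rests on the sphericity assumption: only on a sphere does the shadow angle equal the arc's central angle.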


**C. The search for the refutability line of basic theories**


The NPI can help to evaluate research, reorient it and prompt new inquiries. As a matter of fact, one has never proven that basic theories are perfectly exact, even when one has experimentally validated them. Therefore one does not yet know, though it is a current expression, whether they are "fundamental" or not. It is essential to carry on with observational developments since, as long as the refutability line of each of the existing theories (including those already refuted on a portion of their refutability line) has not been sufficiently explored, one cannot know whether any of these theories is really exact and fundamental. It is essential also to carry on searching for new theoretical models. For one has a good chance of finding, in the long run, mathematical theories more powerful than those existing at present, that is, more complex and integrative ones and, therefore, more capable of helping us understand the complexity of the development of scientific research and its results. In addition, it is in researchers' interest to pursue conceptual or methodological study in order to better understand the difficulties related to correspondence rules (especially the principle of reduction of the wave packet in quantum mechanics). This exploration is required for a better understanding of present-day theories and, no doubt, for future developments of research.

Two types of research will be considered here in order to illustrate these theoretical issues in present-day research. First, we will see that Alan Kostelecky's recent research program is directed towards the recognition of the exact location of the potential refutability threshold of the theory of relativity. I will stress the importance of carrying on this type of research while widening it significantly. In another connection, I will also insist on the potential consequences of Mordehai Milgrom's theoretical approach. He set forth, in 1983, a modification of the Newtonian (and consequently relativistic) dynamics on galactic and extragalactic scales in order to solve, in an original way, the famous enigma of hidden mass. As in the previous case, a fundamental theory is thereby challenged but, additionally, what is at stake is a deep change in our representation of physical reality.

*1. A new type of refutability research*

According to Alan Kostelecky, the special theory of relativity could be a mere approximation of the laws of Nature, and its validity could depend on the spatiotemporal scales of length. He thinks that certain foundations of physics may be wrong. So he has initiated new experimental research, conducted in a systematic way, each project aiming to refute certain principles considered until then as fundamental. Experimental results already obtained but left unexploited can also be used in this program [19].

The latter point is tightly linked to the aforesaid fact that, if one observes discrepancies with respect to theoretical predictions and no better theory capable of providing predictions more in accordance with observations is yet at one's disposal, then one tends to refrain from seeing the divergence as a refutation and attributes it to manipulation errors or unknown disturbances (see [14]).

Thanks to Kostelecky's research program, several tests of the principle of relativity (either the Galilean or the Einsteinian one) which could have been realized with existing methods and instruments, but never were for want of sufficient motivation, can now be performed. And, if the results are not conclusive in the short run, they could be in the medium or long run, when more powerful methods or instruments become accessible. One has as yet no observational data on length scales smaller than 10^{-19} m and, in most other cases, smaller than about 10^{-8} or 10^{-10} in the appropriate SI units. This constitutes our present actual observability limit. However we know that, as past experience has shown, this limit is doubtless provisional.

This researcher, therefore, happens to suppose that relativity theory is not a Pythagorean one. If one thus acknowledges that one does not know physics at every level of precision, new prospects come into view. Kostelecky's program stands as a first in the history of science: it is the first time that the refutability line of a theory considered fundamental has been explored so systematically. What is more, his research program leads one to monitor the observational development and, in some way, to promote it, make it progress faster and, therefore, actualize sooner that which is still only potential. He will doubtless help urge this development forward, so as to bring the limit of actual observability nearer to the potential refutability threshold of a theory which has never yet been refuted. In the history of research, the joint development has generally groped its way, an unexpected theoretical development giving rise to a correlative observational development, or vice versa. This time, the researcher does not wait for an observational development motivated by the emergence of a new theory; he chooses to make it progress faster by inventing, not new instruments, but a new kind of research program.

According to the non-Pythagorean interpretation of current research, the existence of such a research program means that the joint development *to come* will proceed more intentionally and deliberately than in the past and, thereby, probably more efficiently. This allows us to contemplate future research more specifically. We can predict that, even if Kostelecky's particular program were to fail to ripen, other similar programs would assuredly follow it. And if it succeeds, its very success would warrant its extension under different forms, that is, to other basic theories or symmetry principles: all the so-called fundamental theories, hence the main conservation principles (energy, linear momentum, electric charge, etc.) and gauge theories in general, then other principles which will follow them in the future, and so on, until the possible emergence of a theory that would at last be in accordance with the PI and could then be considered, maybe, as altogether exact and definitive.

*2. Challenging the present-day representation of physical reality*

It has been observed that, according to the available astronomical data, what is called invisible matter (or "hidden mass") is not distributed at random but increases roughly in proportion to the distance from the centres of galaxies or galactic clusters. This coincidence is all the more disconcerting as invisible matter seems to interact very little with ordinary matter.

Building upon these facts, Mordehai Milgrom hypothesizes that Newton's laws do not apply on the scale of the Universe and have to be modified under certain conditions. He proposes a modification of the Newtonian theory that he calls MOND (*Modified Newtonian Dynamics*). He explains that this new theory can be interpreted in two distinct ways: either as modifying the law of inertia, or as modifying the law of gravity. According to him, the special theory of relativity is not directly implicated because the speeds concerned remain small enough. He admits that the MOND theory is partial and cannot be considered a well-established theory. It can nonetheless be seen as a generalization of the laws of Newtonian mechanics, valid when the accelerations are very weak (smaller than about 10^{-8} cm s^{-2}) [20].
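The deep-MOND regime can be illustrated with a short computation. The acceleration scale a0 ≈ 1.2 × 10^{-10} m s^{-2} (about 10^{-8} cm s^{-2}) and the galactic mass used below are typical published figures, not values taken from this paper; the sketch only shows how the modified dynamics yields the flat rotation curves that motivated Milgrom:

```python
# Illustrative sketch of the deep-MOND regime (a << a0), where the
# effective acceleration becomes a = sqrt(g_N * a0).  For a circular
# orbit, v**2 / r = sqrt(G * M * a0) / r, so the rotation velocity
# v = (G * M * a0) ** 0.25 is independent of radius ("flat").
# Numerical values are typical published figures, not from this paper.

G  = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10        # Milgrom's acceleration scale, m s^-2

def mond_flat_velocity(mass_kg: float) -> float:
    """Asymptotic circular velocity predicted by deep-MOND for a point mass."""
    return (G * mass_kg * A0) ** 0.25

# A galaxy of ~1e11 solar masses:
v = mond_flat_velocity(1e11 * 1.989e30)
print(f"{v / 1e3:.0f} km/s")   # ~200 km/s, the observed order of magnitude
```

The radius drops out of the result entirely, which is exactly the observed phenomenology that Newtonian gravity, without hidden mass, fails to reproduce.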

Milgrom's position therefore consists in assuming not only that the theory of gravitation (either the Newtonian or the Einsteinian one) is not a Pythagorean theory, but also that its refutability threshold has already been outstripped by observations. His inquiries can be considered as bearing on the Newtonian theory's refutability threshold – the theory would thus be refuted (though this is not yet actually acknowledged) on another part of its potential refutability line than where it was already refuted. The MOND theory is elaborated from astronomical data gathered over a long period (since the 1930s). These data concern motions subjected to very small accelerations, which until now could never be produced in a laboratory either on the earth or near it. This part of the refutability line of the Newtonian theory is therefore crossed only when one makes observations of the speeds of certain stars, or of galaxies in general. It follows that the refutability threshold of the theory of relativity would itself be outstripped, since it shares the same threshold as Newtonian mechanics for speeds (and accelerations) in this range.

The MOND theory, as Milgrom makes clear, is in fact a phenomenological law, that is, it is formulated expressly to describe particular phenomena, without any significant mathematical-theoretical innovation. It is therefore conceived, from the outset, as an approximate and *ad hoc* law, in the meaning given here to this expression [18].

It is noteworthy that, in the history of research, phenomenological laws have often proved more durably valid than the theories deemed fundamental. As a matter of fact, at several times since the outset of the joint development, it has happened that laws of this type, that is, approximate theories, offered much greater resistance to later tests than the theories deemed fundamental and, therefore, regarded as perfectly exact. Thus Kepler's laws remained valid as describing, at a good level of approximation, planetary motions in a stellar system (or even the motions of stars with respect to groupings of stars), while Newton's laws were invalidated in various ways. Of course, Newton's laws themselves stay valid as approximate laws. But it is precisely the claim to make them universal and fundamental laws which appears, in the long run, to resist subsequent observational developments poorly.
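How well Kepler's phenomenological third law still performs can be checked in a few lines against standard textbook orbital data (semi-major axes in astronomical units, periods in years); the numbers below are common reference values, not drawn from this paper:

```python
# Kepler's third law, T^2 = a^3 (T in years, a in AU), checked against
# standard textbook orbital data.  The residuals show how well this
# phenomenological law still holds at a good level of approximation.

planets = {            # name: (semi-major axis in AU, period in years)
    "Mercury": (0.387, 0.241),
    "Earth":   (1.000, 1.000),
    "Jupiter": (5.203, 11.862),
}

for name, (a, T) in planets.items():
    print(f"{name}: T^2 / a^3 = {T**2 / a**3:.4f}")   # all close to 1
```

The residuals of a few parts in a thousand are exactly the sense of "valid at a good level of approximation" used above: the law survives as a description even after the theories once thought to ground it have been revised.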

**D. The problem of theoretical unification**

We will now see that the superstring theory replaces one exceeding theoretical extrapolation by another. In fact, point-like elementary particles are there replaced by elementary non-point-like "strings". But the latter's size is small enough to be far beyond the actual observability limit. I will outline an evaluation of this theoretical research on the basis of the NPI. Finally, the pertinence of the famous problem of the unification of the four so-called fundamental interactions will be reconsidered, taking into account the non-Pythagorean character of the theories supposed to be so unified.

*1. The superstring theory: from one extrapolation to another*


The superstring theory is the most celebrated of the theories put forward today in order to unify the four fundamental interactions of physics. The four dimensions of space-time are replaced by ten (one dimension for time, three "extended", that is, visible spatial dimensions, and six "curled up", that is, hidden spatial dimensions). As a result, this theory provides a more general explanatory framework than any established theory. It allows one, at least in principle, to confront questions not considered by the established theories, such as the determination of elementary particle masses and coupling constants, in addition to the number of dimensions of space-time. Certain superstring theorists think that the enigma of quantum mechanics could be reformulated and solved by this theory. However, the theory displays certain defects which, in the eyes of a number of observers, mar it seriously. Thus, determining the equations of the theory from its basic principles has proved most difficult, so much so that one must be content with very approximate versions. The superstring theorists recognize that new theoretical tools must be developed to achieve this task [21].

One of the main advantages of this theory surely lies in the very concept of the elementary string, defined as being of finite size, as opposed to the elementary particles of the standard model, which are seen as point-like. According to string theorists, this allows one to avoid the infinities which have imposed the use of renormalization methods. Thereby the concept of elementary string sets a limit to the quantum fluctuations that arise on sub-Planckian scales. Furthermore, it even allows one to avoid the problems raised by singularities such as those of black holes or the big bang.

The adepts of the superstring theory sometimes betray their implicit Pythagorean assumption by implying that the superstring theory could well be the *definitive* physical theory. If that were the case, it would mean that quantum theory, or the standard model of particle physics, has no potential refutability threshold, and the superstring theory would obviously inherit this providential state of affairs. The conviction of some researchers that they will be on the verge of having the definitive theory (also called the "theory of everything" or "ultimate theory") as soon as the problems of point-like particles or singularities are surmounted means that, in their mind, there is neither potential refutability to consider nor exceeding extrapolation, and the so-called fundamental theories must be regarded as perfectly verified even where they have never been tested. Since they do not clearly distinguish the limitations of refutability from endotheoretical limitations, they unsurprisingly take for granted that the intrinsic limitations of quantum mechanics coincide with the limitations born of Heisenberg's uncertainty relations and, in particular, the limitation connected with the Planck length.

Superstring theorists often assert that the superstring theory includes no adjustable parameters. This is right, except that new models arising from theoretical developments in the long run could involve new parameters that are still unknown, or even inconceivable, in the short run.

In other respects, the superstring theory is not vain or irrelevant. This theory is capable of making research advance; in fact, it has already begun to do so. It has provided new theoretical tools which, though they do not yet allow one to make testable predictions, have advanced understanding on certain points. In another connection, we can see that, even if this theory is not refutable in the usual sense, it can be outstripped. Thanks to future developments of research, other theoretical models could well surpass it, either at the level of experimental testability or of theoretical explanatory capacity. Maybe new models will be able to explain the intrinsic shortcomings of the present superstring theory and, for example, localize its potential refutability line.

*2. The relevance of the present-day problem of unification*

One can regard theoretical unification in physics as the elaboration of a theory which would include the full description of the four fundamental forces in addition to key ideas such as the concept of spin or the gauge symmetries. That would mean, one thinks, that any inconsistency between fundamental theories and principles would at last be surmounted. In particular, it is believed that, if one succeeded in surmounting the inconsistencies between the general theory of relativity and quantum mechanics, this would suffice for theoretical unification to be effective.

This conception of theoretical unification involves some confusion. It is misleading because it supposes that, if one elaborated such a theory, one could consider theoretical production to have come to an end, or nearly so. In fact, nothing demonstrates that this theory would be straight off free of any refutability threshold. Consequently, even if it were without any internal inconsistency, it would not be assured of being able to describe and explain later observations and, thereby, to offer a valid representation of physical reality.

If one assumes that observational tools will keep on developing as they have done since the beginning of the joint development of research, the actual observability limit will continue to advance and, probably, science will continue to be productive and fruitful in observing new phenomena. It would therefore be quite possible that the refutability threshold of a unifying theory (such as described above) would one day be reached and outstripped. Furthermore, if one supposes, as is predictable, that new theoretical concepts and models will come out, this means that this unifying theory could itself be surpassed by other theories. Such a unifying theory would therefore be nothing but a synthesis of theories presently seen as fundamental – general relativity and quantum mechanics – with their own internal refutability thresholds.

There is therefore reason to consider that the problem of unification, as formulated above, is not relevant from the viewpoint of scientific research, and even that it is incoherent. As a matter of fact, according to the actually available observational data, the present-day basic theories do not contradict each other; they do so only beyond the observability limit. This means, among other things, that one does not yet know which of the two basic theories will be refuted first by future advances of observational means. It is quite possible that, in the long run, both will be refuted. One can consider that the basic theories of present-day physics are jointly compatible since, logically, it would be enough that one of the two be approximate for the physical inconsistencies likely to vanish. There is no necessary inconsistency unless one supposes that both are Pythagorean theories. So the contradiction lies more in the Pythagorean interpretation itself than in the two physical theories.

More generally in the history of research, one can observe that, as soon as one considers them as Pythagorean theories, the best theories available at a given time are not compatible with later ones. This was the case for the Greek models, of course, but also for the classical theories. It will likely be the case for the present basic theories. Research in the long run therefore makes the best of theories which are incomplete or which, at some time, reveal themselves to be mathematically incompatible with one another or with those that follow. All of them can be used to make science progress. And it could be that future theories, even if very dissimilar to the present basic theories, will once more be mathematically incompatible, mutually or with respect to others that will arise still later.

We showed above that Alan Kostelecky's research program consists in deliberately developing observational research so as to locate the refutation threshold of a principle that has been deemed fundamental until now. A more general program of the same type would consist in extending this kind of inquiry to all present basic theories and principles. And another program, in some way symmetrical to the previous one, would instead concern the development of theoretical tools. For example, it would aim at looking for new theoretical models capable of describing the present observations (taking into account their levels of precision) together with various scenarios of future developments pertaining to potential observability. This could be a proficient way to exploit the mathematical polymorphism of physical theories while seeking to extend it. This idea means, in other terms, that theorists would have to search for new models in which the present symmetry principles are seen as approximate ones, in accordance with the presently known observations, but with divergences (with respect to the corresponding perfect symmetries) that would appear only beyond the present observability limit.
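The kind of model described here, a symmetry exact within current precision yet divergent beyond it, can be sketched with a toy dispersion relation of the sort used in Kostelecky-style Lorentz-violation phenomenology. Both the form of the correction and the coefficient below are hypothetical, chosen only for illustration:

```python
# Toy model (hypothetical, for illustration only): a dispersion relation
# E^2 = p^2 + eps * p^3 / E_PLANCK in units with c = 1, whose fractional
# departure from exact Lorentz symmetry is eps * p / E_PLANCK.  At
# laboratory energies the deviation sits far below any current
# measurement precision, yet it is nonzero.

E_PLANCK = 1.22e19   # Planck energy in GeV
EPS = 1.0            # hypothetical order-one violation coefficient

def fractional_deviation(p_gev: float) -> float:
    """Fractional departure from the exactly Lorentz-symmetric relation."""
    return EPS * p_gev / E_PLANCK

print(fractional_deviation(1e4))    # collider scale (~10 TeV): ~8e-16
print(fractional_deviation(1e11))   # highest-energy cosmic rays: ~8e-9
```

Such a model reproduces every observation made so far while placing its divergence from the perfect symmetry beyond the present observability limit, which is precisely the program sketched in the paragraph above.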

We can expect that theorists will regard the theories to come in the near future as only approximate and partial, so as to keep a good scope for researchers of following generations. It is in this way, for example, that the superstring theory, like its competitor theories, should be envisaged, and not, in particular, as a definitive means of surmounting once and for all the theoretical inconsistencies altogether.

If, one day, one at last achieves a really fundamental, all-encompassing unification theory, it will correspond to a *state of science* that allows one to explain what reality is and, also, to enlighten what science itself is, that is, to explain the nature of the joint development from which science comes. For the present epoch, we will probably have to accept the (non-Pythagorean) idea that present-day theories, and those which will follow them for a good while in the future, are only approximate and partially valid and, in sum, we should regard the existing theories as less true than the critical discourse that one can hold about them.

_______________________________________

[1] The community of researchers has, in fact, a propensity for presuming that the inconsistency of theoretical predictions in the case of certain galactic or extragalactic observations can be explained by the presence of matter having unusual properties, the "dark matter". However we will see that Mordehai Milgrom (section III, C, *2* of this paper) thinks he is able to solve the problem of dark matter by modifying certain basic physical laws. The law he proposes is aimed, according to him, at replacing the Newtonian law for accelerations whose values fall below a certain threshold. *Cf.* M. Milgrom, *Astrophys. J.* **270**, 365 (1983); **270**, 371 (1983); **270**, 384 (1983).

[2] At some moment (of the historical time of research), one can calculate precisely the speed value above which the Newtonian theory predicts values of the gravitational force that are no longer exact. This supposes that one takes into account not only Einstein's theory but also the limitation of actual observability, that is, the highest degree of precision attainable at that moment. It is noteworthy that the refutability line of the Newtonian theory remains unidentified over a large part, in fact nearly all, of its length.

[3] Brian Greene, for example, writes: "*Everything* is subject to the quantum fluctuations inherent in the uncertainty principle – even the gravitational field"; in particular, the general theory of relativity breaks down in the face of quantum fluctuations. *Cf.* Brian Greene, *The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory*, New York and London, W.W. Norton & Company, 1999, p. 127. As a superstring theorist, Greene happens, in saying this, to state the endotheoretical limit of the superstring theory itself.

[4] The expression “tested in due form”, in the theorem’s statement, is taken in the sense that the theory has been tested in a systematic way until the actual observability limit.

This definition can cause difficulties inasmuch as systematic testing means, or should mean, that one has carried on tests without resorting to special means such as a hypothesis of unknown disturbance in cases where the tests were not always positive.

[5] One can also define determinism as that which, in reality, allows and warrants that it is possible to predict something. However, the above-mentioned definition is more common in physics.

[6] Pierre Simon de Laplace describes, in his *Philosophical Essay on Probabilities* (*Essai philosophique sur les probabilités* (1814), translated by Andrew I. Dale, New York, Springer-Verlag, 1995), an "intelligence" to whom "nothing would be uncertain", in the past as well as in the future, provided only that it knows the state of the universe at "a given instant". One can say that this is an exceeding extrapolation of classical mechanics, far beyond its threshold of refutability. That threshold has since been exhibited in part, thanks especially to general relativity and quantum mechanics.

[7] For example, Richard Feynman wrote: "we know it [the law of gravitation] is not quite right yet, because we have still to put the quantum theory in. That is the same with all our other laws – they are not yet exact. There is always an edge of mystery, always a place where we have some fiddling around to do yet. This may or may not be a property of Nature, but it certainly is common to all the laws as we know them today. It may be only a lack of knowledge" (Richard Feynman, *The Character of Physical Law*, Cambridge, Massachusetts, and London, England, The M.I.T. Press, p. 33). So Feynman does not seem to agree completely with the PI. Yet we cannot say that he clearly embraces the NPI, since he seems to suppose that the basic theories, even though they are "not quite right yet", are nonetheless close to exactness and constitute good enough representations of physical reality. According to the NPI, we do not yet know whether the present-day theories are valid in this way.

[8] The expression "mathematical polymorphism" has been used especially by the physicist Jean-Marc Lévy-Leblond (for example, in his article "Physique et mathématiques", in *Penser les mathématiques*, Paris, Seuil, 1982, pp. 195-208). One can observe that the number of distinct formulations varies with the physical field. In this connection, Richard Feynman wrote: "psychologically [these formulations] are different because they are completely unequivalent when you are trying to guess new laws" and "every theoretical scientist who is any good knows six or seven different theoretical representations for exactly the same physics. He knows that they are all equivalent, and that nobody is ever going to be able to decide which one is right at that level, but he keeps them in his head, hoping they will give him different ideas for guessing" (*The Character of Physical Law*, Cambridge, Massachusetts, and London, England, The M.I.T. Press, pp. 53 and 168). Feynman seems to understand these representations as psychological, philosophical or, maybe, mathematical.

[9] It is well known that mathematical theories can be entirely expressed in an axiomatic form. But physical theories cannot, at least not with the presently known basic formalisms. This fact will perhaps become explainable in future theoretical research. For now, we can see that a part of the explanation could be suggested by the NPI. This invites one to formally consider the time of research itself. The reality of research differs from what existing mathematical models can describe because of the part played by the time *t*, where *t* is a variable generally expressing past, present, and future moments of experimental and theoretical research. This means the following: if one supposed that a physical (non-Pythagorean) theory were completely formalized in axiomatic form, it would sooner or later be outstripped by the movement of research that it is supposed to predict. This seems bound to happen unless, maybe, the formalization includes the time *t* of research. If that were the case, it would mean that future science would become in part *reflexive*, that is, able to partially describe itself, and would use, in order to express theories, a formalism including this variable *t*, as in the one outlined here.

[10] The basic principle of Greek research was that the motion of celestial bodies must be perfectly circular and uniform. It is stated in Plato's work: "The Creator […] made the world in the form of a globe, round as from a lathe […] the most perfect and the most like itself of all the figures" (Plato, *Timaeus*, 40, A. *Cf.* Benjamin Jowett, *The Dialogues of Plato*, 3^{rd} ed., London, Oxford University Press, 1892, III, 452). Plato probably took his inspiration from the Pythagorean and Eleatic schools, to both of which the idea of the earth's sphericity is usually attributed (*Cf.* Antonie Pannekoek, *A History of Astronomy*, London, Barnes and Noble, 1961, 1969, pp. 99-100). Pythagoras and Parmenides (VI^{th}–V^{th} century BC) would have realized that the existence of different climatic zones on the earth resulted from its sphericity. Later, Bion of Abdera would even have understood, as a consequence of the principle, that there are regions on earth with days and nights both lasting six months (*ibid*., p. 100). Anaxagoras of Clazomenae (*c.* 500-428 BC) would have been the first to state clearly that the moon reflects the sun's light. Thereafter one concluded that the moon should be a sphere (Anaxagoras would only have supposed it) (*Cf.* Ludwik Marian Celnikier, *Histoire de l'astronomie occidentale*, Paris, Technique et documentation – Lavoisier, 1986, p. 46). At this epoch, Greek researchers were capable of giving, on this same basis, a complete qualitative explanation not only of the moon's phases but also of lunar and solar eclipses. The mathematician Eudoxus of Cnidus (406-355 BC) made the first notable attempt to describe the celestial observations systematically on the same basis of circular symmetry. He and his disciple, Calippus of Cyzicus, succeeded in approximately reproducing the displacements of every planet, including their apparent retrograde motions, by means of spheres in uniform rotation called "homocentrics".
This system explained neither the planets’ variations in brightness nor the difference between total and annular eclipses of the sun. However, other researchers later did so by means of different models, also based on the principle of perfect circularity. Aristotle transformed Eudoxus’ model by adding to it the concept of real crystalline shells, in order better to explain the “physics” of the motions. In this context, in the III^{rd} century BC, Eratosthenes of Cyrene determined with remarkable precision the dimensions of the earth by likening it to a sphere, using trigonometry and a few simple measurements. He is thus said to have found a value of the terrestrial circumference barely 2% lower than the value known today. But there is an uncertainty about the value of the unit of measure used (the stadium), so that the error lies between 1% and 5% (*Cf.* Antonie Pannekoek, *op. cit.*, p. 124; Jean-René Roy, *L’astronomie et son histoire*, Québec, Presses de l’Université du Québec, Paris, Masson, 1982, p. 98). Apollonius of Perga (III^{rd} century BC) used major epicycles and eccentrics, and Hipparchus of Nicaea (II^{nd} century BC) added minor epicycles and elaborated a more general theory of eccentrics. Ptolemy (Claudius Ptolemaeus, II^{nd} century AD) afterwards added the equant. Nobody knows who was the author (or authors) of the concepts of epicycle and deferent. An epicycle is a small circle revolving in uniform motion around a point positioned on the circumference of a second revolving circle, the deferent, whose centre coincides with the earth. The Greek astronomers adjusted the periods of rotation and the diameters of the circles so as to describe the planets’ motions and, qualitatively, their differences in brightness. It seems that this system explained only observations of planetary trajectories that had already been made, but it undeniably allowed the search for new concepts to progress.
According to Thomas Kuhn, the epicycle-deferent system had “power and versatility […] as a method for ordering and predicting the motions of the planets” (*Cf.* Thomas S. Kuhn, *The Copernican Revolution. Planetary Astronomy in the Development of Western Thought*, Cambridge, Harvard University Press, 1957, p. 64). This description was already “exact” in a peculiar physical sense which is still in force today: it accorded with the level of precision of the observations feasible at that epoch. The remaining inaccuracies were in principle explainable in an *ad hoc* way, that is, by adding more elements to the system on the basis of the same fundamental symmetry in which one believed at that time. Nicolaus Copernicus again relied upon this principle when he conceived his own heliocentric model. In his view, the equant, the most original element introduced by Ptolemy, was unaesthetic and not in accordance with the principle of circular symmetry. The concept of the equant consisted in positing that a celestial body, for example the sun, was related to a certain point, the *punctum equans*, by a radius vector describing equal angles in equal times, this point being located at an appropriate distance from the central point occupied by the earth (*Cf.* Antonie Pannekoek, *op. cit.*, p. 138). So it is possible that the equant guided Copernicus to heliocentrism and then Kepler to his second law. One should note that Nicholas of Cusa, and not Copernicus, heralded the so-called “Copernican revolution” by denying the separation between the region of aether and the sublunary sphere (*De docta ignorantia*, 1440). This was in fact a more revolutionary view than Copernicus’ own.

[11] The “armilla” or “armillary sphere” (from the Latin “*armilla*”, bracelet) consists of rings for the equator and the meridians; the earth is represented in it as a sphere at the centre of circles depicting the planetary motions. Nobody knows which instruments Hipparchus or Ptolemy used to determine the positions of the fixed stars (*Cf.* Antonie Pannekoek, *op. cit.*, pp. 91, 129).

[12] Alexandre Koyré is one of the authors who have most insisted upon the revolutionary character of modern science (*Cf.* Alexandre Koyré, *From the Closed World to the Infinite Universe*, Baltimore, Johns Hopkins University Press, 1957; see also Antonie Pannekoek, *A History of Astronomy*, chapter 22: “The Struggle over the World System”, p. 222-234). As a matter of fact, Newtonian dynamics was radically innovative with respect to Aristotelian physics. However, radical transformations seem on the whole to be part of the normal development of scientific research, as, for example, the advents of relativity theory and quantum mechanics have shown. And nothing prevents us from thinking that there will likely be others in the long-term research of the future.

[13] It should also be stated, in the case of classical theories, including Newtonian dynamics and gravitation theory, that we are not yet able to completely explain what has made them effective. We only know that they hold on classical scales of value (that is, neither relativistic nor quantum) or, at least, we know their degree of exactness on those scales.

[14] In the nineteenth century, Urbain Le Verrier made the hypothesis that a small planet, which he called “Vulcan”, was responsible for the irregularity in the motion of Mercury’s orbit. We now know that, on the basis of Newtonian theory, the anomalous advance of Mercury’s perihelion amounts to 42″ per century, whereas on the basis of general relativity the residual is 1″. Considering the probability of a small error in the observed measurements, this is taken as a proof of the correctness of general relativity (*Cf.* Colin A. Ronan, *Discovering the Universe. A History of Astronomy*, London, Heinemann Educational Books Ltd, 1971, p. 158).

[15] Isaac Newton found a theoretical basis for explaining the trajectories of comets. He showed that elliptical orbits of large eccentricity could be approximately described by means of a parabolic trajectory over the visible part of a comet’s flight. In 1716, Edmond Halley drew on this theory to predict the return of the comet that now bears his name (*Cf.* A. Pannekoek, *op. cit.*, p. 268-269).

[16] The theorem of Emmy Noether (1918) establishes the connection between conservation laws and known symmetries. Not only does a single formalism allow one to extend quantum field theory to the weak and strong interactions, but all the physical laws and particle properties are also derived from such symmetries.
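As a standard textbook illustration of this connection (a common worked case, not drawn from the cited references): for a classical Lagrangian with no explicit time dependence, invariance under time translations yields conservation of the energy,

```latex
% Time-translation symmetry => energy conservation (classical mechanics).
% Noether charge for t -> t + \epsilon is the energy:
H \;=\; \sum_i \dot{q}_i\,\frac{\partial L}{\partial \dot{q}_i} \;-\; L .
% Differentiating, with \partial L/\partial t = 0:
\frac{dH}{dt}
 \;=\; \sum_i \dot{q}_i\!\left( \frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i}
        \;-\; \frac{\partial L}{\partial q_i} \right)
 \;=\; 0 ,
% which vanishes by the Euler--Lagrange equations.
```

so *H* is conserved exactly as long as the symmetry holds; the same pattern, applied to gauge symmetries, underlies the conservation laws mentioned in the main text.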

[17] For example, Antonie Pannekoek writes about Copernicus’ model: “Thus, the new world structure, notwithstanding its simplicity in broad outline, was still extremely complicated in the details” (*op. cit.*, p. 198).

[18] An *ad hoc* adjustment is defined here as a theoretical adjustment capable of describing certain observations and sometimes of leading to the discovery of new ones (without making too much of their real or unreal character), but which implies no genuinely new theoretical basis and is not itself considered a fundamental theory or concept, but rather a more or less artificial addition to these. This definition differs in part from other definitions of the same term. In fact, one habitually considers an *ad hoc* theory as one devoid of real usefulness. For example, Karl R. Popper asserted that *ad hoc* theories are “trivial” (*Cf.* *Conjectures and Refutations*, London, Routledge and Kegan Paul, 1972, p. 244). According to him, such a theory resists all tests. He would surely have admitted, however, that an *ad hoc* theory is not always without apparent interest from the outset, and resists tests only for some time.

[19] Kostelecky strives to disprove two fundamental symmetry principles: Lorentz and CPT symmetry. *Cf.* V. A. Kostelecky, Phys. Rev. D **69**, 105009 (2004); Quentin G. Bailey and V. A. Kostelecky, Phys. Rev. D **74**, 045001 (2006).

[20] A simplified outline of Milgrom’s law can be stated as follows:

*m* μ *a* = *F*,

where *m* is the gravitational mass of a body moving in an arbitrary static gravitational field *F* with acceleration *a*. For accelerations approximately equal to or smaller than *a*_{0}, μ is equal to *a* divided by *a*_{0}, where *a*_{0} equals approximately 10^{-8} cm s^{-2}. For accelerations much larger than *a*_{0}, μ is approximately equal to 1 and Newtonian dynamics is recovered. *Cf.* Mordehai Milgrom, *Astrophys. J.* **270**, 365 (1983); **270**, 371 (1983); **270**, 384 (1983); *Acta Physica Polonica*, Vol. 32 (2001).
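The interpolation between the two regimes can be sketched numerically. The snippet below uses the so-called “simple” interpolating function μ(x) = x/(1 + x), one common choice in the MOND literature (an assumption here; the note itself specifies only the two limiting behaviours):

```python
# Sketch of Milgrom's modified dynamics (MOND), assuming the "simple"
# interpolating function mu(x) = x / (1 + x).
# a0 ~ 1e-8 cm/s^2 = 1e-10 m/s^2 (Milgrom's acceleration constant).

A0 = 1e-10  # m/s^2, approximate

def mu(x: float) -> float:
    """Simple interpolating function: mu -> x for x << 1, mu -> 1 for x >> 1."""
    return x / (1.0 + x)

def effective_force(m: float, a: float) -> float:
    """Force needed to produce acceleration a on mass m under Milgrom's law:
    F = m * mu(a / a0) * a."""
    return m * mu(a / A0) * a

# Newtonian regime: a >> a0, so mu ~ 1 and F ~ m * a
assert abs(effective_force(1.0, 1.0) - 1.0) < 1e-9

# Deep-MOND regime: a << a0, so mu ~ a/a0 and F ~ m * a**2 / a0
a_small = 1e-12
expected = a_small**2 / A0
assert abs(effective_force(1.0, a_small) - expected) / expected < 0.02
```

The deep-MOND limit F ∝ a² is what allows Milgrom’s law to reproduce flat galactic rotation curves without invoking hidden mass, which is the connection to the dark-matter problem discussed in the main text.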

[21] In particular, Cumrun Vafa and Edward Witten think so. See Brian Greene, *The Elegant Universe. Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory*, New York, London, W.W. Norton & Company, 1999, pp. 167, 174, 186-187, 382.