
A few random items

My preprint “Interacting quantum field theories as relativistic statistical field theories of local beables” has just appeared on the arXiv. Fortunately, even with the few bold claims I made, I have not been flagged so far by the algorithmic crackpot detector, and the paper went through without difficulty. As I emphasized in the previous post, for once it is a work I am very proud of. I do not know if every single detail will survive closer scrutiny, but at least I am confident that the main message is correct and should be of interest to people working in quantum foundations (on both the pure theory and phenomenology sides).

Last week, I had the pleasure of visiting Lajos Diosi in Budapest. We made some progress on the semi-classical gravity front and on the quantum feedback front, which will probably materialize in the form of preprints in the not-so-distant future. It was a great but tiring trip: discussing physics non-stop, from dawn to dusk, is really draining.

[Photo: Budapest]

As an aside, I went from Munich to Budapest and back by bus (Flixbus), and it is quite a smart way to travel. Sure, it's long: 8h30. But most of the time is productive. The bus had power outlets and excellent wifi, so I managed to work, answer emails, and watch movies. On my way to Budapest, I took the night option and slept almost all the way through, effectively teleporting myself instantly. It is also cheap and does not pollute much, so I would definitely recommend it.

In France, long-distance bus lines have been authorized for only a few years, which is why I only recently realized this option existed. They were previously forbidden in order to give SNCF a monopoly. I think the argument was that a railway is essentially operated at a fixed cost and, as a result, it may be optimal to nudge or even force people to use the train, up to the point where the cost per passenger drops below that of all other means of transportation. This global optimum can be reached for fast lines (TGV) between Paris and big cities (the French railway network is a star graph), but it is clearly impossible for two medium-sized cities that are far from each other (say, Rennes and Toulouse). In that case, the railway monopoly was arguably harming the mobility of people. I think the situation is now closer to an optimum, with two networks: a (mostly) star graph of fast TGV connections with Paris at its center, and a slower but fully connected graph of bus lines between medium-sized cities.
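To make the fixed-cost argument a bit more explicit (a back-of-the-envelope sketch with hypothetical symbols, not figures from any actual study): if running a line costs a fixed amount F plus a marginal cost v per passenger, the average cost for n passengers is

c(n) = v + \frac{F}{n},

which decreases with n and drops below the per-passenger cost c_{\mathrm{bus}} of the alternative once n > F/(c_{\mathrm{bus}} - v). Forcing traffic onto the railway only pays off on lines where n can actually get that large, hence the Paris-centered star.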

[Figure: the French train network and its traffic in 2007]

Before going to Budapest, I spent two days in Paderborn, invited by Martin Kolb, a probabilist. He has recently been specializing in the study of one of the concepts of stochastic calculus I find the most subtle: the local time L_t. It is the time the Brownian motion B_t spends at a given point (typically 0). Of course, it has to be defined appropriately, as a rescaled limit of the time spent within \varepsilon of 0, so that the result is not trivial: L_t = \lim_{\varepsilon \to 0} \frac{1}{2\varepsilon} \int_0^t \mathbf{1}_{|B_s| \le \varepsilon} \, \mathrm{d}s. This is a concept I was introduced to during my thesis, when I was studying quantum spikes with Michel Bauer and Denis Bernard, and I still find its properties quite counter-intuitive and sometimes mysterious (the inverse local time t(L) is a Lévy process, so weird things happen when switching from real time to local time). Martin showed me interesting results on Brownian motion conditioned through constraints on its local time. The corresponding paper is published in the Annals of Probability (also on the arXiv), and readers interested in probability theory should have a look.
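For those who prefer to see the definition in action, here is a minimal numerical sketch (my own illustration, not something from Martin's paper): it estimates L_T through the occupation-time formula above and checks it against Lévy's theorem, which says that L_T has the same law as |B_T|.

```python
import numpy as np

# Minimal occupation-time estimator of the Brownian local time at 0:
#   L_T ~ (1 / (2 eps)) * time spent with |B_s| <= eps, for small eps.
rng = np.random.default_rng(0)

T, n_steps, n_paths = 1.0, 20_000, 1_000
dt = T / n_steps
eps = 0.02  # half-width of the window around 0

B = np.zeros(n_paths)    # current position of each Brownian path
occ = np.zeros(n_paths)  # time accumulated within eps of 0

for _ in range(n_steps):
    B += rng.normal(0.0, np.sqrt(dt), size=n_paths)
    occ += dt * (np.abs(B) <= eps)

L_T = occ / (2 * eps)

# Sanity check: by Levy's theorem L_T has the law of |B_T|,
# so E[L_T] should be close to sqrt(2 T / pi) ~ 0.798 for T = 1.
print(f"empirical E[L_T] = {L_T.mean():.3f}, "
      f"target E|B_T| = {np.sqrt(2 * T / np.pi):.3f}")
```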

[Photo: Paderborn]

Now that I am back in Munich, with my long-term work on foundations packaged in a preprint and online, I hope to work a bit on subjects closer to the main interests of the group. My objective is to better understand tensor network methods in the continuum. The one-dimensional case is very simple but has already been widely studied, so I would like to attack the two-dimensional case. A brute-force generalization of the discrete case is possible but very formal and does not seem to allow one to compute anything; maybe one can be smarter and construct continuous ansätze that have no discrete counterparts. On an unrelated subject, back in foundations, I am also thinking about writing a note on Lorentz invariant noise. It seems to me that no article exists on the properties of Gaussian processes and point processes that have a Lorentz invariant probability distribution. I think I now have a crude classification of what one can get, so it might be helpful to write a short note about it. If anyone can provide existing references on this kind of thing, a comment would be most helpful.
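To fix ideas about what "Lorentz invariant noise" means, here are two standard examples (illustrations of the notion, not the classification itself): Gaussian white noise on R^4, with covariance

\mathbb{E}[\phi(x)\,\phi(y)] = \sigma^2\, \delta^{(4)}(x - y),

is Lorentz invariant, because Lorentz transformations have unit Jacobian and thus preserve both the Lebesgue measure and the delta function. For the same reason, a Poisson point process on R^4 with intensity \lambda\, \mathrm{d}^4 x is Lorentz invariant in distribution.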

Quantum field theory as a statistical field theory

I have been working on an ambitious project for quite some time and, although there are still questions to be settled, I think the main lessons are now robust enough that they can be profitably discussed. I have a draft (modified 09/02/2017) which I will put on the arXiv (update: it's now on the arXiv) once I get some feedback (and likely tame some bold claims that may not be as firmly grounded as I first thought). Comments are of course very welcome. I have also given a first presentation of the results at the seminar of the mathematical foundations of physics group at LMU, so I have some slides that give a short (and thus inevitably provocative) overview of my claims.

So what is this all about? I think I have (at least partially) succeeded in constructing simple collapse models in a quantum field theory context. This would already be enough to make me happy, but what gets me really excited is the fact that this construction yields what I think are quite important lessons about both QFT and collapse models.

As everyone knows, quantum field theories are not about fields, at least not in the usual sense. There are no “tangible” fluctuating fields in QFT. One can perhaps write QFT as a dynamical theory of wave functionals on field configurations, but one certainly cannot see QFT as a statistical field theory on R^4. In its very formulation, QFT is an operational theory, that is, not a theory about microscopic “stuff”, but ultimately a theory that says things about the statistics of measurement results, i.e., of very macroscopic stuff. QFT, like other quantum theories, is agnostic about what the microscopic stuff could be (which sometimes leads people to think that there is no microscopic reality, whatever that is supposed to mean, but this is obviously not a logical implication).

[Image: Unfortunately not a quantum field]

In non-relativistic quantum mechanics, there is a way (among other reasonable options) to make the theory about microscopic stuff: collapse models. The idea is to modify the Schrödinger equation a bit so as to collapse macroscopic superpositions without changing the predictions of the theory too much (but there is of course a modification involved). Collapse models give a stochastic evolution for the wave function that, although admittedly ad hoc, produces behavior that looks more reasonable: small things can be delocalized, while big things, such as a measurement apparatus, are always well localized. The theory is still not about stuff in physical space, but one can define “local beables” (that is, some field or particle in physical space one takes to be real) and see collapse models as dynamical theories of this stuff. So, in a nutshell, collapse models make it possible to rewrite non-relativistic quantum mechanics as a theory that is specific about what the microscopic world is made of. The price to pay is that the empirical content of quantum mechanics (what it says about the statistics of measurements) is modified. For that latter reason, collapse models are currently under intense experimental scrutiny.
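To give a concrete feeling for the kind of modified dynamics involved, here is a minimal numerical sketch (my illustration of a generic norm-preserving stochastic Schrödinger equation for a qubit, not the specific model discussed in this post): a small Hamiltonian competes with a collapse term that localizes the state on eigenstates of \sigma_z.

```python
import numpy as np

# Toy continuous-collapse dynamics for a qubit, with Hermitian collapse
# operator A = sigma_z and collapse rate gamma:
#   d|psi> = [-iH dt - (gamma/2)(A - <A>)^2 dt + sqrt(gamma)(A - <A>) dW] |psi>
rng = np.random.default_rng(1)

sz = np.diag([1.0, -1.0]).astype(complex)         # collapse operator A
H = np.array([[0.0, 0.1], [0.1, 0.0]], complex)   # small transverse Hamiltonian
gamma, dt, n_steps = 1.0, 1e-3, 20_000

psi = np.array([1.0, 1.0], complex) / np.sqrt(2)  # start in a superposition

for _ in range(n_steps):
    mean_A = np.real(psi.conj() @ sz @ psi)       # <A> in the current state
    dW = rng.normal(0.0, np.sqrt(dt))             # Wiener increment
    D = sz - mean_A * np.eye(2)                   # fluctuation operator A - <A>
    psi = psi + (-1j * (H @ psi)) * dt \
              - 0.5 * gamma * (D @ (D @ psi)) * dt \
              + np.sqrt(gamma) * (D @ psi) * dW
    psi /= np.linalg.norm(psi)                    # renormalize (Euler scheme)

# The state ends up (almost) localized on an eigenstate of sigma_z,
# i.e. <sigma_z> close to +1 or -1, with the sign chosen at random.
print("final <sigma_z> =", np.real(psi.conj() @ sz @ psi))
```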

Now, what I have constructed changes the two previous stories in a substantial way. I propose a relativistic collapse model that has two important features:

  • It is naturally written as a Lorentz invariant statistical field theory (i.e., a theory of random fields on R^4).
  • It is empirically equivalent to QFT.

That is, QFT can be written as a theory of fields in the usual sense after all, and collapse models can be completely “hidden”, or embedded, within an existing quantum theory. Better, it is the same model that does both things.

To my mind, this has important implications for QFT and for collapse models. First, for collapse models, one might want to slightly reconsider the efforts that have been invested in phenomenology (and thus I must respectfully disagree with the latest blog post of Sabine Hossenfelder, where she advocates for more phenomenology in this context: I think this time we need a bit more “talk” first). Indeed, if collapse can reasonably be hidden in existing sectors of the Standard Model, it means that the effects we currently consider to be typical signatures of collapse just come from peculiar choices of non-relativistic models. Collapse models really can be seen as an interpretation of quantum theory (where “interpretation” is taken in the slightly improper sense it has nowadays, that is, as a complete theory underlying the operational formalism of quantum theory without modifying its predictions, insofar as the latter are well defined).

On the QFT front, this may also give interesting things. The reformulation of QFT I obtain takes the form of a statistical field theory on R^4, that is, I have a probability measure (or an object that is formally a probability measure), and the field one draws from this distribution can be considered to be all there is: it is the final output of the theory (further analysis then yields the operational prediction rules, but the latter are not primitive). The nice thing about a probability measure, and more generally about a theory that gives dynamics for “stuff”, is that it can be modified at will without becoming logically inconsistent. This is to be contrasted with an operational framework, which may become self-contradictory after the slightest modification.

Why is this helpful for QFT? Because of the need for regularization. QFT diverges nastily. This is why one usually does computations within a regularization framework before sending the cut-off to infinity (which then requires redefining the parameters and renormalizing the theory, but renormalization is something very different in spirit from regularization). But why can't we just consider regularized theories right from the start, say that they are the real thing, and then, since the cut-off is far away, devise approximation schemes to compute predictions? In this picture, renormalization would simply be a method to relate the bare parameters of the model to what one measures, without any “fundamental” character.

The issue is that, to my knowledge, regularized QFTs are never QFTs. To regularize a QFT one needs to cut off the high momenta. This can be done by putting the theory on a lattice (as in lattice gauge theory), but then it is neither a field theory on R^4 nor a Lorentz invariant theory. To cut off high momenta in a covariant way, one would typically need higher derivatives in the Lagrangian, but because of the Ostrogradsky instability, one cannot canonically quantize the corresponding theory. It is nowadays popular to say that QFTs are effective theories, as a way to explain away the issues one usually encounters in the UV (such as the Landau pole of QED). However, a QFT cannot be the effective theory of its own regularized version, as the latter cannot be fundamental. This explains (a small part of) the appeal of string theory, which does have a UV cut-off without the need to break invariances.

Once QFT is defined as a classical probabilistic theory of fields, regularization is a perfectly legal thing to do. Covariantly cutting off the high momenta in the propagators, say with Pauli-Villars regularization, can be done at the fundamental level. In this new formulation, a regularized QFT is a QFT, and thus a possible well-defined foundation of physics.
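To make the Pauli-Villars point concrete (a textbook formula, not anything specific to the draft): one subtracts a heavy-mass propagator,

\frac{1}{k^2 - m^2} - \frac{1}{k^2 - M^2} = \frac{m^2 - M^2}{(k^2 - m^2)(k^2 - M^2)} \sim \frac{1}{k^4} \quad (k \to \infty),

which improves the UV behavior in a manifestly covariant way. Canonically quantized, the wrong-sign pole at k^2 = M^2 would signal an Ostrogradsky ghost; read as the covariance kernel of a classical random field, it is, at least naively, just a smoother kernel.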

Of course, the draft is just a first exploration of this idea and I don't want to hide the technical difficulties that are still in the way. There is certainly still a lot of work to be done, especially at the quantitative level, to know exactly how fast the reduction of superpositions through the collapse mechanism occurs. The mathematical well-definedness of the objects used in the theory, even after regularization, certainly needs to be discussed thoroughly as well. Nevertheless, I remain very happy with this result, as it seems to me that it opens a whole range of new possibilities.

update 9/02/2017: I have put up a newer version of the draft. I had made a mess of the functional integral representation by trying to be clever with sources, so I have now rewritten everything in terms of asymptotic fermionic states. Hopefully, it now makes more sense.

update 22/02/2017: The preprint is now on the arXiv. I am glad I was not flagged by the algorithmic crackpot detector. Hopefully, this will provide me with feedback from a wider community.