Quantum field theory as a statistical field theory

I have been working on an ambitious project for quite some time and, although there are still questions to be settled, I think the main lessons are now robust enough to be profitably discussed. I have a draft (modified 09/02/2017) which I will put on the arXiv (update: it’s now on the arXiv) once I get some feedback (and likely after taming some bold claims that may not be as firmly grounded as I first thought). Comments are of course very welcome. I have also given a first presentation of the results at the seminar of the mathematical foundations of physics group at LMU, so I have some slides that give a short (and thus inevitably provocative) overview of my claims.

So what is this all about? I think I have (at least partially) succeeded in constructing simple collapse models in a quantum field theory context. This alone would be enough to make me happy, but what gets me really excited is that this construction yields what I think are quite important lessons about both QFT and collapse models.

As everyone knows, quantum field theories are not about fields, at least not in the usual sense. There are no “tangible” fluctuating fields in QFT. One can perhaps write QFT as a dynamical theory of wave functionals on field configurations, but one certainly cannot see QFT as a statistical field theory on R^4. In its very formulation, QFT is an operational theory: not a theory about microscopic “stuff”, but ultimately a theory that says things about the statistics of measurement results, i.e. of very macroscopic stuff. QFT, like other quantum theories, is agnostic about what the microscopic stuff could be (which sometimes leads people to think that there is no microscopic reality, whatever that is supposed to mean, but this is obviously not a logical implication).
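
To make the contrast concrete, here is the functional Schrödinger picture for a free scalar field (standard textbook material, nothing specific to my construction): the state is a wave functional Ψ[φ, t] over field configurations on a fixed time slice, evolving as

\[ i\,\partial_t \Psi[\varphi, t] = \int d^3x\,\left[-\frac{1}{2}\frac{\delta^2}{\delta \varphi(x)^2} + \frac{1}{2}\big(\nabla \varphi(x)\big)^2 + \frac{m^2}{2}\varphi(x)^2\right]\Psi[\varphi, t]. \]

Even in this picture, φ lives on space at a fixed time and |Ψ[φ]|^2 only gives probabilities for measurement outcomes; this is very different from a statistical field theory on R^4, where a single field configuration on spacetime is drawn once and for all.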

[Figure: Unfortunately not a quantum field]

In non-relativistic quantum mechanics, there is a way (among other reasonable options) to make the theory about microscopic stuff: collapse models. The idea is to modify the Schrödinger equation a bit so as to collapse macroscopic superpositions without changing the predictions of the theory too much (though there is, of course, a modification involved). Collapse models give a stochastic evolution for the wave function that, although admittedly ad hoc, yields a behavior that looks more reasonable: small things can be delocalized, while big things, such as a measurement apparatus, are always well localized. The theory is still not about stuff in physical space, but one can define “local beables” (that is, some field or particle in physical space that one takes to be real) and see collapse models as dynamical theories of this stuff. So, in a nutshell, collapse models allow one to rewrite non-relativistic quantum mechanics as a theory that is specific about what the microscopic world is made of. The price to pay is that the empirical content of quantum mechanics (what it says about the statistics of measurements) is modified. For that reason, collapse models are currently under intense experimental scrutiny.
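
For concreteness, continuous collapse models of the CSL family evolve the wave function with a nonlinear stochastic equation of the schematic form (generic notation of the collapse literature, not the specific model of the draft):

\[ d\psi_t = \left[-iH\,dt + \sqrt{\lambda}\int dx\,\big(A(x) - \langle A(x)\rangle_t\big)\,dW_t(x) - \frac{\lambda}{2}\int dx\,\big(A(x) - \langle A(x)\rangle_t\big)^2\,dt\right]\psi_t, \]

where the A(x) are local collapse operators (typically a smeared mass density), ⟨·⟩_t is the quantum expectation value in ψ_t, W_t(x) is a white-noise field, and λ sets the collapse rate. The nonlinear terms progressively localize superpositions of macroscopically distinct A configurations while leaving microscopic superpositions essentially untouched, which is exactly the behavior described above.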

Now, what I have constructed changes the two previous stories in a substantial way. I propose a relativistic collapse model that has two important features:

  • It is naturally written as a Lorentz-invariant statistical field theory (i.e. a theory of random fields on R^4).
  • It is empirically equivalent to QFT.

That is, QFT can be written as a theory of fields in the usual sense after all, and collapse models can be completely “hidden” or embedded within an existing quantum theory. Better still, it is the same model that does both things.
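
To fix ideas, by a statistical field theory I mean the following kind of object (schematic notation; the measure actually constructed in the draft is more involved): a probability measure dμ[φ] on field configurations φ: R^4 → R, from which every quantity of interest is a moment,

\[ \mathbb{E}\big[\phi(x_1)\cdots\phi(x_n)\big] = \int d\mu[\phi]\;\phi(x_1)\cdots\phi(x_n). \]

The random field φ drawn from dμ is the local beable: a single field on spacetime, not an operator-valued distribution.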

To my mind, this has important implications for QFT and for collapse models. First, for collapse models, one might want to slightly reconsider the efforts that have been invested in phenomenology (and thus I must respectfully disagree with the latest blog post of Sabine Hossenfelder, where she advocates for more phenomenology in this context: I think that this time we need a bit more “talk” first). Indeed, if collapse can reasonably be hidden in existing sectors of the Standard Model, it means that the effects we currently consider to be typical signatures of collapse merely come from peculiar choices of non-relativistic models. Collapse models really can be seen as an interpretation of quantum theory (where “interpretation” is taken in the slightly improper sense it has nowadays, that is, as a complete theory underlying the operational formalism of quantum theory without modifying its predictions, insofar as the latter are well defined).

On the QFT front, this may also yield interesting things. The reformulation of QFT I obtain takes the form of a statistical field theory on R^4: I have a probability measure (or an object that is formally a probability measure), and the field one draws from this distribution can be considered to be all there is. It is the final output of the theory (further analysis then yields the operational prediction rules, but the latter are not primitive). The nice thing about a probability measure, and more generally about a theory that gives dynamics for “stuff”, is that it can be modified at will without becoming logically inconsistent. This is to be contrasted with an operational framework, which may become self-contradictory after the slightest modification.

Why is this helpful for QFT? Because of the need for regularization. QFT diverges nastily. This is why one usually computes within a regularization scheme before sending the cut-off to infinity (which then requires redefining the parameters and renormalizing the theory; but renormalization is, in spirit, something very different from regularization). But why can’t we just consider regularized theories right from the start, declare that they are the real thing, and, since the cut-off is far away, devise approximation schemes to compute predictions? In this picture, renormalization would simply be a method to relate the bare parameters of the model to what one measures, without any “fundamental” character. The issue is that, to my knowledge, regularized QFTs are never QFTs. To regularize a QFT, one needs to cut the higher momenta. This can be done by putting the theory on a lattice (as in lattice gauge theory), but then it is neither a field theory on a continuum nor a Lorentz-invariant theory. To cut the higher momenta in a covariant way, one would typically need higher derivatives in the Lagrangian, but because of the Ostrogradsky instability, the corresponding theory cannot be canonically quantized. It is nowadays popular to say that QFTs are effective theories, as a way to explain away the issues one usually encounters in the UV (such as the Landau pole of QED). However, a QFT cannot be the effective theory of its regularized version if the latter cannot be fundamental. This explains (a small part of) the appeal of string theory, which does have a UV cut-off without the need to break invariances.
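
To illustrate the higher-derivative route and why it fails quantum mechanically (a standard argument, in schematic notation): one can covariantly suppress high momenta with a Lagrangian such as

\[ \mathcal{L} = -\frac{1}{2}\,\phi\,(\Box + m^2)\left(1 + \frac{\Box}{\Lambda^2}\right)\phi, \]

whose propagator decays like 1/p^4 at large momenta. But the partial fraction decomposition of this propagator (written in Euclidean signature for simplicity),

\[ \frac{\Lambda^2}{(p^2+m^2)(p^2+\Lambda^2)} = \frac{\Lambda^2}{\Lambda^2-m^2}\left[\frac{1}{p^2+m^2} - \frac{1}{p^2+\Lambda^2}\right], \]

exhibits a second pole entering with a negative sign: upon canonical quantization it becomes a negative-norm (ghost) mode, which is the Ostrogradsky instability in momentum-space clothing.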

Once QFT is defined as a classical probabilistic theory of fields, regularization is a perfectly legal thing to do. Covariantly cutting off the highest momenta in the propagators, say by Pauli-Villars regularization, can be done at the fundamental level. In this new formulation, a regularized QFT is a QFT, and thus a possible well-defined foundation for physics.
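
Schematically (still in Euclidean signature), Pauli-Villars amounts to replacing the free two-point function by the subtracted one appearing above,

\[ G_\Lambda(p) = \frac{1}{p^2+m^2} - \frac{1}{p^2+\Lambda^2} = \frac{\Lambda^2-m^2}{(p^2+m^2)(p^2+\Lambda^2)}, \]

and the relative minus sign, fatal for a Hilbert space where it signals a negative-norm state, is harmless for a probability measure: G_Λ(p) remains positive for all Euclidean momenta (as long as Λ > m), so it is a perfectly acceptable covariance for a Gaussian random field.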

Of course, the draft is just a first exploration of this idea, and I don’t want to hide the technical difficulties that are still in the way. There is certainly still a lot of work to be done, especially at the quantitative level, to know exactly how fast the reduction of superpositions through the collapse mechanism occurs. The mathematical well-definedness of the objects used in the theory, even after regularization, certainly needs to be discussed thoroughly as well. Nevertheless, I remain very happy with this result, as it seems to me that it opens a whole range of new possibilities.

update 09/02/2017: I have put up a newer version of the draft. I had made a mess of the functional integral representation by trying to be clever with sources, so I have now rewritten everything in terms of asymptotic fermionic states. Hopefully it now makes more sense.

update 22/02/2017: The preprint is now on the arXiv. I am glad I was not flagged by the algorithmic crackpot detector. Hopefully this will provide me with feedback from a wider community.
