Thirty-four years ago I left physics with a master’s degree to start a nine-year stint doing AI/CS at Lockheed and NASA, followed by 25 years in economics. I loved physics theory, and given how far physics had advanced over the previous two 34-year periods, I expected to be giving up many chances for glory. But though I didn’t entirely leave (I’ve since published two physics journal articles), I’ve felt like I dodged a bullet overall; physics theory has progressed far less in the last 34 years, mainly because the data dried up:
One experiment after the other is returning null results: No new particles, no new dimensions, no new symmetries. Sure, there are some anomalies in the data here and there, and maybe one of them will turn out to be real news. But experimentalists are just poking in the dark. They have no clue where new physics may be to find. And their colleagues in theory development are of no help.
In her new book Lost in Math, theoretical physicist Sabine Hossenfelder describes just how bad things have become. Previously, physics foundations theorists were disciplined by a strong norm of respecting the theories that best fit the data. But with less data, theorists have turned to mainly judging proposed theories via various standards of “beauty” which advocates claim to have inferred from past patterns of success with data. Except that these standards (and their inferences) are mostly informal, change over time, differ greatly between individuals and schools of thought, and tend to label as “ugly” our actual best theories so far.
Yes, when data is truly scarce, theory must suggest where to look, and so we must choose somehow among as-yet-untested theories. The worry is that we may be choosing badly:
During experiments, the LHC creates about a billion proton-proton collisions per second. … The events are filtered in real time and discarded unless an algorithm marks them as interesting. From a billion events, this “trigger mechanism” keeps only one hundred to two hundred selected ones. … That CERN has spent the last ten years deleting data that hold the key to new fundamental physics is what I would call the nightmare scenario.
One bad sign is that physicists have consistently, confidently, and falsely told each other and the public that big basic progress was coming soon:
The second rule for inventing a new particle is that you need an argument for why it’s just about to be discovered, because otherwise nobody will care. This doesn’t have to be a good argument—everyone in the business wants to believe you anyway—but you have to give your audience an explanation they can repeat. …
Lies and exaggerations have become routine in proposal writing. …
This has resulted in decades of predictions for new effects that were always just about measurable with an upcoming experiment. And if that experiment didn’t find anything, the predictions were revised to fall within the scope of the next upcoming experiment.
Theorists don’t seem to have learned much from the data drought, as they tout the same sorts of theories, and predict similar rates of progress, as they did before informative data stopped. In addition, theorists are subject to many known cognitive and social biases; see many related book quotes at the end of this post. Perhaps most disturbing, physicists seem to be in denial about these problems:
My colleagues only laugh when I tell them biases are a problem, and they dismiss my “social arguments,” believing they are not relevant to scientific discourse. … Scientists trust in science. They’re not worried. “It’s the system,” they say with a shrug, and then they tell themselves and everybody willing to listen that it doesn’t matter, because they believe that science works, somehow, anyway. “Look,” they say, “it’s always worked.” And then they preach the gospel of innovation by serendipity. It doesn’t matter what we do, the gospel goes; you can’t foresee breakthroughs, anyway.
Of course physicists don’t really believe that “it doesn’t matter what we do”; they fight fiercely over funding, jobs, publications, etc.
Hossenfelder ends by saying the public can’t now trust the conclusions of most all who study the foundations of physics, as such people haven’t taken steps to address cognitive biases, offer balanced accounts of pros and cons, protect themselves from peer pressure, or find funding that doesn’t depend on their producing popular and expected results. But she doesn’t tell the public whom to believe instead.
To fix these problems, Hossenfelder proposes that theoretical physicists learn about and prevent biases, promote criticism, have clearer rules, prefer longer job tenures, allow more specialization and changes of fields, and pay peer reviewers. Alas, as noted in a Science review, Hossenfelder’s proposed solutions, even if good ideas, don’t seem remotely up to the task of fixing the problems she identifies. Sermons preaching good intentions usually lose against bad incentives, and the incentive problems here are big.
It seems to me that it will take much larger incentive changes to substantially deal with this problem. Let me take the rest of this post to elaborate.
To me, “science” is efforts to find theories that accurately predict important data, and efforts to find data that distinguishes between likely theories. The phrases “to find” are essential here. It is not enough to merely claim that you aim for this sort of theory or data; these must actually be primary aims driving your behavior. Without such aims, you may know science, use science, or help science, but you are not doing science. Many different motives, such as prestige, money, or altruism, can cause you to have such aims; the issue here is more results than feelings.
A fast enough flow of new theories and data can naturally create sufficient incentives for these aims. When the mutual testing of theory and data happens in times much shorter than a career, then theorists can gain by explaining new data, and experimenters gain by distinguishing theories. But in data droughts, theorists can have stronger incentives to join beauty fashion cabals, and data collectors can be tempted to ignore promising but unpopular theories.
Hossenfelder has convinced me that in fundamental physics today, we have reason to doubt how much such aims actually drive behavior. Yes, science is the aim they claim, but it looks more like their main aim is to appease local beauty fashions, fashions that are more rationalized than driven by an ability to explain future data.
One way to tilt fundamental physics theory back toward science is stronger longer term incentives. That is, we might induce current researchers to care more about matches between theory and data that may not be seen for decades. One approach might be to have scientists live in near poverty today, in the hopes of huge rewards later for their children or grandchildren. Alas, while we might find some who could be motivated this way, they probably don’t make the best scientists.
Fortunately, in a market economy it is possible to give long-term incentives to organizations that use short-term incentives to hire experts who help them. Yes, all else equal, it costs more to create long-term incentives than short-term ones. But when a cheap product doesn’t work, consider paying for a higher-quality one, even at a higher price.
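To see roughly why long-term incentives cost more, note that a reward paid decades from now must be discounted to a present value. Here is a back-of-the-envelope sketch; the 3% discount rate and $1M reward are purely illustrative assumptions, not figures from any actual proposal:

```python
# Why long-term incentives cost more: a payoff received decades from
# now is worth less today, so a patron must post more future money to
# create the same-sized present incentive. Numbers are illustrative.

def present_value(payoff, annual_rate, years):
    """Present value of a fixed payoff received `years` from now."""
    return payoff / (1.0 + annual_rate) ** years

# A $1M prize paid in 30 years, discounted at 3%/year, is worth only
# about $412k today -- so the patron must post roughly 2.4x as much
# future money as an equivalent short-term prize would cost now.
pv = present_value(1_000_000.0, 0.03, 30)
```

This is just compound discounting; the qualitative point survives any reasonable choice of rate, though the multiplier grows quickly with the horizon.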
In particular, I’ve proposed that science patrons subsidize long-term prediction markets on many particular questions in fundamental physics, and on the future-evaluated prestige of each paper, person, and institution. Eleven years ago, Hossenfelder didn’t think much of my earlier proposals:
The whole idea fails on a very obvious point: … the majority of scientists is not in academia because they want to make profit by investing smartly. Financial profit … is just not what drives most scientists. … No, the majority of experts wouldn’t pay any attention to that betting market.
To argue to the contrary, that my proposals are feasible, let me remind you all of what the stock market does today for firm incentives.
Today, the market prices of firms influence firm prestige, which influences how people associated with those firms are treated. When others look to invite speakers to conferences, or to quote experts in the media, or to hire new employees, they prefer to choose individuals from higher-priced firms. Also, market traders take the prestige of people and activities associated with a firm into account when setting the market price of that firm. So if there were markets estimating future historian evaluations of the prestige of papers, scientists, and institutions, we should expect a two-way feedback between these market prices and other markers of scientific prestige, such as publications and jobs.
Today, thirty year bonds are traded often. Thus there are financial actors who in effect care about what happens in thirty years. Some of these actors are organizations. Thus there can be organizations who in effect care about possible winnings in prediction markets several decades later. So it is possible to induce organized effort today via sufficiently subsidized long term prediction markets.
Today, many organizations, such as hedge funds, gain most of their income from trading in financial markets. They do this via paying for efforts to collect info, and then using that info to make trades that are more likely to win than to lose. Yes, this requires that there exist fools out there who take the losing side of these trades, but such fools exist in most financial markets, and if they didn’t exist we could produce the same effect via subsidized financial markets (such as via automated market makers).
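The “subsidized financial markets (such as via automated market makers)” mentioned above have a standard concrete form: a market maker that always quotes a price and whose worst-case loss (the patron’s subsidy) is bounded in advance. A minimal sketch using a logarithmic market scoring rule follows; the liquidity parameter, share counts, and the example question are all illustrative assumptions:

```python
import math

# Sketch of a subsidized automated market maker using a logarithmic
# market scoring rule (LMSR). All numbers here are illustrative.

def lmsr_cost(quantities, b):
    """Cost function C(q) = b * log(sum_i exp(q_i / b)).

    quantities[i] = outstanding shares of outcome i; b sets liquidity,
    and b * log(n) bounds the patron's worst-case subsidy.
    """
    m = max(quantities)  # subtract the max for numerical stability
    return m + b * math.log(sum(math.exp((q - m) / b) for q in quantities))

def lmsr_price(quantities, b, i):
    """Current price of outcome i, interpretable as its probability."""
    m = max(quantities)
    exps = [math.exp((q - m) / b) for q in quantities]
    return exps[i] / sum(exps)

def lmsr_trade_cost(quantities, b, i, shares):
    """What a trader pays to buy `shares` of outcome i (negative = sell)."""
    after = list(quantities)
    after[i] += shares
    return lmsr_cost(after, b) - lmsr_cost(quantities, b)

# A two-outcome market (say, "will experiment X confirm theory Y by
# 2050?") starts at even odds; buying "yes" shares raises the price.
q, b = [0.0, 0.0], 100.0
paid = lmsr_trade_cost(q, b, 0, 50.0)       # trader's cost for 50 shares
new_price = lmsr_price([50.0, 0.0], b, 0)   # market's updated estimate
```

The point of the design is that no opposing “fool” is needed for any given trade: the market maker itself takes the other side, and the patron’s total subsidy is capped at b·log(n) no matter what traders do.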
Hedge fund employees use many strategies to collect relevant info, including statistical analysis and complex computation. But many of them gain that info via gossip from a wide network of social contacts to whom they talk regularly, and whom they compensate in mostly informal ways. So the people who contribute relevant info to markets need not trade themselves, or be employees of firms that trade. The employees who collect info need not take all or even any of the risks of the trades that the info they collect induces. And employees of a firm focused on making long-term gains need not personally care much about the long term.
Similarly, science could be funded via long-term prediction markets. Given sufficient subsidies, hedge funds would appear that specialize in trading in such markets, and while some of those would specialize in short-term trades, others would specialize in strategies that sometimes require holding assets over long terms. Such funds could gather info in many ways, including via gossip and via hiring both theorists and data collectors to do research and then letting their patrons trade first on the resulting info. These hired scientists need not themselves trade, take financial risks, or care about the long term.
So in my imagined healthy future physics, there are still many “core” scientists who focus primarily on their theories or data collection efforts. But this group is no longer as autonomous a force, accountable only to other currently prestigious core scientists. Instead, some part of science funding goes to pay the overhead to create hedge funds, to give them long-term incentives to correct current market estimates, and to decide which core science efforts to fund and believe. Yes, that might be more overhead than today, but overhead worth its price.
If you or anyone saw a current research effort that looked hopeless to you, you could expect to profit by selling it short. And if you or anyone saw something that looked much more promising than its current prestige markers suggested, you could buy and expect to profit. Which is a lot more than you can usually do today.
Finally, here are those promised book quotes on biases in theoretical physics:
The criteria we use … are mostly social and aesthetic. And I doubt they are self-correcting.
There are lots of diseases in academia and one of them is that you keep doing what you’ve been doing. Somehow it turns out that everybody does the analytic continuation of what they’ve been doing for their PhD.
This pressure to please and publish discourages innovation: it is easier to get recognition for and publish research on already known topics than to pursue new and unusual ideas.
Being open about the shortcomings of one’s research program … means sabotaging one’s chances of future funding. We’re set up to produce more of the same.…
Even tenured researchers are now expected to constantly publish well-cited papers and win grants, both of which require ongoing peer approval. The more peers approve, the better. …
Probably the most prevalent brain bug in science is confirmation bias. …
We’ve always had cognitive and social biases, of course. … And science has progressed just fine, so why should we start paying attention now? (By the way, that’s called the status quo bias.) Larger groups are less effective at sharing relevant information. Moreover, the more specialized a group is, the more likely its members are to hear only what supports their point of view. …
Default assumption must be that theory assessment is both cognitively and socially biased unless steps are taken to address these issues. But no such steps are currently being taken.