
Pop-politics and pop-science

Physics and the other natural sciences tend to have an impressive level of rigour; softer sciences like sociology and political science have considerably less. In today's world, attention-economy mechanisms control which areas of a field of science people focus on. In this blog post, I argue that there is a specific and very dangerous interaction between these two problems, and that it mainly manifests in the area of politics and political science.

A word of warning:

The hierarchy of sciences

Let us begin with the obligatory xkcd cartoon.

The original joke of the comic is that the various sciences are just different layers of applying the sciences higher up (rungs of the theory ladder). For instance, quantum electrodynamics (QED) fully describes how certain types of elementary particles interact with each other via the electromagnetic force. Given enough brains and approximations, we could theoretically rederive the entirety of chemistry starting from QED. We do quickly run into the problem of emergent behaviour, though. While the theory of QED is perfectly well suited for calculating the interactions of two or three elementary particles with each other, its equations become exponentially more complex once you start adding more particles. A molecule of table sugar (sucrose) contains about 180 electrons. Its QED field equations are mind-bogglingly complicated and impossible to solve with our current technology and methods.

This is why we don’t just need QED, but also theories lower on the “theory ladder”. Chemistry might not be able to describe two-particle interactions like Compton scattering, but it does make some very good predictions about sugar molecules. It tells us that when we combine a sucrose molecule with a hydrolase, we’ll get a glucose and a fructose molecule. Chemistry has perfected the art of ignoring the details of QED while still accurately, if qualitatively, describing what happens to molecules on a “large scale”. We can go further down: In principle, we could fully describe the inner workings of a cell if we knew the exact positions and chemical properties of every single molecule inside the cell. However, that’s not feasible in practice, so we need molecular biology, which makes larger-scale predictions about cells. Next, we have macrobiology, making predictions about whole tissues, organs and organisms. If we focus on the brain, we get the additional layers “neurology” and “psychology”. Further down the theory ladder, we get sociology and political science. All of these are incredibly useful and indispensable for our modern society. Unfortunately, stepping down the theory ladder comes at a terrible cost.

The heavy price of the theory ladder

But let’s take a closer look at what happens when we go down a rung on the theory ladder. QED makes precise predictions about the exact elementary particle densities at all times. QED gives you the magnetic moment of the electron to better than 10 significant figures. However, when you ask a chemist about the cross-section of a spin-up electron scattering with an RHCP photon hitting it at \theta = 45^\circ, they’ll just give you a dumb look. Chemistry just makes predictions about the rough types of molecules involved, and sometimes about reaction speed. We can now describe sugar molecules splitting up, but we’ve also lost an awful lot of precision on the single-electron level.

This only gets worse when you go to biology. Molecular biology only makes rough predictions about what kinds of molecules will probably be at some spot. Macrobiology doesn’t even do that. A macrobiologist might well never hear about electron orbitals over the course of their entire career and still make useful predictions about what happens when we inject a monkey with bacteria cultures. But their knowledge is nowhere near the rigour of theoretical physics anymore. If you ask a theoretical physicist to calculate some scattering process between two particles, you’ll get an interaction amplitude with 10 significant digits as a result. If you ask a macrobiologist for their prediction about the health state of a monkey after we inject it with E. coli cultures, they’ll give you a “well, maybe” answer, with a 70%-ish probability combined with several dozen “but if…”s.

A direct consequence of this loss of scientific rigour is that we need to be more wary of cognitive biases. Take confirmation bias for instance - the bias that causes people to be attached to their own beliefs and only seek out evidence that proves them right. Let’s say that my theoretical physicist colleague and I have a disagreement about the behaviour of spin-1/2 fermions when subjected to a magnetic field. No matter how strong her or my confirmation bias on our respective viewpoints, and no matter how heated the disagreement, it will only take us a couple of hours and a bit of spilled ink to definitively settle the question once and for all. Either she or I will admit to having been wrong, and we’ve both learned something from the discussion.

What if chemists or molecular biologists have a similar disagreement, though? Their standard theories and models are already a bit more fuzzy, so there might be some room for interpretation. Does the aforementioned hydrolase enzyme attack the O bond between the fructose and glucose units? Or would one of the OH groups destabilize the enzyme’s protein structure first? There’s no mathematically precise way to tell, so the heated discussion might drag on for significantly longer. But in any case, the two of them can just go to the lab and settle the question by experiment.

However, this method of dispute resolution does not work much further down the theory ladder. Macrobiologists already need to resort to randomized double-blind trials to make sure that their personal opinions don’t influence the outcome of an experiment. When two sociologists have a heated disagreement, there is no coherent procedure they could use to get rid of their emotional cognitive biases. In their heated argument, they throw words like “diachrony”, “individualism”, “communitarianism”, and “structuralism” at each other, both of them constantly finding new reasons why the arguments and definitions of the other person are nonsense. In the end, they part ways and found two separate schools of thought2, write voluminous and incomprehensible books about their respective opinions, and attract attention - and hence secure university funding - by making polarizing statements about culture war topics.3

Therefore, in the softer sciences, it is extremely important to rigorously train all the experts to recognize and counteract cognitive biases in themselves and others - simply because the softer sciences are way more fragile in the face of emotionalized irrational thinking. As always, you need very good expert knowledge - but ALSO good rationalist training - to make good scientific predictions. Sadly, this is not being done systematically, but that’s a different topic.

Pop-science (derogatory)

In the community room of the theoretical physics department of my university, there is a dedicated box for nonsense letters sent to us by random overconfident people who think they are master physicists with new, groundbreaking theories.

Thanks to the likes of Stephen Hawking and Neil deGrasse Tyson, the general public now has a rough idea of what fundamental physics is about. Terms like “quantum entanglement”, “Higgs boson”, “black holes”, “theory of everything” and “big bang” have entered mainstream vocabulary. While this is certainly nice for securing public funding4, it can also give people a wrong impression of what theoretical physics is about. Every physicist knows the overconfident kind of person who has read A Brief History of Time by Hawking, then thinks that they are an expert in the matter and proceeds to make up their own theories. From them, you will hear stuff like

Based on heuristics, you conclude that this person likely has no idea what they are talking about. You proceed to ask them “what is the quantum-mechanical momentum operator” and “what is the symbol used for the flat spacetime metric” - questions that every third-semester physics student is able to answer - and if they don’t instantly respond with \hat{p} = -i\hbar \nabla and \eta_{\mu\nu}, you have definite proof that they are overestimating themselves and not qualified to speak with that much confidence on such matters.

This behavioural pattern exists in all scientific disciplines. I will call it “pop-science” (in the derogatory sense) for the purposes of this post, and call serious attempts at explaining science to the public “science communication”.

Gatekeeping? Yes, and that’s a good thing

If you (the reader) are not a physicist, you might cry out in indignation about our supposed elitism and gatekeeping right now. If we are laughing at people who are interested in, but not knowledgeable about, theoretical physics, how do we expect newcomers to find their way into our field?

Well, no. Physics is maybe 1% “groundbreaking philosophical” questions like quantum entanglement or whatever. Actual physicists are also interested in the “boring” 99% of physics, like calculus, solving differential equations, calculating orbital transitions of hydrogen-like atoms, deriving Feynman rules, Lie groups, and so on. And you need to have a perfect grip on the boring 99% in order to make informed statements about the deep philosophical 1%. If someone is only interested in the “philosophical implications” of quantum mechanics, but does not know what a Hilbert space is, they won’t get anywhere in physics, because once they try reading a serious QM book, they’ll nope out because the first half is just boooring linear algebra.

All the people who work in theoretical physics are people who were interested in the boring 99% and started out small. The journey of an average physicist probably starts with them learning about Newton’s laws and calculating the optimal angle at which to throw a baseball - for fun. Not because they are interested in black holes or whatever, but because it’s fun solving equations and calculating integrals. Some of them go on to study black holes, and some of them study fluid dynamics - both are at a similar level of difficulty.
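
To give a concrete taste of that “boring” starter calculation (a standard Newtonian mechanics exercise, sketched here purely for illustration): ignoring air resistance, a ball thrown with speed v at angle \theta over flat ground has range R(\theta) = v^2 \sin(2\theta) / g. Setting dR/d\theta = 0 gives \cos(2\theta) = 0, i.e. the optimal angle is \theta = 45^\circ. Nothing philosophically deep - just the kind of thing a future physicist finds fun to work out.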

So when someone comes to your department and starts talking about consciousness and quantum portals and whatever, it is safe to just throw them out. They would never have become a good physicist, because those who are only interested in the “exciting” 1% won’t study the “boring” 99% that’s needed to properly understand the “exciting” 1%. Polemically speaking, they’ll fail the very first class in the physics curriculum because they can’t solve the quadratic equation of horizontal throws, and go on to study philosophy5. On the other hand, if that person instead had weird ideas about how the Navier-Stokes equations of fluid dynamics work, I’d be much more inclined to talk to them.

Pop-cybersecurity, or “I want to become a master hacker”

Interestingly enough, the same pattern manifests in computer science - specifically, in information security. There are an awful lot of people who have seen “hackers” in movies or read about them in newspapers and want to become one themselves. They never care about basic algorithms and data structures, network topology, SQL or pointers - they just want to learn “th3 l33t h4xx”, go to cybersecurity seminars, and cry about the Flipper Zero being banned without even knowing the difference between AM and FM, making all the serious people around them die of cringe.

Needless to say, the people who actually become good infosec experts and cybercriminals are those who were interested in the basics - people who cared about the “boring” 99%, i.e. wrote webservers for fun, toyed around with PHP and SQL, gradually came to realize the weaknesses in the technologies they were studying and THEN became interested in the “interesting” 1% of cybersecurity. The other way around won’t get you anywhere.
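
To make “realizing the weaknesses” concrete: the classic first weakness such a person stumbles over is SQL injection. Here is a minimal, self-contained sketch (a toy example of my own, not something from the post) showing the vulnerable pattern and the boring-but-correct fix:

import sqlite3

# Tiny in-memory database standing in for a web app's user table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "' OR '1'='1"  # classic injection payload typed into a login form

# Naive, vulnerable way: string concatenation lets the input rewrite the query.
vulnerable = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())   # returns every row despite the bogus name

# Boring, correct way: a parameterized query treats the input as data, not SQL.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing

It is exactly this kind of unglamorous detail - quoting, escaping, parameter binding - that the type-A interested person learns first, and that the “l33t h4xx” crowd never bothers with.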

Terminology

For simplicity, I will call the “boring” 99% of a field “subset A”, and the flashy 1% with nice pop-science implications “subset B”. Similarly, I will call people who are genuinely interested in subset A of a field of study and who may or may not go on to study subset-B topics “type-A interested”, and people who only care about subset B of a field “type-B interested”.

Why pop-politics is infinitely more dangerous

So far, I’ve only talked about pop-science in STEM disciplines. As argued before, the STEM disciplines high up the theory ladder (and computer science) are very resistant to type-B pop-science bullshittery on their own. But it gets worse and worse once you take a few steps down. For instance, in macrobiology, it’s already more difficult to stringently prove anti-vaxxers wrong - they might pull out some outlier studies to support their point, and then you have to explain the concept of p-value hacking to them. It’s still possible, but it takes some time. However, it’s practically impossible in sociology and political science.
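
For readers who haven’t met the term: p-value hacking roughly means running many tests (or many studies) and reporting only the ones that happen to cross the significance threshold. A minimal simulation of one facet of it - run enough studies where there is no real effect, and some will look “significant” purely by chance (my own toy sketch, with made-up names and numbers):

import random

def null_study(n=50):
    """One fake 'study': compare two groups drawn from the SAME distribution,
    so any apparent effect is pure noise. Returns a two-sample z statistic."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_diff = sum(a) / n - sum(b) / n
    se = (2 / n) ** 0.5  # standard error of the difference of the two means
    return abs(mean_diff / se)

random.seed(0)
num_studies = 100
# |z| > 1.96 corresponds roughly to p < 0.05 for a two-sided test
significant = sum(null_study() > 1.96 for _ in range(num_studies))
print(f"{significant} of {num_studies} pure-noise studies look 'significant' at p < 0.05")

On average about five of the hundred pure-noise studies will clear the bar; report only those, and you have manufactured “evidence” out of nothing.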

In politics, you have the same subset A vs subset B structure. At the time of writing (May 2024 CE), common subset B topics include superficial discussion of

while subset A contains the details of the aforementioned stuff, and other political topics like political history, sociological theory, basically every other geopolitical conflict6, government subsidies and taxation, school systems, or details of judicial systems. In order to make actually informed judgements in subset-B discussions, you need to know the subset-A parts of these topics inside and out.

An example from personal experience: I was once involved in a debate about wealth taxes at uni. The proponents were making arguments along the lines of “capitalism is unfair”, while the opponents were arguing that wealth taxes would scare rich people away from the country and hurt the economy. The debate went on and on for an hour or so, and it went absolutely nowhere. At the end, someone had the idea of asking whether a wealth tax had already been tried in our country. None of us knew. Like, come on. If you take a side in such an argument, you absolutely should know about historical precedents. We looked it up on Wikipedia and found out that there were several historical precedents for wealth taxes in our country, along with statistics about their effectiveness. If you’re a type-A interested person, you already know about that sort of fact and are able to make an informed decision about wealth taxes. If you’re a type-B interested person, you will probably enter the debate with preconceived notions about “capitalism bad” or “socialism bad”, and nothing will come out of it.

Another example: US gun control. Let’s say we have a group of people who have never heard about this issue before. You present them with the bare situation - gun control is very lax in the US, a lot of people die in school shootings and similar events because of it, but guns in the hands of the common people can be a powerful deterrent against autocratic governments - and no additional facts. Starting from that point, the group can already have vivid, emotional debates about the philosophy of freedom vs. security, or about the stories of children killed in school shootings, or about brave freedom fighters brandishing their guns in the face of tyrannical governments.

But absolutely nothing will ever come out of this discourse. If you want to have an actually meaningful debate about gun control, you need to look at historical precedents like Australia, at the actual statistics of gun-related deaths and how they relate to other causes of death, examine the plausibility of civilian militias fighting for democracy based on the history of democracy, et cetera.

So let’s say that an expert on gun control walks into the aforementioned debate. They might make some contributions by citing statistics, studies, and historical precedents - and, as a last resort, point out that the other people in that debate have no knowledge of these topics. But is that actually going to convince anyone? From my personal experience, no. Nope. The pro-gun-control people will come up with reasons why looking at Switzerland (a country with high gun ownership but few gun deaths) is pointless and why everything that can be learned from it doesn’t matter in the present discussion (and likewise for the anti-gun-control people and Australia). There is no trace of mathematical rigour that classifies political judgements into “correct” and “incorrect” the way there is in physics. Instead, we have to rely on the pattern recognition software in our brains, which is a lot better if we’ve spent years studying subset-A problems beforehand, and near useless if we barge into a topic with preconceived, emotionalized notions. No matter how nonsensical an assertion like “Switzerland/Australia is not relevant to gun control policies” is, type-B interested people supporting the respective viewpoints will genuinely believe it. “it’s not complicated!!! THINK OF THE CHILDREN DYING IN SCHOOL SHOOTINGS!!!!!!!” and “what is wrong with you!! I NEED TO DEFEND MYSELF FROM THE BADDIES!!!!!” are emotional catch-alls that will always triumph over actual factual debate.

This is the dichotomy between pop-politics and serious politics: Type-A interested people will know about all the subset-A topics, and their mental pattern recognition for politics will be well-calibrated such that they are able to form quality opinions on subset-B topics. However, type-B interested people will lack the experience and fine-tuning necessary, only focus on pop-politics topics, and tend to hold polarizing, one-sided and extremist views. In contrast to STEM, it is almost impossible to properly call people out for doing pop-politics.

A problem aggravated by attention economy

I’ve argued that in STEM, it is very easy to brush aside the pop-science types by exposing their ignorance of the fundamental subset-A concepts of the field they claim to be knowledgeable in. In politics, it is not that easy. Plus, the laws of the attention economy apply. The opinions that get the most attention are the most shrill, extreme and polarizing ones, and factual discussion of type-B topics is drowned out by emotional discussion. It is safe to say that the average quality of political discourse on type-B topics is frustratingly low. Type-B topics tend to dominate newspaper headlines and social media discourse, with no productive debate emerging, and political discourse on social media is almost exclusively type-B. The fast-paced, short-attention-span interaction on social media makes this far worse7. On top of that, there is a very worrying tendency of people taking their information about politics exclusively from social media feeds8.

I am not particularly interested in or knowledgeable about politics myself - and by that, I mean subset-A politics. I am, however, subject to the same emotionalizing attention-grabbing mechanisms that make people interested in subset-B politics. I am not pretending to be better than everyone else here; I am fully guilty of engaging in pop-politics too. To counteract this, I try to economize the time I am willing to spend on politics by prioritizing issues according to heuristic utilitarian criteria. For instance, climate change has the potential to gravely affect nearly all 8 billion people living on this planet and irreversibly wreak havoc on our living conditions, so it is probably justified to spend a lot of time researching and thinking about it. In contrast, the average regional ethnic/religious conflict affects somewhere around 10 million people. Therefore, as a first cause-neutral approximation ignoring tractability and neglectedness9, a person assigning priorities to political causes should spend about 1/1000th of the time they spend on climate change on e.g. the Darfur conflict10. This is how I try to force myself to divide “thinking CPU cycles” between subset-A and subset-B politics, and it often entails stopping myself from thinking or reading about flashy subset-B subjects in favour of “boring” subset-A ones.
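
If you prefer to see that heuristic as (toy) code rather than prose - purely my own illustration, with the cause list and figures as placeholder assumptions taken from the paragraph above:

# Toy cause prioritization: attention share proportional to the number of
# people affected (the crude cause-neutral approximation described above).
causes = {
    "climate change": 8_000_000_000,          # roughly everyone on the planet
    "average regional conflict": 10_000_000,  # ballpark figure from the post
}
total = sum(causes.values())
for name, affected in causes.items():
    share = affected / total
    print(f"{name}: {share:.3%} of the political 'thinking budget'")
# Prints roughly 99.9% vs 0.12%, i.e. on the order of the 1/1000 ratio above.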

Possible solutions to the problem

Here’s a bunch of unordered ideas that may or may not be helpful:

Update 2024-10-15: Edited the second-to-last paragraph to make it clear that I consider myself 100% guilty of the bad things described here too.


  1. And if you’re something like a theologian or poststructuralist, please turn around and LEAVE NOW and DO NOT INTERACT WITH ME please.↩︎

  2. Perhaps the best indicator of the health of a discipline of science is whether there are multiple “schools of thought”. In physics, chemistry and biology, there is exactly one school of thought, because all the facts are clear enough such that no serious scientists disagree with them - and if the facts aren’t clear enough, everyone agrees that they aren’t clear enough, but doesn’t take sides. In psychology, political science and sociology, you have lots of different schools of thought that wildly disagree with each other - indicating that at least one of them (but probably all of them) must be making lots of unfounded assumptions, and that the internal metascientific mechanisms of that discipline to establish scientific truths (like peer review) are massively malfunctioning. To put it slightly differently: If I, a theoretical physicist, were to proclaim that I am a “Newtonian” or “Lagrangian” physicist and that I do not believe in “Einsteinian” physics, I’d get laughed out of the room. On the other hand, economists get away with declaring that they are “Marxists” or “Keynesians” or whatever.↩︎

  3. I honestly, unironically believe that it would do the social and political sciences more good than harm if we collectively fired everyone working in these disciplines and replaced them with STEM people who have received some cognitive bias training.↩︎

  4. For instance, I think that the science communication around the discovery of the Higgs boson was a stroke of genius. The common sci-comm explanation - “the Higgs boson/field explains why particles have mass” - was borderline nonsense. The actual issue was that the left-handedness of the weak interaction was incompatible with rigid gauge invariance of Dirac mass terms, and the W and Z boson masses were incompatible with soft gauge invariance of the gauge boson kinetic terms. You can’t explain this to a non-physicist though, so instead the sci-comm people came up with something that vaguely goes in that direction, and somehow they managed to convey the importance of that discovery with this hacky explanation. But hey, it gets us public funding, so I’m not complaining.↩︎

  5. Funny things happen when philosophers stick their head into STEM. At my uni, the philosophy curriculum has a course on propositional logic. At some point, all the physicists started signing up for the course, and only came for the exam to collect their credit points, simply because the course was so ridiculously easy by physicist standards. My uni ended up banning physicists from taking that course.↩︎

  6. I challenge the reader to name one ethnicity involved in the Myanmar civil war or in the Tigray war. (Note that I’m not pretending to be better here. I also had to look up Wikipedia to answer that question.)↩︎

  7. Many people blame the engagement-maximizing nature of the mainstream social media algorithmic feeds. This is certainly a factor, but after some time on Mastodon, I can attest that this happens independently of these algorithms as well. The difference is that on Mastodon, you get to choose whether you want to subject yourself to this type of polarized debate.↩︎

  8. There are some arguments to be made on how sourcing information from social media can have the advantage of “uncovering” traditionally ignored problems by bypassing the mechanisms of the journalistic establishment. However, I mostly disagree. There are no mechanisms on any social media platform to ensure that the actual attention being spent on different causes is anywhere near their actual importance, or to reliably verify the information being passed around. While traditional journalism is a news-aggregating machine that has systematic biases, social media is more like a hotbed for Dawkins-type viral memes with no quality control at all.↩︎

  9. To be precise: This approximation assumes that the amount of good done in a certain political cause per unit effort is proportional to the number of people affected by the problem. This is obviously a bad approximation, but humans suck at thinking in orders of magnitude, so it’s a good way to get a first sense of scale. If you want to be more precise, you might want to look into disability-adjusted life years.↩︎

  10. Which is very obviously not the case at the time of writing.↩︎