Fixing Science Using A New Science Of Science

How do we understand and learn about the world? By gathering information. If we perform enough experiments and write down what happens, eventually we should be able to understand how everything works. This approach seems logical, and it is the basis of most science, particularly biology, psychology, and the social sciences. It underlies much of the progress of scientific knowledge over the past hundred years. Surprisingly, however, this empirical approach turns out to be inherently limited in what it can do.

A recent scientific paper from the New England Complex Systems Institute [1] demonstrates, using mathematical proofs, why a different approach to science can result in much more rapid progress. In doing so, the paper shows that scientific methodology itself can be analyzed scientifically: a science of science.

The issue is that a data-driven approach does not include a way to generalize from the observations that have been made to observations that might be made under even slightly different conditions. If we don't have a way to infer from one condition to another, we have to perform that experiment too. The paper analyzes the challenge of performing all of those experiments and concludes that it is so large that there must be a better way!

The problem starts with the realization that, in the real world, you will never see exactly the same conditions twice. The space of possibilities is simply too vast. The usual way around this is for scientists to strictly limit the variability in their experiments, carefully defining the independent variables and monitoring only a select number of dependent variables. This approach relies on a host of assumptions that are frequently invalid. Most importantly, those assumptions are not themselves subject to the same criteria of empirical testing.

Today, we can explore the literature and even consider building a catalog of all existing experiments. The paper, however, considers mathematically what it would take to extend this approach to all possible experiments and resulting observations. On the positive side, if we were somehow able to construct a catalog of all possible experiments, then to answer a question about the world we would simply have to cross-reference the answer from this master list.

But how practical is this approach? Information theory can help determine its feasibility. With such a catalog, how much information would be needed to answer what would happen in any real-world condition?

The master list of experiments and findings would have to exist in a communicable code. The problem is the sheer number of possible experiments. Even if the written information about each experiment could be contained in a single atom, we would run out of atoms in the universe long before completing the database. No amount of data we can collect will ever bridge the gap. This is a quantitative statement about the number of possible experiments that would need to be done, and it follows from the assumption, built into existing methodology, that to know what will happen in a given circumstance we have to observe it directly.
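To make the scale concrete, here is a minimal back-of-envelope sketch (my own illustration with assumed numbers, not the paper's calculation). Suppose each experiment is specified by some number of independent yes/no conditions, and recording each outcome takes just one atom:

```python
import math

# Assumption for illustration: ~10^80 atoms in the observable universe.
ATOMS_IN_UNIVERSE = 10**80

# If an experiment is specified by n independent binary conditions,
# the catalog needs 2**n entries -- one atom each, by assumption.
# Solve 2**n = 10**80 for n:
n_max = math.log(ATOMS_IN_UNIVERSE, 2)
print(f"2^n exceeds 10^80 once n > {n_max:.0f}")  # n > ~266

# A modest 300 binary conditions already overwhelms the universe:
print(f"2^300 = {2**300:.2e} possible experiments")  # ~2.04e+90
```

In other words, a few hundred binary experimental choices are already enough to exhaust any conceivable physical storage.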

Behaviorism is a classic illustration of the limits of empiricism. Examples include Pavlov's experiments on the behavior of dogs and Skinner's similar experiments on people. Under controlled conditions, limiting the stimuli and monitoring only a small number of behaviors, empirical results can be recorded. But if we gradually expanded the number of options, the number of possible results would grow exponentially, and recording them all would be impossible.

For example, if we want to study human psychology, we have to identify how a person responds to different conditions, a type of experiment often done in neuroscience and psychology. But testing how a person responds to a stimulus as simple as a written paragraph would require more than 10^80 tests, more than the number of atoms in the universe. Pavlov studied dogs salivating in response to ringing bells. A hundred years later, all of these behaviorist experiments put together don't tell us much about what people do.
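A rough check of that 10^80 figure (an illustrative count under a simplifying assumption, not the paper's exact calculation): with only the 26 lowercase letters, even a very short string already has more variants than atoms in the universe:

```python
ATOMS_IN_UNIVERSE = 10**80
ALPHABET = 26  # lowercase letters only -- ignoring case, spaces, punctuation

# Find the shortest string length whose variant count exceeds 10^80.
length = 1
while ALPHABET**length <= ATOMS_IN_UNIVERSE:
    length += 1

print(f"26^{length} = {ALPHABET**length:.2e}")  # 26^57 ≈ 4.50e+80
```

A real paragraph, with hundreds of characters plus spaces and punctuation, has unimaginably more variants, so exhaustively testing responses to written paragraphs is physically impossible.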

Double-blind medical trials, the gold standard of medical research used to test and approve medical interventions, are another example. In the simplest cases, there are two groups of subjects, those who receive the treatment and those who do not, and essentially one observation is made: whether the treatment is successful or not. Yet there are numerous examples of medicines that received approval only to later reveal dangerous side effects. The possible interactions between different conditions and treatments within a patient's body are so numerous that a study could never include enough subjects to detect all possible side effects.
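The arithmetic behind this is the same combinatorial explosion. A toy calculation (my own illustrative numbers, not drawn from the paper or any specific trial):

```python
import math

# Hypothetical: each patient is described by 40 binary attributes
# (age band, comorbidities, concurrent medications, genetic markers, ...).
N_ATTRIBUTES = 40
profiles = 2**N_ATTRIBUTES
print(f"distinct patient profiles: {profiles:,}")  # ~1.1 trillion

# Even probing only pairwise attribute interactions means hundreds of
# strata, each needing enough subjects for statistical significance:
print(f"pairwise interactions: {math.comb(N_ATTRIBUTES, 2)}")  # 780

# A large Phase III trial enrolls a few thousand subjects, so nearly
# every profile -- and any side effect specific to it -- goes untested:
SUBJECTS = 3_000
print(f"profiles per subject enrolled: {profiles // SUBJECTS:,}")
```

With roughly a trillion possible patient profiles and only thousands of subjects, almost every combination of conditions the treatment will meet in practice is never observed in the trial.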

Ultimately the problem is that this empirical approach focuses on individual experiments instead of on how experiments can be used to produce robust generalizations. The New England Complex Systems Institute has developed multiscale information theory [2] to address this challenge. Rather than collecting all possible observations about a system, the objective is to determine which information is actually important. This approach uses theory to make the best use of experiments. The key insight is using observations to validate generalizations (what one experiment can tell you about others) rather than treating them as a long list of individual results.
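As a loose intuition for the multiscale idea (a toy sketch of my own, not the formalism of [2]): a coarse-grained variable can be enormously cheaper to record than the full microscopic state, while still carrying the information that matters at the larger scale:

```python
import math
import random
from collections import Counter

def entropy(samples):
    """Shannon entropy (bits) of the empirical distribution of samples."""
    counts = Counter(samples)
    total = len(samples)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

random.seed(0)

# Fine scale: 16-bit microstates driven by a hidden collective bias.
def draw_microstate():
    bias = random.choice([0.2, 0.8])  # hidden large-scale state
    return tuple(int(random.random() < bias) for _ in range(16))

micro = [draw_microstate() for _ in range(10_000)]
coarse = [sum(m) > 8 for m in micro]  # majority vote: a single bit

print(f"fine-scale entropy:   {entropy(micro):.1f} bits per observation")
print(f"coarse-scale entropy: {entropy(coarse):.1f} bits per observation")
# The one-bit coarse variable tracks the hidden collective state, so a
# description at that scale generalizes to microstates never observed.
```

The point is not this toy model itself but the principle it illustrates: choosing the right scale of description lets a few observations stand in for astronomically many.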

The complexity of our world, biological and social, is straining the limits of empirical science. Basing scientific progress on a strictly empirical approach, even with massive amounts of data, is not enough. A reframing of science in favor of using data effectively is necessary to meet these challenges.

These findings are described in the article entitled The limits of phenomenology: From behaviorism to drug testing and engineering design, recently published in the journal Complexity. This work was conducted by Yaneer Bar-Yam from the New England Complex Systems Institute.

References:

  1. Yaneer Bar-Yam, The limits of phenomenology: From behaviorism to drug testing and engineering design, Complexity (2015). doi: 10.1002/cplx.21730
  2. Yaneer Bar-Yam, From big data to important information, Complexity (2016). doi: 10.1002/cplx.21785