Fixing Science Using A New Science Of Science

How do we understand and learn about the world? By gathering information. If we perform enough experiments and write down what happens, eventually we should be able to understand how everything works. This approach seems logical, and it is the basis of most science, particularly biology, psychology, and the social sciences. It underlies much of the progress of scientific knowledge over the past hundred years. Surprisingly, however, this purely empirical approach turns out to be inherently limited in what it can do.

A recent scientific paper from the New England Complex Systems Institute [1] demonstrates, using mathematical proofs, why a different approach to science can result in much more rapid progress. In doing so, the paper shows that scientific methodology can itself be analyzed scientifically — a science of science.

The issue is that a data-driven approach does not include a way to generalize from the observations that have been made to observations that might be made under even slightly different conditions. If we don’t have a way to infer from one condition to another, we have to perform that experiment too. The paper analyzes the challenge of performing all of those experiments and concludes that it is so large that there must be a better way!

The problem starts with realizing that, in the real world, you will never see the exact same conditions twice. The space of possibilities is simply too vast. To get around this, scientists usually try to strictly limit the variability in their experiments, carefully defining the independent variables and monitoring only a select number of dependent variables. This approach relies on a host of assumptions that are frequently invalid. And, most importantly, these assumptions are not themselves subject to the same criteria of empirical testing.

Today, we can explore the literature and even consider building a catalog of all existing experiments. The paper, however, considers mathematically what it would take to extend this approach to all of the possible experiments and resulting observations. On the positive side, if we were somehow able to construct a catalog of all the possible experiments, then whenever we wanted to answer a question about the world we would simply have to cross-reference the answer from this master list.
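
As a toy sketch (not the paper's formalism), such a master list amounts to a giant lookup table keyed by fully specified conditions. The entries below are invented purely for illustration:

```python
# A toy sketch of the "master list": a lookup table from fully specified
# conditions to observed outcomes. All names and entries are invented.
catalog = {
    ("drug_A", "dose_low", "fasted"): "no effect",
    ("drug_A", "dose_high", "fed"): "blood pressure drops",
}

def answer(condition):
    # The catalog can only answer questions it has literally seen before;
    # any unseen combination of conditions demands a brand-new experiment.
    return catalog.get(condition, "unknown -- run the experiment")

print(answer(("drug_A", "dose_low", "fasted")))  # cross-referenced from the list
print(answer(("drug_A", "dose_low", "fed")))     # slightly different -> unknown
```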

But how practical is this approach? Information theory can help determine its feasibility: how much information would be needed to record what would happen in every possible real-world condition?

The master list of experiments and findings would have to exist in a communicable code. The problem is the sheer number of possible experiments. Even if the written information about each experiment could be contained in a single atom, we would run out of atoms in the universe long before completing the database. No amount of data we can collect will ever bridge the gap. This is a quantitative statement about the number of possible experiments that would need to be done, and it follows directly from the assumption, built into existing methodology, that to know what will happen in a given circumstance we have to observe it directly.
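
To see how quickly this blows up, here is a back-of-the-envelope sketch, assuming each experiment is specified by a set of yes/no conditions and using the common order-of-magnitude estimate of 10^80 atoms in the observable universe:

```python
import math

ATOMS_IN_UNIVERSE = 10**80  # common order-of-magnitude estimate

# Assume each experiment is specified by n yes/no conditions, giving 2**n
# distinct experiments. Even if recording one outcome took only one atom,
# the catalog outgrows the universe almost immediately.
n = 0
while 2**n <= ATOMS_IN_UNIVERSE:
    n += 1
print(f"{n} binary conditions suffice: 2^{n} is about 10^{n * math.log10(2):.0f} "
      f"experiments, more than the atoms in the universe.")
```

Running this gives n = 266: a catalog covering experiments with just a few hundred binary factors already cannot be written down, no matter the storage medium.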

Behaviorism is a classic illustration of the limits of empiricism. Examples include Pavlov’s experiments on the behavior of dogs and Skinner’s similar conditioning experiments. Under controlled conditions that limit the stimuli and monitor only a small number of behaviors, empirical results can be recorded. But if we gradually expanded the number of options, the number of possible results would grow exponentially, and recording them all would be impossible.

For example, if we want to study human psychology, we have to identify how a person responds to different conditions, a type of experiment often done in neuroscience and psychology. But mapping how a person responds to a stimulus as simple as a written paragraph would require more than 10^80 tests, greater than the number of atoms in the universe, because that is roughly how many distinct paragraphs there are. A hundred years after Pavlov studied dogs salivating in response to ringing bells, all of these behaviorist experiments put together still don’t tell us much about what people do.
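
The 10^80 figure is easy to check. Assuming a minimal 27-symbol alphabet (26 letters plus a space), quite short texts already outnumber the atoms in the universe:

```python
import math

ATOMS_EXPONENT = 80  # atoms in the universe ~ 10**80

# Assume a minimal 27-symbol alphabet: 26 letters plus a space.
# Real text, with case, punctuation, and digits, blows up even faster.
alphabet = 27
length = math.ceil(ATOMS_EXPONENT / math.log10(alphabet))
print(f"A text of only {length} characters already has 27**{length}, "
      f"about 10^{length * math.log10(alphabet):.0f}, possible variants.")
```

The answer is 56 characters, far shorter than a paragraph, so exhaustively testing responses to text stimuli is out of the question.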

Double-blind medical trials, the gold standard of medical research used to test and approve medical interventions, are another example. In the simplest cases, there are two groups of subjects, those who receive the treatment and those who do not, and essentially one observation is made: whether the treatment is successful or not. Yet there are numerous examples of medicines that received approval only to later reveal dangerous side effects. The possible interactions between different conditions and treatments within a patient’s body are so numerous that a study could never include enough subjects to detect all possible side effects.
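
A rough sketch of why, treating each patient attribute that might interact with a treatment as a simple yes/no factor (a deliberate simplification):

```python
# Each attribute that can vary across patients (another medication, an age
# bracket, a genetic marker, ...) doubles the number of distinct subgroups
# in which a side effect could hide.
for k in (10, 20, 40):
    subgroups = 2**k
    print(f"{k} attributes -> {subgroups:,} subgroups; a trial would need "
          f"at least that many subjects to see each combination even once.")
```

With only 40 such attributes there are over a trillion subgroups, while even the largest trials enroll tens of thousands of subjects.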

Ultimately the problem is that this empirical approach focuses on individual experiments instead of how experiments can be used to produce robust generalizations. The New England Complex Systems Institute has developed multiscale information theory [2] to address this challenge. Rather than collecting all possible observations about a system, the objective is to determine what information is actually important. This approach uses theory to make the best use of experiments. The key insight is using observations to validate generalizations — what one experiment can tell you about others — rather than treating them as a long list of individual results.
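
As a deliberately simple illustration of that insight (much simpler than the multiscale information theory of [2]), one can fit a candidate generalization to a few experiments and then use further experiments to validate it rather than merely add them to a list; the numbers below are invented:

```python
# A toy illustration only; the paper's multiscale information theory [2]
# is far more general. The data below are invented for the example.
measured = {1.0: 2.1, 2.0: 3.9, 3.0: 6.2}   # condition -> observed outcome
held_out = {4.0: 8.1, 5.0: 9.8}             # experiments reserved for validation

# Candidate generalization: outcome is proportional to the condition.
slope = sum(y / x for x, y in measured.items()) / len(measured)

# Validate the generalization on conditions it was not built from.
max_error = max(abs(slope * x - y) for x, y in held_out.items())
print("generalization validated" if max_error < 0.5 else "generalization fails")
```

Once validated, the generalization answers questions about conditions never directly tested, which is exactly what a lookup table of individual results cannot do.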

The complexity of our world, biological and social, is straining the limits of empirical science. Basing scientific progress on a strictly empirical approach, even with massive amounts of data, is not enough. A reframing of science in favor of using data effectively is necessary to face these challenges.

These findings are described in the article entitled The limits of phenomenology: From behaviorism to drug testing and engineering design, recently published in the journal Complexity. This work was conducted by Yaneer Bar-Yam from the New England Complex Systems Institute.

References:

  1. Yaneer Bar-Yam, The limits of phenomenology: From behaviorism to drug testing and engineering design, Complexity (2015). doi: 10.1002/cplx.21730
  2. Yaneer Bar-Yam, From big data to important information, Complexity (April 25, 2016). doi: 10.1002/cplx.21785

Comments

  • Yaneer, Yes, nice work, and we definitely need a "science of science" to straighten this out. It's very laudable to aim at giving AI some semblance of environmental consciousness. I don't think, though, that will be of much real help for our task of managing our ever faster changing and increasingly complicated relationships with each other and the natural world.

    The deep problem with AI is very closely related to the deeper problem with human thought,... that we have relied on the economic power of defining our solutions conceptually, creating mental models of cause and effect to replace nature's more complex structures and behaviors. Our models are made to be internally consistent... but they are abstractions with no actual environment except our assumptions.

    It's embedded in science too, in what we have to view as a major error: focusing entirely on equations that redefine physical things as numerical operators. What science is then most missing is any direct relation to the forms of organization found in the natural world, made mostly of environmentally embedded self-organizing systems. Such natural systems all seem to originate from their own individual growth processes, as original emergent systems of natural organization. These natural systems do develop their own internal complex designs, as conceptual thinking does too, but they do so by a process that relies on developing and maintaining roles in their environments that mature as the emerging systems develop.

    So, yes, AI seems doomed in a vain attempt at taking over nature having no clue... but it could be a learning experience too! ;-)

  • A timely article, Yaneer Bar-Yam, which identifies well the (current) limits of Machine Learning and AI - the capability to generalize.

    At the present stage of development this is where humans can excel because of our capabilities in "discerning patterns" (and disregarding detailed data that we consider to have low significance) and "discerning structures" (such as causation factors we think are significant). These human capabilities are largely "heuristic", but they have enabled science to proceed. When machines can learn such capabilities, progress will, I anticipate, be accelerated on multiple fronts.

    On the other hand the scientific discoveries that will emerge are themselves becoming progressively more intricate, the sources of their information bases wider, and the compliance limitations stricter; these effects will absorb advanced machine learning capabilities.

    Another aspect to consider is the very nature of "hypotheses" and their validation. In my view, throughout history all the great scientific discoveries have been made by "disproving" accepted or "settled" theory. In planning scientific endeavours, it will be just as or more fruitful to adopt a contrarian mindset - putting as much effort into disproving what we think we know - as to seek to reinforce our biases.

    All challenges to, but not ultimately beyond the bounds of, future AI.