
Could using some terms to describe certain phenomena be misleading?


I am carrying out a series of comparative analyses of Low Complexity Regions in several organisms. Given that these regions show a wide range of lengths (measured in amino acids) and differ in their characteristics, I would like to use "mosaicism" in my report to describe this interspersion. Since mosaicism is normally used to describe an organism that contains two or more genetically distinct cell populations, I am unsure whether using the word here could be misleading, or whether, given that the term itself evokes a set of small pieces composing a picture, it might be considered rather elegant, since it is clear enough what I am talking about.

Obliged


Some mammals can breathe through their butt, scientists discover

This finding could lead to an alternative treatment for severe Covid-19.

A long time ago, ancient fish swam in a harsh aquatic environment, with scarce light and limited oxygen. Many died but some survived and evolved, eventually giving rise to Misgurnus anguillicaudatus — a type of loach fish common in parts of East Asia. The loach fish’s secret survival mechanism? A unique form of intestinal breathing via their posterior.

In other words: loach fish breathe through their butt. But scientists now report this evolutionary breathing mechanism may not be limited to fish.

Turns out certain mammals can also breathe through their intestines using a process researchers describe as “enteral ventilation via anus.” The breathtaking finding is described in a new paper published on Friday in the Cell Press journal Med.

There’s also a timely reason for researchers to explore this technology: Covid-19. The study team argues the method explored here could eventually be used to help humans experiencing lung failure. This intestinal breathing technique, driven by externally supplied oxygen, could aid patients who are not helped by, or lack access to, current tools like ventilators.

What’s new — For the first time, researchers have proof that intestinal breathing can occur in mammals — albeit, with a little intervention.

When the research team delivered either oxygen gas or an oxygen-rich liquid (perfluorocarbon) into the rectums of both rodents and pigs, a procedure known as enteral ventilation via anus (EVA), they found the animals were capable of intestinal breathing.

In fact, the procedure boosted oxygen levels in animals experiencing oxygen deprivation, increasing their chances of survival.

“A proof-of-principle EVA approach is effective in providing [oxygen] and alleviating respiratory failure symptoms in two mammalian model systems,” the team writes.

The animals experienced no apparent side effects from the somewhat unorthodox treatment.

The controversy — The fact that land-based mammals and aquatic species share the capacity for this breathing is a remarkable finding for evolutionary biology. But it’s a pretty controversial idea within the medical research community, according to the scientists.

Previous research suggests oxygen infusion from this type of procedure may help children experiencing lung failure, but not all scientists agreed with these conclusions. Furthermore, researchers don’t agree on what part of the gut is most important for intestinal breathing.

“We speculate that [other researchers] are mostly focused on upper GI tract — such as stomach, small intestine — whereas our protocol focused on GI tract, most remarkably the rectum as the main site for breathing,” lead author Takanori Takebe, of the Tokyo Medical and Dental University and the Cincinnati Children's Hospital Medical Center, tells Inverse.

How it works — It sounds stranger than fiction: Scientists helped pigs and rodents breathe through their gut by delivering oxygen into the animals’ butts via an enema.

Takebe breaks down what it likely looks like when mammals breathe using this little-understood intestinal mechanism:

  1. Scientists deliver oxygen gas or an oxygen-rich liquid perfluorocarbon to the animal’s rectum via the EVA method.
  2. Scientists deprive the animals’ bodies of oxygen. Critically, the oxygen provided during EVA helps keep these animals alive in these hypoxic conditions, circulating around the rectum and gut.
  3. An exchange of gases — oxygen and carbon dioxide — occurs, as would normally happen during breathing. Oxygen and carbon dioxide travel between the gut, bloodstream, and heart, supplying the body with oxygen.

Finally: When using the liquid perfluorocarbon, some of the liquid is later excreted from the anus. This procedure can get a little messy.

The researchers tested their experiment on mice and pigs to confirm intestinal breathing could work in mammals of different sizes — and it did.

In mice, the researchers found they were able to reverse hypoxia for 60 minutes — possibly a life-saving amount of time. The scientists also used control groups — animals that did not receive the EVA — to confirm that their treatment improved the animals’ oxygen levels.

For example, two mice — one that had received the EVA and one that had not — walked side by side. The EVA-treated mouse had a “statistically significant” increase in oxygen compared to the mouse that did not receive the treatment.

Why it matters — The researchers weren’t just poking around animal butts for fun or to simply confirm an evolutionary hunch. They were hoping to harness techniques that could one day treat lung failure in humans.

Previous research has used perfluorochemicals to treat lung injuries, a technique underlying what’s known as “liquid ventilation,” but this new study paves the way for a broader application of EVA to help humans in respiratory distress.

“Due to ease of the method — simple enema — it can be potentially used even at an understaffed hospital which is not able to use high-end medical procedures such as a ventilator or ECMO (extracorporeal membrane oxygenation),” Takebe says.

Patients suffering from severe Covid-19 are often placed on ventilators or may undergo ECMO, which requires doctors to pump and oxygenate a patient’s blood using a machine.

But during times of peak Covid-19, ventilators and ECMO machines fall into short supply. This intestinal breathing technique, facilitated by external ventilation, could potentially work as an alternative treatment for Covid-19 patients, Takebe says.

“We can potentially develop a new medical device, aimed at increasing oxygen level in humans,” he says. “If granted, clinicians can explore the option to support respiratory complications associated with many infectious diseases including COVID-19.”

What’s next — The study holds promise for future medical treatments, but scientists still need to answer three questions before we can implement these treatments in human patients.


The Energy Story

Overview of the Energy Story

Whether we know it or not, we tell stories that involve matter and energy every day; we just don't often use the terminology associated with scientific discussions of matter and energy.

The setup: a simple statement with implicit details
You tell your roommate a story about how you got to campus by saying, "I biked to campus today". This simple statement contains several assumptions that are instructive to unpack, even if they may not seem critical to include explicitly in a casual conversation between friends about transportation choices.

An outsider's reinterpretation of the process
To illustrate this, imagine an external observer, for instance an alien being watching the comings and goings of humans on earth. Without the benefit of knowing much of the implied meanings and reasonable assumptions that are buried in our language, the alien's description of the morning cycling trip would be quite different from your own. What you described efficiently as "biking to campus" might be more specifically described by the alien as a change in location of a human body and its bicycle from one location (the apartment, termed position A) to a different location (the university, termed position B). The alien might be even more abstract and describe the bike trip as the movement of matter (the human body and its bike) from an initial state (at location A) to a final state (at location B). Furthermore, from the alien's standpoint what you'd call "biking" might be more specifically described as the use of a two-wheeled tool that couples the transfer of energy from the electric fields in chemical compounds to the acceleration of the two-wheeled tool-person combo and heat in its environment. Finally, buried within the simple statement describing how we got to work is also the tacit understanding that the mass of the body and bike were conserved in the process (with some important caveats we'll look at in future lectures) and that some energy was converted to enable the movement of the body from position A to position B.

Details are important. What if you owned a fully electric bike and the person you were talking with didn't know that? What important details might this change about the "everyday" story you told that the more detailed description would have cleared up? How would the alien's story have changed? In what scenarios might these changes be relevant?

As this simple story illustrates, creating a full description of a process includes some accounting of what happened to the matter, what happened to the energy, and almost always some description of a mechanism explaining how the changes in matter and energy of the system were brought about.

To practice this skill, in BIS2A we will make use of something we like to call "The Energy Story". You may be asked to tell an "energy story" in class and to use the concept on your exams. In this section, we focus primarily on introducing the concept of an energy story and explaining how to tell one. It is worth noting that the term "energy story" is used almost exclusively in BIS2A (and has a specific meaning in this class). The term will not appear in other courses at UC Davis (at least in the short term), or if it appears, it is not likely to be used in the same manner. You can think of "The Energy Story" as a systematic approach to creating a statement or story that describes a biological process or event.

Definition 1: Energy Story

An energy story is a narrative describing a process or event. The critical elements of this narrative are:

  1. Identifying at least two states (e.g. start and end) in the process.
  2. Identifying and listing the matter in the system and its state at the start and end of the process.
  3. Describing the transformation of the matter that occurs during the process.
  4. Accounting for the "location" of energy in the system at the start and end of the process.
  5. Describing the transfer of energy that happens during the process.
  6. Identifying and describing mechanism(s) responsible for mediating the transformation of matter and transfer of energy.

A complete energy story will include a description of the initial reactants and their energetic states as well as a description of the final products and their energetic states after the process or reaction is completed.

We argue that the energy story can be used to communicate all of the useful details that are required to describe nearly any process. Can you think of a process that cannot be adequately described by an energy story? If so, describe such a process.

Example 2: Energy Story Example

Let us suppose that we are talking about the process of driving a car from "Point A" to "Point B" (see the figure).

Figure 1: A schematic of a car moving from position "Point A" at the start to position "Point B" at the end. The blue rectangle at the back of the car represents the level of gasoline; the purple squiggly line near the exhaust pipe represents the exhaust; the squiggly blue lines on top of the car represent sound vibrations; and the red shading represents areas that are hotter than at the start. Source: Created by Marc T. Facciotti (Own work)

Let's step through the Energy Story rubric:

1. Identifying at least two states (e.g. start and end) in the process.
In this example we can easily identify two states. The first state is the non-moving car at "Point A", the start of the trip. The second state, after the process is done, is the non-moving car at "Point B".

2. Identifying and listing the matter in the system and its state at the start and end of the process.
In this case we first note that the "system" includes everything in the figure: the car, the road, the air around the car, etc.

It is important to understand that we are going to apply the physical law of conservation of matter. That is, in any of the processes that we will discuss, matter is neither created nor destroyed. It might change form, but one should be able to account for everything at the end of a process that was there at the beginning.

At the beginning of the process, the matter in the system consists of:
1. The car and all the stuff in it
2. The fuel in the car (part of the car's contents, but worth singling out for our bookkeeping)
3. The air (including oxygen) around the car.
4. The road
5. The driver

At the end of the process, the matter in the system is distributed as follows:
1. The car and all the stuff in it is in a new place (let's assume that, aside from the fuel and position, nothing else changed)
2. There is less fuel in the car and it too is in a new place
3. The air has changed - it now has less molecular oxygen, more carbon dioxide and more water vapor.
4. The road (let's assume it didn't change - other than a few pebbles moved around)
5. The driver (let's assume she didn't change, though we'll see by the end of the term that she did, at least a little). But the driver is now in a different place.

3. Describing the transformation of the matter that occurs during the process.

What happened to the matter in this process? Thanks to a lot of simplifying assumptions, we see that two big things happened. First, the car and its driver changed positions: they went from "Point A" to "Point B". Second, we note that some of the molecules in the fuel, which used to be in the car as a liquid, have changed form and are now mostly in the form of carbon dioxide and water vapor (the purple blob coming out of the tailpipe). Some of the oxygen molecules that used to be in the air are now also in a new place, as part of the carbon dioxide and water that left the car.
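
To make the matter accounting concrete, here is a rough sketch of the underlying chemistry, assuming for illustration that gasoline behaves like pure octane (real gasoline is a mixture of hydrocarbons):

$$2\,\mathrm{C_8H_{18}} + 25\,\mathrm{O_2} \longrightarrow 16\,\mathrm{CO_2} + 18\,\mathrm{H_2O}$$

Every atom on the left-hand side is accounted for on the right-hand side: the carbon and hydrogen from the fuel and the oxygen from the air all end up in the exhaust, which is exactly the conservation-of-matter bookkeeping the rubric asks for.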

4. Accounting for the &ldquolocation&rdquo of energy in the system at the start and end of the process.
It is again important to understand that we are going to invoke the physical law of conservation of energy. That is, we stipulate that the energy in the system cannot be created or destroyed, and therefore the energy that is in the system at the start of the process must still be there at the end of the process. It may have been redistributed, but you should be able to account for all the energy.
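
Stated compactly: if $E_i$ denotes the energy held in each part of the system (the fuel, the car, the air, the road, the driver, and the molecular motions of all of these), then the accounting requirement is simply

$$\sum_i E_i(\text{start}) = \sum_i E_i(\text{end})$$

The individual terms may change, but the total may not.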

At the beginning of the process, the energy in the system is distributed as follows:
1. The energy tied up in the associations between atoms that make up the matter of the car.
2. The energy tied up in the associations between atoms that make up the fuel.
3. The energy tied up in the associations between atoms that make up the air.
4. The energy tied up in the associations between atoms that make up the road.
5. The energy tied up in the associations between atoms that make up the driver.
6. For all things above we can also say that there is energy in the molecular motions of the atoms that make up the stuff.

At the end of the process, the energy in the system is distributed as follows:
1. The energy tied up in the associations between atoms that make up the matter of the car.
2. The energy tied up in the associations between atoms that make up the fuel.
3. The energy tied up in the associations between atoms that make up the air.
4. The energy tied up in the associations between atoms that make up the road.
5. The energy tied up in the associations between atoms that make up the driver.
6. For all things above we can also say that there is energy in the molecular motions of the atoms that make up the stuff.

This is interesting in some sense because the two lists are about the same. Yet we know that the amount of energy stored in the car has decreased because there is less fuel, so something must have happened.

5. Describing the transfer of energy that happens during the process.
In this particular example it is the transfer of energy among the components of the system that is most interesting. As we mentioned, there is less energy stored in the gas tank of the car at the end of the trip because there is now less fuel. We also know intuitively (from our real-life experience) that the transfer of energy from the fuel to something else was instrumental in moving the car from "Point A" to "Point B". So, where did this energy go? Remember, it didn't just disappear. It must have moved somewhere else in the system.

Well, we know that there is more carbon dioxide and water vapor in the system after the process. There is energy in the associations between those atoms (atoms that used to be in the fuel and air). So some of the energy that was in the fuel is now in the exhaust. Let's also draw from our real-life experience again and state that we know that parts of our car have gotten hot by the end of the trip (e.g. the engine, transmission, wheels/tires, exhaust, etc.). For the moment we'll just tap our intuition and say that we understand that making something hot involves some transfer of energy. So we can reasonably postulate that some of the energy in the fuel went (directly or indirectly) into heating the car, parts of the road, the exhaust, and thus the environment around the car. Some energy also went into accelerating the car from zero velocity to whatever speed it traveled, but most of that eventually became heat when the car came to a stop.

The main point is that we should be able to add up all the energy in the system at the beginning of the process (in all the places it is found) and at the end of the process (in all the places it is found), and those two values should be the same. Our actual examples in class will be simpler than this one, but this example provides you with an opportunity to think about these ideas in a well-understood context. Our goal here is to instill an intuitive sense of the nature of energy transfers. Our examples from biology, mostly involving molecules that you cannot see, are more abstract and so not the easiest to grasp intuitively. Hopefully by the end of the quarter you will have developed an "intuitive feel" for the energetics of these chemical changes.

6. Identifying and describing mechanism(s) responsible for mediating the transformation of matter and transfer of energy.

Finally, it is useful to understand how those transformations of matter and transfers of energy might have been facilitated. For the sake of brevity, in this example we might just say that there was a complicated mechanical device (the engine) that facilitated the conversion of matter and the transfer of energy around the system and coupled them to the change in position of the car. Someone interested in engines would, of course, give a more detailed explanation.

In this example we made a bunch of simplifying assumptions to highlight the process and to focus on the transformation of the fuel, and that's fine. The more you understand about a process, the finer the details you can add. Note that you can use the Energy Story rubric to describe your understanding of (or to look for holes in your understanding of) nearly any process, certainly in biology. In BIS2A we'll use the Energy Story to build an understanding of processes as varied as biochemical reactions, DNA replication, and the function of molecular motors.

First: We will be working many examples of the energy story throughout the course - do not feel that you need to have mastery over this topic today.

Second: Nevertheless, while it is tempting to think all this is superfluous or not germane to your study of biology in BIS2A, let this serve as a reminder that your instructors (those creating the course midterm and final assessments) view it as core material. We will revisit this topic often throughout the course, but you'll need to become familiar with some of the basic concepts now.

This is important material and an important skill to develop - do not put off studying it because it doesn't "look" like Biology to you today. The academic term moves VERY quickly and it will be difficult to catch up later if you don't give this some thought now.


How to Rewrite the Laws of Physics in the Language of Impossibility

Constructor theory grew out of work in quantum information theory. It aims to be broad enough to cover areas that can’t be described in the traditional ways of thinking, such as the physics of life and the physics of information.

Amanda Gefter

They say that in art, constraints lead to creativity. The same seems to be true of the universe. By placing limits on nature, the laws of physics squeeze out reality’s most fantastical creations. Limit light’s speed, and suddenly space can shrink, time can slow. Limit the ability to divide energy into infinitely small units, and the full weirdness of quantum mechanics blossoms. “Declaring something impossible leads to more things being possible,” writes the physicist Chiara Marletto. “Bizarre as it may seem, it is commonplace in quantum physics.”

Marletto grew up in Turin, in northern Italy, and studied physical engineering and theoretical physics before completing her doctorate at the University of Oxford, where she became interested in quantum information and theoretical biology. But her life changed when she attended a talk by David Deutsch, another Oxford physicist and a pioneer in the field of quantum computation. It was about what he claimed was a radical new theory of explanations. It was called constructor theory, and according to Deutsch it would serve as a kind of meta-theory more fundamental than even our most foundational physics — deeper than general relativity, subtler than quantum mechanics. To call it ambitious would be a massive understatement.

Marletto, then 22, was hooked. In 2011, she joined forces with Deutsch, and together they have spent the last decade transforming constructor theory into a full-fledged research program.

The goal of constructor theory is to rewrite the laws of physics in terms of general principles that take the form of counterfactuals — statements, that is, about what’s possible and what’s impossible. It is the approach that led Albert Einstein to his theories of relativity. He too started with counterfactual principles: It’s impossible to exceed the speed of light; it’s impossible to tell the difference between gravity and acceleration.

Constructor theory aims for more. It hopes to provide the principles behind a vast class of theories of physics, including the ones we don’t even have yet, like the theory of quantum gravity that would unite quantum mechanics with general relativity. Constructor theory seeks, that is, to provide the mother of all theories — a complete “Science of Can and Can’t,” the title of Marletto’s new book.

Whether constructor theory can really deliver, and how much it truly differs from physics as usual, remains to be seen. For now, Quanta Magazine caught up with Marletto via Zoom and by email to find out how the theory works and what it might mean for our understanding of the universe, technology, and even life itself. The interview has been condensed and edited for clarity.

At the heart of constructor theory is the feeling that there’s something missing in our usual approach to physics.

The standard laws of physics — such as quantum theory, general relativity, even Newton’s laws — are formulated in terms of trajectories of objects and what happens to them given some initial conditions. But there are some phenomena in nature that you can’t quite capture in terms of trajectories — phenomena like the physics of life or the physics of information. To capture those, you need counterfactuals.

Which are?

The word “counterfactual” is used in various ways, but I mean a specific thing: A counterfactual is a statement about which transformations are possible and which are impossible in a physical system. A transformation is possible when you have a “constructor” that can perform a task and then retain the capacity to perform it again. In biology, we call that a catalyst, but more generally we can call it a constructor.

In the current approach to physics, some laws already have this counterfactual structure — the conservation of energy, for example, is the statement that it is impossible to have a perpetual motion machine.
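
To make the definition above concrete, here is a toy sketch (my illustration, not from the interview) of a constructor in Python: an object that causes a transformation on a substrate while retaining its capacity to do so again, much like a catalyst in chemistry.

```python
# Toy illustration of a "constructor": it performs a task on a substrate
# and emerges unchanged, able to perform the same task again.
# (Illustrative only; constructor theory itself is far more general.)

class Constructor:
    def __init__(self, task):
        self.task = task  # a function mapping an input state to an output state

    def perform(self, substrate):
        # The substrate is transformed; the constructor is not,
        # so its capacity to perform the task is retained.
        return self.task(substrate)

# A constructor that flips a bit, reused on fresh substrates:
flip = Constructor(lambda bit: 1 - bit)
print(flip.perform(0))  # 1
print(flip.perform(0))  # 1 -- same task, performed again
```

In this language, a transformation counts as possible when such a constructor for it can exist, and impossible when none can.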


Related Biology Terms

  • Primary Production – The process of converting energy from an inorganic source, such as sunlight, into biological energy, usually glucose.
  • Niche – A role or position that a creature can fill within an ecosystem.
  • Nutrient cycling – The process through which different elements pass from organism to organism, and are used in different ways or returned to the environment.
  • Biosphere – The sum of all ecosystems on the planet, acting as one ecosystem.

1. A scientist is studying the structure of a specific protein. He writes a paper on its shape and on which molecules it changes. Is this ecology?
A. Yes
B. No

2. A beaver cuts down trees, drags them into a stream, and floods an area to create a pond it can live in. What is this behavior called?
A. Habitat Destruction
B. Niche Construction
C. Forest Thinning

3. An ecologist studies a pack of hyenas and their interactions with the local lions. Which type of ecology would best describe this study?
A. Organismal Ecology
B. Population Ecology
C. Community Ecology


How Complex Wholes Emerge From Simple Parts

You could spend a lifetime studying an individual water molecule and never deduce the precise hardness or slipperiness of ice. Watch a lone ant under a microscope for as long as you like, and you still couldn’t predict that thousands of them might collaboratively build bridges with their bodies to span gaps. Scrutinize the birds in a flock or the fish in a school and you wouldn’t find one that’s orchestrating the movements of all the others.

Nature is filled with such examples of complex behaviors that arise spontaneously from relatively simple elements. Researchers have even coined the term “emergence” to describe these puzzling manifestations of self-organization, which can seem, at first blush, inexplicable. Where does the extra injection of complex order suddenly come from?

Answers are starting to come into view. One is that these emergent phenomena can be understood only as collective behaviors — there is no way to make sense of them without looking at dozens, hundreds, thousands or more of the contributing elements en masse. These wholes are indeed greater than the sums of their parts.

Another is that even when the elements continue to follow the same rules of individual behavior, external considerations can change the collective outcome of their actions. For instance, ice doesn’t form at zero degrees Celsius because the water molecules suddenly become stickier to one another. Rather, the average kinetic energy of the molecules drops low enough for the repulsive and attractive forces among them to fall into a new, more springy balance. That liquid-to-solid transition is such a useful comparison for scientists studying emergence that they often characterize emergent phenomena as phase changes.

Our latest In Theory video on emergence explains more about how throngs of simple parts can self-organize into a more extraordinary whole.


Asking the right questions

In 2010, a record-breaking heat wave swept through Russia, driving temperatures in some places above 100 degrees Fahrenheit. According to some estimates, the extreme temperatures contributed to the deaths of more than 50,000 people.

Two separate studies attempted to quantify the influence of climate change on that event and appeared to come to very different conclusions, inspiring a confusing series of headlines in the news. One research paper, published in Geophysical Research Letters, suggested that the heat wave was mainly the product of natural climate variations, while the other, in Proceedings of the National Academy of Sciences, claimed that human-caused climate change was a major factor.

"That, of course, sounded as if they were contradictory," said Otto, the Oxford attribution expert. For a brief time, scientists were bemused&mdashthe two sets of findings had to be at odds with one another.

But in a separate paper, published in 2012 in Geophysical Research Letters, Otto, Allen and several other colleagues demonstrated that the two studies were actually investigating two different questions — and their conclusions were compatible.

The first study, they found, explored the extent to which climate change had affected the heat wave's magnitude, or severity, and concluded that natural climate variations were mainly accountable. The second had investigated global warming's influence on the heat wave's overall probability of occurring. It's possible for climate change to have a significant effect on one factor, but not the other, for the same event, Otto and her colleagues pointed out.

Today, scientists still generally agree that it's impossible to attribute any individual weather phenomenon solely to climate change. Storms, fires, droughts and other events are influenced by a variety of complex factors. And they're all acting at once, including both natural components of the climate system and sometimes unrelated human activities. For instance, a wildfire may be made more likely by hot, dry weather conditions, and by human land-use practices.

But what scientists can do is investigate the extent to which climate change has influenced a given event. Generally, researchers do this with the help of climate models, which allow them to run simulations accounting for the influence of climate change alongside simulations that assume that climate change did not exist. Then they compare the outcomes. The focus is typically on highly unusual or even unprecedented events where the influence of human-caused climate change, as opposed to natural climate variability, is likely to be clearer.
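
As a minimal sketch of that comparison (with made-up numbers and a toy distribution; real attribution studies use large ensembles of climate-model runs), the probability-based calculation might look like this in Python:

```python
import numpy as np

# Hypothetical peak-summer temperature anomalies (deg C) from two toy
# "ensembles": a factual world with human-caused warming and a
# counterfactual world without it. All values here are invented.
rng = np.random.default_rng(0)
factual = rng.normal(loc=1.0, scale=1.5, size=100_000)
counterfactual = rng.normal(loc=0.0, scale=1.5, size=100_000)

threshold = 4.0  # how extreme the observed event was, in the same units

# Probability of exceeding the threshold in each simulated world
p1 = np.mean(factual >= threshold)
p0 = np.mean(counterfactual >= threshold)

risk_ratio = p1 / p0   # how many times more likely with warming
far = 1.0 - p0 / p1    # fraction of attributable risk

print(f"P(event | warming):    {p1:.4f}")
print(f"P(event | no warming): {p0:.4f}")
print(f"Risk ratio:            {risk_ratio:.1f}")
print(f"FAR:                   {far:.2f}")
```

The risk ratio answers the probability question (how much more likely did warming make an event at least this extreme?), while the fraction of attributable risk expresses the same comparison as the share of the event's likelihood attributable to the warming signal.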

Certain types of events lend themselves to analysis better than others. For instance, researchers have high confidence when investigating heat waves, droughts or heavy precipitation. But they have less confidence when it comes to hurricanes and other more complex phenomena.

Still, scientists are investigating all kinds of weather events. The special issue of the Bulletin of the American Meteorological Society issued last month included about two dozen papers on a variety of extreme events from 2016, ranging from snowstorm Jonas to the heat-induced bleaching of the Great Barrier Reef.

It also contained some surprises: Three papers, for the first time in the Bulletin's history, suggested that the studied events not only were influenced by climate change but could not have occurred without it (Climatewire, Dec. 14, 2017). The studies determined that the record-breaking global temperatures in 2016 (the hottest year ever recorded), extreme heat in Asia and an unusually warm "Blob" of water off the coast of Alaska would all have been impossible in a world where human-caused climate change did not exist.

Scientists have cautioned that the findings don't necessarily overturn the existing narrative that no single event can be attributed to climate change. Even events that would not have been possible without warming are still influenced by the Earth's natural climate and weather systems. But the research does make it clear that the planet has reached a new threshold in which climate change has become not only a component of extreme weather events but an essential factor for some.

As scientists continue to investigate the weather and climate events that reflect the changing planet, the two questions asked by the Russian heat wave studies — one focusing on probability, and the other on magnitude — have emerged as two main approaches used in attribution studies. The probability approach is perhaps most significant from a policy perspective, Otto suggested, because it helps identify the types of events that might become more common in the future and where they may occur.

The second method, sometimes called the "anatomy of an extreme event," advances scientists' understanding of the components that cause these events, and how changes to the climate system may affect them.

Both approaches are strengthening the body of evidence that climate change can influence the kinds of damaging weather events formerly thought of as "natural" disasters. As a result, some experts now believe that extreme event attribution could be the cutting edge not only of climate science but of climate litigation, as well.


Can history be a science?

It would be pointless to try to discuss the question, “Can history be a science?” without clarifying its central terms, “history” and “science”. For there is no general agreement about what these terms mean, and indeed it is doubtful that they mean anything very specific, taken in isolation from the various contexts of discourse in which they may appear. A discussion based upon vague and contested terms is bound to be without profit. What we need to do is to stipulate meanings for the sake of our present discussion, after which we can reflect upon the conclusions to which our stipulations have led us. Perhaps we will be satisfied with our work, or perhaps we will become convinced that the stipulations that we made were misleading or fruitless.

Let us first say that science is the systematic and critical search for the apposite understanding of law-governed phenomena, a search that is grounded in the application of recognized standards of evidence, inference, and sound practice.

By saying that science is systematic, we mean that one scientific investigation takes account of, and is correlated with, others, both past and present. Science as a human activity is collective and cooperative, at least globally speaking, and the various sciences are interconnected, with results in one area having significance for other areas. Scientific projects are not undertaken at random, or in isolation from previous work. Speaking of science as systematic supposes a community – however diffuse and inexactly circumscribed – within which investigations, hypotheses, and findings are communicated and which takes collective responsibility for the practice of science.

By saying that science is critical, we mean that all investigations, hypotheses, and results communicated to the scientific community are meant to be received and evaluated with an eye to the methodological standards recognized and supported by that community.

Taking up a view set out by Aristotle, it is here stipulated that science is concerned only with law-governed phenomena. This does not imply that only such phenomena are worthy of study, nor that other phenomena could not be studied in a critical and systematic way, but it does imply that any study of non-law-governed phenomena would not be scientific. The significance of this limitation to law-governed phenomena is in fact uncertain, since it is not entirely apparent – and is indeed debated – which phenomena may be considered to be law-governed and which not. It is not settled how the notion of “law-governed” is to be understood, and even if this were stipulated, the matter would still be unclear as to cases. Phenomena assumed to be law-governed might turn out not to be so, and the converse is likewise possible. Whether or not a given type of phenomenon is law-governed is not self-evident; it is a matter that can only be resolved through investigation, and such an investigation may be quite subtle and long-reaching. We will have more to say about this matter presently.

By the apposite understanding of law-governed phenomena is meant the sort of understanding appropriate to law-governed phenomena as law-governed phenomena. Just because a given phenomenon is law-governed does not mean that a given approach to understanding it must be concerned with its law-governed nature. For example, suppose for the sake of argument that meanings are law-governed. A theory of meaning might be aimed at describing the principles that govern meaning. Now literary criticism might aim at grasping the meaning of a certain text, say, a poem by Yeats. But the point might be to interpret the text – we can call the understanding aimed at interpretive understanding – and giving the interpretation might not appeal in any way to the principles governing meaning. Indeed, this attempt at understanding might be completely unconcerned with the question whether or not meanings are, or are not, law-governed.

The two last exegetical points indicate that our definition of science supports a certain idea as to how scientific understanding is to be achieved, although the idea is not strictly entailed by our definition. The idea is that science aims at discovering the laws or principles that govern various domains of phenomena and at explaining the phenomena that fall within those domains by showing how they derive from the laws that govern them. This is scientific explanation as conceived by John Stuart Mill, Carl Hempel, and a host of others. The apposite understanding of law-governed phenomena is achieved through providing explanations of this kind.

When we characterize science as a search for understanding grounded in the application of recognized standards of evidence, inference, and sound practice, we refer once more to the methodological standards recognized and supported by the scientific community. These, we said, underlie the critical aspect of science, for it is with an eye to these standards that investigations, hypotheses, and results communicated to the scientific community are meant to be received. But these standards are also what shape the search for understanding to which we here attach the name science. They are the standards that scientific education and training are meant to inculcate in those who would become part of the scientific community; indeed, an understanding of, and respect for, these standards may be viewed as the (only) true credential of membership in that community.

The standards in question are accepted standards of evidence, inference, and sound practice. There are important realms of inquiry to which no such standards apply. Philosophy, which may be characterized as a systematic and critical search for understanding – and indeed as a paradigm of such – may be contrasted with science in precisely this respect. For in philosophy, the standards of evidence, inference, and sound practice are all a part of what is debated.

Is the fact that an action would produce the best balance of happiness over unhappiness evidence for its being morally estimable? John Stuart Mill says yes; Immanuel Kant says no. What is contested here is not just what particular actions are morally estimable (indeed, there might not be much disagreement about that) but rather what sorts of considerations would be material to considering an action to be so. This is a question about what to count as evidence. If observed instances of A have all been instances of B, may we legitimately conclude that instances of A yet to be observed will likewise be instances of B? Most philosophers of science think that we may, but Karl Popper and his school think that we may not. This is a question about what to count as legitimate inference. Is a notion that we are unable to explicate or analyze in terms of sense-experience (say, the notion of obligation) to be dismissed as nonsensical? Or may notions of this kind be given a place – even a central place – in our accounts of the world? Many so-called “empiricists” have made the former claim, while many so-called “rationalists” have made the latter. This may be understood as a disagreement about what constitutes sound practice.

These are the sorts of differences that we find in philosophy, but not in science. Philosophy may be thought of as a search for standards of evidence, inference, and sound practice that might someday be accepted as forming a framework for certain realms of inquiry. When such a framework has been achieved, we speak of “science”. Until such a framework is achieved, we speak of “philosophy”. This may explain the common sentiment that philosophy never gets anywhere – simply, when it does get somewhere, we switch our terminology. On the view here presented, philosophy is the mother of the sciences. Sciences come to exist in the wake of philosophical creativity, reflection, and debate. A subject-matter becomes scientific when philosophy has created a framework within which it may be investigated on a common ground.

We should note that science is not, on this view, a domain within which all, or most, matters are settled. On the contrary, science may be – as it seems in fact to be – a hotbed of controversy. But it is a domain within which controversy is carried on within a framework that provides the basis for eventual settlement, because there is a common understanding of the kinds of evidence that may count for or against a given view and of the ways in which this evidence may be applied to the case at hand.

In developing our characterization of science, we have stressed that science is a human activity, rooted in a community that applies normative standards to practice, to theory, and to results. These standards may change over time. What characterizes science is not the standards to which it cleaves at any given moment, but that it cleaves to some such set of standards at every moment. But every science has an historical dimension, and may be seen as the development of understanding within a certain tradition.

Our way of characterizing science is not philosophically impartial and is not a characterization that anyone could be forced to accept. It is, however, a view that many have accepted, at least in its essentials, although most often without formulation or announcement. It captures – or is at least meant to capture – one of the leading ideas about science.

We can add to what we have so far said that science may be observational, explanatory, or technical; these three modes of scientific practice are distinguished by their aims.

Observational science is concerned to describe what happens, both in particular instances and as a rule. In other words, it describes both individual events or conditions and also regularities. Observational data constitute its basis; it is not here implied that “what happens” may be simply observed (e.g. it was not simply observed that the planets revolve around the sun in elliptical orbits). On the contrary, “what happens” must often be hypothesized in the wake of certain observations, and these hypotheses tested against further observations. There is in fact a certain sort of explanation which belongs primarily to observational science: the sort of explanation that “organizes and makes plain” a given body of observational data (to borrow a phrase from Nancy Cartwright).

Explanatory science is concerned with explaining why what happens happens. It is concerned with framing causal explanations. Explanatory science presupposes observational science. On the other hand, it might be said that observational science anticipates explanatory science, for in isolation – without the explanatory goal in prospect – observational science would hardly be recognizable as science.

Finally, technical science consists in the application of the results of observational and explanatory science to practical endeavors: to the development of technology. Technology need not be scientific, by the way; it may be the offspring of practical know-how and experience. It should be thought of as scientific just to the extent that it depends upon the application of observational and explanatory science.

A typical scientific discipline combines all three of the modes just described, rather than restricting itself to any one of them.

Let us now say that history is the systematic and critical search for the understanding of past events, selected and treated with a view to their human significance, a search which is grounded in the application of recognized standards of evidence, inference, and sound practice.

Here again we give a characterization that no one would have to accept. History can be described differently. But it does not seem unreasonable to describe it as we have. Our description is modest and seems at first glance to describe the kind of activity in which many historians are engaged. Let us look more closely at the elements of the description.

We see immediately that this characterization of history reiterates many of the elements that were included in our characterization of science. History is described as a systematic and critical search for understanding, and this – particularly as regards the implications involved in describing history as systematic and critical – is to be understood in more or less the same way as before. We said earlier that in attributing these features we made tacit reference to a certain community. In the present case, rather than referring to the scientific community as a whole, we refer to a smaller group, which may simply be described as the community of historians. We leave open for the moment the question whether this community is to be viewed as a part of the scientific community.

We have also described history, like science, as grounded in the application of recognized standards of evidence, inference, and sound practice. While we leave open the possibility that these standards differ in certain respects from the standards applied by the scientific community, they are broadly speaking standards of just the same kind. Taken together with the systematic and critical character that we have attributed to history, we may say that these features suffice to characterize history as a discipline. Whether it is a scientific discipline is a matter that we will go on to consider. The position taken here is that every science is a discipline, but that not every discipline is necessarily a science. It is perhaps a bit unnatural to describe science globally as a discipline, but there seems little harm in doing so; so, in addition to saying, on the basis of the features just mentioned, that physics, chemistry, biology, and so on are disciplines, we will also apply the term discipline, at a higher level so to speak, to science as a whole. As a discipline, history is evidently to be thought of as being at the level of the individual scientific disciplines just mentioned, and not at the more global level.

Having looked at the elements common to our characterization of science, on the one hand, and history, on the other, let us now turn to the special elements included in our description of history.

We have characterized history as concerned with past events, selected and treated with a view to their human significance. Now the term events, as used here, is meant to cover human actions, both individual and collective. It would arguably be too narrow to restrict the domain of history to human actions, since various events, such as floods and famines, have had human significance and have, indeed, led or forced human beings to act, singly and collectively, in various ways. But much of what historians have to tell us concerns what people have done, for instance that Caesar led his legions across the Rubicon, thus defying the Roman republican government; or that on 1 January 1863, Abraham Lincoln issued a proclamation declaring free the slaves held in the rebelling states; or that Parisians stormed the Bastille on 14 July 1789.

The past events with which the historian is concerned are first and foremost human actions. But even with this firmly in mind, most past events – most past human actions – are of no concern to the historian. It is only those events whose human significance is robust that belong to the subject matter of history. The idea of “human significance” is not clearly fixed; in fact, one might look upon it as contested among historians (and among others as well). One can clarify by example the kind of thing that is meant in mentioning this as a key feature of the events that concern history. An event has human significance if it is constitutive of or affects central elements of human social life such as language, culture, political organization, economic organization, class structure, family structure, or modes of employment; this list is of course not meant to be complete. Thus, Napoleon’s presenting the Empress Josephine with a gold necklace in 1807 would not have human significance in the sense meant here, but his reconciling with the Emperor Alexander of Russia in 1807 would.

That said, however, it seems that historians have quite different ideas about the events that have human significance. A long tradition in history selects mainly particular acts of powerful political figures as having significance of this kind. Perhaps the greatest part of written historical work focuses upon the struggles of such figures to gain and retain power, and upon the acts that they performed in exercising that power (e.g. levying taxes, suppressing religions, building fleets, commissioning calendars, mounting wars, and reforming laws). This selection could, of course, be seen as merely reflecting the personal interests of the bulk of historians. But historians typically purport to be doing more than writing about the matters that fascinate them individually; they say to us in effect, “Look, these are the events that made a difference to human social life in their time; these are the events worth writing about.” When historians remain silent about events in the lives of common people, for instance – as they have indeed done until quite recently – they reflect their judgment that such events are of little consequence, or in our terms lack “human significance”.

Historians not only select for treatment events or actions whose human significance is judged to be robust, but they also investigate and write about those events in such a way as to bring out or explain their human significance. That seems to be the point of researching the past in the historians’ way and of writing history: to grasp, and then to convey to an audience, the human significance of salient past events.

History may be descriptive, concerning itself with what happened – for instance with the question whether Richard III of England did, or did not, murder the little princes in the Tower – or it may be aetiological, concerning itself with why certain things happened – for example with the question of why so many Oklahoma farmers migrated to California in the 1930s.

We can now ask whether history, as we have briefly characterized it, could be a science in the sense described earlier.

In this regard, we need to consider first the question whether the phenomena studied by history are law-governed. These phenomena are, we said, past events, including past human actions, both individual and collective; indeed, it is primarily past human actions of which historians seek understanding, and we may thus restrict our attention to them here. If human actions are not law-governed, then that would mean, according to our formulations, that history could not be a science.

It is not easy to answer the question of whether human actions are law-governed, for several reasons. One of the main reasons is that it is not clear what requirements apply to a phenomenon said to be law-governed. Another reason is that even if it were in fact the case that human actions are law-governed on some reasonable construal of what this means, we have not yet come close to discovering the laws that govern them. Thus, we are not in a position to assert with any confidence that human actions are law-governed, even if they are.

The question we are now considering comes up not only in connection with history but applies to all of the social sciences. Interestingly, history is sometimes classified with the social sciences and sometimes with the “arts”, and this may reflect two different ideas about what history is or aspires to be. Be that as it may, if history is any kind of a science, then it is evidently a social science, or what John Stuart Mill would have called a “moral science”.

The moral sciences, for Mill, were those whose target phenomena were grounded in the “laws of mind”. Mill imagined that there were laws of mind, properly so-called, although these he considered largely undiscovered in his day. These would be the principles governing thought, feeling, intention and therefore human action. Just as physics and chemistry may be thought of as the fundamental natural sciences, studying the basic principles according to which all natural phenomena work, so psychology and ethology may be thought of as the fundamental moral sciences, studying the basic principles of mind and action. And just as geology, biology and other special natural sciences might be thought of as investigating the ways in which the basic principles of nature work in special contexts, in application to particular subject-matters, so history, sociology and other special moral sciences might be thought of as investigating the ways in which the basic principles of mind work in application to particular spheres of thought and action. Hence, Mill thought of the natural and moral sciences as two separate, but structurally and methodologically similar, systems. By implication, the basic laws of mind would be counterparts of such principles as the law of gravitation: of the same kind from the logical or methodological point of view, but applying to very different phenomena. These would evidently be deterministic laws, expressible in the form of universal (that is unexceptionless) generalizations.

Mill considered the question whether the laws of mind might not be shown to reflect, and to be dependent upon, the laws of nature; whether, in contemporary language, the social sciences might not be reducible to the natural sciences. Mill thought that this possibility could not be ruled out, but that there was no real evidence that such is the case. He thought it in any case a very premature question, one that could not be seriously debated without a lot more being known than was known in his time.

Mill’s approach, then, is to think of the social sciences, including history, as nascent sciences, making the reasonable assumption (as Mill thought it to be) that the phenomena which they study are governed by fundamental “laws of mind”. History so conceived would be ultimately concerned with providing “covering law” accounts of past human actions. And these accounts would in many cases be causal accounts, not, however, in terms of physical causes but rather in terms of mental causes such as motives and intentions.

This is surely a possible point of view, but we have not come much further than Mill’s contemporaries in discovering the basic laws of mind upon which the social sciences are meant to be built. It has, however, come to light in the interim that the idea of deterministic governance by basic laws does not hold even for physical phenomena. In other words, not even basic physical phenomena are “law-governed” in the sense imagined by Mill. Yet we are reluctant to give up the idea that the physical sciences are sciences. So we need either to abandon the view that law-governed phenomena form the subject-matter of the sciences, or we need to appeal to a different notion of “law-governed” than Mill (and many others) had in mind. Here, the second approach is recommended.

Michael Scriven has advanced the view that, for the purposes of history at any rate, human actions and other historical events need not be seen as governed by anything more than very loose ceteris paribus principles, expressible as “truisms” or what Scriven sometimes calls “normic statements”. And I, among others, have argued that this may also apply widely to the phenomena studied by the natural sciences. If this view is correct, we may arguably be said to know already that human actions are law-governed and to know many of the laws that govern them. Without pursuing this question further here, let us imagine that the phenomena studied by history are law-governed and that history is not to be excluded from the sciences on that ground.

But now the question arises whether history aims at understanding past human events as law-governed phenomena. If so, that would mean that historical research should be heavily concerned with showing how particular past events fall under laws – or at least that it should frequently invoke the laws governing human actions in the descriptions and explanations that it offers. This seems not to be the case, even if it is granted that historians must assume, in offering such descriptions and explanations, that the events with which they concern themselves are governed by particular laws (which could perhaps be stated if necessary). Historians are also little concerned with discovering and articulating the laws, if there be such, that govern human actions, whereas natural scientists are very much concerned with bringing to light the principles which govern natural phenomena – indeed, this may be seen as the key to the understanding of nature sought by the natural sciences. Historians seem to be concerned with quite different matters, namely with bringing out the human significance, as we have called it, of past events. The understanding aimed at by historians might therefore be described as the understanding of the human significance of past events.

In pursuit of this sort of understanding, the historian must, of course, give an account of what has happened and also of why it has happened. Since it is mostly human actions that are in question, such an account is usually an account of what has been done and why it has been done. It has been frequently maintained that an account of what has been done – and even more so an account of why what has been done has been done – must indicate something about the intentions or purposes with which the agents in question acted. Actions fall under various descriptions, not all of which make reference to the intentions that underlie them. Intention-indicating descriptions seem, however, to be necessary elements of any account of why an action has been done, in the sense of “why” which looks for an agent’s reasons for acting. Descriptive history may thus appear to be less dependent than aetiological history upon giving an account of intentions. But, in the way that we said that observational science anticipates explanatory science, descriptive history surely anticipates aetiological history. An historian could hardly consider the question whether or not Richard III of England murdered the little princes in the Tower without considering what reasons Richard would have had for murdering them.

Furthermore, the human significance of an action lies not only in its effects but also in its underlying intentions. For we assess such significance not only in terms of an action’s leading to economic ruin, to the growth of cities, to industrialization, to the decline of scholarship, to the dissolution of the nuclear family, to urbanization and the like, but also in terms of its being short-sighted, ill-considered, stupid, cruel, clever, generous, selfish, forward-looking, and so on. Bringing about economic ruin through design is a different sort of act, with a different sort of significance, than bringing about economic ruin through stupidity – and the source of the difference lies in the intentions that underlie the actions.

The necessary concern with giving an account of intentions gives us a clue as to why historians have traditionally refused to consider as historical data anything other than written accounts, eschewing, for instance, the ruins, gravesite bones, and artifacts which so occupy archaeologists. The idea must be that written texts are the sources out of which we are most likely to be able to read intentions. From any other point of view, this dogma of historians is hard to comprehend.

In any case, what we have described here as the sort of understanding at which history aims is evidently of a very different sort than that at which the sciences aim. And this is probably the strongest reason for saying that history is not, and indeed does not seek to be, a science.

The sort of understanding at which history aims (according to our account) has sometimes been called “narrative” or “interpretive” understanding. The historian must tell a story about events in which their human significance (as the historian understands it) is brought to light. The significance of an event – even where that event is an action – is not something that it contains wholly within itself, but depends both upon its inherent qualities and upon our normative reactions to those qualities. Significance depends both upon the events and upon the interpreter. The story told by the historian must therefore embody (and convey to an audience) a certain normative stance – a perspective on human significance that is applied to the events that are treated. This perspective will influence the historian’s account of what has been done and of why it was done, and it will also affect the way in which an event is placed into a horizontal narrative designed to reveal both its roots and its portent.

It is clear from what has just been said that history is thoroughly and inescapably normative. This might be thought to tell against its being a science, since it is often maintained that science is (or should be) “value-free”. But that is no part of what has been maintained here. Indeed, we have claimed that science is grounded in normative standards respected and applied by the scientific community – this might be called the constitutive normativity of science. What we have claimed here about history is that it is also perspectivally normative, in other words, that it makes value judgments with respect to its subject matter. (These two types of normativity have often been conflated in discussions concerning the “value-freedom” of science.)

Our characterization of science did not specify that it must be value-free in the latter sense (that it must avoid perspectival normativity). We required only that science aim at understanding law-governed phenomena as law-governed phenomena. In order to show that science must not be perspectivally normative, one would therefore have to show, given our account, that perspectival normativity is incompatible with understanding law-governed phenomena as law-governed phenomena. And I do not think that this can be shown.

Since ideas about human significance change over time (and are diverse even at any given time), history must be written in many versions, and re-written, even about the same events (or what might be identified as “the same events” under some thin description). This need not show that history cannot be objective in a certain sense. Of course it cannot be objective in the sense of being free of normativity. But it can objectively reflect what may be said about a given set of past events from a given point of view about human significance – and the historian can also be explicit about which point of view he takes.

The conclusions that we have reached about whether history can be a science, and about related matters, are of course very tentative ones. We have asked large questions and given overly quick answers. But our discussion was not meant to answer our large questions once and for all, but to show by example how such questions must be approached and to offer some food for thought to those who would like to pursue these questions more deeply.

This paper was originally presented on 7 December 2001 as a public lecture invited by the Department of Philosophy of the University of Genova. It was intended mainly for students, but was meant as well to have something to offer to academic colleagues in history and philosophy. The main point of the paper is not to answer its title question definitively or to provide a conclusive definition of history, but to show how questions of this kind may be approached in a helpful and illuminating way, as opposed to getting bogged down in fruitless argument.

Published 5 September 2005
Original in English
First published by Kulturos barai 7/2005 (Lithuanian version)


Beware Number Fetishism

When I read reports from other people's research, I usually find that their qualitative study results are more credible and trustworthy than their quantitative results. It's a dangerous mistake to believe that statistical research is somehow more scientific or credible than insight-based observational research. In fact, most statistical research is less credible than qualitative studies. Design research is not like medical science: ethnography is its closest analogy in traditional fields of science.

User interfaces and usability are highly contextual, and their effectiveness depends on a broad understanding of human behavior. Typically, designers must combine and trade off design guidelines, which requires some understanding of the rationale and principles behind the recommendations. Issues that are so specific that a formula can pinpoint them are usually irrelevant for practical design projects.

Fixating on numbers rather than qualitative insights has driven many usability studies astray. As the following points illustrate, quantitative approaches are inherently risky in a host of ways.

Random Results

Researchers often perform statistical analysis to determine whether numeric results are "statistically significant." By convention, they deem an outcome significant if there is less than 5% probability that it could have occurred randomly rather than signifying a true phenomenon.

This sounds reasonable, but it implies that 1 out of 20 "significant" results might be random if researchers rely purely on quantitative methods.
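
To see how fast that risk compounds, here is a minimal Python sketch (an illustration added for this point, not something from the studies discussed) of the chance that pure chance produces at least one "significant" result across k independent tests where no real effect exists:

    # Chance that at least one of k independent tests of a true null
    # hypothesis comes out "significant" at the conventional 5% level.
    for k in (1, 5, 10, 20):
        p_any_bogus = 1 - 0.95 ** k
        print(f"{k:2d} tests: {p_any_bogus:.0%} chance of a bogus 'significant' result")

Run 20 analyses and the odds of at least one spurious "significant" finding come out to about 64%.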

Luckily, most good researchers — especially those in the user-interface field — use more than a simple quantitative analysis. Thus, they typically have insights beyond simple statistics when they publish a paper, which drives down, but doesn't eliminate, bogus findings.

There's a reverse phenomenon as well: Sometimes a true finding is statistically insignificant because of the experiment's design. Perhaps the study didn't include enough participants to observe a major — but rare — finding in sufficient numbers. It would therefore be wrong to dismiss issues as irrelevant just because they don't show up in quantitative study results.

The "butterfly ballot" in the 2000 election in Florida is a good example: a study of 100 voters would not have included a statistically significant number of people who intended to vote for Al Gore but instead punched the hole for Patrick Buchanan, because less than 1% of voters made this mistake. A qualitative study, on the other hand, would likely have revealed some voters saying something like, "Okay, I want to vote for Gore, so I'm punching the second hole . oh, wait, it looks like Buchanan's arrow points to that hole. I have to go down one for Gore's hole." Hesitations and almost-errors are gold to the observant study facilitator, but to translate them into design recommendations requires a qualitative analysis that pairs observations with interpretive knowledge of usability principles.

Pulling Correlations Out of a Hat

If you measure enough variables, you will inevitably discover that some seem to correlate. Run all your stats through the software and a few "significant" correlations will surely pop out. (Remember: 1 out of 20 analyses will come out "significant" even if there is no underlying true phenomenon.)

Studies that measure 7 metrics will generate 21 possible correlations between the variables. Thus, on average, such studies will have one bogus correlation that the statistics program deems "significant," even if the issues being measured have no real connection.

In my Web Usability 2004 project, we collected metrics on 53 different aspects of user behavior on websites. There are thus 1,378 possible correlations that I could throw into the hopper. Even if we didn't discover anything at all in the study, about 69 correlations would emerge as "statistically significant."
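
The arithmetic behind these counts is simply "n choose 2" multiplied by the 5% false-positive rate; a quick Python check (added here for illustration) reproduces both figures:

    from math import comb

    # Pairwise correlations among m metrics, and how many of them chance
    # alone will flag as "significant" at the 5% level.
    for m in (7, 53):
        pairs = comb(m, 2)
        print(f"{m} metrics: {pairs} correlations, ~{pairs * 0.05:.0f} spurious hits")

This prints 21 correlations and roughly 1 spurious hit for the 7-metric case, and 1,378 correlations with roughly 69 spurious hits for the 53-metric case.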

Obviously, I'm not going to stoop to correlation hunting; I'll only report statistics that relate to reasonable hypotheses founded on an understanding of the underlying phenomena. (In fact, statistics program analyses assume that researchers have specified the hypotheses in advance; if you hunt for "significance" in the output after the fact, you're abusing the software.)

Overlooking Covariants

Even when a correlation represents a true phenomenon, it can be misleading if the real action concerns a third variable that is related to the two you're studying.

For example, studies show that intelligence declines with birth order. In other words, a person who was a first-born child will on average have a higher IQ than someone who was born second. Third-, fourth-, fifth-born children and so on have progressively lower average IQs. This data seems to present a clear warning to prospective parents: Don't have too many kids, or they'll come out increasingly stupid. Not so.

There's a hidden third variable at play: smarter parents tend to have fewer children. When you want to measure the average IQ of first-born children, you sample the offspring of all parents, regardless of how many kids they have. But when you measure the average IQ of fifth-born children, you're obviously sampling only the offspring of parents who have 5 or more kids. There will thus be a bigger percentage of low-IQ children in the latter sample, giving us the true — but misleading — conclusion that fifth-born children have lower average IQs than first-born children. Any given couple can have as many children as they want, and their younger children are unlikely to be significantly less intelligent than their older ones. When you measure intelligence based on a random sample from the available pool of children, however, you're ignoring the parents, who are the true cause of the observed data.

(Update added 2007: The newest research suggests that there may actually be a tiny advantage in IQ for first-born children after correcting for family size and the parents' economic and educational status. But the point remains that you have to correct for these covariants, and when you do so, the IQ difference is much less than plain averages may lead you to believe.)
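
The sampling artifact is easy to reproduce in simulation. The following sketch assumes a toy model, with entirely made-up parameters, in which parental IQ influences both family size and children's IQ while birth order has no effect at all; the declining averages emerge anyway:

    import random

    # Toy model (all numbers are illustrative assumptions, not study data):
    # smarter parents tend to have fewer children, and each child's IQ
    # depends only on the parents -- never on birth order.
    random.seed(42)
    families = []
    for _ in range(100_000):
        parent_iq = random.gauss(100, 15)
        n_kids = max(1, round(6 - parent_iq / 25 + random.gauss(0, 1)))
        families.append([parent_iq + random.gauss(0, 5) for _ in range(n_kids)])

    # Mean IQ by birth order, sampling across all families large enough
    # to contribute a child of that birth order.
    for order in range(5):
        sample = [kids[order] for kids in families if len(kids) > order]
        print(f"birth order {order + 1}: mean IQ {sum(sample) / len(sample):.1f} (n={len(sample)})")

Because only lower-IQ parents contribute fifth-born children to the pool, the fifth-born average comes out lowest even though no child in the model is made less intelligent by being born later.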

As a web example, you might observe that longer link texts are positively correlated with user success. This doesn't mean that you should write long links. Website designers are the hidden covariant here: clueless designers tend to use short text links like "more," "click here," and made-up words. Conversely, usability-conscious designers tend to explain the available options in user-centered language, emphasizing text and other content-rich design elements over more vaporous elements such as "smiling ladies." Many of these designers' links might indeed have a higher word count, but that's not why the designs work. Adding words won't make a bad design better; it'll simply make it more verbose.

Over-Simplified Analysis

To get good statistics, you must tightly control the experimental conditions — often so tightly that the findings don't generalize to real problems in the real world.

This is a common problem for university research, where the test subjects tend to be undergraduate students rather than mainstream users. Also, instead of testing real websites with their myriad contextual complexities, many academic studies test scaled-back designs with a small page count and simplified content.

For example, it's easy to run a study that shows breadcrumbs are useless: just give users directed tasks that require them to go in a straight line to the desired destination and stop there. Such users will (rightly) ignore any breadcrumb trail. Breadcrumbs are still recommended for many sites, of course. Not only are they lightweight, and thus unlikely to interfere with direct-movement users, but they're helpful to users who arrive deep within a site via search engines and direct links. Breadcrumbs give these users context and help users who are doing comparisons by offering direct access to higher levels of the information architecture.

Usability-in-the-large is often neglected by narrow research that doesn't consider, for example, revisitation behavior, search engine visibility, and multi-user decision-making. Many such issues are essential for the success of some of the highest-value designs, such as B2B websites and enterprise applications on intranets.

Distorted Measurements

It's easy to prejudice a usability study by helping the users at the wrong time or by using the wrong tasks. In fact, you can prove virtually anything you want if you design the study accordingly. This is often a factor behind "sponsored" studies that purport to show that one vendor's products are easier to use than a competitor's products.

Even if the experimenters aren't fraudulent, it's easy to get hoodwinked by methodological weaknesses, such as directing the users' attention to specific details on the screen. The very fact that you're asking about some design elements rather than others makes users notice them more and thus changes their behavior.

One study of online advertising attempted to avoid this mistake, but simply made another one instead. The experimenters didn't overtly ask users to comment on the ads. Instead, they asked users to simply comment on the overall design of a bunch of web pages. After the test session, the experimenters measured users' awareness of various brands, resulting in high scores for companies that ran banners on the web pages in the study.

Does this study prove that banner ads work for branding, even though they don't work for getting qualified sales leads? No. Remember that users were directed to comment on the page designs. These instructions obviously made users look around the page much more thoroughly than they would have during normal web use. In particular, someone who's judging a design typically inspects all individual design elements on the page, including the ads.

Many web advertising studies are misleading, possibly because most such studies come from advertising agencies. The most common distortion is the novelty effect: whenever a new advertising format is introduced, it's always accompanied by a study showing that the new type of ad generates more user clicks. Sure, that's because the new format enjoys a temporary advantage: it gathers user attention simply because it's new and users have yet to train themselves to ignore it. The study might be genuine as far as it goes, but it says nothing about the new advertising format's long-term advantages once the novelty effect wears off.

Publication Bias

Editors follow the "man bites dog" principle to highlight new and interesting stories. This is true for both scientific journals and popular magazines. While understandable, this preference for new and different findings imposes a significant bias in the results that get exposure.

Usability is a very stable field. User behavior is pretty much the same year after year. I keep finding the same results in study after study, as do many others. Every now and then, a bogus result emerges and publication bias ensures that it gets much more attention than it deserves.

Consider the question of web page download time. Everyone knows that faster is better. Interaction design theory has documented the importance of response times since 1968, and this importance has been seen empirically in countless Web studies since 1995. Ecommerce sites that speed up response times sell more. The day your server is slow, you lose traffic. (This happened to me: on January 14, 2004, Tog got "slashdotted"; because we share a server, my site lost 10% of its normal pageviews for a Wednesday when AskTog's increased traffic slowed useit.com down.)

If 20 people study download times, 19 will conclude that faster is better. But again: 1 of every 20 statistical analyses will give the wrong result, and this 1 study might be widely discussed simply because it's new. The 19 correct studies, in contrast, might easily escape mention.


Life is Physics

There is nothing simple about life. Millions of carefully coordinated chemical reactions occur every second inside a single cell; billions of single-celled organisms can organize into colonies; trillions of cells can precisely stick together into tissues and organs. Yet, despite this complexity, life is easy to identify. Physicists think that this recognizability could arise from foundational physical principles that underlie all life. And they are on the hunt for a mathematical theory based on these principles that explains why life can exist and how it behaves. Such a theory, they say, could allow researchers to control and manipulate living systems in ways that are currently impossible.

Physicists love unifying theories. These theories boil complex phenomena down to a small set of ideas whose mathematical formulations can make remarkably successful predictions. For example, the laws of thermodynamics, which explain how energy moves around in systems from atoms to hurricanes, can accurately predict how long a kettle of water takes to boil. Yet despite such successes, researchers have not yet found universal equations that describe everyday phenomena relating to life. Such equations could provide the same predictive power as other unifying theories, allowing researchers to gain precise control over living things. This control could enable better treatment protocols for bacterial infections, improved therapies for cancers, and methods to prevent plants from developing resistance to weed killers.

&ldquoPhysicists have studied many complicated systems, but living systems are in a completely different class in terms of complexity and the number of degrees of freedom they have,&rdquo says Ramin Golestanian, a director at the Max Planck Institute for Dynamics and Self-Organization in Germany. Golestanian studies living systems, like bacterial swarms, by modeling them as moving groups of energy-consuming particles, so-called active matter. He also helped organize Physics of Living Matter, an APS conference held last year, where researchers discussed whether writing down a mathematical theory of life is an achievable goal and, if so, what questions such a theory should answer.

For some in the field, finding a theory starts with upending how biologists describe living systems. “When I go to a biology conference, somebody always stands up and says, ‘life is chemistry,’ and then shows a whole bunch of putative reactions,” says Nigel Goldenfeld, a physicist at the University of Illinois at Urbana–Champaign who studies problems related to evolution and ecology. “I don’t think life is chemistry.” Chemistry provides information on the molecules needed to make life, but not on how to get a functioning cell, for example. Instead, he says, “life is physics,” and researchers should think of living organisms as condensed-matter systems with thermodynamic constraints [1].

Golestanian and Goldenfeld both believe that the traits of life, such as replication, evolution, and using energy to move, are examples of what condensed-matter physicists call “emergent phenomena”—complex properties that arise from the interactions of a large number of simpler components. For example, superconductivity is a macroscopic property that arises in metals from attractive interactions among their electrons, which lead to a state with zero electrical resistance. In the case of life, the emergent behaviors arise from interactions among molecules and from how the molecules group together to form structures or carry out functions.

But life functions very differently from the standard condensed-matter fare of metals or superconductors, which are “dead” things whose behaviors are predetermined. Living creatures can respond in seemingly disparate ways to the same stimulus. “Biological systems have this feedback loop that makes them very difficult to analyze using standard differential equations,” Goldenfeld says, adding that he doesn’t yet know how to address that problem.

Goldenfeld&rsquos sentiment is echoed by Cristina Marchetti of the University of California, Santa Barbara, who, like Golestanian, studies living things by modeling them as active matter. &ldquoLiving systems evolve, adapt, and change as a result of their interactions or information exchange with other systems,&rdquo Marchetti says. But right now, those essential processes are mostly missing from the theories that she and others have developed for describing the behaviors of specific biological systems, such as the motion of bacterial swarms or the clustering of cells in tumors. Work on theories that account for the evolving states of living systems &ldquois really very much in its infancy,&rdquo she says.

Another challenge in developing a universal theory that explains why life can exist is that very few people are working on the problem. Rather, most biologists and physicists studying the inner workings of life focus on modeling some specific process in their current favorite organism—for example, how vision works in a particular species of fruit fly—without looking at the bigger picture, Goldenfeld says. William Bialek, a theoretical physicist at Princeton University, New Jersey, concurs with this view but also sees a positive side to studying specific organisms. He notes that theoretical physicists can fail in their search for theories if they are “disconnected from the details.”

&ldquoThe essential problem of our field is to find a balance between searching for general theoretical principles and engaging with the details of experiments on particular systems,&rdquo Bialek says. Golestanian agrees, adding that whoever tasks themselves with formulating a universal theory of life will &ldquohave to develop an appetite and capacity to study a range of phenomena, catalog them, and look for patterns that point towards a comprehensive description.&rdquo

Ilya Nemenman of Emory University in Atlanta is one physicist taking this approach. He studies how living things—from worms to birds—process information about their surroundings, with the aim of finding patterns and deriving general equations that apply to more than one system. Nemenman says that one of the biggest barriers to developing any general theory for biological systems is pinning down which quantities matter and which are inconsequential.

In traditional condensed-matter topics, a system’s symmetries—quantities that are unchanged by a modification in the coordinate system—determine the key quantities. For example, in crystals, the symmetry is the ordered pattern of the atoms; everything looks the same when you move the coordinate axes from one unit cell to another. But in biological systems, those symmetries are absent, or at least currently unrecognizable, adding an additional level of complexity to the process of writing down the correct equations. Nemenman thinks that machine learning might be helpful in this goal, and his group recently used this tool to uncover the equations that describe how a worm responds to heat [2].

The field of biology has managed for centuries without such a unifying theory, so why is it so important to find one? For Goldenfeld, the driving force is the potential predictive capabilities of such a theory and the control that it could allow over the behavior of biological systems. The example he gives is treating bacterial infections. Current treatment plans don’t properly account for the evolution that occurs when antibiotics leave some of the unwanted bacteria alive. Those remaining bacteria can evolve and grow to form antibiotic-resistant superbugs like MRSA. “If we understand how to control a living, evolving system, then we could find treatment protocols that kill all the bacteria and don’t make the problem worse,” Goldenfeld says. Golestanian declined to offer a potential application of the theory, noting that “specific predictions at this stage are obviously premature.” However, he adds, “I have absolutely no doubt that good things will come out of this sort of knowledge.”