What are some examples of scaling laws in biology?


I've seen that metabolic rate scales as a power law of body mass (Kleiber's law, roughly $M^{3/4}$) for many animals over an extremely large span of body sizes. What other scaling laws exist at the individual level?

Here are some off the top of my head.

  • The height an animal can jump depends on its muscle cross-sectional area ($l^2$) and its mass ($l^3$). Mass grows faster with body size ($l$), so small animals can jump higher relative to their body size than large animals. Very similar scaling exists for the strength of limbs vs. mass, the ability to fly vs. mass, etc.

  • In pinhole eyes (as found in clams and nautiluses), sensitivity to light depends on the area of the pupil ($l^2$): the bigger the hole, the more light gets in. However, the ability to focus depends on the reciprocal of some power of the hole's size (I don't know the exact scaling, so $l^{-n}$). There is therefore a trade-off between sensitivity and the ability to focus.

  • There are some recent models of foraging behaviour in a 2D terrestrial environment (where the front of the animal is a length, $l$) vs. foraging in a 3D marine environment (where the front of the animal is a surface, $l^2$). The models predict different optimal prey sizes that animals should aim for in the two cases.

  • The rate at which sound attenuates scales with the frequency of the sound as $f^2$. The resonant frequency of a sound-producing structure scales with the reciprocal of its linear size ($1/l$). So small animals make high-pitched sounds, which attenuate quickly and don't travel very far.

  • Finally, the way fluids behave changes with size. The Reynolds number, the ratio of inertial to viscous forces in a flow, scales with linear size ($l$). Small animals therefore experience fluids as viscous and can "crawl" through water, whereas large animals have to "swim" through it.
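Two of these laws are easy to sketch numerically. The snippet below is illustrative only: the constant `k`, the kinematic viscosity `nu` (a round value for water), and the speeds are placeholder assumptions, not measurements.

```python
def relative_jump_height(l, k=1.0):
    """Jump height relative to body size.

    Work available for a jump scales as muscle area * contraction distance
    ~ l^2 * l = l^3, and mass scales as l^3, so absolute jump height is
    roughly size-independent and height relative to body size goes as 1/l.
    The constant k is an illustrative placeholder.
    """
    return k / l

def reynolds_number(l, speed, nu=1.0e-6):
    """Re = speed * l / nu, with nu ~ 1e-6 m^2/s for water."""
    return speed * l / nu

# A 1 mm plankter moving at 1 mm/s vs. a 1 m fish moving at 1 m/s:
re_small = reynolds_number(1e-3, 1e-3)  # ~1: viscous, "crawling" regime
re_large = reynolds_number(1.0, 1.0)    # ~1e6: inertial, "swimming" regime
```

Note how six orders of magnitude separate the two Reynolds numbers even though the speed scales with body size in both cases.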

More generally, any directly physical or physico-chemical property of an organism will be subject to scaling laws.


If you never thought that sex appeal could be calculated mathematically, think again.

Male fiddler crabs (Uca pugnax) possess an enlarged major claw for fighting or threatening other males. In addition, males with larger claws attract more female mates.

The sex appeal (claw size) of a particular species of fiddler crab is determined by an allometric equation of the form

Mc = c Mb^d,

where Mc represents the mass of the major claw, Mb represents the body mass of the crab (assume body mass equals the total mass of the crab minus the mass of the major claw), and c and d are constants fitted from data [1]. Before we discuss this equation in detail, we will define and discuss allometry and allometric equations.

  • a 10 kg organism may need a 0.75 kg skeleton,
  • a 60 kg organism may need a 5.3 kg skeleton, and yet
  • a 110 kg organism may need a 10.2 kg skeleton.

As you can see by inspecting these numbers, heavier bodies need relatively beefier skeletons to support them. There is not a constant increase in skeletal mass for each 50 kg increase in body mass; skeletal mass increases out of proportion to body mass [2].

Allometric scaling laws are derived from empirical data. Scientists interested in uncovering these laws measure common attributes, such as body mass and brain size of adult mammals, across many taxa. The data are then mined for relationships, from which equations are written.

These relationships take the general form of a power function,

f (s) = c s^d,    (1)

where c and d are constants.

  • If d > 1, the attribute given by f (s) increases out of proportion to the attribute given by s. For example, if s represents body size, then f (s) is relatively larger for larger bodies than for smaller bodies.
  • If 0 < d < 1, the attribute f (s) increases with attribute s, but does so at a slower rate than that of proportionality.
  • If d = 1, then attribute f (s) changes as a constant proportion of attribute s. This special case is called isometry, rather than allometry.
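The three cases above can be checked numerically. A minimal sketch (the values of c and d here are arbitrary, chosen only to illustrate each regime):

```python
def f(s, c=1.0, d=1.0):
    """Allometric power law f(s) = c * s**d."""
    return c * s ** d

# Doubling s multiplies f(s) by 2**d, so the exponent alone decides
# whether f grows out of proportion to s:
ratio_d_gt_1 = f(2.0, d=1.5) / f(1.0, d=1.5)  # 2**1.5: more than doubles
ratio_d_eq_1 = f(2.0, d=1.0) / f(1.0, d=1.0)  # exactly 2: isometry
ratio_d_lt_1 = f(2.0, d=0.5) / f(1.0, d=0.5)  # 2**0.5: less than doubles
```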

Using Allometric Equations

Notice that (1) is a power function, not an exponential function (the constant d is in the exponent position instead of the variable s). Unlike other applications, where we need logarithms to help us solve an equation, here we use logarithms to simplify the allometric equation into a linear equation.

We rewrite (1) as a logarithmic equation of the form

log f (s) = log c + d log s.

When we change variables by letting

x = log s, y = log f (s), and b = log c,

we obtain the linear equation

y = d x + b.

Therefore, transforming an allometric equation into its logarithmic equivalent gives rise to a linear equation.

By rewriting the allometric equation into a logarithmic equation, we can easily calculate the values of the constants c and d from a set of experimental data. If we plot log s on the x-axis and log f on the y-axis, we should see a line with slope equal to d and y-intercept equal to log c. Remember, the variables x and y are really on a logarithmic scale (since x = log s and y = log f). We call such a plot a log-log plot.

Because allometric equations are derived from empirical data, one should be cautious about data scattered around a line of best fit in the xy-plane of a log-log plot. Small deviations from a line of best fit are actually larger than they may appear. Remember, since the x and y variables are on the logarithmic scale, linear changes in the plotted variables (x and y) correspond to exponential (multiplicative) changes in the underlying variables (s and f (s)). Since we are ultimately interested in a relationship between f and s, we need to be concerned with even small deviations from a line of best fit.
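As a concrete sketch of this whole procedure, here is a least-squares line fitted to the log-transformed skeletal-mass data quoted earlier, using only the Python standard library. The slope of the fitted line is the exponent d and the intercept recovers log c:

```python
import math

# Body mass (kg) -> skeletal mass (kg): the example data quoted above.
data = [(10.0, 0.75), (60.0, 5.3), (110.0, 10.2)]

# Fit y = d*x + b by least squares on the log-log transformed points,
# where x = log s (body mass), y = log f (skeletal mass), b = log c.
xs = [math.log10(s) for s, _ in data]
ys = [math.log10(fs) for _, fs in data]
n = len(data)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
d = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
     / sum((x - x_bar) ** 2 for x in xs))
c = 10 ** (y_bar - d * x_bar)
# d comes out a little above 1 (about 1.09): skeletal mass increases
# out of proportion to body mass, exactly as the text describes.
```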

Integration in Ecological and Biological Stoichiometry

Stoichiometry is the application of laws of matter conservation and of definite proportions to the understanding of the rates and yields of chemical reactions given a set of reactants. Ecological stoichiometry recognizes that organisms themselves are outcomes of chemical reactions and thus their growth and reproduction can be constrained by supplies of key chemical elements [especially carbon (C), nitrogen (N), and phosphorus (P)] [8]. Much stoichiometric work lies in the characterization of the elemental composition of organisms and in understanding how closely their chemical composition is regulated (“stoichiometric homeostasis”), and thus the extent to which their growth conforms to a law of definite proportions.

Whereas breaking organisms and ecosystems down into their elemental compositions is reductive in nature, ecological stoichiometry does not stop there. Take, for example, the application of stoichiometry to explain observations in freshwater ecology showing that changes in food-web structure can affect the relative availabilities of the key limiting nutrients N and P in lakes [9]. These changes result from cascading effects of food-web structure, which alter the relative abundance of herbivorous zooplankton species in the community [10]. Specifically, lakes with four dominant trophic levels (phytoplankton, zooplankton, planktivorous fish, and piscivorous fish) are generally dominated by the large-bodied and P-rich (low C:P, low N:P) crustacean Daphnia [11], whereas lakes with three dominant trophic levels (lacking piscivores) are dominated by P-poor (high C:P, high N:P) copepods. Thus, alterations in food-web structure cause the zooplankton communities to sequester and recycle N and P differentially [12], in turn affecting the nutrient regime experienced by the phytoplankton community.

Thus, ecological stoichiometry provides an understanding of how food-web structure can affect phytoplankton nutrient limitation due to shifts among dominant zooplankton that differ in C:N:P ratios. But at this point, it is only natural to ask why C:N:P ratios among zooplankton species are so different. Elser and colleagues [13] proposed the “growth rate hypothesis” (GRH) to answer this question, in a step toward a broader theory of biological, rather than ecological, stoichiometry. Specifically, they proposed a growth-rate dependence of C:P and N:P ratios in living things, because organisms must increase their allocation to P-rich ribosomal RNA in order to meet the elevated protein synthesis demands of rapid growth. Support for the GRH in zooplankton soon appeared [14,15]. The GRH not only explains elemental composition in terms of its biochemical basis, but it also provides a clear evolutionary connection (as some advocates of a new biology have urged): evolutionary changes in organismal growth or development rate have physiological and ecological ramifications due to the changes they induce in organismal elemental demands. Evolutionary change requires a genetic mechanism, so Elser and colleagues [16] proposed that selection for changes in growth or developmental rate operates on available genetic variation in transcriptional capacity of the genes that encode for ribosomal RNA, the rDNA. Preliminary support for such mechanisms in Daphnia has been produced [17,18].

While a satisfying reductionist account seems in hand, the effort has opened up multiple avenues for broad integration in which connections are made not by further digging for lower-level mechanisms, but by seeking new connections of two kinds. One kind of connection is horizontal—the aim is to extend the results of reductionistic digging to include other taxa and systems at roughly the same level of organization. Vertical connections, by contrast, attempt to “resurface” by applying the results of mechanistic explanation in one field to make and test predictions about yet-undocumented phenomena at higher levels and in other fields.

In ecological stoichiometry, horizontal integration has been attempted by applying stoichiometric analysis to trophic interactions beyond lakes and freshwater zooplankton. Stoichiometric analysis is readily used for cross-ecosystem comparisons, as in comparison of the stoichiometric structure of lake and marine food webs [19] and lake and terrestrial food webs [20]. Likewise, data were soon produced demonstrating a key role of P-based stoichiometric imbalance in affecting the growth of terrestrial insects [21,22], as had been shown previously for zooplankton [23,24]. Furthermore, the GRH should apply to a variety of biota, not merely freshwater zooplankton. Elser and colleagues [25] showed that zooplankton, bacteria, fruit flies, and other insects display similar growth-RNA-P relationships, whereas Weider and colleagues [26] presented evidence that the functional significance of rDNA variation in explaining such relations is broadly similar across diverse taxa, which are examples of horizontal integration within biological stoichiometry.

Vertical integration has worked somewhat differently in biological stoichiometry. Take, for example, the connections made by applying the GRH to the study of cancer [27]. Elser and colleagues noted that many well-known oncogenes influence the expression of ribosomal RNA genes, increasing production of ribosomal RNA. This suggests that rapidly growing tumor tissues may have unusually high P demands and thus may experience P-limited growth. While clinical data suggest that proliferating tumors can deplete body P supplies, testing these ideas with existing information has proven difficult. New efforts are underway to compare the C:N:P stoichiometry of tumor and normal tissues directly. These confirm that colon and lung tumors are indeed more P-rich than normal tissues (JJ Elser et al., unpublished data), information that can be incorporated into simulation models to assess whether such differences might affect tumor progression [28].

Vertical integration works here by thinking mechanistically rather than directly in evolutionary terms: new relationships at higher levels are predicted based on known lower-level mechanisms. In the cancer example, these higher-level phenomena occur in areas of biology that are well outside the scope of the initial investigation. An important feature of the upward integration move made in this case is that it poses questions that may never have been asked at the higher level. Whether or not tumor tissue growth is P-limited only becomes an issue if one has reason to believe that growth rate and P requirements are connected.

We should note that seeking integrative connections—particularly upward across levels of organization from mechanisms identified in other taxa—is unlikely to proceed as cleanly as the reductionist part of the explanatory process in most cases. This is partly because the strength and number of causal factors for different systems vary, even though all systems must also be constrained by the same fundamental thermodynamic rules. For example, the ability of the GRH to explain animal C:P and N:P ratios diminishes with increasing body size, because growth rate scales negatively with size [29]; variation instead is driven by allocation to P-rich bones, with subsequent connections to nutrient cycling processes driven by vertebrates [30]. We think that this relative lack of precision is a feature, rather than a flaw, of upward integration, because it calls attention to opportunities to identify the unique level-, taxon-, or system-specific causal processes that are operational in any particular context.


Definition and Examples

Logarithms are encountered throughout the biological sciences. Some examples include calculating the pH of a solution or the change in free energy associated with a biochemical reaction. To understand how to solve these equations, we must first consider the definition of a logarithm.

Definition. The formal definition of a logarithm is as follows:

The base a logarithm of a positive number x is the exponent you get when you write x as a power of a, where a > 0 and a ≠ 1. That is,

log_a x = k if and only if a^k = x.

The key to taking the logarithm of x > 0 is to rewrite x using base a. For example, 16 = 2^4, so log_2 16 = 4.

Who invented such a thing?

John Napier, a Scottish mathematician, is credited with the invention of logarithms. His book, Mirifici Logarithmorum Canonis Descriptio (A Description of the Wonderful Canon of Logarithms), was published in 1614. Napier devised a method to facilitate calculations by using addition and subtraction rather than multiplication and division. Today, we usually use logarithms to the base 10 (common logs) or logarithms to the base e (natural logs). Napier's original logarithms correspond to neither of these; they are most closely related to logarithms with base 1/e.

Some examples of logarithms

Logarithms, just like exponents, can have different bases. In the biological sciences, you are likely to encounter the base 10 logarithm, known as the common logarithm and denoted simply as log, and the base e logarithm, known as the natural log and denoted as ln. Most calculators will easily compute these widely used logarithms.

Base 10 logarithm. The common logarithm of a positive number x is the exponent you get when you write x as a power of 10. That is,

log x = k if and only if 10^k = x.

Computing the common logarithm of x > 0 by hand can only be done under special circumstances, and we will examine these first. Let's begin with computing the value of

log 10.

According to our definition of the common logarithm, we need to rewrite x = 10 using base 10. This is easy to do because 10 = 10^1. So the exponent, k, we get when rewriting 10 using base 10 is k = 1. Thus, we conclude

log 10 = log 10^1 = 1.

While this example is rather simple, it is good practice to follow this method of solution. Now try the following exercises.

As you worked through these exercises, did you notice that the outputs of the logarithms increase linearly as the inputs increase exponentially?

Natural logarithms

The natural logarithm of a positive number x is the exponent you get when you write x as a power of e. Recall that

log_e x = ln x

and

ln x = k if and only if e^k = x.

Logarithmic calculations you cannot do by hand.

Now, suppose you were asked to compute the value of log 20. What would you do (or try to do) to get an answer? Do you notice anything different about this problem?

As you most likely noticed, there is no integer k such that 10^k = 20. So, in this case, you will need to rely on your calculator for help. Using your calculator, you will find

log 20 ≈ 1.30.

Remember that this is true because

10^1.30 ≈ 20.

After completing these exercises, you will notice that your answers (outputs) are small relative to your large inputs. Remember that logarithms transform exponentially increasing inputs into linearly increasing outputs. This is quite convenient for biologists, who work over many orders of magnitude and on many different scales.
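This input/output relationship is easy to verify directly. A minimal sketch using Python's standard library:

```python
import math

# Inputs growing by a constant factor of 10 ...
inputs = [10 ** k for k in range(1, 6)]     # 10, 100, ..., 100000
# ... produce outputs growing by a constant step of 1:
outputs = [math.log10(x) for x in inputs]
steps = [b - a for a, b in zip(outputs, outputs[1:])]
```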

Since exponential and logarithmic functions are inverses, the domain of logarithms is the range of exponentials (i.e. positive real numbers), and the range of logarithms is the domain of exponentials (i.e. all real numbers). This is true of all logarithms, regardless of base.

Recall that an exponential function with base a is written as f (x) = a^x. The inverse of this function is a base a logarithmic function, written as

f^(-1)(x) = g (x) = log_a x.

When there is no explicit subscript a written, the logarithm is assumed to be common (i.e. base 10). There is one special exception to this notation for base e ≈ 2.718, called the natural logarithm,

g (x) = log_e x = ln x.

To compute the base a logarithm of x > 0, rewrite x using base a (just as we did for base 10). For example, suppose a = 2 and we want to compute

log_2 8.

To find this value by hand, we rewrite the number 8 using base 2:

log_2 8 = log_2 2^3 = 3,

just as we did for base 10.
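When a calculator isn't at hand, most programming languages expose the same logarithms. Python's math module, for instance, accepts an arbitrary base, which amounts to the change-of-base identity log_a x = ln x / ln a:

```python
import math

log2_8 = math.log(8, 2)    # matches log_2 8 = log_2 2^3 = 3
log10_20 = math.log10(20)  # matches log 20 ≈ 1.30
ln_e = math.log(math.e)    # natural log: ln e = 1
```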

In the next section we will describe the properties of logarithms.

Endergonic and Exergonic Reactions

If energy is released during a chemical reaction, then the resulting value of ∆G (from the free-energy equation, ∆G = ∆H − T∆S) will be a negative number. In other words, reactions that release energy have ∆G < 0. A negative ∆G also means that the products of the reaction have less free energy than the reactants, because they gave off some free energy during the reaction. Reactions that have a negative ∆G and, consequently, release free energy are called exergonic reactions. Exergonic means energy is exiting the system. These reactions are also referred to as spontaneous reactions, because they can occur without the addition of energy into the system. Understanding which chemical reactions are spontaneous and release free energy is extremely useful for biologists, because these reactions can be harnessed to perform work inside the cell. An important distinction must be drawn between the term spontaneous and the idea of a chemical reaction that occurs immediately. Contrary to the everyday use of the term, a spontaneous reaction is not one that occurs suddenly or quickly. The rusting of iron is an example of a spontaneous reaction that occurs slowly, little by little, over time.

If a chemical reaction requires an input of energy rather than releasing energy, then the ∆G for that reaction will be a positive value. In this case, the products have more free energy than the reactants. Thus, the products of these reactions can be thought of as energy-storing molecules. These chemical reactions are called endergonic reactions; they are non-spontaneous. An endergonic reaction will not take place on its own without the addition of free energy.
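The sign convention above can be captured in a few lines. The numbers here are illustrative placeholders at 298 K, not measured values for any real reaction:

```python
def delta_g(delta_h, temp_k, delta_s):
    """Gibbs free-energy change: dG = dH - T*dS.

    delta_h in J/mol, temp_k in kelvin, delta_s in J/(mol*K).
    """
    return delta_h - temp_k * delta_s

def classify(dg):
    """Negative dG -> exergonic (spontaneous); positive -> endergonic."""
    if dg < 0:
        return "exergonic"
    if dg > 0:
        return "endergonic"
    return "at equilibrium"

# Illustrative values at 298 K:
releasing = classify(delta_g(-50_000.0, 298.0, 100.0))  # dG < 0
consuming = classify(delta_g(30_000.0, 298.0, -20.0))   # dG > 0
```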

Figure 1: Exergonic and Endergonic Reactions. Exergonic and endergonic reactions result in changes in Gibbs free energy. Exergonic reactions release energy; endergonic reactions require energy to proceed.

Innovations underlying scaling in alignment algorithms

Alignment tools have co-evolved with sequencing technology to meet the demands placed on sequence data processing. The decrease in their running time approximately follows Moore’s Law (Fig. 3a). This improved performance is driven by a series of discrete algorithmic advances. In the early Sanger sequencing era, the Smith-Waterman [19] and Needleman-Wunsch [20] algorithms used dynamic programming to find a local or global optimal alignment. But the quadratic complexity of these approaches makes it impractical to map sequences to a large genome. To overcome this limitation, many algorithms with optimized data structures were developed, employing either hash tables (for example, Fasta [21], BLAST (Basic Local Alignment Search Tool) [22], BLAT (BLAST-like Alignment Tool) [23], MAQ [24], and Novoalign [25]) or suffix arrays with the Burrows-Wheeler transform (for example, STAR (Spliced Transcripts Alignment to a Reference) [26], BWA (Burrows-Wheeler Aligner) [27] and Bowtie [28]).

a Multiple advances in alignment algorithms have contributed to an exponential decrease in running time over the past 40 years. We synthesized one million single-ended reads of 75 bp for both human and yeast. The comparison only considers the data structure, algorithms, and speeds. There are many other factors, such as accuracy and sensitivity, which are not discussed here, but which are covered elsewhere [25]. Initial alignment algorithms based on dynamic programming were applicable to the alignment of individual protein sequences, but they were too slow for efficient alignment at a genome scale. Advances in indexing helped to reduce running time. Additional improvements in index and scoring structures enabled next generation aligners to further improve alignment time. A negative correlation is also observed between the initial construction of an index and the marginal mapping time per read. b Peak memory usage plotted against the running time for different genome assemblers on a log-log plot. Assembler performance was tested using multiple genomes, including Staphylococcus aureus, Rhodobacter sphaeroides, human chromosome 14, and Bombus impatiens. Data were obtained from Kleftogiannis et al. [33]

In addition to these optimized data structures, algorithms adopted different search methods to increase efficiency. Unlike Smith-Waterman and Needleman-Wunsch, which compare and align two sequences directly, many tools (such as FASTA, BLAST, BLAT, MAQ, and STAR) adopt a two-step seed-and-extend strategy. Although this strategy cannot be guaranteed to find the optimal alignment, it significantly increases speeds by not comparing sequences base by base. BWA and Bowtie further optimize by only searching for exact matches to a seed [25]. The inexact match and extension approach can be converted into an exact match method by enumerating all combinations of mismatches and gaps.
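The seed-and-extend idea can be illustrated in a few lines. This is a toy sketch only, not the actual BLAST/BWA machinery (which uses far more sophisticated indexes, gapped extension, and scoring): it hashes exact k-mer seeds from a reference, then verifies each candidate placement by counting mismatches over the full read.

```python
def build_seed_index(reference, k=4):
    """Hash every k-mer of the reference to the positions where it occurs."""
    index = {}
    for i in range(len(reference) - k + 1):
        index.setdefault(reference[i:i + k], []).append(i)
    return index

def seed_and_extend(read, reference, index, k=4, max_mismatches=1):
    """Seed with exact k-mer hits, then extend/verify the whole read."""
    hits = []
    for j in range(len(read) - k + 1):
        for pos in index.get(read[j:j + k], []):
            start = pos - j  # implied start of the read on the reference
            if start < 0 or start + len(read) > len(reference):
                continue
            if any(h[0] == start for h in hits):
                continue     # this placement was already verified
            window = reference[start:start + len(read)]
            mismatches = sum(a != b for a, b in zip(read, window))
            if mismatches <= max_mismatches:
                hits.append((start, mismatches))
    return hits

reference = "ACGTACGTTAGCCGATAACGT"
index = build_seed_index(reference)
exact = seed_and_extend("TAGCCGAT", reference, index)   # perfect match at 8
one_mm = seed_and_extend("TAGCCGAA", reference, index)  # one mismatch at 8
```

Note that only positions sharing an exact seed with the read are ever examined, which is exactly why the strategy is fast but cannot guarantee the optimal alignment.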

In addition to changing search strategies, algorithms adjusted to larger datasets by first organizing the query, the database, or both. This involves an upfront computational investment but returns increased speed as datasets grow larger. For example, some algorithms (BLAST, FASTA, and MAQ) first build indexes for query sequences before scanning the database. On the database side, some algorithms (such as BLAST and MAQ) format the database into compact binary files, whereas others (such as BLAT, Novoalign, STAR, BWA, and Bowtie) build an offline index. STAR, BWA, and Bowtie in particular can significantly reduce the marginal mapping time (the time it takes to map a single read), but require a relatively large period of time to build a fixed index. In general, we find a negative correlation between the marginal mapping time and the time to construct the fixed index, making BWA, Bowtie, and STAR better suited to handle progressively larger NGS datasets (Fig. 3a). Much like the expansion phase observed in the S-curve trajectories that produce Moore’s law, many of these algorithms have been refined to improve performance. For example, BLAST has been heavily optimized for different datasets, producing HyperBLAST [29], CloudBLAST [30], DynamicBlast [31], and mBLAST [32], to name a few. In the case of mBLAST, researchers involved in the Human Microbiome Project commissioned the optimization of the algorithm so that the analyses could be performed on a reasonable time scale. Nevertheless, many of these alignment algorithms are not suitable for longer reads because of the scaling behavior of their seed search strategies. As long-read technologies continue to improve, there will be an ever greater need to develop new algorithms capable of delivering speed improvements similar to those obtained for short-read alignment [25].

Recently, new approaches have been developed that substitute assembly for mapping. These are not directly comparable to the mappers above, but they provide significant speed gains in certain contexts and may represent the next technological innovation in alignment. These approaches, including Salmon and Kallisto [29, 30], mostly focus on RNA-seq transcript identification and quantification, and they employ hashed k-mers and a De Bruijn graph for the task of RNA-Seq quantification. Moreover, instead of developing a base-pair resolution alignment, these approaches identify a ‘pseudoalignment’ that consists of the set of transcripts compatible with a given read.
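A drastically simplified sketch of that idea (not the real Kallisto/Salmon implementations, which use a De Bruijn graph and many optimizations): assign each read the intersection of the transcript sets of its k-mers, with no base-level alignment at all. The transcript names and sequences below are made up for illustration.

```python
def kmers(seq, k):
    """All k-length substrings of seq, as a set."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def build_kmer_index(transcripts, k=5):
    """Map each k-mer to the set of transcript names containing it."""
    index = {}
    for name, seq in transcripts.items():
        for km in kmers(seq, k):
            index.setdefault(km, set()).add(name)
    return index

def pseudoalign(read, index, k=5):
    """Intersect the transcript sets of all of the read's k-mers."""
    compatible = None
    for km in kmers(read, k):
        hits = index.get(km, set())
        compatible = hits if compatible is None else compatible & hits
        if not compatible:
            return set()
    return compatible or set()

transcripts = {
    "tx1": "ACGTACGTTAGC",
    "tx2": "TTAGCCGATAAC",
}
index = build_kmer_index(transcripts)
assignment = pseudoalign("CGTTAGC", index)  # only tx1 contains all its 5-mers
```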

In addition to read alignment, the other main computationally intensive algorithmic issue associated with the analysis of sequencing reads is the de novo assembly of a genome sequence. Many tools have been developed for assembly using short-read sequencing technology [31, 32]. The time and memory requirements are to some degree related to genome size but vary significantly between algorithms (Fig. 3b) [33]. The advent of long-read sequencing technologies such as Pacific Biosciences, Oxford Nanopore and Moleculo [34] promise high-quality sequence assemblies with potentially reduced computational costs. However, higher sequencing error rates for longer reads require novel assembly algorithms [35–38]. The main benefit is that it is possible to assemble contigs that are 10–100× larger than those assembled by traditional short-read technologies, even with lower-fold coverage (see [39] for a comparison in mammalian genomes).

Vertebrate Biology in the 21st century involves some time in the makerspace

Election Day, 2018. Bill Storm and Ekaterina Mashanova were looking at computer screens, considering the available options.

They glanced at each other and exchanged wry looks. How to choose among so many snakes?

Storm and Mashanova were looking at candidates representing the suborder Serpentes — snakes. Literal snakes. They were especially interested in the scales of the snake, and they understood that different snake species have different kinds of scales.

The two, along with teammate Ian Wilenzik, were beginning to pursue a deep understanding of a snake scale. The idea is to probe the form and function of the scale, then manufacture their own scale or set of scales, using the tools and expertise in William & Mary’s makerspace facilities in Swem Library and Small Hall.

The snake team was just one group in Laurie Sanderson’s Vertebrate Biology class. Sanderson wants to introduce her BIOL 456 students to 21st century concepts, skills and techniques.

Sanderson is a professor in William & Mary’s Department of Biology. She has been teaching Vertebrate Biology at the university since 1992. The makerspace-based projects are an enhancement of the lab component of the class. The traditional vertebrate bio lab is based on examination of preserved specimens and bones.

“But now, available biological specimens are more diverse,” she said. “The entire field is more interdisciplinary.”

Traditional dissections still happen

Sanderson’s students still do traditional dissections in lab — for example fish, often specimens obtained from the sampling work at the Virginia Institute of Marine Science. In previous years, there have been field trips, too. But they spend much of the first two-thirds of the semester’s lab work in learning about computer-aided design (CAD), image analysis, 3D scanning and other advanced techniques available to 21st century anatomists.

“The students pick projects as a kind of capstone to the lab,” Sanderson said. “Some element of vertebrate biology that they can work together to design, research and execute.”

Sanderson's own experiments with CAD and 3D-printed models of fish mouths led to a patent granted to William & Mary in 2016 for a novel filtration device. She introduced advanced-manufacturing concepts to the Vertebrate Biology lab three years ago, when the Bioengineering Lab in the Integrated Science Center acquired two 3D printers. But the expansion of the makerspace environment on campus opens a wider range of possibilities to Sanderson’s students.

The Round Table of Makerspace Student Engineers

Each Vertebrate Biology team received the benefit of makerspace guidance from MSEs — Makerspace Student Engineers. “I’m pretty sure I’m empowered to grant knighthoods,” deadpanned Jonathan Frey in introducing MSEs John Garst and Jacob Brotman-Krass, “so these guys are both Sirs.”

Frey is well into his first year as director of William & Mary’s makerspace environment in Small Hall. Sir John and Sir Jacob are just two of the members of the MSE Round Table that Frey has assembled, a fellowship devoted to offering aid to the questing pilgrims who come seeking arcane lore such as how to reverse-engineer a snake scale.

“The role of the makerspace is to facilitate student-to-student learning, intra-community learning,” Frey explained.

Garst says he is paid for his shifts in the makerspace and is on the books for 10 hours a week.

“I spent probably twice that time here, though,” he said. “Just working on my own projects and other homework. I love being in here.”

On Election Day, Garst and Brotman-Krass were on the clock, listening to Storm and Mashanova explaining that their team was interested in exploring the variation among snake scale anatomy.

“Snake scales are different,” Storm said. “Ventral and dorsal scales are different. And they’re different among snake species, too.”

Another team was working in the Small Hall Makerspace on Election Day. Call them Team Turtle. Their plan was to test the design of three kinds of turtle shells for resistance to being cracked when dropped by predatory birds.

“No turtles will be harmed in the pursuit of this project,” intoned Angie Pak, as she checked out turtle shell scans available online. Her teammate Cameron Staubs said that the idea was to replicate the shells in a 3D printer and drop them from a drone.

“Do you have any wire?” Staubs asked Garst.

“Wire? We have plenty of wire!” Garst said, opening one of the cabinets along the makerspace wall to reveal an ample supply of insulated copper in manifold gauges.

“No, I mean like a coat hanger,” Staubs said. She explained that she had found online instructions for building a drone payload-release mechanism that called for plain old hanger wire.

Starting on the internet

Like Staubs’ drone hack, the how-to portion of most of the projects began on the internet. The third member of Team Turtle, Evan Broennimann, explained that they were looking for three-dimensional scans of turtle shells that could be used to create 3D renderings for experimenting.

“We found some we liked, but there was a charge,” Broennimann said. “We needed open source or Creative Commons images.”

By late November, the projects were in advanced stages. The snake-scale team showed off a pair of scales, a product of the 3D printer in the Swem Library makerspace. The renderings of the scales were based on anatomical scans of a Mexican pit viper, Atropoides nummifer.

One of their scales had a keel; the other was smooth on both sides. They made a computer simulation showing how all scales work together as a lattice network.

“We now know how to make a network of scales — and how to analyze it,” Mashanova said. “A lot of studies view snake scales in the context of movement, but now we can also analyze them in the context of armor.”

The turtle group ended up with three turtle species for their drop test and lined them up for inspection.

“This is the leopard tortoise. This is a box turtle and this is a sea turtle,” Broennimann said. “The idea was to see which shell shape was most resistant to being dropped by a bird.”

He said the team predicted the box turtle shell would be most resistant, as box turtles are more likely than the other two species to be the object of bird predation. “The leopard tortoise is very big,” Broennimann explained. “And the sea turtle is…in the ocean!”

Their observations supported their hypothesis. “You can look at the box turtle shell and you can see it doesn’t have any fractures to the shell,” Staubs said.

The locomotion specialists

In addition to the snake scale and turtle shell teams, there was a third group of individual projects, all involving locomotion. For example, ChiChi Ugochukwu turned her interest in horses into a deep dive into horseshoes.

“My original idea, which was a bit ambitious, was to look at how different styles of horseshoes affect the way in which horses move,” she said. “It’s hard to do a project like that without actual, live horses at my disposal.”

Ugochukwu was horseless but not recourseless, as she figured out a way to adapt her project to available resources. Many projects were similarly revised. For instance, the turtle team abandoned their drone plan in favor of dropping their printed shells in the high bay laboratory in Small Hall.

Ugochukwu embarked on a study of the anatomy of the equine leg and hoof. She found that there’s an enormous artisanal quality to the shoeing of a horse.

“One of the things I’ve run into is that when a farrier shoes a horse, there aren’t any established guidelines,” she said. “It seems more like an art the farrier has perfected over time. When you look at different horses, each of them will have shoes to fit their needs.”

Charnae Holmes is a biology major and art minor. She combined her two areas of study in an investigation of cartilage, that often-troublesome connective tissue that serves as padding in joints.

“I’m interested in evolution,” she said. “I am studying how cartilage evolved in modern humans, starting with our closest relatives, chimpanzees and then Lucy and on to Homo sapiens.”

Holmes is tracing the development of cartilage in the knee and she is especially interested in the shift from quadrupedalism to bipedalism. She said she is using her studio art background to fill in gaps that science leaves.

“Cartilage doesn't fossilize,” she said. “So I’m using my artistic skills to infer the changes that have occurred.”

As humans grow, they turn most of their cartilage into bone, but some cartilage remains, especially in knees, Holmes said. “A little bit of cartilage in the knee has to support a lot of weight,” she added. “A lot of the problems we have with joints — osteoporosis and things like that — happen because of problems in that area. So, learning about the evolution of cartilage will possibly have some implications in helping people with knee issues.”

Jessica Fleury would like to improve the state of the art when it comes to prosthetic limbs. The best replacement leg, she said, is one that most closely matches your own gait.

“With a better limb, there would be less need for physical therapy and overall better quality of walking,” she said.

Fleury videotaped people walking, using a camera borrowed from the Reeder Media Center in Swem Library. She recorded their gaits in socks and in sneakers. Then she uploaded the files to her computer and took screen shots, paying close attention to the frames that showed the subject’s heels hitting the floor.

“Then I uploaded the pictures to ImageJ,” she said. Many of the students used ImageJ, a versatile image-processing program developed by the National Institutes of Health.

Fleury used ImageJ to take a set of measurements for each subject: the angles created when the feet hit the floor, stride length and so on. She found that the mechanics of walking has interesting variations.

“Everyone has a different walk,” she said. “Some people hit the floor more flat footed. Some people have a higher elevation when they hit the floor. You’d think some people who are taller would have a longer stride. Usually they do, but that’s not always the case.”
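The kind of measurement Fleury describes can be sketched in a few lines of Python. This is only an illustration, not her actual workflow: the point coordinates, the pixels-per-centimeter scale, and the helper names (`angle_deg`, `stride_length`) are all assumptions here; in practice, ImageJ's angle and line tools report these values directly from clicked points on a calibrated image.

```python
import math

def angle_deg(vertex, a, b):
    """Angle in degrees at `vertex` formed by the rays toward points a and b,
    e.g. the foot-floor angle at the heel between the toe and the ankle."""
    v1 = (a[0] - vertex[0], a[1] - vertex[1])
    v2 = (b[0] - vertex[0], b[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(v1[0], v1[1])
    n2 = math.hypot(v2[0], v2[1])
    return math.degrees(math.acos(dot / (n1 * n2)))

def stride_length(heel_strikes, px_per_cm):
    """Horizontal distance in cm between successive heel-strike positions,
    given (x, y) pixel coordinates and an image calibration factor."""
    return [abs(b[0] - a[0]) / px_per_cm
            for a, b in zip(heel_strikes, heel_strikes[1:])]

# Hypothetical digitized points from one heel-strike frame (pixels):
# heel at the origin, toe out front and slightly raised, ankle above the heel.
heel, toe, ankle = (0, 0), (120, 30), (10, 80)
foot_floor_angle = angle_deg(heel, toe, ankle)
strides = stride_length([(0, 0), (130, 2)], px_per_cm=10)
```

Comparing `foot_floor_angle` across subjects captures the flat-footed versus high-elevation strikes Fleury mentions, and `stride_length` makes the taller-walker comparison quantitative.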


A common reductionist assumption is that macro-scale behaviors can be described “bottom-up” if only sufficient details about lower-scale processes are available. The view that an “ideal” or “fundamental” physics would be sufficient to explain all macro-scale phenomena has been met with criticism from philosophers of biology. Specifically, scholars have pointed to the impossibility of deducing biological explanations from physical ones, and to the irreducible nature of distinctively biological processes such as gene regulation and evolution. This paper takes a step back in asking whether bottom-up modeling is feasible even when modeling simple physical systems across scales. By comparing examples of multi-scale modeling in physics and biology, we argue that the “tyranny of scales” problem presents a challenge to reductive explanations in both physics and biology. The problem refers to the scale-dependency of physical and biological behaviors that forces researchers to combine different models relying on different scale-specific mathematical strategies and boundary conditions. Analyzing the ways in which different models are combined in multi-scale modeling also has implications for the relation between physics and biology. Contrary to the assumption that physical science approaches provide reductive explanations in biology, we exemplify how inputs from physics often reveal the importance of macro-scale models and explanations. We illustrate this through an examination of the role of biomechanical modeling in developmental biology. In such contexts, the relation between models at different scales and from different disciplines is neither reductive nor completely autonomous, but interdependent.

The Easiest Science Majors

1. Psychology

Psychology majors study how people behave along with the influence their motivations and desires have on their behavior. The study of psychology further investigates the behavior of the individual within culture and society.

Psychology is commonly thought of as the easiest of the science majors thanks to its relative lack of complex math, although psych majors can still expect to do a fair amount of statistical analysis on their way to a degree. More social than many other science classes, psychology puts an emphasis on working with people and prioritizes skills such as critical thinking, problem solving, and communication.

While it is one of the easiest science fields to major in as an undergrad, students who want to work as a psychologist will need to continue on and earn an advanced degree like a Master’s or Doctorate—and the coursework for an advanced degree in psychology is considerably more challenging.

2. Biology

A biology major studies living organisms, including their origins, characteristics, and habits. Through their pursuit of a biology degree, students will learn how living organisms work.

Because biology is a broad field, it lacks the intensity and specific skill sets required of other science majors. It also features less math than other types of sciences, focusing on concepts, theories, and memorization rather than hard math. Perhaps this is why biology is one of the most popular science majors: in 2017-2018, nearly 6% of all undergrad degrees were granted to people studying biological and biomedical sciences.

The flexibility needed in biology also makes it less singularly focused than other science majors. Students will work both independently and in teams, and coursework takes place in the classroom, lab, and field.

3. Environmental Science

Students studying environmental science will explore how the physical and biological worlds interact. Degree holders often transition to careers focused on conservation, in positions varying from activism to consulting to research. Students might work for governmental agencies in policymaking, or go into the private sector to assess a company’s environmental impact.

Environmental science is commonly thought of as one of the easier science degrees to obtain. One of the reasons for this perception is that it is very hands-on and requires a minimal amount of complex math, at least by science major standards. Despite environmental science’s reputation, it’s still a comparatively challenging major that requires an understanding of core sciences such as chemistry, physics, biology, and geology, as well as scientific methodology.

Environmental science is popular with students who enjoy learning outside of the classroom. Although fieldwork attracts many, there are also plenty of administrative opportunities in the area for those who prefer life in an office or lab.



