Biological Molecules

by Nathan H. Lents, Ph.D., Lizzie Stark, M.S./M.F.A., Bonnie Denmark, M.A./M.S.

What do butter, beeswax, and testosterone have in common? They’re all lipids, a type of compound produced by plants and animals that includes fats and oils as well as waxes and steroids. As a group, lipids have many different functions and uses in living cells and organisms, from storing energy to regulating metabolism, signaling hormones, and providing the structure of cell membranes. They help sea otters’ fur repel water and give a waxy sheen to many plant leaves. In our daily lives, lipids provide the delicious richness in ice cream, give carrots their color, lubricate our car engines, and help clean our clothes.

If you have ever made salad dressing, seen a photograph of an oil tanker spill, or tried to clean a greasy stain with water, then you have likely noticed one of the defining factors of lipids: They do not mix well with water. Lipids are mainly composed of carbon and hydrogen atoms, and this hydrophobic ("water fearing") nature of lipids is driven by the bonds between these many carbons and hydrogens.

In a water molecule, the bonding between the oxygen and hydrogen atoms results in a polar covalent bond (see our module Water: Properties and Behavior). The electrons that form this bond are shared unequally between the atoms because oxygen atoms have a stronger pull on electrons than hydrogen does. This creates a slight negative charge at the oxygen end of the water molecule, and a slight positive charge at the hydrogen end, as shown in Figure 1.

Figure 1: Electronic distribution in H2O.

However, the bonding between carbon and hydrogen atoms in lipids is not polar. The electrons in these covalent bonds are shared equally between the carbon and hydrogen atoms, so there are no partial charges anywhere. Thus, long chains of carbon-hydrogen bonds form a nonpolar molecule.

The bonding differences between water and lipid molecules are important because “like attracts like.” As a polar solvent, water prefers to dissolve molecules with polar bonds, such as salt and sugar. Molecules with nonpolar bonds will not normally dissolve in polar solvents because there is no charge on the nonpolar molecule to attract the polar molecule. Nonpolar liquids mix with other nonpolar liquids and dissolve nonpolar solutes (the substances that are dissolved); polar liquids mix with other polar liquids and dissolve polar or charged solutes.

While lipids cannot dissolve in polar solvents, they can dissolve in nonpolar solvents – those with a balanced electron distribution – such as gasoline and chloroform. This is why lighter fluid can help remove engine grease and cooking oil stains from clothing.

As a group, lipids are a diverse collection of naturally-occurring organic compounds with important roles to play:

  • Fats and oils store energy for cells. In animals, they provide electrical insulation for nerves, and cushion internal organs.
  • Phospholipids form cellular membranes and play an important role in diffusion (see our Membranes I: Introduction to Biological Membranes module).
  • Steroids are formed from cholesterol and are involved in cellular communication.
  • Carotenoids are pigments used to help absorb light energy in plants, algae, and photosynthetic bacteria.
  • Waxes form a barrier to exclude water in both plants and animals. Waxes are found in leaves, ear canals, and the beeswax that makes honeycomb.

Without fully realizing it, humans have been performing chemical reactions with lipids for thousands of years. Soap, for example, was a very early human invention and possibly the first such innovation to be the result of a chemical reaction. There is even a recipe for making soap on Sumerian tablets dating back to 2500 BCE (Levey, 1954). In the ancient world, soap was made by first boiling rainwater with ashes from burnt wood to produce lye: a very basic, or alkaline, solution (high pH) (see our Acids and Bases: An Introduction module). Next, this solution was combined with animal fat or vegetable oil and cooked over a low fire for many hours until the mixture changed into a gel. The fundamental procedure of this chemical reaction, now called saponification, is still used today to make soap.

The first steps toward understanding lipids were taken in the early 1800s by a young French scientist named Michel Chevreul (1786-1889). Chevreul began his career in the laboratory of Louis Vauquelin, where his role was to use various solvents (such as water, alcohol, and ether) to separate the colored dye pigments from natural products like vegetable oils, waxes, tree gums, and resins. Without knowing it, he was working with various kinds of lipids (McNamara, Warnick, & Cooper, 2006).

At the end of each experiment, Chevreul would wash out the glassware using a lot of soap. While conducting his research, Chevreul observed that if he accidentally left soapy water in some glassware and it evaporated overnight, salt crystals would be left behind. He was confused by this because he had added only water (or another solvent) and soap to the glassware. It raised the question: Where was the salt coming from? Through deductive reasoning, Chevreul realized it must be the result of the soap. When he learned how soap was made by mixing animal or vegetable fat with alkali water, though, he was still confused because there was no salt in that process either.

Intrigued and persistent, Chevreul went on to study the process of soap-making in his own laboratory. As he made various kinds of soap, he observed that as oils react with the alkali water, they turn from a translucent liquid into a thick, milky pudding, which gradually hardens. At the time, he knew that oils and fats contain large amounts of carbon and hydrogen and only small amounts of oxygen. He hypothesized that the reaction with the alkali solution, which had a high pH and thus a higher concentration of hydroxide ions (OH-), was somehow adding oxygen atoms to the structure of the fats to change them from pure hydrocarbons to molecules with some salt-like properties.

This was an excellent hypothesis because it would explain two different phenomena at the same time. First, it explained the salt crystals left when soapy water dries. Second, it explained why soap is soluble in both water and oil. The hydrocarbons from the fat would still be oil-soluble, but their new salt-like properties, coming from the added oxygen atoms, would allow them to be soluble in water, a property that all salts have.

Although it took him most of his career to do it, Chevreul demonstrated that his hypothesis was correct. He did this by performing painstaking chemical analyses of various fats, oils, and the soaps that are produced when alkali is added to them. Chevreul discovered that, during saponification, some of the hydroxide (OH-) ions from the alkali solution are indeed added to the hydrocarbons from the fats. When this happens, some chemical bonds in the fat molecules are broken, releasing long-tailed fatty acids (Figure 2). Many of the names of common fatty acids that we use today were given to these molecules by Chevreul (Cistola et al., 1986).

Figure 2: The basic chemical reaction of saponification.

The reason that hydrocarbon tails from fats are not soluble in water is that almost all of the bonds are symmetrical and thus nonpolar. However, when the hydroxide ions break the ester group in fat molecules during saponification, a charged and polar group is created – a carboxylic acid group – which is very soluble in water.
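
In schematic form, with R standing for a long hydrocarbon tail (and the sodium or potassium ions supplied by the lye omitted for simplicity), the overall saponification reaction can be sketched as:

    triglyceride (fat) + 3 OH-  →  glycerol + 3 R-COO- (fatty acid salts, the soap)

In words: one fat molecule plus three hydroxide ions yields one glycerol molecule and three soap molecules.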

These fatty acids have a very special structure. They have long chains of nonpolar bonds, which make them dissolve easily in oil and grease; but they also have a polar, charged group at one end, which makes them dissolve easily in water. Thus, these molecules have a dual nature – they are both water-soluble (hydrophilic, "loves water") and oil-soluble (lipophilic, "loves fat"). The word for this is amphiphilic, which means "loves both." This is why fatty acids perform so well as soaps and detergents – they are capable of dissolving, and thus cleaning, both watery and greasy substances.

What Chevreul and others showed was that an alkali solution breaks up the fat molecules and two parts are released: glycerol and fatty acids. We now know the complete structure of the fat molecule (Figure 3).

Figure 3: A fat molecule showing its component parts: the glycerol, carboxyl groups, and fatty acids. From Harrigan, G. G., Maguire, G., & Boros, L. (2008). Metabolomics in alcohol research and drug development. Alcohol Research & Health, 31(1): 27-35. image © Harrigan, G., Maguire, G., and Boros, L.

During the process of saponification, the hydroxide ions in the alkali solution "attack" the ester group and thus release the fatty acid chains from the glycerol backbone. Chevreul was able to figure this out by analyzing the chemical composition of the fats before the reaction, and then repeating the analysis with the fatty acids that resulted. He did this again and again with different kinds of fats, which made slightly different kinds of soaps. The result was the common theme that fats are made of glycerol and fatty acids.

Animals and plants use fats and oils to store energy. As a general rule, fats come from animals and oils come from plants. Because of slight differences in structure, fats are solid at room temperature and oils are liquid at room temperature. However, both fats and oils are called triglycerides because they have three fatty acid chains attached to a glycerol molecule, as shown in Figure 3.

The carbon-hydrogen bonds (abbreviated C-H) found in the long tails of fatty acids are high-energy bonds. Thus, triglycerides make excellent storage forms of energy because they concentrate many high-energy C-H bonds into a compact structure of three tightly packed fatty acid tails. For this reason, dietary fats and oils are considered "calorie dense." When animals, including humans, consume fats and oils, a relatively small volume can deliver a large number of calories. Animals, particularly carnivores, are drawn to high-fat foods for their high caloric content.
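
To attach rough numbers to "calorie dense": by the standard dietary approximations, fat supplies about 9 kcal per gram, versus about 4 kcal per gram for carbohydrate or protein. Gram for gram, then, fat stores

    9 kcal/g ÷ 4 kcal/g ≈ 2.25 times

as much energy, and a single tablespoon of oil (about 14 g) delivers on the order of 14 g × 9 kcal/g ≈ 125 kcal.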

Triglycerides are formed inside plant and animal cells by attaching fatty acids to glycerol molecules, creating an ester linkage. This reaction is called a dehydration synthesis because a water molecule is formed by "pulling out" two hydrogen atoms and an oxygen from the reactants. Because a new water molecule is formed, the reaction is also called a condensation reaction (see Figure 4).
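
In outline, this synthesis runs in the direction opposite to the saponification reaction sketched earlier: glycerol condenses with three fatty acids, and three water molecules are released:

    glycerol + 3 R-COOH (fatty acids)  →  triglyceride + 3 H2O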

Figure 4: The dehydration synthesis reaction, where a water molecule is formed by "pulling out" two hydrogen atoms and one oxygen atom.

The reason why fats are solid at room temperature while oils are liquid has to do with the shape of the fatty acids these triglycerides contain. Remember that the fatty acids are long chains of carbon atoms that have hydrogen atoms attached. The C-H bonds are where energy is stored. At one end of the tail, fatty acids have a carboxyl group (-COOH), which gives the molecule its acidic properties (Figure 5).

Figure 5: The essential features of a fatty acid showing the long hydrocarbon chain and the carboxylic acid group.

If a fatty acid looks like the molecule above, with only single bonds between the carbons, we say that this fatty acid is saturated. This term is used because every single carbon is surrounded by as many hydrogen atoms as is possible; it is saturated with hydrogen.

However, some fatty acids have a double bond between two of the carbons in the chain. Wherever this double bond exists, abbreviated C=C, both of the carbons involved have one fewer hydrogen than the other carbons. This is because carbon normally makes only four bonds; when two carbons form a second bond between them, each must "let go" of one hydrogen so that its total number of bonds is still four. Because these fatty acids have two fewer hydrogen atoms than they otherwise would have, we call them unsaturated fatty acids (Figure 6). They are unsaturated because they do not contain the maximum number of hydrogen atoms that they could have, as the sketch below illustrates.
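
This bookkeeping reduces to a simple formula: a straight-chain fatty acid with n carbons and d C=C double bonds has the molecular formula CnH(2n-2d)O2. The short Python sketch below (our own illustration, not part of the original text) applies it to three common 18-carbon fatty acids:

```python
def fatty_acid_formula(n_carbons: int, n_double_bonds: int) -> str:
    """Molecular formula of a simple straight-chain fatty acid.

    A fully saturated chain has the formula CnH(2n)O2; each C=C
    double bond removes two hydrogens.
    """
    n_hydrogens = 2 * n_carbons - 2 * n_double_bonds
    return f"C{n_carbons}H{n_hydrogens}O2"

print(fatty_acid_formula(18, 0))  # stearic acid (saturated):        C18H36O2
print(fatty_acid_formula(18, 1))  # oleic acid (monounsaturated):    C18H34O2
print(fatty_acid_formula(18, 2))  # linoleic acid (polyunsaturated): C18H32O2
```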

Figure 6: A mono-unsaturated fatty acid.

When a fatty acid has a double bond in its chain, the chain has a "kink" in its shape because there is no free rotation around a C=C double bond. The kink is "fixed" in the structure of the fatty acid. In contrast, saturated fatty acids have free rotation around all of the single bonds in the chain, which allows the chains to extend long and straight. A comparison is shown in Figure 7.

The kinks found in unsaturated fatty acids make it so that many chains cannot pack together very tightly. Instead, the kinks force the fatty acids to push further apart. For this reason, triglycerides with unsaturated fatty acids are liquid at room temperature. Instead of packing together tightly, the molecules can slide past each other easily. The opposite is true for triglycerides with saturated fatty acids. Because their fatty acid tails are straight with no kinks, they can pack together very tightly. Thus, these molecules are more dense and solid at room temperature.

Figure 7: A comparison of a saturated fatty acid (stearic acid, found in butter) and an unsaturated fatty acid (linoleic acid, found in vegetable oil).

Animal fats are often saturated, which explains why lard, bacon fat, and butter are all solid at room temperature. Plant triglycerides, on the other hand, are typically unsaturated. This is why vegetable oils (such as canola, olive, peanut, etc.) are liquid at room temperature. Most often, unsaturated fats have only one C=C double bond and are thus called monounsaturated. However, some plants make triglycerides with multiple C=C bonds. These kinds of triglycerides are called polyunsaturated. (See Figure 8.)

Figure 8: A comparison of the bonds in a monounsaturated fatty acid (oleic acid) and a polyunsaturated fatty acid (linoleic acid).

Monounsaturated fats appear to be the healthiest triglycerides for humans to consume, because the cells that remove fats from our blood after they are absorbed from our diet do their work most quickly with monounsaturated fats. Because we are slower to remove them from our blood, saturated fats stay in our bloodstream longer and thus have a greater chance of contributing to the formation of plaques and clots. For this reason, doctors and dieticians recommend diets high in monounsaturated fats and low in saturated fats. Polyunsaturated fats are somewhere in between saturated and monounsaturated fats in terms of their healthiness in our diet (Mattson & Grundy, 1985).

Another type of fatty acid that has gotten a lot of attention recently is the trans fatty acid. Trans fatty acids have a hydrocarbon tail with a double bond that is in the trans configuration, instead of the more common cis configuration (see Figure 9).

Figure 9: A comparison of the cis double-bond configuration and the trans double-bond configuration.

As discussed above, C=C double bonds are present in the fatty acid tails of unsaturated fats. When these unsaturated fatty acids are made naturally by living cells, most often plant cells, the C=C double bonds are always in the cis configuration, almost never in the trans configuration. However, during industrial production of certain fat-containing products, the trans configuration can be inadvertently formed. This occurs when unsaturated fats, usually vegetable oils, are subjected to the process of hydrogenation in order to turn them into saturated fats (shown in Figure 10).

Figure 10: Unsaturated fats, usually vegetable oils, are subjected to the process of hydrogenation in order to turn them into saturated fats.

The purpose of industrial hydrogenation is to create solid fats, which are more desirable for deep-frying, out of vegetable oils. This is done because vegetable oils are much less expensive than naturally saturated fats such as lard. Crisco™ and margarine are two such chemically produced saturated fats made of hydrogenated vegetable oils. Crisco™, or shortening, is cheaper than lard but can be used similarly and gives a similar taste. Margarine, or oleo, was developed as a cheaper substitute for butter, particularly during the era of the World Wars and global depressions that marked the first half of the 20th century, when rationing and scarcity of staples were common. Today, many packaged desserts and candies also contain these kinds of industrially produced saturated fats, which often cost less than natural saturated fats but provide better texture and firmness than unsaturated fats. During hydrogenation, occasionally the chemical reaction does not go to completion, and the process of turning a cis unsaturated fat into a saturated fat creates a trans fat instead.

In recent years, trans fats have received a lot of attention from dieticians and the general public because of their association with elevated health risks. Individuals with diets higher in trans fats are more likely to develop coronary heart disease, suffer heart attacks and stroke, and die earlier than those with diets low in trans fats (Mensink & Katan, 1990). It was always known that hydrogenation produces some trans fats, but because they are not acutely toxic, their long-term health dangers are only now being realized.

Scientists have discovered the reason for these elevated risks: Trans fats spend a much longer amount of time in our bloodstream after we consume them, instead of being quickly absorbed into our cells. Unlike saturated fats and cis unsaturated fats, trans fats don't appear in nature in very large amounts – they are an "unnatural" form of fat which humans are not well designed to consume. Because humans only began to eat trans fats in the 20th century (other than the very tiny amounts that are present in some forms of red meat), we do not have receptor molecules in our blood vessels that seek out these trans fats and remove them from the bloodstream. Thus, when we consume trans fats, they persist in our bloodstream for a very long time, compared to natural forms of fat. The longer these molecules spend in our bloodstream, the more they can contribute to the formation of clots, plaques, and hardened arteries. For this reason, the United States Food and Drug Administration has recently made a preliminary determination that trans fats are “not generally recognized as safe,” a determination that will likely lead to a complete ban on their presence in foodstuffs (Brownell & Pomeranz, 2014).

Perhaps the most important and basic function of lipids in living cells is in the formation of cellular membranes. All cells, from the most basic bacterium to those that form the most specialized human tissues, are surrounded by a plasma membrane made of lipid molecules. For more detail, see the Membranes I: Introduction to Biological Membranes module.

The lipids that form membranes are a special type called phospholipids (Figure 11). They are so named because they have a characteristic phosphate group (PO4). Like triglycerides, the central structure of a phospholipid is the glycerol molecule. However, phospholipids have two fatty acid tails attached to the glycerol, whereas triglycerides have three. On the remaining carbon of the glycerol, a large, charged, phosphate-containing group is added.

Figure 11: A phospholipid. image © OpenStax College

This distinctive head group gives phospholipids their unique properties. Like fatty acids, the presence of a hydrophobic tail and a hydrophilic head means that phospholipids are amphiphilic. This distinctive structure leads to a very peculiar behavior by phospholipids – the spontaneous formation of bilayers. When phospholipid molecules are placed into an aqueous (water-based) solution, they will arrange themselves into sphere-shaped structures in which the surface of the sphere is a double layer of phospholipids. While the hydrophilic head groups are attracted to the water in the surrounding solution, the hydrophobic tails are repelled by it and attracted to each other. This means that the most “comfortable” arrangement for the phospholipids to take is to tuck their tails together in a water-free interior space, with the polar head groups facing out, interacting with water (Figure 12) – this is called a micelle. For more detail, see our Membranes I: Introduction to Biological Membranes module.

Figure 12: Three of the different structures phospholipids can form in an aqueous solution: micelle, liposome, and bilayer sheet. In this depiction, the hydrophilic heads are round and white and the hydrophobic tails are yellow wavy lines.

Another class of lipid molecules that is important in cells is the steroids, also called sterols. Unlike triglycerides and phospholipids with their long hydrocarbon tails, steroids consist of four fused carbon rings, as shown in Figure 13. As you would expect because of all of the nonpolar C-H bonds, steroids are not soluble in water.

Figure 13: The generic structure of a steroid molecule and the structure of cholesterol.

The most fundamental steroid molecule is cholesterol, because all of the other steroids are made from it. Cholesterol has its own functions as well. For example, in animal cells, cholesterol is embedded in cell membranes to give them fluidity and to prevent them from solidifying in cold temperatures. Plants contain molecules similar to cholesterol called phytosterols that perform similar functions.

Cholesterol was named by Michel Chevreul in 1815, who found that human gallstones contain a large amount of this lipid. A century later, Adolf Windaus and Heinrich Wieland confirmed that the liver makes cholesterol, although they deduced its structure incorrectly. They received Nobel Prizes (in 1928 and 1927, respectively) for their work on cholesterol and the bile acids, which are made by the liver and used to dissolve dietary fats so that they can be absorbed by the intestines. The correct structure of cholesterol wasn't confirmed until 1945, when Dorothy Crowfoot Hodgkin used the then-new technique of X-ray diffraction (see Figure 14) to reveal the precise arrangement of the four-ring structure (Bloch, 1982).

Figure 14: An X-ray diffraction pattern. image © Jeff Dahl

There are many other steroids, but all of them, by definition, are cholesterol derivatives (Figure 15). That is, they are made using cholesterol as the starting material. Many of these steroids are hormones, such as the sex steroids estrogen, progesterone, testosterone, and their cousins. Other steroid hormones include cortisol and aldosterone.

Figure 15: A chart of the steroid hormones and their biosynthetic relationships. image © David Richfield and Mikael Häggström

Although these hormones all perform widely differing functions in the body, they have a strikingly similar structure. This common structure means that they have a similar mechanism of action. Steroid hormones are released by glands and then travel throughout the body, where they exert their actions by binding to their receptors inside of cells and then activating or de-activating genes. The power of steroid hormones lies in their lipid nature, which allows them to cross biological membranes easily. Thus, a hormone produced in one tissue will quickly and easily diffuse throughout the entire body, passing through cells as easily as oxygen and carbon dioxide do (see Figure 16).

Figure 16: A steroid hormone receptor's mechanism of action. image © Designua

Several other sorts of compounds are grouped in with the lipid family because they are insoluble in water.

The pigments that give some plants their orange and yellow color (e.g., carrots and summer squash) are carotenoids. They contain branching five-carbon chains called isoprene units (see Figure 17). Animals are able to break down these molecules into vitamin A, which may then be used to produce retinal, a pigment necessary for eyesight.

Figure 17: Isoprene units contain branching five-carbon chains. Animals are able to break down these molecules into vitamin A.

Waxes appear in many different living things, providing the natural coating on some leaves and fruits, the sheen on the feathers of some birds, the shine on human hair, and the protective secretions in our ear canals. Like triglycerides, waxes are esters of fatty acids, consisting of an alcohol molecule bonded to fatty acids through an ester linkage. Wax is strongly hydrophobic, and thus serves as an effective water repellant. In addition, the fully saturated hydrocarbon chains of wax molecules make them solid at room temperature, like the saturated fats discussed earlier (see Figure 18).

Figure 18: A wax molecule showing the long-chain alcohol and fatty acid.

Lipids play a role in eyesight, nerve tissue, vitamin absorption, the endocrine system, and many other body functions. Scientists have known that some fat is carried in the bloodstream ever since the late 1600s, when researchers examined the blood of animals that had just eaten a fatty meal and discovered that it briefly turned milky and yellowish. Now it’s clear that an excess of cholesterol in the blood can lead to deposits called plaque in artery walls, which increases a person’s risk of heart attack. Research into these fatty plaques has revealed that trans fats strongly exacerbate their formation, given how much longer they persist in the bloodstream. In addition, chemicals from cigarette smoke have been shown to increase the inflammatory response that gradually turns these fatty deposits into plaques and then into obstructive clots. Fortunately, arterial plaques are dynamic, and their formation can be reversed by stopping smoking and transitioning to a diet lower in cholesterol, saturated fats, and trans fats.

Ongoing research in lipid chemistry advances medical knowledge as we seek to understand and treat high cholesterol, heart disease, hormone disorders, thyroid disease, fatty liver disease, multiple sclerosis, autism spectrum disorder, macular degeneration, Guillain-Barré syndrome, and other conditions.

Fats, oils, waxes, steroids, certain plant pigments, and parts of the cell membrane – these are all lipids. This module explores the world of lipids, a class of compounds produced by both plants and animals. It begins with a look at the chemical reaction that produces soap and then examines the chemical composition of a wide variety of lipid types. Properties and functions of lipids are discussed.

Key Concepts

  • Lipids are a large and diverse class of biological molecules marked by their being hydrophobic, or unable to dissolve in water.

  • The hydrophobic nature of lipids stems from the many nonpolar covalent bonds. Water, on the other hand, has polar covalent bonds and mixes well only with other polar or charged compounds.

  • Fats and oils are high-energy molecules used by organisms to store and transfer chemical energy. The distinct structures of different fat molecules give them different properties.

  • Phospholipids are specialized lipids that are partially soluble in water. This dual nature allows them to form structures called membranes which surround all living cells.

  • Bloch, K. (1982). The structure of cholesterol and of the bile acids. Trends in Biochemical Sciences, 7(9), 334-336.
  • Brownell, K. D., & Pomeranz, J. L. (2014). The trans-fat ban: Food regulation and long-term health. New England Journal of Medicine, 370(19), 1773-1775.
  • Cistola, D. P., Atkinson, D., Hamilton, J. A., & Small, D. M. (1986). Phase behavior and bilayer properties of fatty acids: hydrated 1:1 acid-soaps. Biochemistry, 25(10), 2804-2812.
  • Levey, M. (1954). The early history of detergent substances: A chapter in Babylonian chemistry. Journal of Chemical Education, 31(10), 521-524.
  • Mattson, F. H., & Grundy, S. M. (1985). Comparison of effects of dietary saturated, monounsaturated, and polyunsaturated fatty acids on plasma lipids and lipoproteins in man. Journal of Lipid Research, 26(2), 194-202.
  • McNamara, J. R., Warnick, G. R., & Cooper, G. R. (2006). A brief history of lipid and lipoprotein measurements and their contribution to clinical chemistry. Clinica Chimica Acta, 369(2), 158-167.
  • Mensink, R. P., & Katan, M. B. (1990). Effect of dietary trans fatty acids on high-density and low-density lipoprotein cholesterol levels in healthy subjects. New England Journal of Medicine, 323(7), 439-445.

Nathan H Lents, Ph.D., Lizzie Stark, M.S./M.F.A, Bonnie Denmark, M.A./M.S. “Lipids” Visionlearning Vol. BIO-4 (1), 2014.


Ecology

by Devin Reese, PhD.

“It’s no mystery why indigenous groups are so adept at protecting biodiversity. For generations, we have accumulated intimate and detailed knowledge of the specific ecosystems where we live. We know every aspect of the plant and animal life, from mountain-tops to ocean floors.”

– Victoria Tauli-Corpuz, the UN’s Special Rapporteur for Indigenous Peoples, 2019

Step outside and spend a few minutes looking around. Make a rough count of how many different types of living things (including humans) you see. Look closely. Include tiny things like mosquitoes, moss, or mites. If you don’t know what it is, that’s fine. Just count them up. By counting, you have taken a step towards understanding the biodiversity around you. You are making an approximation of how many species—types of organisms able to breed with each other—live in your neighborhood.

Ask yourself a few questions: How many types of living things did you find? Which types are the most common? Why might they thrive while others don’t? These questions are at the core of understanding biodiversity and the factors that determine it.

The term “biodiversity,” a contraction of “biological diversity,” refers to the variety of life on Earth. The term stems from the Greek word bios (life) and the Latin word diversitas (difference or variety). In combination, the two words describe the enormous range of living things from tiny bacteria to the largest animal, the Antarctic blue whale, or an even larger organism called a honey fungus that grows to several miles in diameter (Casselman, 2007).

The human understanding of biodiversity likely began long ago. Our hunter-gatherer ancestors would have needed to be aware of the diversity of plant and animal life they depended on for survival (Tallavaara et al., 2017). By the 300s BCE, the Greek philosopher Aristotle observed that plants and animals could be sorted into groups based on how they looked and behaved. His work led to the approach we use today to classify and assign scientific names to living things (see our Taxonomy I: What's in a name? module).

Since Aristotle’s time, we’ve come a long way in describing biodiversity. The official definition is “the variability among living organisms from all sources, including, inter alia, terrestrial, marine and other aquatic ecosystems and the ecological complexes of which they are part; this includes diversity within species, between species and of ecosystems” (Convention on Biological Diversity, 2006). Biodiversity includes the variety of living organisms, the diversity of genes they carry, and the variety of ecosystems in which they live. This official definition includes three levels of biodiversity: species diversity, genetic diversity, and ecosystem diversity.

Species diversity is the most commonly measured level of biodiversity. Current estimates suggest that between 5 million and 10 million living species exist on Earth (Costello et al., 2013; Wilson, 2018). Why is there such a huge range in the estimates? To date, about 2 million species have been accounted for, meaning they have been assigned formal scientific names by the people who discovered them. Based on the rates of naming of new species, the majority have yet to be discovered. While scientists think they have identified nearly all bird and mammal species, there are millions of species of fungi, bacteria, and other organisms that have yet to be identified. For example, the approximately 100,000 known fungi are thought to be less than ten percent of existing species (Sigwart et al., 2018). So, estimates of 5 to 10 million total species on Earth are based on the rate of discovery of new species and projections of how many more are likely to turn up.

Counting all the species on Earth is no simple task. Think about the magnitude of 5 to 10 million species compared to the total from your own local count.

One of the earliest published estimates of global species diversity was made in 1982 by American biologist Terry Erwin. He wondered how many species of beetles and other arthropods (invertebrates with jointed legs) lived in the tropics. Erwin “fogged” 19 tropical trees with insecticides and counted nearly 1,200 species of beetles that fell out. From his observations and counts, Erwin noted various beetles’ dependence on particular tree species. Scaling up from an estimate of about 50,000 species of tropical trees, Erwin came up with a staggering tally of 30 million species of beetles and other tropical arthropods (Erwin, 1982).
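
Erwin’s tally rested on an explicit chain of assumptions, each of which has since been questioned. The Python sketch below reconstructs the arithmetic as it is commonly recounted; the parameter values are illustrative of his reasoning, not definitive:

```python
# Illustrative reconstruction of Erwin's (1982) extrapolation.
beetles_specific_per_tree = 163   # beetle species assumed unique to one tree species
tropical_tree_species = 50_000    # estimated number of tropical tree species

canopy_beetles = beetles_specific_per_tree * tropical_tree_species  # ~8.15 million
canopy_arthropods = canopy_beetles / 0.40   # beetles taken as ~40% of arthropod species
total_arthropods = canopy_arthropods * 1.5  # canopy assumed twice as rich as forest floor

# Prints ~31 million; Erwin rounded this kind of figure to 30 million.
print(f"~{total_arthropods / 1e6:.0f} million tropical arthropod species")
```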

While many of the assumptions behind Erwin’s estimates are debated, such as the degree to which beetles are specialized to certain trees, his work spawned a flurry of interest in tallying up all the species on Earth (Ødegaard et al., 2000). Scientists all over the world are collectively trying to figure out global species diversity. Fogging and other collection techniques are still used today, but judiciously and alongside other methods that are less destructive. For example, insects are sampled by attracting them to lights and netting, after which they can be released (Montgomery et al., 2021).

Today, in calculating species diversity, scientists include not only “richness” (the number of different species counted) but also “abundance” (how many individuals of each species are counted). Relative abundance gives you information about a species’ influence on the ecosystem. For example, while an individual grass plant may have a small impact on the characteristics of an ecosystem relative to an oak tree, the sheer abundance of grasses in a meadow ecosystem makes it an excellent habitat for animals like grass snakes and voles.
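
To make richness and abundance concrete, here is a minimal Python sketch (with invented survey counts) of two standard summaries: species richness, and the Shannon diversity index, which weights each species by its relative abundance:

```python
import math

# Hypothetical survey of a meadow: species -> individuals counted
counts = {"meadow grass": 950, "oak": 2, "vole": 30, "grass snake": 3}

richness = len(counts)  # number of distinct species
total = sum(counts.values())
shannon = -sum((n / total) * math.log(n / total) for n in counts.values())

print(richness)           # 4
print(round(shannon, 2))  # ~0.17: low, because grass dominates the sample
```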

Indigenous Peoples’ ways of knowing are particularly valuable in estimating biodiversity. In fact, research has shown that indigenous and other local knowledge about biodiversity is as accurate as data collected via Western science techniques (Danielsen et al., 2014). Indigenous homelands tend to have high biodiversity because of the ways they are managed to sustain natural resources that people depend on directly. For example, in New Zealand, Māori whale expert Ramari Oliphant Stewart was mentored in the natural environment by her elders from the Ngāti Awa, Rongomaiwahine, and Ngāti Mahuta tribes. At age 10, she became a “whale rider”, which signifies someone with special knowledge about and relationship to whales (Morris, 2020).

Discovering new species and adding them to the tally of biodiversity on Earth requires continued global collaboration across different communities and knowledge keepers. A project called The Encyclopedia of Life (EOL) is cataloging all living species into an open-source biodiversity information repository that anyone can add to and access. The Map of Life (MOL) project is a similar collaborative effort to map the locations of every species in the world.

However, biodiversity goes even deeper than the species level.

Cataloging species by the way they look is a reasonable way to understand Earth’s diverse ecosystems. However, genes “code for” (determine) the very characteristics that set species apart from one another (see our DNA II: The Structure of DNA module). As the raw material for natural selection, genes are the building blocks of species diversity as it changes over time (Hughes et al., 2008). All the variability that makes life capable of adapting to changing environmental conditions has accumulated within the pool of DNA. This is genetic diversity (Convention on Biological Diversity, 2021).

Genetic diversity helps buffer species against environmental change by ensuring that at least some individuals survive disease or other catastrophes. It’s like keeping money in different places to buffer against change (Lynch, 2016). You might keep some at home, some at the bank, and maybe some in a car or other location. If your home is robbed or the bank fails, you still have part of your money elsewhere. Similarly, in a population of organisms with high genetic diversity, some are likely to be resistant to a particular disease or parasite and survive to reproduce and ensure the continuation of the species.

The consequences of losing genetic diversity are apparent in many areas, including agriculture. As industrial farms have worked to identify and use high-performing crop varieties, they have also reduced the genetic diversity of industrial crops. As a result, industrial crops are at a much higher risk of being wiped out by disease or parasites. For example, the large-scale loss of corn to the Southern Corn Leaf Blight epidemic of 1970-71 brought attention to the importance of genetic diversity. One billion dollars’ worth of U.S. corn was wiped out by a fungal infection because the corn genes had become so homogeneous (lacking diversity) that most of the crop lacked resistance to the disease caused by the fungus (Bruns, 2017).

The perils of losing genetic and species diversity highlight the importance of being able to measure and track them. While research on DNA dates to the late 1800s, the first successes in determining the actual DNA sequences of genes came in the 1970s (Jou et al., 1972). Building on these advances, in 2003 Canadian molecular biologist Paul D. N. Hebert developed a technique called DNA barcoding that identifies species from a short segment of the genetic code (Hebert et al., 2003). A DNA barcode is a genetic signature of an organism. It’s like the codes you can scan to read the price of a product or look at a restaurant menu, except a DNA barcode provides information about the DNA of organisms (Figure 1).
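
Conceptually, identification by barcode is a nearest-match lookup against a reference library. The toy Python sketch below uses invented ten-base sequences; real barcoding compares a standardized gene region of several hundred bases against curated databases (such as iBOL’s) using sequence alignment rather than this simple position-by-position score:

```python
# Toy barcode lookup; the reference sequences are invented for illustration.
reference = {
    "Apis mellifera":   "ATTGGCTCAT",
    "Danaus plexippus": "CTTGCCAGAT",
}

def hamming(a: str, b: str) -> int:
    """Number of positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def best_match(query: str) -> str:
    """Return the reference species with the most similar barcode."""
    return min(reference, key=lambda species: hamming(reference[species], query))

print(best_match("ATTGGCTCAC"))  # Apis mellifera (differs at only one position)
```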

Figure 1: Depiction of the relationship of a short DNA barcode to the entire DNA molecule from an individual of a species. image © CC-SA Larissa Fruehe

Hebert heads the International Barcode of Life (iBOL) Consortium, an international group of scientists that aim to collect the genetic signature, or barcode, of every species on Earth. It’s like the Encyclopedia of Life, but catalogs DNA rather than other features of organisms. The iBOL database makes genetic diversity information openly available to anyone who wants to access it.

Another way to view biodiversity is at the level of ecosystems. An ecosystem is a community of organisms interacting with their physical, or nonliving, environment. Ecosystem diversity refers to the variety of ecosystems that exist in a defined area, something visible to early naturalists.

In the early 1800s, Prussian explorer Alexander von Humboldt laid the foundation for understanding ecosystem diversity, inspired by his expedition to the American tropics. Humboldt’s Tableau Physique (1807) was one of the first formal attempts to delineate biodiversity at an ecosystem level. As shown in Figure 2, he mapped plant species in the Andes Mountains, showing how they changed with altitude.

Figure 2: Humboldt’s mapping of vegetation zones in the Andes published in Berghaus, 1851, Physikalischer Atlas. image © Public Domain

Humboldt’s mapping was ahead of its time. Yet, scientists today recognize the limitations of his mapping, particularly in finding exact upper and lower limits of vegetation types (Moret, 2019).

Compared to species or genetic diversity, ecosystem diversity is harder to measure. The boundaries of most ecosystems are not a sharp line, but instead a gradual transition from one community of organisms to another (Cofrin Center for Biodiversity). A city ecosystem might have an obvious boundary, say between a park and a road, or a coastal area between land and sea. But typically, ecosystem boundaries are less clear. To test this out, look online for an aerial photo of the region you live in, and try to draw lines delineating the ecosystem boundaries.

Regardless of whether you look at biodiversity through a species, genetic, or ecosystem lens, it invites questions: What creates patterns of biodiversity? Why is one area more diverse and another less diverse? As you will see in the next section, the most visible global pattern in biodiversity is how it differs across latitude.

Take a look at the map below (Figure 3). Red areas indicate more species. What do you notice about the distribution of species on Earth?

Figure 3: Map of global biodiversity. image © CC BY: Mannion, P. D., Upchurch, P., Benson, R. B. J. & Goswami, A., based on work by Clinton Jenkin

For at least two centuries, naturalists have noted that biodiversity increases as you go from the poles to the tropics. This is called a Latitudinal Diversity Gradient (LDG). Inspired by the biodiversity he saw in the Andes, Humboldt mapped the first isothermal (temperature) bands onto the globe. In 1817, he published a map which, while based on limited data, showed how temperatures change over the globe (Klein, 2018; Humboldt, 1817). Building on Humboldt’s mapping, in 1876, British naturalist Alfred Russel Wallace reported, “Animal life is, on the whole, far more abundant and more varied within the tropics than in any other part of the globe, and a great number of peculiar groups are found there which never extend into temperate regions.” (Wallace, 1876; Dowle et al., 2013).

The tropics lie close to the equator (within 23.5 degrees north or south of it), while the temperate zones lie farther away (between the Tropic of Cancer and the Arctic Circle, or between the Tropic of Capricorn and the Antarctic Circle). Since Humboldt’s work, the LDG has become an accepted part of the scientific understanding of biodiversity. The LDG established that biodiversity is concentrated near the equator (that is, at lower, tropical latitudes). This is true whether you count species on land or in water, and it is true across all kinds of life, from single-celled organisms to plants and animals. Tropical rainforests house more than half of the world’s known species, despite covering just seven percent of Earth’s land surface (Primack and Morrison, 2013). Judging from fossils, the LDG is a pervasive pattern of life on Earth. In fact, fossil evidence suggests it has existed for 270 million years or more.

But the underlying question remains: Why do the tropics have higher biodiversity? Many hypotheses have been proposed, and scientists are still grappling with this key question. For example, in wondering what drives the LDG pattern, Chinese geobiologist Haijun Song and colleagues mapped the latitudes of more than 50,000 marine fossils described in a database. They identified a 5-million-year period with no LDG beginning about 252 million years ago. During this period, levels of biodiversity were similar from the poles to the equator. Song attributes the pattern to intense global warming (a greenhouse interval) that overheated the tropics and forced more animals poleward (Song et al., 2020).

Song’s results support a hypothesis that heat drives the LDG.

As early as the 1960s, scientists recognized that tropical ecosystems cycle nutrients more quickly than temperate ecosystems; that is, nutrients like nitrogen move through the tropical environment faster, as various studies have demonstrated (Vitousek and Sanford, 1986). Nutrient cycling requires energy, which comes from sunlight (see our modules The Carbon Cycle, The Nitrogen Cycle, and The Phosphorus Cycle). More year-round sunlight near the equator means a larger energy supply for plants to take up nutrients and grow. Plants are at the base of food webs (the connections among all food chains in a single ecosystem), making their food through photosynthesis. As a result, their productivity, known as primary productivity, is essential to supporting other organisms.

Primary productivity is measured in various ways, such as calculating total plant biomass or measuring the carbon that plants incorporate from photosynthesis. Figure 4 shows the biomass of plants in different types of ecosystems. What do you notice about the biomass in tropical ecosystems (starting at the left side of the graphic) compared to other ecosystems?

Figure 4: Graphic showing the relative amount of carbon stored in plant biomass across ecosystems. USDA Forest Service based on data from Scharlemann et al. (2014). image © Public Domain

Measures of primary productivity show that it is about twice as high in the tropics as elsewhere. And productivity is not just about the sunlight available for photosynthesis. With more sunlight comes heat, so climates near the equator are hotter. According to Kinetic Molecular Theory, atoms and molecules are in constant motion and move faster when they are warmer (see our Kinetic-Molecular Theory module). When molecules have more energy, the chemical processes that affect biological processes, like those regulating growth and reproduction, also go faster. This helps explain the high plant productivity of the tropics (University of Southern California, 2008).
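
Biologists often summarize this temperature sensitivity with the Q10 coefficient: the factor by which a reaction rate increases for every 10°C rise in temperature. As a rough illustration (a Q10 of about 2 is a commonly used ballpark for biological processes, not a universal constant):

    rate(T2) = rate(T1) × Q10^((T2 − T1)/10)

With Q10 = 2, a process running at 30°C would go about 2^((30−20)/10) = 2 times as fast as the same process at 20°C.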

This speeding up of tropical ecosystem processes may also cause the quicker evolution of new species. Studies have found that the DNA molecules making up genes evolve faster in the tropics. Changes in DNA may ultimately result in new species (called “speciation”), which adds to biodiversity. Some scientists, therefore, call the tropics a “cradle” for biodiversity (Jablonski et al. 2006).

So, if conditions in the tropics speed up nutrient cycling, productivity, and evolution, the outcome is more species diversity. But, even if we can explain why more species arise in the tropics, why don’t they spread out into other areas?

Biologists propose that environmental conditions keep species from spreading out of the tropics. Many species have a long evolutionary history of living in the tropics. If they are adapted to a warm, humid climate, they might not tolerate other conditions (see our Adaptation: The Case of Penguins module). The outcome is a wealth of biodiversity in the tropics that has adapted to tropical conditions and cannot live elsewhere (Brown, 2014).

Studies of how organisms are distributed provide evidence that supports this hypothesis. For example, Iranian biologist Sana Sharifian studies the geographic distribution of mangrove crabs. She wondered whether she could predict where different species live based on environmental factors like sea surface temperature and other ocean conditions. Using more than 8,000 records of where mangrove crabs have been found, Sharifian calculated species richness and plotted it by latitude (Sharifian et al., 2020). In Sharifian’s graphic (Figure 5), blue dots represent the number of mangrove crab species, while colored bands represent temperature. Would you say that sea surface temperature is a good predictor of where species of mangrove crabs live?
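
The richness-by-latitude calculation behind a plot like this is straightforward to sketch. Given occurrence records, each a species name plus the latitude where it was observed, count the distinct species in each latitude band. The Python sketch below uses invented records; Sharifian’s analysis drew on more than 8,000 real ones:

```python
from collections import defaultdict

# Invented occurrence records: (species, latitude in degrees; N positive, S negative)
records = [("crab A", 5.2), ("crab B", 7.9), ("crab A", -6.1),
           ("crab C", 21.4), ("crab A", 35.0)]

bands = defaultdict(set)  # 10-degree band of absolute latitude -> species seen there
for species, lat in records:
    band = int(abs(lat) // 10) * 10  # e.g., 5.2 and -6.1 both fall in the 0-10 band
    bands[band].add(species)

for band in sorted(bands):
    print(f"{band}-{band + 10} degrees: richness = {len(bands[band])}")
```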

Figure 5: Map of numbers of species of mangrove crabs (blue dots) against sea surface temperatures (colored bands). image © Sharifian, S., Kamrani, E., & Saeedi, H. (2020)

Mapping species richness by latitude revealed that the highest mangrove crab diversity is in tropical waters, especially in the Indo-West Pacific, indicating that temperature is the best predictor of where they live.

Species accumulating in the tropics eventually spread into higher latitudes as they evolve adaptations for cooler climates. For example, American paleontologist David Jablonski examined fossils of marine bivalves (two-shelled clams, oysters, etc.) from the past 11 million years and plotted where and when each species originated. He found that the tropics have been “an engine of global biodiversity,” producing most of the new bivalve species, which then expanded their ranges towards the poles over thousands of years. But, even as their ranges expanded, nearly all of them continued to live in the tropics. In Jablonski’s view, this makes the tropics both a “cradle” (where species arise) and a “museum” (where species remain) for biodiversity (Jablonski et al., 2006).

Besides the stable, warm conditions of the tropics, their high biodiversity may also relate to their complexity.

Because they support high plant diversity, tropical areas have more variety of habitats (heterogeneity). A tropical forest is made up of multiple layers of plant species that differ as you move from the ground to the tree canopy. Within an ecosystem, each organism has a habitat niche, defined by the resources it uses. This layering may support high biodiversity by providing more unique niches for species.

For example, American biologist Jonathan Huie is one of a group of biologists working to understand how animals in the tropics reduce competition by occupying different habitats. As a graduate student, he examined the features of tropical anole lizards and categorized them according to their lifestyle—how they use the habitats (Huie et al. 2021).

The diagram (Figure 6) shows where you find species of anole lizards in a tropical forest ecosystem. What might you conclude about how they share the habitat?

Figure 6: Diagram showing where different species of anole lizard are found in tropical forests. image © CC BY: Eva Horne, modified from Williams et al., 1983

On both islands and mainland South America, scientists like Huie find that anole lizard species sort out into lifestyles of ground, grass-bush, trunk-ground, trunk, trunk-crown, twig, and crown-giant. By using different parts of the habitat, species rely on unique sets of resources. And the more heterogeneous a habitat, the more species can share it, that is, the more biodiversity.

Scientists still debate whether the high biodiversity of the tropics, and its decline as you head towards the poles, is due to light, temperature, stability, heterogeneity, or other factors. Explaining the LDG is a challenge that involves many scientific fields. Biologists, ecologists, geologists, and other specialists continue to gather evidence.

Find your location on a map and note your latitude. Does latitude explain the biodiversity around you? Think about both the count you did outside where you live and the life you see in your region. Note what else (besides latitude) may help explain the pattern of your local biodiversity.

“Islands are tumultuous places; raised from the oceans or divided from continents, they undergo change at a pace faster than most other biomes. The species that colonize and persist upon islands react and adapt to this constant change.”

– James C. Russell, 2019

Islands, as fragments of land surrounded by the ocean, are a special case when it comes to biodiversity. R. H. MacArthur and E. O. Wilson (1967) proposed the Theory of Island Biogeography, which states that biodiversity should increase with island area and with closeness to other landmasses. Since islands are separated by ocean waters and not all species can fly, float, or swim across, more isolated islands should have fewer species. And smaller islands should have fewer species because they offer a lower diversity of resources. Thus, you would expect the smallest, most isolated islands to have the lowest biodiversity.
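
The area half of the theory is often formalized as the species-area relationship (a classical result associated with the theory, stated here as a rough rule rather than a law):

    S = c × A^z

where S is the number of species, A is the island’s area, and c and z are constants fitted to data (for islands, z is typically reported around 0.2 to 0.35). With z = 0.3, for example, a tenfold larger island would be expected to hold about 10^0.3 ≈ 2 times as many species.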

The predictions of Island Biogeography Theory have proved correct in most circumstances but fail to explain the whole picture. For example, consider the Hawaiian Islands. Hawai’i is the biggest island in the archipelago (a collection of islands), and all the Hawaiian Islands are very far (roughly 4,000 km, or 2,500 miles) from mainland North America. Based on Island Biogeography Theory, what would you predict about the biodiversity on Hawai’i compared to its increasingly smaller neighbors of Maui, Oahu, and Kauai? Do data on island size plotted against species richness support your prediction (Figure 7)?

Figure 7: Species richness plotted against size for four of the islands in the Hawaiian archipelago. Adapted from Craven et al., 2019. image © Craven et al., 2019

Island Biogeography Theory predicts that the biggest island, Hawai’i, would have the highest biodiversity. In fact, Hawai’i has the lowest biodiversity, with species richness increasing as the islands get smaller. While Island Biogeography Theory is accurate in many circumstances, scientists are coming to understand other important factors. Estonian ecologist Madli Jõks modeled the expected species richness on groups of islands and found factors besides island size to be important (Jõks and Pärtel, 2018). In the case of Hawai’i, island age comes into play. The smallest islands are older, having formed earlier from volcanoes building up from the ocean floor. Their higher biodiversity can be explained by more time for species to colonize them.

"Biodiversity is an essential heritage for all humankind...Stopping its loss, and guaranteeing the continued functioning of the earth's ecosystems− both marine and terrestrial− should be a high priority for everyone."

– United Nations Secretary General Kofi Annan, 2003

Why do we care about biodiversity?

Losing species means losing the interactions that they have with other species, which can lead to a cascade of species losses (Valiente-Banuet et al., 2014). For example, local extinctions of wolves in Yellowstone National Park resulted in fewer predators for elk populations, causing them to grow. The growing elk populations reduced the streamside willows they graze on. As a result, beavers no longer had the slow-moving water around willows that they rely on and disappeared from Yellowstone. So, the loss of a single species may have far-reaching effects on an ecosystem. Supply of water, formation of soil, cycling of minerals, and maintenance of climate are among other ecosystem services that may be disrupted.

For example, Brazilian ecologist Julia Astegiano finds that as habitats get degraded, the diversity of pollinators like bees goes down. The loss of pollinator diversity leads to shifts in plant diversity. This can result in “community collapse,” where just a fraction of the former species survives (Astegiano et al., 2015). Due to the loss of insects, plants, and other species, agricultural and urban areas tend to have lower biodiversity than wild areas (Rogan and Lacher, 2018).

The significance of biodiversity was first acknowledged broadly when Ghanaian Kofi Annan, then U.N. Secretary-General, called for the Millennium Ecosystem Assessment, completed in 2005. The assessment detailed the effects of ecosystem change on humans and concluded that biodiversity and human well-being are inextricably linked (Millennium Ecosystem Assessment, 2005).

Since then, many have invested in studying and conserving the complex living system that is biodiversity. As we continue to learn more about what determines biodiversity, we become better equipped to manage it. But sustainably managing the diversity of life does not mean there will be no changes. Rather, it calls for an intentional approach to tracking and managing change.

Since the time of hunter-gatherers, human beings have been aware of how the wellbeing of plants and animals dictates our ability to survive. This module explores the strides we’ve made in understanding biological diversity (biodiversity) and how it impacts our ecosystems.

Key Concepts

  • On the basis of physical characteristics, genetic markers, and interactions documented through multiple methods, scientists define biodiversity as the variety of life on Earth at multiple levels: species, genetic, and ecosystem.

  • Measurements of species-level biodiversity include species richness and evenness, which are calculated from samples of species distributions within and across ecosystems (see the sketch after this list).

  • Scientific studies of biodiversity find that it correlates with latitude, landscape heterogeneity, and specific biogeographical features, such as islands.

  • The functioning of Earth’s systems that sustain life depends on biodiversity at all levels, evidenced by the poor health of ecosystems with low biodiversity.
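As a minimal illustration of the richness and evenness measures named above, the Python sketch below computes species richness, the Shannon diversity index, and Pielou's evenness for one hypothetical sample; the counts are invented for illustration, not drawn from a real survey.

```python
import math

# Hypothetical counts of individuals per species in one sample.
counts = {"species A": 50, "species B": 30, "species C": 20}

richness = len(counts)              # number of species observed
total = sum(counts.values())

# Shannon index H: larger when individuals are spread across species.
shannon = -sum((n / total) * math.log(n / total) for n in counts.values())

# Pielou's evenness J = H / ln(richness); 1.0 means perfectly even.
evenness = shannon / math.log(richness)

print(f"richness={richness}, H={shannon:.3f}, J={evenness:.3f}")
```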

  • HS-LS2.A1, HS-LS4.C4, HS-LS4.D1, HS-LS4.D2
  • Bruckner, M. Z. Measuring primary production using 14C labeling. Microbial Life Educational Resources, Carleton College. https://serc.carleton.edu/microbelife/research_methods/biogeochemical/productivity.html

  • Loiseau, N., Thuiller, W., Stuart-Smith, R. D., Devictor, V., Edgar, G. J., Velez, L., et al. (2021). Maximizing regional biodiversity requires a mosaic of protection levels. PLoS Biology, 19(5): e3001195. https://doi.org/10.1371/journal.pbio.3001195

  • Sharifian, S., Kamrani, E., & Saeedi, H. (2020). Global biodiversity and biogeography of mangrove crabs: Temperature, the key driver of latitudinal gradients of species richness. Journal of Thermal Biology, 92: 102692. https://pubmed.ncbi.nlm.nih.gov/32888577/

  • United Nations News. (2003, May 22). Annan calls for preservation of world’s biodiversity. UN News: Global perspective Human stories. https://news.un.org/en/story/2003/05/68762-annan-calls-preservation-worlds-biodiversity

Devin Reese, PhD. “Biodiversity I” Visionlearning Vol. BIO-5 (6), 2022.



Page 3

Evolutionary Biology

by Alfred L. Rosenberger, Ph.D.

Many people whose lives and work depend on the natural environment are highly aware of the organisms around them. People who subsist on the food they grow or hunt, whether they are farmers in the rural United States or native hunter-gatherers in the Amazon rainforest, are attuned to the variety of organisms around them and can easily describe their benefits and problems. Some scientists have even proposed that we have a genetic, instinctual fondness for nature that explains why humans are so preoccupied with plants and animals.

But there are surely practical reasons, too, for carefully observing behaviors and patterns in organisms. For those living off the land, whether in a lush rain forest or the inhospitable Arctic, local plants and animals can provide food, shelter, clothing, and fuel for cooking fires or warmth. Even in less extreme regions, a basic knowledge of environmental biology, including food-related facts like the fruiting patterns of trees and the grazing habits of large mammals, has always been important to survival, so it has become a significant part of the cultural traditions of people virtually everywhere. As you might expect, each culture has its own system for naming the plants and animals with which they live.

The process of naming and classifying organisms according to a set of rules is called taxonomy. In some cultures, taxonomic rules are based on traditional uses for plants and animals, and the existence of a classification system facilitates the transfer of that knowledge through generations. In modern scientific culture, taxonomic rules are based on physical appearance as well as genetic and evolutionary relationships between species, but having a classification system serves a very similar purpose by allowing scientists to communicate efficiently and effectively about the nature of a given organism with only a few words.

Comprehension Checkpoint

Taxonomy

Among Europeans, we can trace the beginnings of organized, written taxonomies to ancient Greece. As early as 300 BCE, the philosopher and naturalist Theophrastus, a disciple of Aristotle, classified plants into three categories: herbs, shrubs, or trees. In addition to classifying local specimens, Theophrastus was able to add species from other regions because Alexander the Great sent him specimens collected during his expeditions to conquer much of Europe and Asia.

During the 16th and 17th centuries, another round of famous expeditions marked the Age of Exploration. Dozens of explorers, including Ferdinand Magellan, Henry Hudson, and Hernando Cortes, traveled to distant parts of the globe and returned not only with stories of what they had seen, but also with samples of the plants and animals they encountered. European naturalists were kept busy describing these many new species and naming them in Latin, which was the language generally used for scholarly purposes.

By the 19th century, the idea of collecting exotic species became common practice and laid the foundation for research in the natural sciences. Charles Darwin, who developed the modern theory of evolution by natural selection in the middle 1800s, was one of many naturalists commissioned to collect, record, and describe the species he saw during his travels.

Progress was also being made in cataloging the kinds of plants and animals that existed. Naturalists in the 17th century, such as John Ray, began to develop a scientific basis for recognizing species. Ray and others began to inventory species by arranging them into logical classes based on their appearance and characteristics.

As a result of this widespread effort to describe new species, names proliferated, resulting in overlaps, redundancies, and a lot of confusion. Without commonly accepted standards for composing names – even regarding such a simple rule as how long a name ought to be – the whole purpose of a classification scheme as a communication tool is lost. For example, before a widely accepted taxonomic system was in place, the common Wild Briar Rose was identified by botanists as Rosa sylvestris alba cum rubore, folio glabro (roughly meaning 'pinkish white woodland rose with hairless leaves'), and Rosa sylvestris inodora seu canina ('odorless woodland dog rose'). How was one to know if these names referred to one thing or two, that is, to one or two species?

Comprehension Checkpoint

What problem resulted from not having a standard naming system for plants and animals?

Old naming convention:
  • Rosa sylvestris alba cum rubore folio glabro
  • Rosa sylvestris inodora seu canina
Linnaean system:
  • Rosa canina

In the 18th century, the Swedish scientist Carolus Linnaeus more or less invented our modern system of taxonomy and classification. Linnaeus was one of the leading naturalists of the 18th century, a time when the study of natural history was considered one of the most prestigious areas of science.

Unlike his predecessors, Linnaeus adhered rigidly to the principle that each species must be identified by a pair of names, termed the "genus" and "species," and classified on the basis of its similarities to and differences from other species. Although he was primarily a botanist, Linnaeus produced a comprehensive list of all organisms then known worldwide, some 7,700 plant and 4,400 animal species. He wrote one of the great classic works in the history of science, Systema Naturæ, and revised it many times.

Figure: The cover of Linnaeus' classic work, Systema Naturæ, which is generally considered to be the start of modern taxonomy.

We now consider the 10th revision of Systema Naturæ, published in 1758, as the official start of modern taxonomy and the first formal biological classification. It is a benchmark of modern taxonomy, an important reference to help biologists keep the many names straight. This is why when we come across taxonomic names, such as the official-looking labels identifying an animal in the zoo, Linnaeus's authorship is often acknowledged, and no dates of authorship are ever earlier than 1758. For instance, the plaque outside a gorilla exhibit may read as:

[Zoo plaque (reconstructed from the description below): Gorilla – I. Geoffroy, 1853; Order Primates – Linnaeus, 1758]

This is more than a simple caption. Its purpose is to let us know, clearly, that the gorillas on display are the same type of animal that the French naturalist Isidore Geoffroy named Gorilla in his publication of 1853. It also tells us that the gorilla belongs to a group of mammals known as Primates, which in turn was named by Linnaeus in the 10th edition of his Systema Naturæ. Where did that odd name "gorilla" come from? As far as we know, it was introduced to Europe by the Greek explorer Hanno, who visited northwestern Africa during the sixth century BCE. It was the word that Hanno thought the local African people used for gorillas (supposedly meaning "wild or hairy women"). In other words, it was adopted by Hanno and is still in fashion today after being introduced into the formal Linnaean taxonomic system by Geoffroy in 1853.

Comprehension Checkpoint

How was the classification system devised by Carolus Linnaeus different from previous systems?

Modern taxonomy officially began in 1758 with Systema Naturae, the classic work by Carolus Linnaeus. This module, the first in a two-part series on species taxonomy, focuses on Linnaeus’ system for classifying and naming plants and animals. The module discusses the contribution of diverse cultures to the development of our modern biological classification and describes the historical development of a scientific basis for classifying species.

Key Concepts

  • Under Linnaeus's system, every species is known by a unique Latin-sounding genus and species name that distinguishes it from other species.

  • Linnaeus's work organized organisms into logical classes based on their appearance and characteristics, and thus provides a basis for comparing different species.

Alfred L. Rosenberger, Ph.D. “Taxonomy I” Visionlearning Vol. BIO (1), 2003.



Page 4

Evolutionary Biology

by Alfred L. Rosenberger, Ph.D.

Henry Fairfield Osborn was the first curator of vertebrate paleontology at the American Museum of Natural History, in New York, and its first scientist-president. He was hired in 1891, just 15 years after the museum opened. One of Osborn's most famous projects involved the naming and description of what was once only a modestly important dinosaur discovered in Montana, Tyrannosaurus rex. It was gigantic, fierce looking, and extraordinarily popular as an exhibit skeleton mounted in the museum's halls, and Osborn helped make the shorthand label for this fascinating beast, T. rex, a household expression, fitting even to become the marquee of a British rock and roll supergroup of the early 1970s.

More recently, however, after being part of our vocabulary for a century, that name was challenged. Paleontologists recently discovered that the species we know as T. rex had an earlier christening. Manospondylus gigas is its "real" name. The reason? Edward Drinker Cope, a self-taught paleontologist, proposed and published that name in 1892, about a dozen years before Osborn announced T. rex. Since it was based on a single bone, Osborn could not have known that Cope's M. gigas was the same species as his. But with many more fossils that appear to be from the famous "tyrant lizard," what should be done with multiple names?

Problems like this, the accidental duplication of names, were obvious to the father of taxonomy, Carolus Linnaeus. His response was to establish a logical, uniform approach to the naming process in the hope that it would be recognized and accepted the world over (see our Taxonomy I: What's in a name? module). Linnaeus knew that the creation of duplicate, different-sounding names for the same species, called taxonomic synonyms, was only one of many barriers relating to names that could impede accurate scientific exchange. Differences in language and culture, the idiosyncrasies of individual scientists, difficulty obtaining the writings of other scientists, unavoidable mistakes such as typographical errors – all can contribute to confusion and a host of problems when identifying and cataloging organisms. Thus, the central idea behind the Linnaean taxonomic system was to provide a stable, enduring list of names so that we can communicate effectively in all the fields of the life sciences, retrieve information efficiently, and be confident that each species name is one of a kind.

The solution that Linnaeus adopted was the consistent use of a two-name system called binomial nomenclature. He recognized that by giving every species a fixed pair of names, analogous to our "family" and "given" names, each one could be designated uniquely. The titles for the two official names were those that John Ray, a British naturalist, had proposed a century earlier, the genus and species. In practice, these terms are tied together and used in combination. The combination is presented as a sequence, first the genus name (plural genera, related to the word generic) and then the species name (plural species, related to the word specific), as in the binomial Homo sapiens.

Taxonomists have also extended this reasoning to employ a three-name set, a trinomial, which applies to the subspecies of a species. Gorilla gorilla gorilla (Western Gorilla) and Gorilla gorilla beringei (Eastern Gorilla) are examples. That scientists still quibble over whether or not the Western and Eastern populations of gorillas ought to be interpreted as different species or merely different subspecies doesn't really matter. As species, they would be known as G. gorilla and G. beringei; as subspecies, we'd call them G. gorilla gorilla and G. gorilla beringei. Trinomials even apply to our own species, as shown by the recent naming of an extinct subspecies from Ethiopia that was based on fossils that are about 160,000 years old. It is called Homo sapiens idaltu to contrast it with all of us modern people – Homo sapiens sapiens.

Comprehension Checkpoint

Linnaeus devised a naming system

For clarity and consistency, there are other rules governing the naming of species, among them the following (applied in a short sketch after this list):

  • Generic and specific names are italicized when typewritten.
  • The first letter of the genus name is always capitalized, while the species name is entirely lowercase.
  • Species names are constructed in the Latin form, in the tradition of the early European taxonomists.
  • When more than one name is attributed to a single species, the oldest published synonym name takes precedence over others.
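As a minimal sketch of how these rules can be applied mechanically, the Python below formats a binomial and picks the valid name from a list of synonyms by the priority rule. The synonym records echo the gorilla table further down, and asterisks stand in for the italics required in print.

```python
# Hypothetical synonym records, echoing the gorilla entries below.
synonyms = [
    {"genus": "Gorilla", "species": "gorilla", "year": 1853},
    {"genus": "Pseudogorilla", "species": "mayema", "year": 1913},
]

def format_binomial(genus, species):
    # Genus capitalized, species entirely lowercase, whole name
    # italicized (asterisks stand in for italics here).
    return f"*{genus.capitalize()} {species.lower()}*"

# Principle of priority: the oldest published name is the valid one.
oldest = min(synonyms, key=lambda s: s["year"])
print(format_binomial(oldest["genus"], oldest["species"]), oldest["year"])
# -> *Gorilla gorilla* 1853
```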

Of course, the rules of Linnaean nomenclature apply only to official names, not to informal, everyday language, which is virtually impossible to track and enforce. Thus an informal reference to a species is simply written lowercase in plain text (e.g., gorilla) while a formal reference, for example to the genus, would appear in italics (e.g., Gorilla). As you have probably noticed, our gorilla example is also an unusual case of taxonomic nomenclature, where the common name and the scientific name are one and the same. It is also unusual for its historical simplicity – the formal genus name, Gorilla, has a fairly straightforward history, much less complicated than the story of the name for chimpanzees, Pan, as you see from the table below. Gorillas have only been given two generic (i.e., genus) names, and the oldest is easily decided as the proper one for us to use. Chimpanzees, on the other hand, have been given at least 11 different generic names. Its first name, Troglodytes (also once used for gorillas), is not the one we use today because before it was applied to chimps, it was given to a very successful bird, the wren, Troglodytes troglodytes. The tiny wren trumps the chimp in this case, since the rules of zoological nomenclature apply equally to all animals.

Comprehension Checkpoint

Species names are constructed to sound like

Gorilla:
1853 Gorilla I. Geoffroy, based on Troglodytes gorilla (Savage and Wyman, 1847)
1913 Pseudogorilla Elliot, based on Gorilla mayema (Alix and Bouvier, 1877)

Chimpanzee:
1812 Troglodytes E. Geoffroy, based on Troglodytes niger (E. Geoffroy, 1812)
1816 Pan Oken, based on Pan africanus (Oken, 1816)
1828 Theranthropus Brookes, based on Troglodytes niger (E. Geoffroy, 1812)
1841 Hylanthropus Gloger, based on Simia troglodytes (Blumenbach, 1799)
1860 Pseudoanthropus Reichenbach, proposed as a replacement for Troglodytes
1866 Engeco Haeckel, based on Simia troglodytes (Blumenbach, 1799)
1866 Pongo Haeckel, replacement for Troglodytes
1895 Anthropithecus Haeckel, correction for Anthropopithecus
1905 Fsihego de Pauw, based on Fsihego iturensis (de Pauw, 1905)

When we use formal taxonomic names in the literature, the names themselves are often accompanied by a compact citation that identifies its author and date of publication, like this: Gorilla gorilla gorilla (Savage, 1847). Which brings us back to Henry Fairfield Osborn and his unavoidable nomenclatural faux pas. Tyrannosaurus rex (Osborn, 1905) is a name that breaks one of the cardinal rules of taxonomy, the principle of priority, which requires that in cases where taxonomic synonyms are known to occur, the first name given to a species is recognized as the authentic one. The bottom line for T. rex is that it is not being replaced by its older synonym, Manospondylus gigas (Cope, 1892), for a more practical reason: It is so familiar to us all. Consider how much confusion a taxonomic change would bring to the world of science, where T. rex is an accepted name, and to the culture at large, where T. rex is one of the world's most famous dinosaurs.

One of the interesting lessons this situation highlights is the way scientists voluntarily abide by Linnaean practices. This is not simply to avoid the chaos that would occur if they did not. When scientists describe new species, they do so in a journal article or other form of publication, and that work is subject to review by their peers (see our Peer Review in Scientific Publishing module). If scientists were to disregard a well-established procedure, their peers would likely not allow it to be published. Disputes and questions over Linnaean names can still arise, but most resolve themselves in the literature, where scientists present not only their research about species biology and evolution but also historical information about taxonomic names – all in an effort to keep the names straight. In cases where confusion persists, or adhering to the rules might upset the stability of names, scientists may petition one of the decision-making bodies recognized by scientists around the world for an exception to the rules. These commissions also introduce changes to the taxonomic code from time to time.

On January 1, 2000, one such amendment written by the International Commission on Zoological Nomenclature came into effect. In the spirit of Linnaeus, always hoping to maintain the stability of taxonomic names, a new ruling upheld the common sense solution to the dilemma of Tyrannosaurus vs. Manospondylus. The Commission provided a clear, legal definition of what is meant by general acceptance, as opposed to rare usage, of a taxonomic name. If a name is in use for 50 years, it does not have to revert to a rarely used prior name that may be lurking in the shadows. Osborn's T. rex has been among us, called by that name, for a hundred years, almost as long as Manospondylus gigas lay quietly buried in the literature. So, wisely – or might it be expectedly? – the challenge to the reign of Tyrannosaurus rex has bitten the dust.

Comprehension Checkpoint

Scientists voluntarily go by the Linnaean system to name species in order to

In contrast, the name of another giant, Brontosaurus (Marsh, 1879), has been "sunk," as taxonomists are apt to say when a replacement name wins out. It was changed to Apatosaurus (Marsh, 1877). Both terms were widely used for a long time, but here, too, paleontologists learned that the bones bearing those names actually came from one species. The oldest name for that species is Apatosaurus ajax.

The consensus among paleontologists is that a name change in this case would not be too upsetting, and the giant herbivore's more familiar name "Brontosaurus" has been set out to pasture. As further insult to this case of mistaken identity, Apatosaurus is also suffering a required cosmetic makeover. For decades this gigantic animal, originally found headless, was displayed grandly and whole at the American Museum of Natural History and elsewhere, but with the wrong face. During the 1970s, paleontologists finally were able to match up skulls and skeletons with certainty, only to prove what was long suspected. The tiny heads chosen long ago as a best fit to crown those gigantic bodies were accidental imposters: They belonged to another dinosaur called Camarasaurus. So, "Brontosaurus," who is actually Apatosaurus, got its head size fixed and a new name as well, because even giants have to follow the rules.

Carolus Linnaeus, the “father of taxonomy,” developed a uniform system for naming plants and animals to ensure that each species has a unique name. This module outlines rules of forming two-term taxonomic names according to genus and species. The module gives examples of naming controversies and describes how they were resolved, including by bending the rules in regard to certain famous beasts.

Key Concepts

  • The system of binomial nomenclature was Linnaeus' response to the need for a clear, distinct naming of species that would be recognized around the world and reduce the chance of one species being known by multiple names.

  • Scientific names are always written in italics, with the genus capitalized and the species lowercase, and should sound as though they are Latin.

Alfred L. Rosenberger, Ph.D. “Taxonomy II” Visionlearning Vol. BIO-2 (2), 2003.



Page 5

Genetics

by Nathan H Lents, Ph.D.

The discovery that DNA is the material that forms our genes (see our DNA I: The Genetic Material module) opened the door to the modern field of molecular biology, sometimes called molecular genetics, in which scientists examine how DNA encodes all of the great complexities of living things. One of the first major advances of the new field of molecular biology was the deciphering of the DNA molecule's structure - the double helix (see our DNA II: The Structure of DNA module).

Part of the motivation behind scientists' extensive efforts to discover the structure of DNA was the long-held scientific principle that "structure begets function." In other words, what a cell or molecule does, and how it does it, is determined by its shape and structure. This makes sense even in our everyday experience. Consider a hammer or a screwdriver. These important tools can do what they do because of their unique shape. If we changed their shape, they wouldn't work very well. Shape drives function. The same is true for DNA.

As mentioned in our DNA II module, the moment James Watson and Francis Crick first gazed upon their newly built model of DNA, they could see clues about one of the major properties that they knew DNA must somehow exhibit: self-replication. The mystery of self-replication had confused scientists for many years. But one thing was certain: Every cell, whether a yeast, a bacterium, or a human cell, must be able to copy all of its genes, all of its DNA. This is because when a cell divides in two, both resulting cells are genetically identical to each other and to the original parent cell. The sheer number of times that the DNA in your body has been replicated (and accurately) is astounding.

You began life as a single cell, a zygote, the result of the fusion of a sperm and an egg. Since then, you have developed into an organism with somewhere between 10 and 100 trillion cells (>10,000,000,000,000). And, with certain rare exceptions, every single one of your trillions of cells has the same DNA sequence as the one cell did when you were just a zygote. How does all of this copying of DNA take place?

As mentioned, the structure of the double-stranded DNA molecule gave powerful hints as to how DNA might be accurately copied. Specifically, the complementary base-pairing of DNA follows a strict pattern that allows us to accurately predict what one strand of DNA looks like just by looking at the other, complementary strand. Put another way, if someone took a regular DNA molecule, pulled the two strands apart, and showed us only one strand, we could accurately list the series of nucleotides of the missing strand.
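That determinism is simple enough to capture in a few lines of Python. This minimal sketch, using an invented sequence, reconstructs the missing strand from the one we are shown.

```python
# Watson-Crick pairing: A with T, G with C.
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complementary_strand(strand):
    # The partner strand runs antiparallel, so read it in reverse.
    return "".join(PAIRS[base] for base in reversed(strand))

print(complementary_strand("GATTACA"))  # -> TGTAATC
```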

Watson and Crick saw this possibility when they ended their paper saying, "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material." This possible copying mechanism is called semi-conservative DNA replication because, if a cell duplicates its DNA in this manner, the DNA helix splits and each of the new double helices retains one strand from the original molecule (Figure 1). While this scheme makes good sense, it was just a logical guess at first. It wasn't until the late 1950s that Matthew Meselson and Franklin Stahl performed the scientific experiment that showed that the replication of DNA was indeed semi-conservative. (See our Meselson and Stahl: Models of DNA Replication module.)

Figure 1: Schematic of DNA replication according to the rules of Watson-Crick base-pairing. In this model, the two strands of the original DNA molecule are first pried apart. Then, complementary nucleotides (A with T, G with C) are added opposite the nucleotides in both of the original strands. The result is two DNA molecules, both identical to the original molecule (and thus to each other), and both with one old strand and one new strand.
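The scheme in Figure 1 can also be written out as a minimal sketch. This one ignores the 5'/3' directionality of real strands and shows only the bookkeeping: each daughter duplex keeps one parental strand and pairs it with one newly built strand, so both daughters match the parent.

```python
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def new_strand(template):
    # Build the complementary partner of one template strand.
    return "".join(PAIRS[base] for base in template)

def replicate(duplex):
    # Semi-conservative copying: each daughter keeps one "old" strand.
    top, bottom = duplex
    return [(top, new_strand(top)), (new_strand(bottom), bottom)]

parent = ("ATGGTC", new_strand("ATGGTC"))
for daughter in replicate(parent):
    print(daughter)  # both daughters are identical to the parent duplex
```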

In the 1950s, Meselson and Stahl, Watson and Crick, and many other scientists explored the properties of DNA using the intestinal bacterium Escherichia coli. Because a few rare strains of E. coli have been found to cause gastrointestinal illness, E. coli is frequently associated with outbreaks of food poisoning. But actually, most strains of E. coli are harmless and our large intestines are filled with this bacterium. E. coli was among the first routinely used "model organisms," a species that is chosen for extensive study in the laboratory because it offers certain practical advantages that make research easier. E. coli, in particular, is among the fastest growing organisms on Earth, with a generation time of under 20 minutes in ideal conditions. Since long before they knew what DNA was, scientists had noticed that the amount of DNA in an E. coli cell (and any other cell for that matter) doubles prior to cell division. The pool of DNA in the cell is then split equally between the two "daughter cells" that result, so that both have the same amount of DNA that the original bacterium had before replication. Because all of this happens in E. coli in about 20 minutes, it was the logical organism for early molecular biologists to select.

Comprehension Checkpoint

Why did molecular biologists choose E. coli for laboratory studies?

While Meselson and Stahl and others were testing the possible hypothetical models of DNA replication, other scientists set out to understand its molecular mechanism by re-creating it in a test tube. This process is called in vitro reconstitution and is often used in the field of biochemistry as a way of simplifying a complex cellular event so that it happens in isolation and can thus be observed and manipulated at will. The scientists who were first able to reconstitute DNA replication in a test tube were Arthur Kornberg and his wife Sylvy and the research team that they led. They achieved this incredible feat through a painstaking process of successive chemical purification of different proteins and other components from large batches of E. coli bacteria. By separating and purifying individual components, the Kornberg research team made several important discoveries about how DNA replication occurs.

These discoveries all began with the development of a critically important technique – the DNA synthesis assay. An assay is a quantitative laboratory measurement of a certain biological or chemical process, usually in a test tube (in vitro). The DNA synthesis assay is a technique for measuring the synthesis of new DNA molecules. The Kornberg laboratory was the first to develop this assay, and the assay itself is quite simple. First, DNA polymers are easily separated from free nucleotides because DNA is not soluble in solutions that contain trichloroacetic acid (TCA), while free nucleotides are. If a scientist adds TCA to a liquid mixture of DNA and free nucleotides, the DNA will precipitate out, while the nucleotides will remain dissolved in the liquid. The DNA precipitate can then be easily separated from the liquid by centrifugation.

The second important feature of the DNA synthesis assay is its use of radioactively labeled nucleotides. A scientist can add radioactive nucleotides when preparing a DNA synthesis assay, and then later, if DNA synthesis has occurred, some of the radioactive label will be incorporated into the TCA-insoluble DNA. This provides evidence that some of the labeled nucleotides were polymerized into a new DNA molecule. This DNA synthesis assay is very simple to execute and also very quantitative, which means that it gives very reliable and reproducible numerical values that can be used to calculate how much DNA was made and how fast the synthesis took place.
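As a rough illustration of that arithmetic, the sketch below converts hypothetical counts of radioactive label recovered in the TCA-insoluble pellet into an amount and rate of DNA synthesis. Every number here is invented, and a real assay would be calibrated against known standards.

```python
# All values are hypothetical, chosen only to show the calculation.
specific_activity = 500.0  # counts per minute (cpm) per pmol of labeled nucleotide
background_cpm = 40.0      # cpm from a no-enzyme control tube
sample_cpm = 2540.0        # cpm measured in the TCA-insoluble DNA pellet
reaction_minutes = 10.0    # how long the reaction ran

pmol_incorporated = (sample_cpm - background_cpm) / specific_activity
rate = pmol_incorporated / reaction_minutes
print(f"{pmol_incorporated:.1f} pmol incorporated, {rate:.2f} pmol/min")
```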

Armed with this assay, the Kornberg laboratory was the first to report the synthesis of DNA outside of a living cell. The popular press of the time announced that Arthur Kornberg had "created life in a test tube."

Of course, this was hardly the case, but the new ability to synthesize DNA in vitro captured the attention of the general population and is recognized as one of the crucial successes paving the way for the emergence of genetic engineering in the 1970s and 80s. Initially, the laboratory synthesis of DNA was extremely slow (much slower than it occurs in a cell), and it occurred only when crude extracts of E. coli were added to the test tubes. Crude extracts contain all the contents of the cells – all proteins, nucleotides, DNA, RNA, lipids, carbohydrates, etc. Nevertheless, the DNA synthesis assay was a good starting point in which Kornberg and others could begin to dissect the process of DNA replication in detail.

The first discovery, and arguably the most important, occurred in 1955: Kornberg's research team purified, from the crude extract, the enzyme chiefly responsible for the synthesis of DNA – DNA polymerase. When purified DNA polymerase is added to the DNA synthesis assay, the synthesis of DNA occurs hundreds of times more rapidly than when it is not added. However, the in vitro synthesis of DNA still required the addition of small amounts of crude cell extract. This is because DNA polymerase does not make DNA all by itself – there are many other factors required, and not all of these were known at the time. The Kornberg lab and others around the world worked to purify other important components from the crude extract, in the hopes that one day they could make DNA using only the necessary factors and no crude extract.

Some of these required components were obvious, while others were unexpected. For example, it was very quickly discovered that nucleotides were required for the synthesis of DNA, which isn't very surprising because it was well known, even in the 1950s, that nucleotides are the building blocks of DNA. However, only nucleotides in the tri-phosphate form could be used as DNA building blocks (Figure 2). Later studies demonstrated why this is so - the breaking of the high-energy terminal phosphate bond of each new nucleotide added to a growing DNA molecule provides the energy for the polymerization reaction.

Figure 2: Only nucleotide tri-phosphates can be used for DNA synthesis. Although nucleotides can exist with one, two, or three phosphates attached to the 5' carbon of the pentose sugar, Kornberg found that only triphosphate nucleotides can be used as building blocks for DNA synthesis. Later work demonstrated that the reason for this requirement is that the breaking of the high-energy covalent bond between the phosphates provides the energy for forming the covalent bonds between adjacent nucleotides of DNA.

Another important point that the Kornberg laboratory noted was that the test tube DNA synthesis reactions required the presence of an intact copy-template DNA in order for DNA polymerase to make more DNA. In other words, even in a test tube, DNA polymerase cannot build "random" DNA molecules through the willy-nilly polymerization of nucleotides. It can only make copies of DNA molecules that already exist. Think of it this way: DNA polymerase is like a copy machine, NOT like a computer on which new sentences can be created. A copy machine cannot print anything unless it has a template to work with. So when Kornberg added purified intact DNA molecules to the DNA synthesis assay, once again the speed of DNA polymerase increased dramatically. (Prior to this discovery, DNA synthesis was occurring only because tiny amounts of DNA template were present in the crude extract that was added to the assay mixture.)

Comprehension Checkpoint

DNA polymerase makes it possible to synthesize DNA molecules in a test tube, a key aspect of genetic engineering.

In addition to the hunt for more of the individual factors involved in DNA replication, the DNA synthesis assay allowed researchers to study the properties of DNA synthesis. As scientists around the globe began to study DNA polymerase and DNA replication, they knew that the semi-conservative model of DNA replication, as proven by Meselson and Stahl, requires that the two original template strands of DNA are pulled apart in order to be copied separately. However, it was not known how this happens. Scientists had observed that the two strands of DNA are held very tightly together by the hydrogen bonds between complementary nucleotide base-pairs of the two strands. In the laboratory, the only way the two strands could be separated was by heating the DNA to near-boiling temperatures. Obviously, it is not likely that living cells generate high heat in order to pry apart the two strands of DNA, so the question remained, "Inside a living cell, what pulls apart the two original strands of DNA so that they may be copied?"

Figure 3: DNA synthesis begins at many locations. DNA replication begins at specific chromosomal locations called origins of replication. Linear chromosomes have many origins, allowing DNA synthesis to occur rapidly.

Because double-stranded DNA is very stable, scientists suspected that there must be an elaborate mechanism for pulling the two strands apart. Two research groups, including Arthur Kornberg's, discovered the answer in the late 1970s: an enzyme they named DNA helicase. This enzyme is capable of prying the two strands of DNA apart so that the two individual strands can then serve as templates for DNA polymerase, according to the semi-conservative model.

It turns out, however, that when helicase first pries apart a section of DNA, it does not start at the end of the molecule in the case of linear DNA, nor does it select a place at random. The initial "melting" of DNA occurs at specific locations, called origins of DNA replication. Each of these creates a bulge in the DNA double helix that is visible by electron microscopy. These bulges are called replication bubbles and represent sites of DNA synthesis (Figure 3).

When a replication bubble opens up and DNA synthesis begins, replication proceeds in both directions, away from the origin. A DNA helicase enzyme leads the way, unzipping the parental DNA as replication proceeds in its wake. Both of these mobile regions of DNA synthesis are referred to as replication forks, which are the sites at which the replication of DNA is executed (Figure 4).

Figure 4: The replication fork. Following formation of a replication bubble, DNA synthesis proceeds in both directions, away from the origin. A replication fork is the site at which the two parental DNA strands are being pried apart and DNA replication is taking place.

Comprehension Checkpoint

Replication bubbles are bulges in the DNA helix that indicate

Once scientists began to focus on the events that occur at replication forks, they made several interesting observations that helped them to realize that DNA synthesis was much more complicated than they first imagined. The first such intriguing discovery was made by a young Japanese scientist named Tsuneko Okazaki, while working as a postdoctoral fellow with Kornberg at Stanford. Okazaki noticed that DNA polymerase cannot simply begin copying a template once it is pried apart from its complementary strand. Something more is needed to "kick-start" the copying of DNA before DNA polymerase can jump into action. Okazaki then discovered that she could coax DNA polymerase into performing DNA replication if she added a short piece of DNA that was complementary to part of the DNA template (Figure 5). Because these short DNA molecules served to get DNA synthesis started, Kornberg named them primers.

Figure 5: DNA polymerase requires a primer. DNA polymerase cannot begin to copy a template DNA unless a small part of the new copy DNA is already in place. Shown in light green in this figure, these short polymers are called "primers."

The discovery of primers was a major advance because now the scientists knew all the crucial components that were needed to perform an efficient DNA synthesis reaction in vitro. They no longer had a need for crude cell extract. Furthermore, the discovery of primers led to another curious observation by scientists, including Tsuneko Okazaki and her husband Reiji Okazaki, both former trainees of Kornberg, who had returned to Japan and formed their own research group. The Okazakis noticed that when a DNA synthesis reaction is set up and a primer is added, DNA synthesis begins at the primer and proceeds in only one direction. Curiously, they did not observe replication of the DNA region on the other side of the primer.

Returning to the structural model of DNA built by Watson and Crick, the Okazaki research team realized that DNA polymerization was only occurring at one end of the primer, the 3' end, and continuing in that direction. This was not simply a peculiar artifact of in vitro DNA synthesis. DNA replication inside all living cells also proceeds only in one direction: 5' to 3' (Figure 6). This property is called unidirectional DNA synthesis.

Figure 6: Unidirectional DNA synthesis. DNA synthesis can only proceed in one direction. This is because new nucleotides can only be added to a growing DNA polymer by addition onto the free hydroxyl group at the 3' end. The other end, the 5' end, has no free hydroxyl group.
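A minimal sketch of that one-way rule, with invented sequences: starting from a primer annealed to a template, new nucleotides are appended only at the 3' end of the growing strand, so copying proceeds in a single direction along the template.

```python
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def extend_from_primer(template_3to5, primer_5to3):
    # Nucleotides are added only at the 3' end of the growing strand,
    # so synthesis runs 5'->3' along a template written 3'->5'.
    new = list(primer_5to3)
    for base in template_3to5[len(primer_5to3):]:
        new.append(PAIRS[base])
    return "".join(new)

template = "TACGGATC"  # written 3'->5'
primer = "AT"          # pairs with the template's first two bases
print(extend_from_primer(template, primer))  # -> ATGCCTAG, read 5'->3'
```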

Once it was realized that DNA synthesis proceeds in only one direction, Okazaki, Kornberg, and the entire community of DNA scientists realized that this posed a serious problem for their understanding of the DNA replication fork. There was extensive evidence that DNA synthesis proceeds on both strands of the DNA template after the two strands are pulled apart, and they had seen how DNA polymerase enzymes follow behind DNA helicase, synthesizing the new DNA strands alongside both original template strands (the semi-conservative model of DNA replication). But how could this be if DNA synthesis can proceed only in one direction (Figure 7)?

Figure 7: Unidirectional DNA synthesis poses a problem for the replication fork. It was discovered that DNA synthesis can proceed in only one direction, but scientists had already observed that DNA synthesis does indeed occur on both strands of a replication fork. These two observations appeared to contradict each other.

It was Reiji Okazaki who first postulated the solution to this conundrum. He imagined that the only possible way that DNA replication can occur on both strands of a replication fork but still proceed only in the 5' to 3' direction was if DNA synthesis was continuous on one of the strands, trailing steadily behind the DNA helicase, but discontinuous on the other strand, proceeding in short stretches away from the replication fork. These short stretches are called Okazaki fragments in honor of Reiji Okazaki; however, it was the work of the whole Okazaki research team, the Kornberg research team, and several others that confirmed Okazaki's hypothesis regarding discontinuous replication of DNA (Figure 8). It was Kornberg who coined the terms leading strand for the strand in which DNA replication is continuous, and lagging strand for the strand in which DNA synthesis occurs in short discontinuous Okazaki fragments of ~300 nucleotides of DNA.

Figure 8: Okazaki fragments are stretches of discontinuous DNA replication. One side of the replication fork allows steady, continuous replication and is called the leading strand. The other side, the lagging strand, must employ discontinuous replication that occurs in short stretches called Okazaki fragments.
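The bookkeeping behind Figure 8 can be sketched in a few lines. Here the fragment length is shortened to four nucleotides for readability, standing in for the ~300-nucleotide Okazaki fragments described above, and the geometry of the moving fork is ignored.

```python
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def copy_strand(template):
    return "".join(PAIRS[base] for base in template)

def lagging_copy(template, fragment_len=4):
    # Discontinuous copying: short stretches that must later be ligated.
    return [copy_strand(template[i:i + fragment_len])
            for i in range(0, len(template), fragment_len)]

template = "ATGGTCCATGAC"
print("leading strand copy:", copy_strand(template))    # one piece
print("Okazaki fragments:  ", lagging_copy(template))   # short pieces
print("after ligase:       ", "".join(lagging_copy(template)))
```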

Tragically, Reiji Okazaki died seven years after his famous discovery of discontinuous DNA replication. A native of Hiroshima, he was 15 years old when the first atomic bomb was dropped and was heavily irradiated while searching for his parents amongst the rubble. He suffered the effects of radiation sickness, finally succumbing to leukemia at the age of 44. The Okazakis and Kornbergs were both great examples of husband-wife teams of scientists (Figure 9).

Figure 9: The Okazakis and Kornbergs. From left, Reiji and Tsuneko Okazaki, Arthur and Sylvy Kornberg. c1975. image © A-IMBN

Following these major breakthroughs, scientists moved relatively quickly in mapping out the other major players of the replication fork (Figure 10). For example, the Kornberg lab discovered that the primer needed to initiate DNA synthesis inside cells is actually made of RNA, not DNA, and is put in place by an enzyme called DNA primase. This RNA is eventually replaced with DNA by a specialized version of DNA polymerase called DNA polymerase I (DNA pol I), while the main workhorse of replication is actually DNA polymerase III (DNA pol III).

Figure 10: Other important factors that work in the replication fork. As research continued, a more complete picture of the events of the replication fork came into view. As shown here, DNA synthesis is a complicated process performed by the coordinated function of many factors.

Further, it was discovered that the individual Okazaki fragments of the lagging strand need to be covalently bonded together. The enzyme that seals the Okazaki fragments together is called DNA ligase (Figure 11). Because this enzyme "seals" two stretches of DNA together, DNA ligase would later prove to be an essential tool in genetic engineering, as DNA molecules from different sources were "cut and pasted" to make new combinations and new DNA sequences.

Figure 11: DNA ligase seals Okazaki fragments together. Because each Okazaki fragment is made separately, they need to be "sealed together" or else the lagging strand of DNA would have breaks in it. The enzyme DNA ligase seals these fragments together so that, when replication is complete, the lagging strand is just as smooth and unbroken as the leading strand.

Comprehension Checkpoint

During DNA synthesis, Okazaki fragments are seen on the _______ strand.

For all of the important discoveries that led to our understanding of the molecular events that take place at the DNA replication fork, Arthur Kornberg was honored with the Nobel Prize in Physiology or Medicine in 1959. Throughout his life, Dr. Kornberg mentored many young scientists who went on to great accomplishments of their own, including the Okazakis, whose pioneering work with discontinuous DNA replication led to the discovery of Okazaki fragments. Other famous students of the so-called "Kornberg school" include research leaders around the world in both academia and the biotechnology industry. In fact, it is no surprise that the biotech industry itself started mainly in the San Francisco Bay Area, because Kornberg spent most of his career at Stanford University, just 30 miles south of San Francisco. Among the most successful students of Arthur Kornberg is his son Roger Kornberg, who claims to have "grown up in the lab" watching his father make crucial discoveries about DNA synthesis. With his own research team, Roger painstakingly studied the process of RNA synthesis, also called gene transcription, which has many parallels to DNA synthesis.

Just as Arthur Kornberg earned the Nobel Prize in 1959 for deciphering the events of DNA synthesis, his son Roger was awarded the Nobel Prize in 2006 for a lifetime of research on RNA synthesis. The success of this father-son duo demonstrates how the mentoring of the next generation of scientists is among the most important work that scientists perform, a reality further emphasized by the fact that the large majority of scientific research takes place in the academic setting and involves young scientists-in-training as the foot soldiers of discovery.

In the field of molecular biology, scientists examine how DNA encodes all the complexities of living things. This third module in the DNA series focuses on the process by which DNA is replicated. The module describes the DNA synthesis assay, with which scientists were able to replicate DNA in a test tube. Advancements in understanding the features and properties of DNA replication are discussed.

Key Concepts

  • Once the structure of the DNA molecule was discovered, scientists could immediately envision a possible copying mechanism based on the rules of nucleotide-base pairing.

  • In order to study and observe DNA replication more directly, scientists in the 1950s devised techniques to perform DNA replication in a test tube, called the DNA synthesis assay.

  • By using the DNA synthesis assay, scientists were able to observe the features and properties of DNA replication and test various hypotheses about how the process works.

  • The process of DNA replication was identified by several teams of researchers all working together to break down the process into multiple steps that could more easily be studied individually.

  • HS-C6.1, HS-C6.2, HS-LS3.B1
  • Bessman, M. J., Lehman, I. R., Simms, E. S., & Kornberg, A. (1958). Enzymatic synthesis of deoxyribonucleic acid. II. General properties of the reaction. J. Biol. Chem., 233(1): 171-177.
  • Kornberg, A. (1991). For the love of enzymes: The odyssey of a biochemist. Boston: Harvard University Press.
  • Lehman, I. R., Bessman, M. J., Simms, E. S., & Kornberg, A. (1958). Enzymatic synthesis of deoxyribonucleic acid. I. Preparation of substrates and partial purification of an enzyme from Escherichia coli. J. Biol. Chem., 233(1): 163-170.
  • Meselson, M., & Stahl, F. W. (1958). The replication of DNA in Escherichia coli. Proc Natl Acad Sci USA, 44(7): 671-682.
  • Okazaki, R., Okazaki, T., Sakabe, K., Sugimoto, K., & Sugino, A. (1968). Mechanism of DNA chain growth. I. Possible discontinuity and unusual secondary structure of newly synthesized chains. Proc Natl Acad Sci USA, 59(2): 598-605.

Nathan H Lents, Ph.D. “DNA III” Visionlearning Vol. BIO-3 (2), 2010.



Page 6

Genetics

by Nathan H Lents, Ph.D.

Look around you. Most objects you are familiar with will eventually fall into ruin if not constantly maintained: a car will eventually rust and fall to pieces; a house will spring leaks in the roof and fall to the ground; even mountain ranges are eroded by wind and rain. Yet, life on Earth continues to flourish. Your children are no weaker or more likely to fall to pieces than you are. This is because living things have a fascinating and distinctive ability to reproduce and make "copies" of themselves. To do this, they must first copy their genetic material, their DNA (see our DNA I module for more information). And it is the unique chemical properties of DNA that allow it to generate copies of itself. As we all know, living things do eventually age and deteriorate, much like the old house and rusty car, but by making copies of our DNA and passing it to our offspring, life continues.

Scientists first began to investigate the unique chemical properties of DNA long before the structure of the molecule was understood, and even before DNA was discovered to be the genetic material. In the late 1800s, J. Friedrich Miescher, a Swiss chemist working in Germany, was studying white blood cells (leukocytes). Because white blood cells are the principal component of pus, Miescher would go to the nearby hospital and collect pus from used bandages. He found that the nucleus of these cells was rich in a then-unknown substance that contained several elements, among them phosphorus and nitrogen. He called this substance "nuclein" because it was found in the nucleus of the cells. We now know that Miescher's "nuclein" (later renamed nucleic acid, for its acidic chemical properties) contained DNA.

In the early 1900s, the Lithuanian-American biochemist Phoebus Levene probed deeper into the chemical composition of nucleic acid and was able to further purify the material. Although Levene was not the first scientist to successfully purify DNA, he was uniquely qualified to correctly determine its composition – he had extensive expertise in the area of carbohydrate and sugar chemistry. When Levene analyzed the chemical properties of nucleic acid, he discovered that DNA was abundant in three things: five-carbon sugars (pentoses), phosphate (as Miescher had previously found), and nitrogen bases. Thus, Levene correctly deduced that the DNA molecule was made of smaller molecules linked together, and these smaller molecules, which he named nucleotides, were made of three parts – a five-carbon sugar, a phosphate group (PO4), and one of four possible nitrogen bases – adenine, cytosine, guanine, or thymine (often abbreviated A, C, G, and T).

Levene was correct in identifying the three parts of a nucleotide and determining that nucleotides were linked together to make DNA; however, in 1928, he also incorrectly proposed that one of each of the four nucleotides was linked together in a small circular molecule and that these "tetranucleotides" were the basis of DNA (Levene and London, 1928) (Figure 1).

Figure 1: Phoebus Levene incorrectly hypothesized that DNA was made of circular "tetranucleotides." image © John Schmidt

Because he thought DNA was a simple circular structure, Levene rejected the notion that it could be the genetic material and sided firmly with those who believed that proteins contained the genetic code of organisms. However, much later, in the 1940s, Austrian-American scientist Erwin Chargaff reported that DNA from various species of life forms had different amounts of the four nucleotides (Vischer and Chargaff, 1948). This strongly argued against Levene's hypothesis that DNA was simply a circular tetranucleotide, and scientists began to propose other possible structures of the DNA molecule. Despite what he got wrong, Levene's contributions to our understanding of the DNA molecule were substantial.
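Chargaff's argument is easy to reproduce in miniature: if DNA were a fixed tetranucleotide, every sample would contain exactly 25% of each base. The sketch below tallies base composition for two invented sequences standing in for DNA from two species; real data would of course come from purified DNA, not short strings.

```python
from collections import Counter

# Invented stand-ins for DNA samples from two different species.
samples = {
    "species 1": "ATGCATTTAAGCGATATTAA",
    "species 2": "GGGCGCATGCCCGGCATCGG",
}

for name, seq in samples.items():
    counts = Counter(seq)
    freqs = {base: counts[base] / len(seq) for base in "ACGT"}
    # A tetranucleotide would force every frequency to 0.25.
    print(name, {base: round(f, 2) for base, f in freqs.items()})
```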

Thanks to the work of Levene and several others, the chemical structure of the individual nucleotides was established by the early 1910s. Below are diagrams of the three parts of a nucleotide (Figure 2).

Figure 2: A nucleotide. The five-carbon sugar deoxyribose forms the center of the molecule. Attached to carbon #1 is the nitrogen base, and attached to carbon #5 is the phosphate group (there may be 1, 2, or 3 phosphates in a nucleotide).

The sugar deoxyribose gets its name because when it was discovered (by Levene), it was found to lack one oxygen atom when compared to another sugar he discovered called ribose (Figure 3).

Figure 3: Ribose vs. deoxyribose. These two pentoses, or five-carbon sugars, differ only in the presence of an oxygen on ribose at the #2 carbon. At the #2 carbon of deoxyribose, an H exists in place of the OH group on ribose; however, lone hydrogens are often omitted from drawings of organic molecules, as above.

The oxygen missing from deoxyribose is on carbon #2, thus the full name of the sugar is 2'-deoxyribose. (In biochemistry, the carbons in sugar groups are often numbered with the "prime" symbol (as in 2'), to clarify that the carbon referred to is in the sugar and not another part of the molecule.)

Levene correctly deduced the connections between the nucleotides; the chemical name for these connections is "phosphodiester bond." These bonds are often casually referred to as "5' to 3' connections" because a phosphate molecule (PO4) serves as the bridge between the 5' carbon of one nucleotide and the 3' carbon of the next (Figure 4).

Figure 4: Phosphodiester bonds. Nucleotides are connected to one another through a phosphate group that is connected to the 5' carbon of one nucleotide and the 3' carbon of another. image © Visionlearning, Inc.

Although Levene originally thought that four nucleotides were connected together in a circular molecule, we now know that the individual nucleotides are connected to form a very long linear structure (Figure 5).

Figure 5: A chain of nucleotides. As shown in this linear drawing, the sugar and phosphate groups connect in a long chain. This is referred to as the "sugar-phosphate backbone," while the nitrogen bases are attached to the backbone. image © Visionlearning, Inc.

The four nucleotides of DNA are grouped into two "families" based on their chemical structure: the purines, adenine and guanine, have a structure with two rings; and the pyrimidines, cytosine and thymine, have only one ring (Figure 6).

Figure 6: The nitrogen bases. Shown here are the four different nitrogen bases found in DNA nucleotides. Note that guanine and adenine, the purines, have two rings, while cytosine and thymine, the pyrimidines, have only one ring.

Thus, the strands of DNA inside our cells are polymers of repeating nucleotide units. It is the precise order, or sequence, of the billions of nucleotides – As, Cs, Gs, and Ts – making up our DNA that gives each of us our individual genetic traits.

Once the building blocks of DNA were fully understood, by the late 1940s and early 1950s, scientists began to study the larger structure of DNA by taking X-ray diffraction pictures of purified DNA molecules. However, the pictures they took were not consistent with a simple linear strand of nucleotides, as depicted in Figure 5. Instead, the pictures argued that DNA is even more complex and has a very regular and symmetrical shape.

A number of scientists began to propose possible structures for the DNA molecule based on this research. Because the pictures argued for a symmetrical shape and chemical evidence argued that DNA was a polymer of nucleotides, many scientists thought that multiple strands wrapped around each other, like a braid or a rope. In fact, Linus Pauling, a prominent American scientist, had envisioned that DNA might be a triple helix – three strands of nucleotides wrapping around each other. Pauling, who would later win the Nobel Prize in Chemistry for his work on chemical bonding and molecular structure (including the "alpha-helix" structure of proteins), even published a paper proposing a triple helix model of DNA in 1953 (Pauling and Corey, 1953). Pauling's practice of building models of molecular structures caught on with many biochemists of the day, and this time period has been referred to as the era of model building.

Figure 7: Rosalind Franklin (25 July 1920 - 16 April 1958), a chemist who made vital contributions to the understanding of the fine molecular structures of DNA and RNA. Franklin is best known for her work on X-ray diffraction images of DNA, which James Watson and Francis Crick used to formulate their 1953 hypothesis about the structure of DNA. image © Museum of London

Several variants of a helix-shaped DNA were proposed by other scientists. In 1951, Francis Crick, an English molecular biologist, and James Watson, his American colleague, had produced their own incorrect version of a triple helix model. However, the diffraction pictures available at the time were all of relatively poor quality and resolution. As the technique was further refined, a brilliant chemist named Rosalind Franklin (Figure 7), working at King's College in England, was able to take much higher-resolution X-ray diffraction pictures.

Franklin's high-quality pictures confirmed that DNA is actually a double helix – two strands wrapped around each other. However, the first double-stranded molecule built by Watson and Crick had the sugar-phosphate backbones of the two strands wrapped around each other and the nitrogen bases pointing outward. It was Rosalind Franklin who pointed out the error in this model. She reminded Watson and Crick that the nitrogen bases are not very soluble in water and thus would not be pointed outward, where they would be surrounded by nearby water molecules in the cell. Instead, she argued, the sugars and phosphates, which are soluble in water, would be pointed outward, toward the water, and the nitrogen bases would likely be tucked into the interior of the molecule, away from the water molecules, and perhaps interacting with each other.

This was a vital piece of advice for Watson and Crick, leading them to take their model apart and begin to build a new one. This time, they built the double helix with the sugar-phosphate backbones on the outside of the helix and the nitrogen bases facing inward. They realized that the nitrogen bases of the two strands would now be in proximity of one another and would likely interact. A crucial piece of evidence that helped them figure this out came from Erwin Chargaff's studies. In addition to demonstrating that different organisms had different amounts of the four nitrogen bases of DNA, in 1951, Chargaff also reported that the amount of adenine (A) always equals the amount of thymine (T) and the amount of cytosine (C) always equals the amount of guanine (G). This is now known as "Chargaff's law."
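
To see Chargaff's law in action, here is a minimal Python sketch (an illustration, not part of the original article) that counts the bases in a hypothetical double-stranded DNA sequence and checks that A matches T and C matches G:

# Two complementary strands, each written 5' to 3' (hypothetical sequence).
strand_1 = "ATGGCTAGCTCGATCGTACGT"
strand_2 = "ACGTACGATCGAGCTAGCCAT"

combined = strand_1 + strand_2
counts = {base: combined.count(base) for base in "ATCG"}

# Chargaff's law: in double-stranded DNA, the amount of A equals T,
# and the amount of C equals G.
assert counts["A"] == counts["T"]
assert counts["C"] == counts["G"]
print(counts)   # {'A': 10, 'T': 10, 'C': 11, 'G': 11}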

With Chargaff's law in mind, Watson and Crick had a revelation. They reasoned that if the molecule is double-stranded, perhaps every time that an A was on one strand of the molecule, a T appears in the complementary position on the opposite strand (and vice versa); further, every time a C was on one side, a G would be on the other. This would explain why Chargaff's law held true. But, there was one problem. The nitrogen bases did not "fit together" in this configuration. Franklin had taken very good pictures of the DNA molecule that demonstrated that it was a tightly packed, narrow structure. When large molecules interact tightly, the smaller constituent molecules that closely pack together must be "complementary" like two interlocking pieces of a puzzle. For example, a negative charge will be closely associated with a positive charge, etc. Watson and Crick knew that their model wasn't quite right, because the nitrogen bases were not fitting together very well.

The final revelation that allowed Watson and Crick to complete their model came in a moment described as "a stroke of inspiration" when Watson realized that the nucleotides would fit together if one was "upside down" relative to the other. (According to Watson, he saw this possibility as he sat across a small table from Crick, both of them working with small models of nucleotides.) This upside down orientation would occur if the two strands that wrap around each other are not pointed in the same direction, but in opposite directions. Thus, these two strands are said to be anti-parallel, like the traffic on a two-lane highway (Figure 8).

Figure 8: Antiparallel nature of the DNA double helix. Notice how the sugar-phosphate backbone is on the outside of the "ladder" while the bases point inward. Notice also that the two strands are oriented "antiparallel" to each other and thus look upside down compared to one another. This is most easily seen by looking at the pentose sugars (orange). image © Visionlearning, Inc.

Suddenly, everything made sense! With the two strands wrapping around each other in an anti-parallel configuration, Watson and Crick were able to fit the strands very close together, as Franklin's picture shows them to be, and the structure is regular and symmetrical. Most importantly, the nitrogen bases fit perfectly together through a type of chemical attraction called a hydrogen bond. Hydrogen bonds hold the two strands together stably, but not permanently. Specifically, an adenine–thymine "base pair" has two hydrogen bonds and a cytosine–guanine base pair has three hydrogen bonds. (See Figure 8 above.)

Given this anti-parallel structure, to distinguish the two strands of DNA, scientists say that one strand is oriented "5' to 3' " and the other strand is "3' to 5'." This is in reference to the 5'-3' connections in the phosphate-sugar backbone. The machinery of the cell also uses this orientation to select which direction to read the genetic information contained in the nucleotide sequence. Imagine trying to read an English sentence going from right to left. This would make no sense because the proper direction of reading English is left to right. Similarly, the DNA code must be read in the correct direction, which is 5' to 3'.

The beauty of the double-stranded anti-parallel configuration is found in the complementary base pairing according to Chargaff's law. If we know the sequence of nucleotides on one strand, we can accurately predict the nucleotides on the other. An adenine on one side of the DNA molecule would be paired with a thymine on the other side, and so on. Thus, if the two strands are separated, we could look at either strand and know exactly what was on the complementary strand. In fact, this is precisely what happens during DNA replication: The DNA double helix is pried apart or "unzipped" and both of the single strands then serve as copy templates for synthesizing a new strand. The result is two new DNA double helixes, both of which are identical to each other and to the original strand (Figure 9).
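
This predictability is easy to express in code. The short Python sketch below (an illustration built on the base-pairing rules described above, not the authors' own method) takes one strand and returns its complementary strand, reversed so that both read 5' to 3':

# Watson-Crick base-pairing rules.
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complementary_strand(strand):
    # The two strands are anti-parallel, so the complement is read in
    # reverse order to keep the conventional 5'-to-3' direction.
    return "".join(PAIRS[base] for base in reversed(strand))

template = "ATGGCTAGCTCGATCGTACGT"      # 5' to 3'
print(complementary_strand(template))   # -> ACGTACGATCGAGCTAGCCAT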

Figure 9: Schematic of DNA replication method proposed by Watson and Crick. In this model, the two strands of the original DNA molecule are first pried apart. Then, complementary nucleotides (A with T, G with C, etc.) are added opposite both of the original strands. The result is two DNA molecules, both identical to the original molecule (and thus to each other), and both with one old strand and one new strand. image © Visionlearning, Inc.

Once Watson and Crick had built the correct model, all could see that the anti-parallel configuration and the hydrogen bond base-pairing allowed this simple and effective means of DNA self-replication. In fact, the final sentence of their 1953 research article announcing the structure of DNA was, "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material." Watson and Crick published their model of DNA in the journal Nature in 1953, a model which earned them the Nobel Prize in 1962.

There has been much debate about whether Rosalind Franklin, as a rare female scientist in the 1950s, received enough credit for her crucial contributions to this important discovery. Unfortunately, she died from ovarian cancer just five years after the model was built and Nobel Prizes are not given posthumously. In the 1950s, scientists were not aware of the cancer risks involved with repeated X-ray exposure and did not properly protect themselves from the radiation given off by these instruments. Thus, it is conceivable that Franklin's premature death was a direct result of her dedication to scientific research and her pursuit of the structure of the DNA molecule.

With the discovery of the structure of DNA, a number of fascinating properties of the molecule were revealed. Not only can the molecule replicate itself, but the sequence of bases along a single DNA strand stores all of the genetic information in your body. Think of the phone numbers stored in your cell phone. Each digit by itself means nothing. But when strung together in a precise sequence (e.g., 6-4-6-5-5-7-4-5-0-4), these numbers form a code for contacting another specific telephone. The same is true for DNA. The bases T, C, A, and G mean nothing by themselves. However, a long sequence such as ATGGCTAGCTCGATCGTACGT... can form the code for building an important molecule in your body. This molecule may then perform a function in your body that allows your heart to beat, your stomach to digest, your muscle to flex, or your brain to think. Thus, because these sequences of nucleotides provide the information for the cell to build proteins and other molecules, DNA is often called the "blueprint of life." How this blueprint is actually used by cells to build other molecules is explored in additional modules.

Exploration of the structure of DNA sheds light on fascinating properties of the molecule. This module, the second in a series, highlights major discoveries, from the parts of a nucleotide - the building blocks of DNA - to the double helix structure of the DNA molecule. The module describes scientific developments that led to an understanding of the mechanism by which DNA replicates itself.

Key Concepts

  • DNA consists of two strands of repeating units called nucleotides; each nucleotide is made up of a five-carbon sugar, a phosphate group, and a nitrogen base.

  • The specific sequence of the four different nucleotides that make up an organism's DNA gives that organism its own unique genetic traits.

  • The four nitrogen bases are complementary – adenine is complementary to thymine, cytosine is complementary to guanine – and the pairs form hydrogen bonds when the 5'/3' ends of their attached sugar-phosphate groups are oriented anti-parallel to one another.

  • HS-C6.1, HS-C6.2, HS-LS1.A2, HS-LS3.B1
  • Franklin, R., & Gosling, R. G. (1953). Molecular configuration in sodium thymonucleate. Nature, 171: 740-741.

  • Levene, P. A., & London, E. J. (1928). On the structure of thymonucleic acid. Science, 68(1771): 572-573.
  • Maddox, B. (2003). Rosalind Franklin: The dark lady of DNA. New York: Harper Perennial.
  • Pauling, L., & Corey, R. B. (1953). A proposed structure for the nucleic acids. Proc Natl Acad Sci USA, February 1953, 39(2): 84-97.
  • Vischer, E., & Chargaff, E. (1948). The composition of the pentose nucleic acids of yeast and pancreas. J Biol Chem, 176(2): 715-734.
  • Watson, J. D., & Crick, F. H. (1953). Molecular structure of nucleic acids; a structure for deoxyribose nucleic acid. Nature, 171(4356): 737-738.
  • Watson, J. D. (1968). The double helix: A personal account of the discovery of the structure of DNA. New York: Atheneum.

Nathan H Lents, Ph.D. “DNA II” Visionlearning Vol. BIO-3 (1), 2009.


Genetics

by Nathan H Lents, Ph.D.

Consider yourself. You are an adult human, or nearly so, composed of hundreds of different types of cells. Each of these cell types has a different structure and function which together make up you as an individual. Millions of chemical reactions are taking place inside these cells, all carefully coordinated and timed. Yet, you started life as one single cell, a zygote, the result of the fusion of a sperm and an egg. How does all this remarkable complexity come about? Just what is it that you inherit that gives you your father's eyes and your mother's hair color? These questions had perplexed scientists and non-scientists alike for thousands of years, and they were addressed through a series of very clever experiments in the early part of the 20th century.

In the mid-19th century, Gregor Mendel completed his now classic experiments on genetics (see our Mendel and Inheritance module). Mendel proposed that the "characters" that controlled inheritance exhibited certain patterns of behavior. Specifically, they seemed to operate in pairs and separated independently during reproduction. The work that Mendel did established some trustworthy rules and properties about genetics and heredity, but no one had any idea what Mendel's "characters" were or how features were passed from generation to generation. Scientists were convinced that the basis of genetics and heredity could be found somewhere in the chemistry of our cells.

In the early 1900s, scientists began to focus on a recently discovered structure in cells called chromosomes (named from the Greek words for "colored bodies" because they selectively absorbed the red dye that Walther Flemming used to color cells). Curiously, chromosomes seemed to behave in a manner similar to Mendel's "characters." Specifically, they were seen to line up randomly, separate, and then segregate from each other just prior to cell division, reminiscent of Mendel's laws of independent assortment and segregation (Figure 1). Gradually, scientists began to suspect a connection between chromosomes and heredity.

Figure 1: Microscopic view of chromosomes lining up (red circles at top) and separating (red circles at bottom) during mitosis (cell division) in an onion root tip.

While biologists were becoming convinced that chromosomes were the physical seat of genetics and inheritance, chemists were claiming that these structures were made of both protein and DNA. So, which was the genetic molecule housing all the hereditary information? Many scientists of the day actually thought it was protein because there are 20 different amino acids for building a protein polymer, while DNA polymers are made of only four nucleotide bases.

Consider it this way: The genetic molecule works like a language for storing information consisting of words that are made of individual "letters." The "language" of the DNA polymer would only have four different "letters" to work with (the four nucleotide bases), while "protein language" would have twenty possible letters – the twenty different amino acids. Imagine making a language using only four letters! Thus, because it offers far more complexity, most scientists in the early 20th century believed that protein was the component of chromosomes that housed the genetic information. Regarding the DNA, they thought that perhaps it acted as structural support for the chromosomes, like the frame of a house.
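
The arithmetic behind this reasoning is simple, as the small Python calculation below illustrates (the word lengths chosen here are arbitrary): an alphabet of k letters can spell k^n distinct "words" of length n.

# Number of distinct "words" of length n spellable with k letters: k**n.
for n in (3, 10):
    dna_words = 4 ** n       # DNA alphabet: A, T, C, G
    protein_words = 20 ** n  # protein alphabet: the 20 amino acids
    print(f"length {n}: DNA can spell {dna_words:,} words, "
          f"proteins can spell {protein_words:,}")

# The comparison is real, but the conclusion drawn from it was wrong:
# a 4-letter alphabet can store any amount of information if the
# sequences are long enough - as DNA turned out to prove.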

Clarification came during the First World War. During the war, hundreds of thousands of servicemen died from pneumonia, a lung infection caused by the bacterium Streptococcus pneumoniae. In the early 1920s, a young British army medical officer named Frederick Griffith began studying Streptococcus pneumoniae in his laboratory in the hopes of developing a vaccine against it. As so often happens in scientific research, Griffith never found what he was looking for – an effective pneumococcal vaccine was still decades away – but instead, he made one of the most important discoveries in the field of biology: a phenomenon he called "transformation."

Dr. Griffith had isolated two strains of S. pneumoniae, one of which was pathogenic (meaning it causes sickness or death – in this case, pneumonia), and one of which was innocuous, or harmless. The pathogenic strain looked smooth under a microscope due to a protective coat surrounding the bacteria, and so he named this strain S, for smooth. The harmless strain of S. pneumoniae lacked the protective coat and appeared rough under a microscope, so he named it R, for rough (Figure 2).

Figure 2: Cartoon depictions of the rough (harmless) and smooth (pathogenic) strains of S. pneumoniae.

Dr. Griffith observed that if he injected some of the S strain of S. pneumoniae into mice, they would get sick with the symptoms of pneumonia and die, while mice injected with the R strain did not become sick. Next, Griffith noticed that if he applied heat to the S strain of bacteria, then injected them into mice, the mice would no longer get sick and die. He thus hypothesized that excessive heat kills the bacteria, something that other scientists, including Louis Pasteur, had already shown with other types of bacteria.

However, Dr. Griffith didn't stop there – he decided to try something: He mixed living R bacteria (which are not pathogenic) with heat-killed S bacteria, and then he injected the mixture into mice. Surprisingly, the mice got pneumonia infections and eventually died (Figure 3).

Figure 3: Illustration of F. Griffith's discovery of transformation in S. pneumoniae using mice.

Dr. Griffith examined samples from these sick mice and saw living S bacteria. This meant that either the S bacteria came back to life, an unlikely scenario, or the live R strain was somehow "transformed" into the S strain. Thus, after repeating this experiment many times, Dr. Griffith named this phenomenon "transformation." This discovery was significant because it showed that organisms can somehow be genetically "re-programmed" into a slightly different version of themselves. One strain of bacteria, in this case the R strain of S. pneumoniae, can be changed into something else, presumably because of the transfer of genetic material from a donor, in this case the heat-killed S strain.

Scientists around the world began repeating this experiment, but in slightly different ways, trying to discover exactly what was happening. It became clear that, when the S bacteria are killed by heat, they break open and many substances are released. Something in this mixture can be absorbed by living bacteria, leading to a genetic transformation. But because the mixture contains protein, RNA, DNA, lipids, and carbohydrates, the question remained – which molecule is the "transforming agent?"

This question was examined in several ways, most famously by three scientists working at The Rockefeller Institute (now Rockefeller University) in New York: Oswald Avery, Colin MacLeod, and Maclyn McCarty. These scientists did almost exactly what Griffith did in his experiments but with the following changes. First, after heat-killing the S strain of bacteria, the mixture was separated into six test tubes. Thus, each of the test tubes would contain the unknown "transforming agent." A different enzyme was then added to each tube except one – the control – which received nothing. To the other five tubes, one of the following enzymes was added: RNase, an enzyme that destroys RNA; protease, an enzyme that destroys protein; DNase, an enzyme that destroys DNA; lipase, an enzyme that destroys lipids; or a combination of enzymes that breaks down carbohydrates.

The theory behind this experiment was that if the "transforming agent" was, for example, protein, then the agent would be destroyed in the test tube containing protease, but not in the others. Thus, whatever the transforming agent was, the liquid in one of the tubes would no longer be able to transform the R strain of S. pneumoniae. When they did this, the result was both dramatic and clear. The liquid from the tubes that received RNase, protease, lipase, and the carbohydrate-digesting enzymes was still able to transform the R strain into the S strain. However, the liquid that was treated with DNase completely lost the ability to transform the bacteria (Figure 4).
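
The logic of the experiment can be summarized in a short Python sketch (a simplification for illustration; the enzyme names here are shorthand for the actual preparations used):

# Which class of molecule each tube's enzyme destroys (None = control).
destroys = {
    "control":       None,
    "RNase":         "RNA",
    "protease":      "protein",
    "DNase":         "DNA",
    "lipase":        "lipid",
    "carbohydrases": "carbohydrate",
}

transforming_agent = "DNA"   # the unknown that the experiment reveals

for tube, target in destroys.items():
    outcome = "fails" if target == transforming_agent else "occurs"
    print(f"{tube} tube: transformation {outcome}")
# Only the DNase tube loses the ability to transform the R strain,
# pointing to DNA as the transforming agent.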

Figure 4: Illustration of the classic experiment by Avery, MacLeod, and McCarty demonstrating that DNA is capable of transforming harmless R strain S. pneumoniae into the pathogenic S strain.

Thus, it was apparent that the "transforming agent" in the liquid was DNA. To further demonstrate this, the scientists took liquid extracted from heat-killed S. pneumoniae (S strain) and subjected it to extensive preparation and purification, isolating only the pure DNA from the mixture. This pure DNA was also able to transform the R strain into the S strain and generate pathogenic S. pneumoniae. These results provided powerful evidence that DNA, and not protein, was actually the genetic material inside of living cells.

Despite this very clear result, some scientists remained skeptical and continued to think that proteins were likely the genetic molecule. Eight years after the famous Avery, MacLeod, and McCarty experiment was published, two scientists named Alfred Hershey and Martha Chase performed an entirely different type of genetic experiment. For their experimental system, they selected an extremely small virus called a bacteriophage (or just phage), which only infects bacterial cells. At that time, scientists knew that when these phage infect a bacterial cell, they somehow "reprogram" the bacterium to transform itself into a factory for producing more phage. They also knew that the phage itself does not enter the bacterium during an infection. Rather, a small amount of material is injected into the bacteria and this material must contain all of the information necessary to build more phages. Thus, this injected substance is the genetic material of the phage.

Hershey and Chase designed a very simple experiment to determine which molecule, DNA or protein, acted as the genetic material in phages. To do this, they made use of a technique called radioactive labeling. In radioactive labeling, a radioactive isotope of a certain atom is used and can be followed by tracking the radioactivity (radioactivity is very easily detected by laboratory instruments – even those available in the early 1950s – and remains a very common tool in scientific research). So, what Hershey and Chase did was to grow two batches of phage in their laboratory. One batch was grown in the presence of radioactive phosphorus. The element phosphorus is present in large amounts in DNA, but is not present in the proteins of bacteria and phage. Thus, this batch of phage would have radio-labeled DNA. The second batch of phage was grown in the presence of radioactive sulfur. Sulfur is an element that is often found in proteins, but never in DNA. Thus, the second batch of phage would have radio-labeled proteins.

Then, Hershey and Chase used these two batches of phage separately to infect bacteria and measured where the radioactivity ended up. What they observed was that only those bacteria infected by phage with radio-labeled DNA became radioactive; bacteria infected by phage with radio-labeled protein did not. Thus Hershey and Chase concluded that it is DNA, and not protein, that is injected into the bacteria during phage infection, and this DNA must be the genetic material that reprograms the bacteria.

Taken together, these experiments represented strong evidence that DNA is the genetic material. Other scientists later confirmed these results in many different kinds of experiments, including showing that eukaryotic cells – even human cells – can be "transformed" by the injection of DNA. The result of these findings was to convince the scientific and lay communities that the molecule of heredity is indeed DNA. It turns out that the initial instincts of many scientists were exactly backward: They assumed that protein was the genetic material of chromosomes and DNA merely provided structure. The opposite turned out to be true. The DNA molecule houses genetic information, and proteins act as the structural framework of chromosomes.

The discovery that DNA was the "transforming agent" and the genetic component of human chromosomes was one of the greatest discoveries of science in the 20th century. However, the mechanism of how DNA codes for genetic information was initially a complete mystery and became the focus of intense scientific study (see our DNA II module). Still today, the study of how DNA functions comprises an entire discipline of science called molecular biology. Originally an offshoot of biochemistry, the field of molecular biology joins biologists, chemists, anthropologists, forensic scientists, geneticists, botanists, and many others who are working to shed light onto the immense complexity of DNA, the so-called blueprint of life.

This module is the first in a series that discusses the discovery, structure, and function of DNA. Key experiments are discussed: from Griffith’s discovery of genetic “transformation” to Avery, MacLeod, and McCarty’s determination of the “transforming agent” to confirmation by Hershey and Chase of DNA rather than protein as the genetic material.

Key Concepts

  • It required numerous experiments by many scientists to determine that DNA, and not protein, is the genetic material on which life is built.
  • Cells can be “transformed,” or genetically re-programmed, into slightly different versions of themselves by taking up DNA from another source.

  • HS-C6.1, HS-C6.2, HS-LS1.A2

Nathan H Lents, Ph.D. “DNA I” Visionlearning Vol. BIO (2), 2008.


Cell Biology

by Nathan H Lents, Ph.D., Donna Hesterman

From the time cells were first discovered in the mid-1600s, scientists knew that there must be some sort of outer wrapping around the cell to hold the contents of the cell together. Although it was too thin for them to see with simple light microscopes, scientists called this outer wrapping a membrane (in Latin, membrana), which means a thin layer of skin or tissue. From the 17th century until around the 1960s, the outer membrane of cells was thought to be a simple passive barrier. We now understand that the plasma membrane is a very dynamic part of the cell and that it is much more than just a barrier. Yes, it does restrict many molecules from entering (or leaving) the cell, but it is also designed so that some molecules can very quickly move through the membrane, and thus enter or leave the cell with ease.

Our scientific understanding of membranes began with the American statesman Benjamin Franklin. In 1774, Franklin observed the effects of oil on a surface of water and found that the oil does not mix with the water but rather spreads over the water’s surface to create a thin film:

I fetched out a cruet of oil and dropped a little of it on the water. I saw it spread itself with surprising swiftness upon the surface… Though not more than a teaspoonful, produced an instant calm over a space several yards square which spread amazingly and extended itself gradually till it reached the [other] side, making all that quarter of the pond, perhaps half an acre, as smooth as a looking glass.

More than a century later, in 1890, Lord Rayleigh repeated Franklin’s experiments while studying at Cambridge University in England. He and other scientists developed tools and mathematical methods for calculating the surface area covered by the oil film. Although these early studies didn’t directly focus on membranes or even cells, they were very important because they described the repulsion that occurs when water-insoluble fluids, such as oil, come in contact with water. It was this insight – that oil and water repel each other – that led scientists to wonder if the cell membrane might somehow be made of a substance that repels water. This way, it could keep fluids outside the cell from passing through, while also preventing the fluids inside the cell from leaking out. The fact that, when viewed under a microscope, animal cells look similar to spheres of oil helped to popularize the view that cells were somehow surrounded by an oily film.

It took several more decades before scientists came to understand the structural features of the membrane that allow it to repel water. This understanding came in three major steps. First, chemists observed that all known types of cells contain molecules called lipids that are hydrophobic, or water-insoluble. If cells are mostly water, how do they also contain water-insoluble things? Scientists then imagined that maybe a water-insoluble outer surrounding might be the answer. If the outer membrane was made of water-insoluble lipids, the membrane would restrict water and water-soluble molecules from passing through, while hydrophobic molecules (water-insoluble) could pass through the membrane. They had further evidence to back up this idea – oxygen gas is hydrophobic but can pass through cell membranes easily.

The second major advance came in 1931 with the invention of the electron microscope, which resolved a six-year debate in the scientific community. In 1924, two competing scientists came up with opposite conclusions about the structure of the membrane. A Danish-American scientist named Hugo Fricke performed calculations involving the surface area of those cells, and their capacity for electric charge. Based on these calculations, he found that the layer of lipids surrounding the cell is 3.3 mm thick (Fricke, 1924). Although his measurements were dramatically accurate, lack of understanding of the structure of lipids led him and others to the conclusion that the layer of lipids around the cell could only be one layer thick. Meanwhile, two Dutch scientists, Evert Gorter and François Grendel approached the question a different way. They extracted all of the lipids from a sample of red blood cells and allowed them to spread out on a watery surface, much like Ben Franklin had done with the oil. They found that when the lipids spread out as one layer, the area that they covered was almost exactly twice the surface of the red blood cells themselves (Gorter & Grendel, 1925). Thus, Gorter and Grendel concluded that the lipid surface surrounding the cells must be two layers. It turns out that the limited technology of the time led to two major errors in their work. First, they did not completely extract all of the lipids from the red blood cells. Second, they underestimated the surface of the red blood cell because they were unaware of its double-concave shape. However, the two mistakes acted to cancel each other out almost exactly and their conclusions were correct.

When the electron microscope was invented in 1931 by the German scientists Max Knoll and Ernst Ruska (Knoll & Ruska, 1932), two thin lines could eventually be seen surrounding all cells. This was dramatic and convincing evidence that the membrane consists of a double layer of lipids. Even more dramatically, the electron microscope revealed that the cell membrane also had visible structures embedded in it (Figure 1).

Figure 1: An electron micrograph showing the double membrane.

The third advance in the understanding of membranes came when it was realized that the membrane is a "fluid" structure in which component molecules are in constant and rapid motion. Although several key measurements and experiments contributed to this breakthrough in our understanding, perhaps the most dramatic was a cell fusion experiment conducted by Larry Frye and Michael Edidin at Johns Hopkins University in 1970 (Frye & Edidin, 1970). For this clever experiment, the scientists grew human cells in one dish and mouse cells in another. They used a technique, brand new at the time, to attach fluorescent labels to some of the proteins on the outside of cells. They labeled some of the proteins on the human cells with a fluorescent blue dye and the proteins on the mouse cells with a red dye. Then, they used a virus to trick the cells into fusing together. These hybrid cells – half human, half mouse – did not survive for very long, but they did live just long enough to show us something about membranes. At first, just after the cells had fused, all of the blue label was segregated on one half of the hybrid cell, while the red label was on the other half. However, very quickly, the labels began to intermix with each other, and within 40 minutes the blue and red labels were evenly distributed throughout the surface of the hybrid cell (Figure 2).

Figure 2: The hybrid cell experiment showed that proteins moved fluidly around the membrane.

The quick mixing of the fluorescent labels means that the proteins on the surface of the cell are not fixed in place – they can and do diffuse rapidly around the exterior of the cell, while still being embedded in the plasma membrane. This realization led to the development of the fluid-mosaic model of membrane structure, which was first fully articulated by S. J. Singer and Garth L. Nicolson in 1972 (Singer & Nicolson, 1972). Singer and Nicolson explained the plasma membrane as a bilayer – two layers of lipid molecules – with protein molecules embedded in the layers. They compared this to a mosaic of colored tiles that are inlaid to form a design or picture. However, in this case, the tiles are the molecules of lipid and protein, and they are not fixed in place – they move about through diffusion. Another way to imagine the surface of the membrane is to picture the surface of the ocean on a rough and windy day. The lipid molecules are like the ocean water, and the proteins are bobbing around like "icebergs…floating in a sea of lipid" (Singer & Nicolson, 1972). See Figure 3 for an illustration of the concept.

Figure 3: Cell membrane proteins float in a sea of phospholipids.

Since 1972, we have learned a great deal about the molecular components of biological membranes and our current understanding of the very complex and dynamic nature of membranes is a far cry from the static film that was once imagined. By far, the most important structural feature of the membrane is the amphipathic nature of the lipids that make up the bulk of the membrane. It turns out that the lipids that comprise membranes are not purely hydrophobic. These special lipids have a charged phosphate group at one end which makes this region of the molecule water-soluble, or hydrophilic.

Thus, these phospholipid molecules have water-soluble head groups and water-insoluble tail groups, creating an amphipathic overall structure (Figure 4). Soaps and detergents are also amphipathic, which not only explains how they dissolve easily in water, but also how they dissolve oils and greases in water, the key to their effectiveness as cleaning agents.

Figure 4: The unique structure of the phospholipids that make up the cell membrane causes it to be amphipathic.

The amphipathic nature of the phospholipid molecules is important because it explains how these molecules establish a two-layered membrane. Two rows of lipid molecules self-assemble in opposite orientations (Figure 5). The hydrophobic tail regions tuck together to create a water-free inner environment, and the hydrophilic head regions face outward, where they are free to interact with water, the principal solvent both inside and outside of cells.

Figure 5: Phospholipids arrange themselves so that the hydrophobic tails are end-to-end and the hydrophilic heads point outward toward the cell exterior on one side and the cell interior on the other.

But membranes are more than simple bilayers. The experiment by Frye and Edidin involved proteins that float in the plasma membrane. It turns out that the membrane has many different kinds of molecules floating in it, not just proteins. For example, most animal cell membranes contain cholesterol, a completely different kind of lipid. Cholesterol functions to regulate the fluidity of the membrane and also prevents freezing and cracking of the cell membrane at low temperatures. (That animal cells have cholesterol in their membranes but plant cells do not explains why all the cholesterol in our diets comes from animal products, not plant ones.) In addition, some lipids have the phosphate head group replaced by a carbohydrate group. These are called glycolipids. Similarly, some of the proteins in membranes also have carbohydrate groups attached to them and are called glycoproteins. Both glycolipids and glycoproteins are important "cell markers" used by cells to identify themselves to other cells.

Some proteins are fully integrated into the membrane and are called integral membrane proteins or transmembrane proteins, since they “span” both layers of the membrane. Transmembrane proteins are useful to the cell because they can interact with molecules on the outside of the cell and relay information about the extracellular environment to the interior of the cell. Other proteins are more loosely attached on the inside or outside of the membrane and are called peripheral membrane proteins. Peripheral membrane proteins are often used by the cell during signal transduction – the process by which a cell responds to a signal from another cell. In addition, while most proteins are free to float around the membrane as we saw with the hybrid cell experiment, some proteins are attached to part of the cytoskeleton and are thus anchored in one place. This anchoring can serve as a crucial structural component of the cell and its attachment to other cells or to the tissue matrix. Figure 6 below gives a more complete picture of the many kinds of molecules that are found in biological membranes.

Figure 6: Many types of proteins are mingled throughout the cell membrane.

As explained in our module The Discovery and Structure of Cells, the outer plasma membrane is not the only membrane in the cell. Many interior organelles have membranes as well, including the nucleus, mitochondrion, chloroplast, endoplasmic reticulum, Golgi body, lysosome and peroxisome. These membranes are all very similar. They all are composed of a sea of phospholipids with proteins and other components floating within. The main differences are that the specific phospholipids that make up the membranes are somewhat different and the floating components within the membranes are different. Each organelle, including the plasma membrane, has a unique signature of proteins floating in the phospholipid bilayer.

Now to the question of what the plasma membrane actually does. First and most obvious is that the plasma membrane is indeed a selective barrier. It allows the chemical activities inside the cell to proceed mostly undisturbed by events outside the cell. The famous cell biologist Gerald Weissmann emphasized the importance of this role:

In the beginning, there must have been a membrane! Whatever flash of lightning there was that organized purines, pyrimidines, and amino acids into macromolecules capable of reproducing themselves it would not have yielded cells [except] for the organizational trick afforded by the design of a membrane wrapping.

The lipid nature of the membrane allows it to serve as a good barrier. Lipids are water-insoluble and repel water; thus they are an ideal medium to separate the watery inside and outside of a cell. Anything that is water-soluble, even tiny particles such as H+ ions, will not easily pass through a lipid bilayer. However, water-insoluble molecules may pass freely; these include small molecules such as oxygen and carbon dioxide, and large water-insoluble hormones such as estrogen, testosterone, cortisol, thyroid hormone, and vitamin D. For these reasons, membranes are said to be semipermeable barriers. They do not let water or water-soluble molecules pass, but they do allow diffusion of water-insoluble (lipid-soluble) molecules.
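
As a rough illustration of this rule of thumb, the Python sketch below classifies a few of the molecules mentioned above (the attributes are simplified assumptions; real permeability also depends on size and other factors):

def crosses_bilayer_freely(polar, charged):
    # Simplified rule from the text: only uncharged, nonpolar
    # (lipid-soluble) molecules diffuse through the bilayer on their own.
    return not polar and not charged

molecules = [
    ("oxygen",       False, False),
    ("testosterone", False, False),
    ("water",        True,  False),
    ("H+ ion",       False, True),
]

for name, polar, charged in molecules:
    verdict = "diffuses freely" if crosses_bilayer_freely(polar, charged) else "is blocked"
    print(f"{name}: {verdict}")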

However, membranes are more than passive barriers. This is made clear by the many molecules that cannot pass through simple bilayers very quickly, but can pass into and out of cells. Water is the best example. As the understanding of membranes developed in the scientific community, a conundrum emerged. The phospholipid bilayer structure should not be very permeable to water, but when cells are studied in the laboratory, most are very permeable to water. How could this be? Scientists went so far as to build synthetic membranes using exactly the kinds and quantities of phospholipids found in specific types of cells. These synthetic membranes had very low water permeability, while the cells they modeled had very high water permeability. The hypothesis at the time was that there must be some sort of pore or channel in membranes through which water can pass, but all evidence for this was indirect. Channels for ions had been discovered, but the way that cells move water in and out remained a mystery.

This changed in 1992 when Peter Agre and colleagues reported their accidental discovery of channels called aquaporins (Preston et al., 1992). These channels are embedded in the plasma membrane and allow water to pass into and out of the cell (Figure 7). Agre and colleagues were not in the business of studying water transport. They were studying the Rhesus (Rh) factors that are present on red blood cells and can result in blood incompatibility complications. In trying to isolate and purify these Rh factors, they noticed a "contaminant" in their test tubes – a membrane protein that they were not trying to study but which kept getting in the way. When they noticed that this protein is one of the most abundant proteins on the surface of the red blood cell, they decided to take a closer look and eventually realized that this "contaminant" was the protein that scientists had been seeking for decades. Over the next few years, a whole family of related aquaporin proteins was discovered, and these proteins have a nearly identical structure in humans, fruit flies, fungi, and plants, indicating an ancient origin and strong conservation throughout more than a billion years of evolution.

Figure 7: Aquaporin proteins in the membrane allow only molecules that are shaped and charged like water molecules to pass freely.

Interestingly, a research group from Romania led by Gheorghe Benga had likely made this discovery at least six years before Agre, but they had not fully isolated or identified the protein. Controversy has since been raised over the issue of proper credit, because Benga's work almost certainly describes the same protein and had been published years earlier, both in a US journal and an international one. Nevertheless, Agre and colleagues did not cite this work in their publications or Nobel Prize lectures, and most of the scientific community overlooked it as well. It should be noted that, working in an Eastern Bloc country as the collapse of the Soviet Union approached, Benga and his colleagues did not have the prestige or resources that Agre and his colleagues enjoyed at Johns Hopkins University. It is conceivable that, had Benga been working at a more internationally prestigious institution and/or with more financial resources, he might have shared the Nobel Prize in 2003.

The discovery of aquaporins highlights how proteins embedded in the plasma membrane can act as gatekeepers and govern the entry of molecules into and out of the cell. The membrane has many such gatekeepers, and, like aquaporin, they are very specific. Aquaporin allows water molecules in and out freely, but closely related molecules pass through with much less efficiency (Figure 8). For example, urea, ammonia, and alcohol can each pass through aquaporins, and indeed these channels are the main route through which these molecules are absorbed by most cells. However, they pass through more than a million times more slowly than water does. The structure of aquaporins reveals how they achieve this selectivity. Within the tunnel-like chamber through which water molecules pass, there are structural features that fit only a molecule with the size, shape, and partial-charge distribution of water. Thus, while molecules similar in size and charge to water can sometimes pass through, they do so at a much lower rate than water itself.

Figure 8: Aquaporins allow molecules like urea, ammonia, and alcohol to pass through at a much slower rate than water molecules.

The example of the aquaporins shows how the plasma membrane can be selective about what enters and leaves the cell. As cell biologist Daniel Mazia put it:

The cell membrane is not a wall or a skin or a sieve. It is an active and responsive part of the cell; it decides what is inside and what is outside, and what the outside does to the inside.

Cell membranes are much more than passive barriers; they are complex and dynamic structures that control what enters and leaves the cell. This module explores how scientists came to understand cell membranes, including the experiments that led to the development of the fluid-mosaic model of membrane structure. The module describes how the components and structure of cell membranes relate to key functions.

Key Concepts

  • The outer layer of a cell, or a cell membrane, is a complex structure with many different kinds of molecules that are in constant motion, moving fluidly throughout the membrane.

  • Cell membranes form selective barriers that protect the cell from the watery environment around them while letting water-insoluble molecules like oxygen, carbon dioxide and some hormones pass through.

  • Most of the cell membrane is formed by phospholipids that have a unique structure that causes them to self-arrange into a double layer that is hydrophobic in the middle and hydrophilic on the outside.

  • Fricke, H. (1924). A mathematical treatment of the electric conductivity and capacity of disperse systems I. The electric conductivity of a suspension of homogeneous spheroids. Physical Review, 24, 575.
  • Frye, L. D. & Edidin, M. (1970). The rapid intermixing of cell surface antigens after formation of mouse-human heterokaryons. Journal of Cell Science, 7, 319-335.
  • Gorter, E. & Grendel, F. (1925). On bimolecular layers of lipoids on the chromocytes of the blood. The Journal of Experimental Medicine, 41(4), 439.
  • Knoll, M. & Ruska, E. (1932). Das Elektronenmikroskop. Zeitschrift für Physik, 78(5), 318-339.
  • Preston, G. M., Carroll, T. P., Guggino, W. B. & Agre, P. (1992). Appearance of water channels in Xenopus oocytes expressing red cell CHIP28 protein. Science, 256(5055), 385.
  • Singer, S. J. & Nicolson, G. L. (1972). The fluid mosaic model of the structure of cell membranes. Science, 175(4023), 720-731.

Nathan H Lents, Ph.D., Donna Hesterman “Membranes I” Visionlearning Vol. BIO-3 (7), 2014.


Cell Biology

by Nathan H Lents, Ph.D.

Approximately 30,000 Americans have a disease called Cystic Fibrosis (CF). This is a genetic disease that individuals inherit from both parents and suffer from throughout their lives. People with CF have serious respiratory and digestive problems because a viscous, sticky mucus builds up in their lungs and other organs. Just a couple of decades ago, most individuals with CF did not survive long enough to begin kindergarten. Fortunately, medical research has pushed the average lifespan of a CF sufferer to approximately 35 years. In addition, the root cause of the disease has been identified: The plasma membranes of cells in the affected organs are missing a key component and so do not function properly.

The plasma membrane (also called the cell membrane) is anything but a simple barrier between the inside of a cell and the environment outside of it. As explored in Membranes I: Introduction to Biological Membranes, there is a wide variety of embedded components that are essential to the life of the cell, including lipids, carbohydrates, and proteins – many of which regulate what is allowed to pass into and out of the cell (Figure 1).

Figure 1: Many types of components are mingled throughout the cell membrane.

The plasma membrane of all cells is a barrier to most molecules. Only uncharged, non-polar molecules can easily pass through the membrane. Non-polar molecules are those whose bonds involve equal or symmetrical sharing of electrons so there are no partial positive or negative charges. This includes gases like carbon dioxide and oxygen and a few lipid hormones like testosterone and estrogen.

However, most molecules in our bodies are either charged or polar. For example, water cannot pass directly through a biological membrane because it is a polar molecule, with partial positive and partial negative charges. The interior environment of the plasma membrane is highly hydrophobic because of the close crowding of all of the fatty acid hydrocarbon tails (see Membranes I: Introduction to Biological Membranes). Those hydrocarbon tails are filled with non-polar bonds, and there are essentially zero polar bonds anywhere in the interior section of the membrane. This creates a very hydrophobic environment, and thus water is strongly repelled.

Glucose is another example of a polar molecule that cannot easily pass through the membrane. It is much larger than water with many polar bonds all throughout the molecule. Ions, such as sodium (Na+) and chloride (Cl-), have an even more difficult time going through the membrane than glucose. They are not just partially charged; they are fully charged and thus strongly repelled by the interior of the membrane (see Figure 2).

Figure 2: Non-polar molecules like oxygen and nitrogen diffuse through a membrane, whereas polar molecules and charged ions do not diffuse through a membrane. image © Visionlearning

However, we also know that water, glucose, sodium, and chloride move in and out of cells all the time, which means that there must be something that assists them. This “something” is a collection of transporters: both passive and active.

There are transporters embedded in every cell membrane that allow molecules to pass through. In Membranes I, we discussed the water transporter, aquaporin – but there are many more of these transporters within the membranes of all living cells.

Transporters are proteins that are divided into two classes: passive transporters, also called channels, and active transporters, also called pumps. The difference between active and passive transport is whether or not energy is required to move the molecule from one side of the membrane to the other. A channel is passive because it does not require energy to help molecules flow through it. (The aquaporin water transporter is a channel.) Pumps, on the other hand, do require energy to do their work, so they are called active transporters.

In order to function, the heart, nerves, and muscles in a body need to move sodium ions into and out of their cells. However, because sodium ions are charged and cannot get through the membrane directly, cells have a sodium channel that creates a path – a tunnel – through the membrane where ions can flow freely.

Because channels merely provide a path for molecules to flow, they are only capable of allowing those molecules to flow from where they are in high concentration to where they are in low concentration. In other words, channels allow specific molecules to diffuse when they otherwise couldn't because a membrane is in their way. When a channel helps molecules to move through a membrane, this is called "facilitated diffusion." The molecules are passively spreading out evenly, but they are getting a little help from the channels to do so (see Figure 3).

Figure 3: Regular diffusion (the fat-soluble molecules) and facilitated diffusion (the water-soluble molecules). image © BruceBlaus

For example, inside of human cells, there is a fairly low concentration of sodium ions, but outside of the cells, in the general fluids of the body, there is a high concentration of sodium ions. This is why tears, sweat, and other body fluids taste salty. Thus, surrounding every cell of your body, there is a concentration gradient of sodium ions – low sodium inside of the cells and high sodium in the surrounding fluid. Channels can allow only the passive flow of molecules down their gradient (from high to low), not the other direction, so a sodium channel would allow sodium ions to flow into the cell, not out of it.

Channels are important for many different types of molecules. In 1989, it was discovered that the basis of Cystic Fibrosis was the lack of a specific kind of passive transport channel in the cell membranes of CF patients. This channel, known as CFTR (Cystic Fibrosis Trans-membrane Conductance Regulator), is actually made in the cells of individuals with CF, but it lacks just one tiny piece: an amino acid in a crucial location. Because of this one tiny alteration in its structure, CFTR is never delivered to the plasma membrane where it would normally allow chloride ions to flow out of the cell (Cheng, et al., 1990). A CFTR channel is shown in Figure 4.

Figure 4: A CFTR channel. image © Visionlearning

The flow of chloride ions from certain cells in the lungs is essential for making mucus of the proper consistency. Without chloride, the mucus is not as watery as it should be. When chloride fails to flow out from the cells of CF patients, viscous mucus builds up in their lungs, leading to the symptoms and infections associated with CF, such as frequent coughing and wheezing. This underscores how important a role the cell membrane plays. It is much more than a static, selective barrier.

Comprehension Checkpoint

Channels allow molecules to

Many cells, especially neurons and muscle cells, have sodium channels on them, but these are usually held closed by gates. These gates prevent sodium from rushing into the cell so that the gradient can be maintained. However, these gates can also be opened at specific times. Because sodium concentration is higher outside the cell than inside, if the gates on the sodium channel suddenly opened, sodium ions would begin to flow inward.

It is important to remember that molecules move in random paths. While molecules will flow in through the channels from outside the cell, some will also flow back out. It’s just that more ions will flow into the cell than out of the cell because there are more ions outside to start with. Thus, when the gates open, we say that there is net movement of sodium ions into the cell. If the gates were to stay open long enough, the concentration of sodium inside and outside would equal out. There would be no more gradient and no more net movement. This doesn’t actually happen, though, because the gates only open for a brief instant.
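This statistical picture of net movement is easy to demonstrate with a toy simulation. The short Python sketch below is purely illustrative (the ion counts and the hop probability are invented, not measured values): every ion has the same small chance of slipping through an open channel each tick, yet the side that starts with more ions sends more across, so the concentrations drift toward equality.

    import random

    outside, inside = 1000, 100   # hypothetical ion counts on each side
    p_hop = 0.01                  # chance per tick that any one ion crosses

    for tick in range(2000):
        # Random motion is symmetric: each ion, on either side, has the
        # same chance of crossing. Net flow comes only from the head count.
        out_to_in = sum(random.random() < p_hop for _ in range(outside))
        in_to_out = sum(random.random() < p_hop for _ in range(inside))
        outside += in_to_out - out_to_in
        inside += out_to_in - in_to_out

    print(outside, inside)   # both hover near 550: the gradient is gone

Note that ions cross in both directions the whole time; the inward bias simply fades as the two concentrations approach each other, exactly as described above for an ungated channel.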

How do sodium ions get to be at a high concentration outside the cell in the first place? To answer this, we must consider the topic of active transport. Active transport is exactly the opposite of passive transport. First, it does require the input of energy, rather than relying on the random motion of molecules (and this usually comes in the form of ATP). Second, active transport builds concentration gradients – meaning that it increases the concentration of molecules in a given area – rather than reducing them (see our Diffusion I: An Introduction module). Third, it requires the action of a membrane pump (instead of a channel) to move molecules from one side of the membrane to the other.

Membrane pumps are proteins embedded in the plasma membrane that pump specific molecules or ions into or out of the cell. For example, there are proton (H+) pumps in the lining of the stomach. They pump protons into the stomach cavity, creating a very acidic solution to help digest food (Figure 5). People who suffer from chronic heartburn or indigestion might take Nexium, Prilosec, or Prevacid to treat this discomfort. These drugs work by slowing down the proton pumps in the stomach walls and thus making the stomach less acidic (Peghini et al., 1998). Other examples of pumps are the calcium (Ca2+) pumps in the intestines that help absorb calcium from food, and the glucose pumps in the kidney that grab all the glucose out of the pre-urine fluid so that we don’t lose glucose constantly in our urine. Unlike channels, all of these pumps must use energy to do this pumping.

Figure 5: A proton pump in the lining of the stomach. image © Visionlearning

Comprehension Checkpoint

The random motion of molecules is associated with _____________ transport.

Perhaps the most important pump of all is the sodium/potassium pump, usually written simply as the Na+/K+ pump. This pump exists in just about every cell membrane of the human body, and indeed in almost every cell membrane of every animal that has ever lived on Earth. This pump is responsible for pumping sodium out of the cell and potassium into the cell. Because it pumps two things in opposite directions, it is called an antiport.

Although there is already a lot of Na+ outside the cell (and very little inside), the Na+/K+ antiport actively pumps Na+ from inside the cell to the outside. The same is true for potassium (K+) – it actively pumps K+ into the cell despite higher concentrations within than without. The antiport is constantly building both gradients by increasing the concentrations of sodium outside of, and potassium inside of, the cell. The Na+/K+ pump works tirelessly on every cell of the human body, constantly maintaining these two crucial gradients (Figure 6).

Figure 6: The sodium-potassium (Na+/K+) antiport actively pumps sodium from inside the cell to the outside while also pumping potassium into the cell. image © BruceBlaus

Because it is working against the natural flow of diffusion – to balance out the concentration on either side of the membrane – the Na+/K+ pump is said to be engaged in active transport, a process that requires energy. Like most work that cells do, the energy for this transport work comes in the form of ATP.
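The tug-of-war between a pump and a leak can also be sketched numerically. In the toy model below, the rate constants are invented for illustration (they are not physiological measurements): a channel-like leak carries sodium down its gradient while a pump pushes a fixed amount back out each step, and the gradient settles where the two flows balance.

    # Toy pump-versus-leak model with invented rates (arbitrary units).
    na_in, na_out = 70.0, 70.0    # start with no gradient at all

    leak_rate = 0.02    # passive leak, proportional to the gradient
    pump_rate = 1.0     # fixed amount pumped out per step (costs ATP)

    for step in range(500):
        leak = leak_rate * (na_out - na_in)   # diffusion back into the cell
        na_in += leak - pump_rate
        na_out += pump_rate - leak

    print(round(na_in, 1), round(na_out, 1))   # settles near 45.0 and 95.0

Set pump_rate to zero and the gradient decays away, which is essentially what happens in a cell that runs out of ATP.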

Why is it so important to keep the interior of the cells low in Na+ and high in K+? The reason is that these two gradients are used for all kinds of important purposes around the body, such as allowing nerves to send messages and muscles to contract. The plasma membranes of neurons and muscle cells have sodium and potassium channels on them; however, these channels are not always open – they have gates on them that are usually closed. These gates can be suddenly opened, though. For example, muscle cells have a sodium channel with a gate that can be opened by the neurotransmitter acetylcholine. If a neuron suddenly releases acetylcholine onto a muscle, the gate on the sodium channel will swing open. When that happens, sodium ions will rush into the cell because of the ever-present sodium gradient. The sodium ions (Na+) then cause a rapid chain reaction that leads to muscle contraction. (See Figure 7.)

Figure 7: A neuron releases acetylcholine onto a muscle, causing the gate on the sodium channel to swing open (inset) and sodium ions rush into the cell because of the sodium gradient. image © VL

During normal muscle use, the influx of sodium is temporary and is quickly reversed by the Na+/K+ pump, which is always working to re-establish the gradients as quickly as possible. However, during strenuous exercise, particularly when the muscle is not accustomed to such demanding work, the Na+/K+ pump and other ion pumps that are important in muscle cell function cannot keep up with the ion influx from the gates being opened so much. This leads to a sustained and involuntary contraction of the muscle, also called a cramp, as the sodium ions build up inside the muscle cells. Because the contraction is involuntary and very intense, cramps are painful and usually debilitating. The only way to reverse them is to stop all exercise and massage the muscle, coaxing it into a relaxed state and giving the Na+/K+ pump a chance to get caught up on its job of getting sodium out of the cell, and potassium in. Athletes who are in very good shape have fewer problems with cramping because their well-trained muscles have more Na+/K+ pumps, and other ion pumps, than the rest of us have.

Many neurons in your brain also respond to a sudden influx of sodium ions by releasing neurotransmitters onto neighboring neurons. The crucial importance of these sodium channels is underscored by the fact that some of the most deadly poisonous compounds ever discovered are compounds that block sodium channels, paralyzing nerves and muscles. Tetrodotoxin, one such sodium channel-blocking poison found in Fugu pufferfish, is 100 times more lethal than cyanide. Ingesting even a very small dose of tetrodotoxin can completely paralyze someone by preventing both muscles and neurons from functioning (Narahashi, Moore, & Scott, 1964).

Comprehension Checkpoint

_________ provides energy for active transport.

In the 1950s, scientists knew that ions move in and out of cells and that, because of this, cells had a voltage – a difference in the charge inside of the cells compared to outside the cells. The voltage, also called the resting membrane potential, of nearly all cells is negative – meaning there are more negative charges than positive charges inside the cell. This internal negative charge mostly comes from the large macromolecules of life – DNA, proteins, lipids, and sugars – which are all negatively charged. But scientists didn't understand how the cell prevented positive ions from flowing in to cancel out the negative charges, or why all animal cells maintained a low concentration of sodium and a high concentration of potassium.
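The size of the voltage that a single ion gradient can support is usually estimated with the Nernst equation, a standard physiology formula that this article does not derive. The sketch below applies it with typical textbook concentrations for a mammalian cell (the numbers are assumed for illustration, not taken from this module).

    import math

    def nernst_mv(conc_out, conc_in, z=1, temp_k=310.0):
        """Equilibrium potential, in millivolts, for one ion gradient."""
        R, F = 8.314, 96485.0    # gas constant and Faraday constant
        return 1000 * (R * temp_k) / (z * F) * math.log(conc_out / conc_in)

    # Typical mammalian concentrations in mM (assumed textbook values)
    print(round(nernst_mv(145, 12)))   # Na+ gradient alone: about +67 mV
    print(round(nernst_mv(5, 140)))    # K+ gradient alone: about -89 mV

The measured resting potential of most cells sits between these two extremes, and closer to the potassium value, because at rest the membrane lets far more K+ than Na+ trickle through.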

This changed in 1958 when Jens Skou, a Danish physician, made an accidental discovery while studying how local anesthetics worked. Local anesthetics are substances that prevent or reduce pain in a targeted area; one example is Novocain, which is used by dentists to numb the mouth during oral surgery. In his laboratory, Dr. Skou noticed that cells have an enzyme embedded in their membrane that consumed a lot of ATP. He then noticed that when he exposed cells to some of these anesthetics, the membrane-bound enzyme stopped consuming ATP, as if it were paralyzed. The effect would slowly wear off as the drug washed away from the cells. The crucial part of the discovery came when he noticed that the drugs didn't only affect the mysterious ATP-consuming enzyme, but also allowed sodium to build up in the cell and potassium to leak out. No other ions were affected – just sodium and potassium. And once again, the effect wore off over time. With exactly the same timing, the ATP consumption would gradually resume and the Na+ and K+ gradients would be restored. Dr. Skou didn't immediately make the connection and went about studying other painkillers.

It was only after a conversation with another scientist, Robert Post, who was studying sodium transport in red blood cells, that they both realized they could be studying the same enzyme. Dr. Post went back to his lab and tried the same anesthetic that Skou used, and it worked – it inhibited sodium transport in the red blood cells. Meanwhile, Skou telephoned his laboratory and instructed them to try the drug that Post had been studying, ouabain (pronounced wah-bain), and a few days later, his laboratory called back to say that it worked the same way (Skou, 1965).

What does inhibiting a sodium/potassium pump have to do with relieving pain? As mentioned above, the gradients of sodium and potassium are crucial for the functioning of neurons. When ouabain and anesthetic drugs like those Skou studied slow the Na+/K+ pump on the sensory neurons responsible for sensing pain, they temporarily disrupt the Na+ and K+ gradients. When this happens, the neuron is paralyzed for a while and cannot transmit its message of pain to the brain. Though the Na+/K+ pump is on every cell of the body, these drugs do not affect other cells as powerfully as they do neurons. Most cells don't rely as much on the Na+ and K+ gradients to function, so these cells are not as affected by the drugs. However, there is one other type of cell that is affected – muscles. Both muscles and neurons are said to be excitable, which means that they are very sensitive to changes in voltage and movement of ions. Drugs that inhibit the Na+/K+ pump can paralyze muscles as well as neurons.

In summary, cellular membranes are neither passive sacs around the cell nor solitary cell parts. Embedded in the membrane are proteins that perform vital functions for the cell. Among the most important functions of these proteins is the transport of various molecules into and out of the cell. As we saw with cystic fibrosis, when even just one of the hundreds of transporter types in a cell membrane malfunctions, serious disease can result.

At the same time, the functions of these transporters can sometimes be manipulated with pharmaceutical drugs to treat certain medical conditions. Drugs that restrain the proton pumps on the stomach lining are useful in treating acid reflux, and drugs that inhibit the Na+/K+ pump can act as topical pain relievers. Thus, many biomedical scientists study plasma membranes in their pursuit of treatments and cures for common medical conditions.

For living things to survive, different molecules need to enter and leave cells, yet cell membranes serve as a barrier to most molecules. Fortunately, all living cells have built-in transporters that allow water, glucose, sodium, potassium, chloride, and other molecules to cross the plasma membrane. This module looks at how passive and active transporters work. It highlights the importance of the study of cell membranes by looking at advances in treating cystic fibrosis and common digestive ailments as well as the development of effective pain relievers.

Key Concepts

  • Whether or not a molecule is able to pass easily, or at all, into or out of a cell is largely dependent on its charge and solubility in water.

  • The plasma membrane serves as a semi-permeable barrier to the cell. Only uncharged, non-polar molecules are able to pass into or out of the cell without aid.

  • All plasma membranes possess transporters to help move molecules from one side of the membrane to the other. These transporters can be active (pumps) or passive (channels) and are sometimes regulated by gates.

  • The lack of a specific transporter can interrupt cellular functions and cause diseases like cystic fibrosis.

  • Research into pain relievers provided insight into the most important and universal transporter in the human body, the sodium-potassium pump.

  • Cheng, S. H., Gregory, R. J., Marshall, J., Paul, S., Souza, D. W., White, G. A., ... & Smith, A. E. (1990). Defective intracellular transport and processing of CFTR is the molecular basis of most cystic fibrosis. Cell, 63(4), 827-834.
  • Narahashi, T., Moore, J. W., & Scott, W. R. (1964). Tetrodotoxin blockage of sodium conductance increase in lobster giant axons. The Journal of General Physiology, 47(5), 965-974.
  • Peghini, P. L., Katz, P. O., Bracy, N. A., & Castell, D. O. (1998). Nocturnal recovery of gastric acid secretion with twice-daily dosing of proton pump inhibitors. The American Journal of Gastroenterology, 93(5), 763-767.
  • Skou, J. C. (1965). Enzymatic basis for active transport of Na+ and K+ across cell membrane. Physiological Reviews, 45(5), 617.

Nathan H Lents, Ph.D. “Membranes II” Visionlearning Vol. BIO-3 (8), 2014.


Biological Molecules

by Anthony Carpi, Ph.D.

In many ways, our bodies can be thought of as chemical processing plants. Chemicals are taken in, processed through various types of reactions, and then distributed throughout the body to be used immediately or stored for later use. The chemicals used by the body can be divided into two broad categories: macronutrients, those substances that we need to eat regularly in fairly large quantities, and micronutrients, those substances that we need only in small amounts. Three major classes of macronutrients are essential to living organisms: carbohydrates, fats, and proteins. In this lesson, we will discuss the carbohydrates; fats and proteins are discussed in another lesson (see our Fats and Proteins module).

Carbohydrates are the main energy source for the human body. Chemically, carbohydrates are organic molecules in which carbon, hydrogen, and oxygen bond together in the ratio: Cx(H2O)y, where x and y are whole numbers that differ depending on the specific carbohydrate to which we are referring. Animals (including humans) break down carbohydrates during the process of metabolism to release energy. For example, the chemical metabolism of the sugar glucose is shown below:

C6H12O6 + 6O2 → 6CO2 + 6H2O + energy

Animals obtain carbohydrates by eating foods that contain them, for example potatoes, rice, breads, and so on. These carbohydrates are manufactured by plants during the process of photosynthesis. Plants harvest energy from sunlight to run the reaction just described in reverse:

6CO2 + 6H2O + energy → C6H12O6 + 6O2

A potato, for example, is primarily a chemical storage system containing glucose molecules manufactured during photosynthesis. In a potato, however, those glucose molecules are bound together in a long chain. As it turns out, there are two types of carbohydrates: the simple sugars, and the complex carbohydrates, which are made of long chains of sugars.

All carbohydrates are made up of units of sugar (also called saccharide units). Carbohydrates that contain only one sugar unit (monosaccharides) or two sugar units (disaccharides) are referred to as simple sugars. Simple sugars are sweet in taste and are broken down quickly in the body to release energy. Two of the most common monosaccharides are glucose and fructose. Glucose is the primary form of sugar stored in the human body for energy. Fructose is the main sugar found in most fruits. Both glucose and fructose (Figures 1a and 1b) have the same chemical formula (C6H12O6); however, they have different structures, as shown (note: the carbon atoms that sit in the "corners" of the rings are not labeled):

Glucose

Fructose

Disaccharides have two sugar units bonded together. For example, common table sugar is sucrose, a disaccharide that consists of a glucose unit bonded to a fructose unit:

Sucrose

Complex carbohydrates are polymers of the simple sugars. In other words, the complex carbohydrates are long chains of simple sugar units bonded together (for this reason the complex carbohydrates are often referred to as polysaccharides). The potato we discussed earlier actually contains the complex carbohydrate starch. Starch is a polymer of the monosaccharide glucose.

Starch
n is the number of repeating glucose units
(ranges in the 1,000's)
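Because every glucose-glucose bond in the chain expels one molecule of water, the formula of a starch chain can be worked out by simple atom bookkeeping. The sketch below does that arithmetic; it is a back-of-the-envelope check, not a model of real starch chemistry.

    # Condensing n glucose units (C6H12O6) releases n - 1 water molecules,
    # so subtract 2(n-1) hydrogens and (n-1) oxygens from n free glucoses.
    def starch_formula(n):
        c = 6 * n
        h = 12 * n - 2 * (n - 1)
        o = 6 * n - (n - 1)
        return f"C{c}H{h}O{o}"

    print(starch_formula(1))      # C6H12O6: glucose itself
    print(starch_formula(2))      # C12H22O11: two linked sugar units
    print(starch_formula(1000))   # C6000H10002O5001: a starch-sized chain

Note how the repeating unit quickly approaches C6H10O5, which is why polysaccharides are often written as (C6H10O5)n.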

Starch is the principal polysaccharide used by plants to store glucose for later use as energy. Plants often store starch in seeds or other specialized organs; for example, common sources of starch include rice, beans, wheat, corn, potatoes, and so on. When humans eat starch, an enzyme that occurs in saliva and in the intestines called amylase breaks the bonds between the repeating glucose units, thus allowing the sugar to be absorbed into the bloodstream. Once absorbed into the bloodstream, the human body distributes glucose to the areas where it is needed for energy or stores it as its own special polymer – glycogen.

The Starch Molecule

Glycogen, another polymer of glucose, is the polysaccharide used by animals to store energy. Excess glucose is bonded together to form glycogen molecules, which the animal stores in the liver and muscle tissue as an "instant" source of energy. Both starch and glycogen are polymers of glucose; however, starch is a long, straight chain of glucose units, whereas glycogen is a branched chain of glucose units, as seen below:

The Glycogen Molecule

Another important polysaccharide is cellulose. Cellulose is yet another polymer of the monosaccharide glucose. Cellulose differs from starch and glycogen because the glucose units form a two-dimensional structure, with hydrogen bonds holding together nearby polymers, thus giving the molecule added stability (Figure 6). Cellulose, also known as plant fiber, cannot be digested by human beings; therefore, it passes through the digestive tract without being absorbed into the body. Some animals, such as cows and termites, have bacteria in their digestive tracts that help them digest cellulose. Cellulose is a relatively stiff material, and in plants it is used as a structural molecule to add support to the leaves, stem, and other plant parts. Despite the fact that it cannot be used as an energy source in most animals, cellulose fiber is essential in the diet because it helps exercise the digestive tract and keep it clean and healthy.

The Cellulose Molecule

Our bodies are efficient chemical processing plants, breaking down nutrients to use and store for energy. This module introduces carbohydrates, an important macronutrient. It explains how different carbohydrates are used by plants and animals. Simple sugars and complex carbohydrates are identified, and their biochemical structures are compared and contrasted.

Key Concepts

  • Carbohydrates are a class of macronutrients that are essential to living organisms. They are the main energy source for the human body.

  • Carbohydrates are organic molecules in which carbon (C) bonds with hydrogen and oxygen (H2O) in different ratios depending on the specific carbohydrate.

  • Plants harvest energy from the sun and manufacture carbohydrates during photosynthesis. In a reverse process, animals break down carbohydrates during metabolism to release energy.

  • All carbohydrates are made up of units of sugar. There are two types of carbohydrates: simple sugars – the monosaccharides and disaccharides – and complex carbohydrates – the polysaccharides, which are polymers of the simple sugars.

  • Examples of complex carbohydrates are starch (the principal polysaccharide used by plants to store glucose for later use as energy), glycogen (the polysaccharide used by animals to store energy), and cellulose (plant fiber).

Anthony Carpi, Ph.D. “Carbohydrates” Visionlearning Vol. BIO-3 (3), 2003.


Evolutionary Biology

by Iris Saxer, M.A./M.S., Alfred L. Rosenberger, Ph.D.

For centuries, human beings have looked at the complexity of the natural world in wonder. From the delicate design of the more than 18,000 species of orchids that exist (Figure 1), to the breathtaking flight of birds, humans have struggled to understand what the driving force behind the diversity of life is and why so many remarkably different shapes and features exist in the natural world.

Figure 1: Butterfly orchid, Encyclia tampensis. image © National Park Service, R. Cammauf

In 1802, the English priest William Paley wrote that the complexity of animals and plants is of "a degree which exceeds all computation," and he argued that only a divine being could have created these organisms.

Having been educated in England in the early 19th century, Charles Darwin was not only familiar with Paley's writings, but impressed by them. However, Darwin disagreed with Paley's reasoning. Why would a deity create parasites that would eat away at the insides of an organism, and what would the purpose be of crafting a bird that could not fly?

Darwin knew that other natural historians had begun to ask similar questions during the 18th and early 19th centuries. They had begun the gradual process of figuring out that there was a special connection between organisms and the environment, a kind of fit that explained why particular structural details or patterns exist in nature. For example, why are flowers of a certain shape visited most often by certain moths while others are pollinated by bees, or why do large animals that swim well, whether they are dolphins or alligators or eels or sharks, all have long streamlined bodies?

The answer is adaptation, an idea that Darwin absorbed from his predecessors. Two Frenchmen contributed important ideas on adaptation that Darwin worked into his theory of natural selection (see our Charles Darwin II module for more information). Georges Leclerc, who became famous during the middle and late 1700s for compiling information on the habits and geographical distribution of animals and plants, recognized that the differences between related species of animals living in different parts of the world reflected the different environments that they occupied. He thought that animals would somehow change after migrating from one place to another. Jean Baptiste Lamarck looked at things from a different perspective. He popularized the idea that the world's environment changed and with it the needs of the animals living off the environment; thus animals' characteristics changed to suit their surroundings.

We refer to the adjustments in the fit between organisms and the environment as evolutionary adaptation, or simply adaptation. Adaptation is the root concept that grew into Darwin's theory of natural selection. Natural selection is the mechanism that explains how things change; adaptation explains why they do.

Adaptation is based on the concept that populations of organisms change over time as a result of natural selection. Adaptive evolution is driven by increased survivorship and/or increased reproductive success. This happens when a group of individuals in a population gains an advantage because of special traits they share. These traits may be either inconspicuous or quite elaborate. They may, for example, start out as a 2 mm lengthening in the nectar-gathering tongue of a few moths that feed on orchids. If beneficial, over time the tongue may become much longer in that species as those individuals and their offspring out-reproduce others. Eventually the long shape becomes the norm, because the long-tongued adaptation, which allows more efficient feeding, contributes to an increase in reproductive success.

Darwin himself discovered an orchid with a huge, 11 inch long nectar-producing tube in Madagascar. He predicted that there would be a moth that feeds from the tube with an 11 inch proboscis. Almost 50 years later Darwin's prediction proved true when scientists discovered the moth Xanthopan morganii praedicta with a 12 inch proboscis which fed from, and pollinated, Darwin's orchid (Angraecum sesquipedale). Of course, the ultimate source of an adaptation like this, and all others, is genetic, because only traits that can be passed on from one generation to the next are influenced by natural selection.

Darwin's orchid-and-moth example is one of the more visible cases of adaptation. One feature of a plant is associated with a corresponding feature of an animal so that both benefit from their interconnected lives in nature. But more generally, organisms are a mass of adaptations that come together to make a particular lifestyle work. Why? Because there are many factors in the environment that are "problems" requiring "solutions." The availability of food, predator-prey relationships, and climate all play an important role in determining which characteristics are beneficial and thus favored by natural selection.

Comprehension Checkpoint

Traits that become more widespread in a population over generations are often those that

Let's take penguins (Figure 2) as an example. Although the majority of penguin species live in temperate climates, some of the penguins we are most familiar with live in the extreme conditions of Antarctica. These flippered, flightless birds provide a wonderful example of multiple evolutionary adaptations.

Figure 2: There are 17 species of penguins, all living south of the equator. The genus Pygoscelis, which is Greek for “elbow leg”, consists of three species found on islands near the Antarctic mainland, chinstrap (P. antarctica - on the left), Adelie (P. adeliae - on the right), and gentoo penguins (P. papua - not pictured).
Figure 3: Adult penguins have a dense layer of tiny, waterproof feathers that protect them in the water. Penguin chicks are covered in fuzzy, insulating down that is replaced by waterproof feathers as they mature.

One of the most difficult challenges for Antarctic-dwelling penguins is maintaining their body temperature under the vastly different conditions on land, where they live and breed, and in the icy water, where they feed. Like other birds, penguins are homeothermic, maintaining a relatively stable body temperature between 35°C and 41°C. However, unlike most other birds, penguins do this in a climate where sea temperatures approach -2°C and air temperatures can range from 0°C to a bone-chilling -60°C.

While metabolism and muscle activity generate body heat internally, penguins have unique external adaptations to help them conserve this heat. To avoid heat loss, they are insulated by a thick layer of fat, or blubber, under the skin. This helps retain heat, just as in whales, seals, and other large cold-water animals. In addition, penguin bodies are covered by a layer of feathers that are more densely packed than those of any other bird. The bases of their feathers are also downy, trapping air for better insulation. Penguins have also evolved behaviors to keep their feathers in good condition and insulate them from the cold wind and water. They waterproof themselves by preening, which involves spreading special oily secretions from the uropygial gland at the base of their tail to other areas of their body.

Penguins have other adaptations that help them control temperature. An elaborate circulatory system allows them to retain and dissipate heat easily. The arteries and veins in their extremities are situated very close together so that they can exchange heat. This is called a "countercurrent" heat exchange system, reflecting the to-and-fro flow of blood relative to the heart. The layout raises the temperature of blood flowing from the flippers and legs to the body core by drawing it past arteries carrying already-warm blood to the extremities. Penguins can also increase blood flow to their flippers in order to cool down when necessary. This is important because not all penguins live in cold climates year round. The Galapagos penguin (Spheniscus mendiculus), for example, lives near the equator, where it can get quite hot.

Figure 4: Penguins' feet are poorly insulated and rapidly lose heat, which helps the penguins regulate body temperature: if they get too hot, they can simply expose their feet to rapidly cool off.

There are several behavioral adaptations used by penguins in their constant struggle to maintain a stable body temperature. They shiver to increase metabolic heat production, and they pant and expose their feet to get rid of excess heat (their feet are the only part of their body not covered with insulating feathers). Some species also seek shelter under rocks to avoid temperature extremes, a logical and simple maneuver when possible. Penguins are territorial by nature; however, the Emperor penguin (Aptenodytes forsteri) has evolved the social behavior of huddling together to share body heat in the harsher conditions of mainland Antarctica, where temperatures below -60°C have been recorded and gale force winds can approach 200 to 300 km/hr.

Figure 5: Penguins are adapted to swim rapidly and gracefully, in contrast to most other birds.

Penguins are amphibious birds, feeding only at sea and breeding on land. All three pygoscelid penguins prey primarily on small shrimp-like invertebrates, called krill, and to a lesser degree on a variety of fish. While they forage at sea, they are under constant threat from their predators, including leopard seals, orcas (killer whales), and occasionally fur seals. Consequently, not only are penguins much more adept at swimming than walking, they even consume one-third less energy at sea than on land. On land, penguins tend to inelegantly walk, jump, or toboggan on their bellies, sometimes over long distances, to get to their rookeries, where they breed, or to enter the seas. But in the water they are a marvel of naval engineering. Buoyant, torpedo-shaped bodies and an efficient flipper design allow penguins to "fly" underwater, using their bill, tail, and feet to rapidly change direction while pursuing fish or avoiding predators. When traveling long distances, penguins will porpoise, leaping out of the water to reduce drag and conserve energy.

Figure 6: Penguin mother with baby. The chick is well protected from the cold, sitting on top of the mother's feet, and insulated by her fat and feathers.

Comprehension Checkpoint

Even though Emperor penguins are territorial by nature, they huddle together as an adaptation to

The breeding adaptations of penguins also reflect their environment. Most pygoscelid penguins are faithful to both their mate and their nest site, returning to breed in the same spot year after year. They assemble into colonies that can be small, consisting of a few breeding pairs, or quite large, with millions of pairs. The males arrive first and prefer to build the nests, which are made of small rocks piled up in snow-free areas. Females arrive shortly after the males and locate their mate (which may be no easy task among millions of – to us – look-alikes decked out in the same tuxedo).

Emperor and King (Aptenodytes patagonica) penguins carry their eggs, and very young chicks, on their feet – an odd behavior that certainly makes it more difficult for them to walk, but a necessary practice to keep their eggs and young warm and prevent them from freezing on the cold Antarctic rocks. Emperor penguins breed in the harshest conditions on Earth, the Antarctic winter. While the exact reasons for this are not completely understood, many scientists believe that the timing allows the new chicks, who become independent from their parents five months later, to set out on their own during the milder Antarctic summer. It's easy to see how natural selection would maximize the breeding success of parent penguins who weaned their chicks just when the climate favored their survival.

Penguins are not unique in their adaptations to the environment. Polar bears evolved white fur because it better conceals them in the Arctic. All other bear species are brown or black, so we might presume that, among the remote ancestors of today's polar bears, the whiter individuals probably had more hunting success because their prey found it harder to spot them against the snow and ice. Squirrels evolved the behavior of burying nuts during the summer and fall to provide food through the winter. Even the common dandelion has adapted to its environment by producing a characteristic white fluff (called a pappus) on its seeds to increase their spread, and thus their chances of survival, in the environment.

Figure 7: The dandelion has evolved a highly effective way of spreading its seeds: wind-borne dandelion fluff can travel for miles.

So, William Paley was not quite right when he suggested that the complexity of the natural world exceeds the capabilities of human calculation. The clue that was missing to him was the concept of adaptation. Darwin put it all together: The features and characteristics that could only be an imponderable source of wonder to Paley actually turned out to be a key to understanding the diversity and complexity of life. That key is adaptation; and all organisms, even human beings, have evolved complex features in response to pressures from their environment.

This module introduces the concept of evolutionary adaptation. It follows the development of Charles Darwin's ideas on how species adapt to their environment in order to survive and reproduce. The difference between adaptation and natural selection is explained. With a look at penguins and other examples from nature, the module explores the processes that influence the diversity of life.

Key Concepts

  • Natural selection is the mechanism that explains how organisms change.

  • The structure of an organism and many of its features are directly related to the environment in which it lives.

  • Numerous environmental mechanisms, both naturally occurring and man-made, influence adaptive evolution.

Iris Saxer, M.A./M.S., Alfred L. Rosenberger, Ph.D. “Adaptation” Visionlearning Vol. BIO-2 (6), 2005.


Biological Molecules

by Anthony Carpi, Ph.D.

In addition to the carbohydrates, fats and proteins are the other two macronutrients required by the human body (see our Carbohydrates module).

Fats are a subgroup of compounds known as lipids that are found in the body and have the general property of being hydrophobic (meaning they are insoluble in water). Fats are also known as triglycerides, molecules made from the combination of one molecule of glycerol with three fatty acids (Figure 1).

Figure 1: A fat molecule. The R in the three fatty acids represents a long C-C-C chain. In the triglyceride, the Rs may or may not be the same.

The main purpose of fats in the body is to serve as a storage system and reserve supply of energy. During periods of low food consumption, fat reserves in the body can be mobilized and broken down to release energy. Fats also serve as insulation, helping to conserve body heat, and they line and protect delicate internal organs from physical damage. Fats in the diet can be converted to other lipids that serve as the main structural material in the membranes surrounding our cells, and they are used in the manufacture of some steroids and hormones that help regulate the proper growth and maintenance of tissue in the body.

Fats can be classified as either saturated or unsaturated depending on the structure of the long carbon-carbon chains in the fatty acids (the R's in Figure 1).

Saturated Fats: Fats that contain no double bonds in their fatty acid chains are referred to as saturated fats. These fats tend to be solid at room temperature, such as butter or animal fat. The consumption of saturated fats carries some health risks in that they have been linked to arteriosclerosis (hardening of the arteries) and heart disease.

Unsaturated Fats: Unsaturated fats contain one or more double bonds in their structure. These fats are generally liquid at room temperature (fats that are liquid at room temperature are referred to as oils). Unsaturated fats can be either polyunsaturated (two or more double bonds) or monounsaturated (a single double bond). Recent research suggests that the healthiest of the fats in the human diet are the monounsaturated fats, such as olive oil and canola oil, because they appear to be beneficial in the fight against heart disease.
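The classification rule in the last two paragraphs is mechanical enough to write as code. The sketch below is only illustrative; it takes a count of carbon-carbon double bonds as input (a number you would have to get from the fatty acid's actual structure).

    def classify_fatty_acid(double_bonds):
        """Label a fatty acid by its number of C=C double bonds."""
        if double_bonds == 0:
            return "saturated (tends to be solid at room temperature)"
        if double_bonds == 1:
            return "monounsaturated (liquid at room temperature)"
        return "polyunsaturated (liquid at room temperature)"

    print(classify_fatty_acid(0))   # e.g., stearic acid, found in animal fat
    print(classify_fatty_acid(1))   # e.g., oleic acid, the main fat in olive oil
    print(classify_fatty_acid(2))   # e.g., linoleic acid, a polyunsaturated fat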

Comprehension Checkpoint

Fats are classified as saturated or unsaturated, depending on whether or not they

Proteins are polymers of amino acids. Though there are hundreds of thousands of different proteins that exist in nature, they are all made up of different combinations of amino acids. Proteins are large molecules that may consist of hundreds, or even thousands, of amino acids. All amino acids share the same general structure, shown in Figure 2.

Figure 2: General structure of an amino acid.

The R in the diagram represents a functional group that varies depending on the specific amino acid in question. For example, R can be simply an H atom, as in the amino acid glycine, or a more complex organic group. When two amino acids bond together, atoms from the two ends of the neighboring amino acids (shown in red) are released as water, and the carboxyl (carbon) end of one amino acid bonds to the nitrogen end of the adjacent one, forming a peptide bond, as illustrated in Figure 3.

Figure 3: A peptide bond.

When many amino acids bond together to create long chains, the structure is called a protein (it is also called a polypeptide because it contains many peptide bonds). Proteins serve two broad purposes in the human body:

  1. Structural proteins form most of the solid material in the human body. For example, the structural proteins keratin and collagen are the main components of your hair, muscles, tendons, and skin.

  2. Functional proteins help carry out activities and functions in the human body. For example, hemoglobin is a functional protein that occurs in the red blood cells and helps to transport oxygen in the body. Myosin is a protein that occurs in muscle tissue and is responsible for the ability of muscles to contract. Insulin is a functional protein that helps regulate the storage of the sugar glucose in the human body. A subclass of the functional proteins is the group of polypeptides referred to as enzymes. Enzymes help to carry out specific chemical reactions in the body. For example, amylase is an enzyme that occurs both in human saliva and in the intestines that helps to break apart the glucose-glucose bonds in the carbohydrate starch, thus allowing your body to absorb the glucose and use it for energy.

There are an estimated 100,000 different proteins in the human body alone, and each of them is made up of a different combination of only 20 amino acids. Each protein has a different structure and performs a different function in the body. When we eat protein-containing foods (such as meat, fish, beans, eggs, cheese, etc.), the polypeptide chains are generally broken down in the digestive tract and the individual amino acids are absorbed into our bodies. These amino acids are then recombined into proteins specific to each individual person in a process called protein synthesis.
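A little arithmetic shows why 20 building blocks are more than enough to produce 100,000 different proteins. Each position in a chain can hold any of the 20 amino acids, so a chain of n amino acids has 20 to the power n possible sequences; the chain lengths below are chosen arbitrarily for illustration.

    # 20 choices per position gives 20**n possible sequences of length n.
    for n in (10, 100, 300):
        combos = 20 ** n
        print(f"{n} amino acids: a {len(str(combos))}-digit number of sequences")

    # Even a short 10-residue chain allows 20**10 (about 10 trillion)
    # sequences, vastly more than the ~100,000 proteins the body uses.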

Comprehension Checkpoint

There are hundreds of thousands of proteins that exist in nature. This is possible

In order to carry out these very precise jobs in the body, each individual protein has to be unique and specific to the job in question. Four aspects of a protein's structure are specific to the job the protein does in the body.

  • Primary Structure (1°): The first aspect of a protein's structure is called the primary structure (1°). The primary structure of a protein is the sequence of amino acids in the protein. The number of amino acids in a protein can vary from the hundreds to the thousands, and the sequence in which those 20 different amino acids just mentioned occur (obviously one amino acid can occur in a protein many times) is specific to the individual protein, just as the sequence of numbers in your phone number is specific to your phone.

  • Secondary Structure (2°): The secondary structure (2°) of a protein is defined by the way the long strands of amino acids coil about themselves. Just as a phone cord wraps around itself to form a coil, a protein will also wrap around itself, and the degree and tightness of the coil is specific to the protein in question.

  • Tertiary Structure (3°): Once a protein is coiled, the protein will begin to fold onto itself (similar to the way a phone cord tangles around itself); this folding is specific to the protein's function and is called the protein's tertiary structure (3°).

  • Quaternary Structure (4°): Some proteins have an additional layer of structure in which multiple polypeptides, each folded in its own way, come together to form a larger functional unit. This is called the quaternary structure (4°). These large multi-subunit proteins show great complexity due to the unique contributions of each polypeptide. Some examples of proteins with quaternary structure are hemoglobin and antibodies, both of which are made of four separate polypeptides.

Primary Structure   Secondary Structure
Tertiary Structure   Quaternary Structure

Fats and proteins are two of the major nutrient groups that our bodies need. This module provides an introduction to these two macronutrients. The basic chemical structure of fats as triglycerides is presented along with the purposes and types of fat. The module also introduces the amazing structure of protein molecules, including the peptide bond, and explains the purpose of proteins.

Key Concepts

  • In addition to carbohydrates, fats and proteins are the other two macronutrients required by the human body.

  • Fats, a subgroup of lipids, are also known as triglycerides, meaning their molecules are made from one molecule of glycerol and three fatty acids.

  • Fats in the body serve mainly as an energy storage system. They are also used as insulation to conserve body heat and protect internal organs, to form the main structural material in cell membranes, and to manufacture steroids and hormones to help regulate the growth and maintenance of tissue.

  • Fats are classified as saturated or unsaturated. Saturated fats contain no double carbon-carbon bonds in their fatty acid chains and tend to be solid at room temperature. Unsaturated fats contain double carbon-carbon bonds and are generally liquid at room temperature. Unsaturated fats can be either polyunsaturated (two or more double bonds) or monounsaturated (a single double bond).

  • Proteins are polymers of hundreds or even thousands of amino acids. Each protein has a different structure and performs a different function in the body. There are around 100,000 different proteins in the human body, each of which is made up of combinations of only 20 amino acids.

  • Enzymes are proteins that help to carry out specific chemical reactions in the body.

Anthony Carpi, Ph.D. “Fats and Proteins” Visionlearning Vol. BIO-3 (4), 2003.


Biological Molecules

by David Warmflash, MD, Nathan H Lents, Ph.D.

Spider silk, hemoglobin, keratin in your nails and hair, actin and myosin in muscle fibers – all these are proteins. As a class of biological compounds, they are vital to essentially every biological process, because they can take so many different forms. Proteins can be long fibers with the ability to slide as in muscles; they can be large and globular, like von Willebrand factor which helps in blood clotting; or they can be small like insulin, which is needed for sugar metabolism. Insulin is one of the most well-known proteins because of its use to treat diabetes, but it is also familiar to biochemists because it was the first complete protein structure discovered.

In 1921, Frederick Banting and Charles Best extracted insulin from the pancreas of dogs and learned that it was a hormone affecting blood sugar levels. Within a year, it was used to save the life of a diabetic boy. This set off a wave of research that put insulin at center stage, peaking in the 1950s when British biochemist Frederick Sanger figured out the precise sequence by which the amino acid building blocks are put together to build insulin.

During World War II, when Sanger turned his attention to insulin, he and other biochemists of the era already knew that this hormone was a protein. Today, we know that proteins are polymers composed of building blocks called amino acids (Figure 1).

Figure 1: The general structure of an amino acid.

A multitude of amino acids are possible. In fact, the Murchison meteorite (Figure 2), which fell in Australia in 1969, was found to contain seventy different amino acids. Life on Earth uses just twenty, but that's enough to create an astronomical number of possible proteins.

Figure 2: The Murchison meteorite, which landed in Australia in 1969, has been shown to contain many types of chemicals required by life on Earth. On the right is a pebble-sized fragment of the meteorite; when magnified 10 times and placed in polarized light, a slice of the meteorite reveals various minerals in different colors. image © NASA

The human body alone contains an estimated 100,000 different proteins because of the numerous ways that the same 20 amino acids can combine. But scientists back in the early 20th century did not think that proteins were structured in any way that affected their function, and Sanger was key to changing that idea.

Prior to Sanger’s major discoveries, biochemists had learned about a feature in proteins called a disulfide bridge (Figure 3). They also found that treatment with chemicals called reducing agents severs the disulfide bridge between two cysteines and can cause a large protein to split into smaller pieces, suggesting that these bonds exist in proteins to help hold them together.

Figure 3: A disulfide bridge (the joined S atoms) connecting two cysteines.

Consequently, biochemists in the World War II era believed that amino acids must be linked in chains in a way that today we might liken to beads on a string. They knew that each amino acid in a chain was connected to the next amino acid through a special type of chemical connection called an amide bond, also called a peptide bond.

Comprehension Checkpoint

There are over 100,000 different amino acids in the human body.

To understand a peptide bond, we need to look more closely at the structure of amino acids. As noted earlier, the different types of amino acids are distinguished based on the R group. If R is a hydrogen atom, for instance, the amino acid is glycine. If R is a methyl group (CH3), the amino acid is alanine. If R contains a sulfhydryl group (CH2SH), the amino acid is cysteine. These are just a few examples, but apart from the R group, all amino acids are otherwise the same. At one end, each amino acid has the functional group COOH, called carboxyl. At the other end, each amino acid has an NH2 group, called amino. (See Figure 4 for the peptide bond that joins amino acids.)

Figure 4: Peptide bond

A peptide bond is formed when the carboxyl carbon atom of one amino acid is joined covalently with the amino nitrogen atom of another amino acid, expelling a molecule of water (H2O). Linking of several amino acids by their carboxyl and amino groups produces a small protein, also called a polypeptide, because it contains several peptide bonds (Figure 5). Joining amino acids in this way produces a chain with a COOH at one end and an NH2 at the other end, called the carboxyl and amino ends, respectively.

Figure 5: The joining of two amino acids (red) with a molecule of water expelled (blue).
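The water bookkeeping in Figure 5 can be verified with simple atom counting. In the sketch below, two glycines (each C2H5NO2, a standard formula) condense into the dipeptide glycylglycine; the code is only an illustration of the arithmetic, not of the chemistry itself.

    # A peptide bond expels one H2O, so a dipeptide's formula equals the
    # sum of its two amino acids minus one water.
    glycine = {"C": 2, "H": 5, "N": 1, "O": 2}
    water = {"C": 0, "H": 2, "N": 0, "O": 1}

    def condense(a, b):
        return {el: a[el] + b[el] - water[el] for el in a}

    print(condense(glycine, glycine))
    # {'C': 4, 'H': 8, 'N': 2, 'O': 3}, i.e. glycylglycine, C4H8N2O3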

By Sanger’s time, biochemists were using acidic chemicals to break the peptide bonds of a protein, thus separating the individual amino acids. Additionally, they knew that a protein could have more than one polypeptide chain, the chains connected to one another by disulfide bonds at areas that contained cysteine. By treating a protein to destroy disulfide bridges, biochemists in the early 1940s could find out the number of chains in a protein. Also, by breaking apart the peptide bonds and running chemical tests, they could determine the identity of the amino acids of a protein and the relative amounts of each amino acid.

However, this did not tell biochemists the sequence in which those amino acids had been linked together. What set Sanger apart from his contemporaries was an insight that the relative amounts of each type of amino acid and their sequence could be extremely important. It might be the basis of how each protein functioned. If so, then amino acid sequence would also be the key to how life functioned. Given the prevalence of proteins in organisms, the idea made a lot of sense, but now Sanger’s task was to prove it. Doing this would be no easy task, but the first step was to choose a particular protein on which to concentrate his work.

Comprehension Checkpoint

To separate the individual amino acids, scientists use _____ chemicals to break the peptide bonds of a protein.

Because it is small and important in the treatment of a disease, insulin was a logical choice for Sanger to begin his work on amino acid sequencing. He began with bovine insulin, since it was easy to obtain and purify in large quantities. The first thing he did was to treat the insulin with the chemical agent that broke up disulfide bridges. If insulin consisted of just one polypeptide chain, testing the size of the protein before and after chemical treatment would give the same result.

The amino acids in proteins carry electric charges, so a protein, or fragments of a protein, could be propelled through an electric field with different degrees of strength. This technique is called electrophoresis (Figure 6). It was very new in Sanger’s time, but it gave him very clear results. Whereas prior to the disulfide bridge treatment the insulin behaved in one particular way in electrophoresis, after the treatment the electrophoresis produced two different results, both different from the pre-treatment result. This meant that the insulin had been divided into two sections, each with a slightly different size. In other words, the insulin consisted of two peptide chains, and the task now was to find the amino acid sequence of each.

Figure 6: A modern example of gel electrophoresis. The laboratory set-up uses an electric current to separate molecules based on size. image © Jean-Etienne Poirrier

Just as large fragments of a protein can be propelled in a particular way by electrophoresis, so can smaller fragments, including pieces consisting of just 10-15 amino acids each. Sanger did the fragmentation by treating each chain with an enzyme called trypsin, which cuts only next to certain amino acids (lysine and arginine). Subsequently, he could use other enzymes to fragment each fragment further, all the way down to individual amino acids. Each fragment has its own pattern in electrophoresis.
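Trypsin's cutting rule is simple enough to mimic in a few lines. The sketch below runs the rule on a made-up peptide written in one-letter code; real trypsin has additional quirks (for instance, it will not cut just before a proline) that are ignored here.

    def trypsin_digest(seq):
        """Split a one-letter amino acid string just after every K
        (lysine) or R (arginine), a simplified model of trypsin."""
        fragments, start = [], 0
        for i, residue in enumerate(seq):
            if residue in "KR":
                fragments.append(seq[start:i + 1])
                start = i + 1
        if start < len(seq):
            fragments.append(seq[start:])
        return fragments

    print(trypsin_digest("GAKLRMCSKWQR"))   # hypothetical peptide
    # ['GAK', 'LR', 'MCSK', 'WQR']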

With another technique called chromatography (Figure 7), Sanger could identify fragments that were bound to a certain chemical agent that he developed, known as dinitrofluorobenzene (DNFB), which could react chemically with amino groups that were not part of a peptide bond. After performing the first fragmentation using trypsin, but prior to fragmenting each piece further into individual amino acids, he added the DNFB, which altered whichever amino acid was at the amino end of the fragment (also called the N-terminal amino acid). Because of this, when he then broke the fragment into individual amino acids, the amino acid that had been at the N-terminal remained bound to the DNFB. He could identify this DNFB-bound amino acid in chromatography by comparing the chromatographic signal of the broken-down chain to 20 “standards” – samples of compounds consisting of DNFB bound to one of the 20 amino acids, each of which produced a distinct chromatography pattern.

Figure 7: A page from Frederick Sanger's notebook, detailing work on cow and pig insulin. On the right is one of his paper chromatograms. image © Frederick Sanger Papers, SABIO/P/1/13, Wellcome Library

If an amino acid had been altered with DNFB, that would be the amino acid at the amino end of the protein chain fragment. Knowing the identity of the amino acid at the N-terminal of the fragment, he could use an enzyme that would cut on the carboxyl end of that known amino acid, thereby producing a fragment featuring the next amino acid as the N-terminal amino acid. On that altered fragment, he could repeat the DNFB binding procedure and chromatography, and in this way learn the identity of the second amino acid of the fragment.
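Stripped of its chemistry, Sanger's label-identify-cleave cycle is a loop. The function below is only a schematic: ordinary string operations stand in for the DNFB labeling, the enzymatic cleavage, and the comparison against the 20 chromatography standards.

    def read_by_end_labeling(fragment):
        """Schematic of Sanger's cycle: label the N-terminal residue,
        identify it, remove it, and repeat on whatever remains."""
        sequence = []
        while fragment:
            n_terminal = fragment[0]     # DNFB binds the free amino end
            sequence.append(n_terminal)  # matched against the 20 standards
            fragment = fragment[1:]      # cleave past the known residue
        return "".join(sequence)

    print(read_by_end_labeling("MCSK"))   # 'MCSK', one residue per cycle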

He repeated the technique for each fragment, thereby obtaining the amino acid sequence of all of them. Then, he repeated the entire procedure using an enzyme other than trypsin to break up the big chain into fragments of 10-15 amino acids, and then again using still a different enzyme. He used four different enzymes, each of which cut next to certain amino acids and not others, and the overlapping fragments allowed only one possibility for the order in which the fragments had been linked together in the original chain.
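
Why digests from several different enzymes pin down a unique order can be seen with a small combinatorial check: among all ways of concatenating one digest’s fragments, only one is also consistent with a second digest. The fragment strings below are hypothetical.

    # Two hypothetical digests of the same chain, from enzymes with
    # different cut sites. Only one concatenation order satisfies both.
    from itertools import permutations

    def tilings(fragments):
        """All full chains obtainable by concatenating the fragments in some order."""
        return {"".join(p) for p in permutations(fragments)}

    digest_1 = ["AGK", "LMR", "WFE"]  # cuts after K and R (trypsin-style)
    digest_2 = ["AG", "KLM", "RWFE"]  # a second enzyme, different cut sites

    print(tilings(digest_1) & tilings(digest_2))  # -> {'AGKLMRWFE'}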

It was a long, tedious process, but Sanger had the amino acid sequence of both chains in 1952. After another three years of similar chemical tactics, he and several coworkers demonstrated that for the insulin chains A and B to work together as physiologically functional insulin, they had to be linked by three disulfide bridges at three distinct points (Figure 8).

Figure 8: F. Sanger's method of analyzing peptide end-groups. It begins with using his reagent, DNFB, to react with the N-terminal amino acid. The amino acid then remains bound to the DNFB (A). Through hydrolysis (B) he could then identify the amino acid using chromatography.

Insulin is considered small for a protein because together its two chains contain just 51 amino acids, but Sanger’s discovery applied to proteins overall. Small or big, proteins were built of specific amino acid sequences; changing the sequence would make it a different protein. The discovery earned Sanger his first Nobel Prize in Chemistry, in 1958. He would later earn a second Nobel Prize in Chemistry for working out a similar approach for the sequencing of DNA, putting him on a very short list of people who have won the Nobel Prize more than once.

Comprehension Checkpoint

Sanger chose insulin for his research on amino acid sequencing because

Sanger’s discovery with insulin revealed not just how proteins have defined chemical structures, but also why different proteins have different functions. Just as different letters of the alphabet have different sounds, the various R chains give the twenty amino acids different chemical properties. Thus, stringing amino acids together in different combinations leads to proteins with extremely diverse properties and shapes.

Sanger’s insulin research acted as a springboard for work by other protein chemists during the 1950s and 60s involving how structure relates to function. By passing X-rays through various proteins, researchers obtained images of their 3-dimensional structures. Studying the images and working out issues related to the physics of chemical bonds, biochemists of the mid 20th century learned that the amino acid sequence represents protein structure on just one level. They started referring to the sequence as the primary structure, since it leads the protein chain to twist and bend in ways that give the protein a more complex shape.

Certain amino acids enable a polypeptide chain to bend, for example, while other amino acids hold the chain more rigid (Figure 9). Some R chains are very hydrophilic; they like being in water and thus make the amino acid water-soluble. Other R chains are hydrophobic; they don’t mix with water. Often, having a hydrophobic area, or “pocket”, within a protein can help the protein do its particular job, for instance grabbing a hydrophobic substrate in order to modify it chemically.

Figure 9: The four levels of protein structure: primary, secondary, tertiary, and quaternary.

Depending on their R chains, amino acids also can vary in terms of their acidity and alkalinity. When the environment is neutral (pH 7), the amino acids aspartate and glutamate act as acids, whereas arginine and lysine act as bases, and this too has major implications for a protein’s properties.

Except for very short chains (so short that they are usually not even called proteins), polypeptides bend and twist into complex shapes almost as soon as they are built, leading to secondary and tertiary protein structure. Secondary structure refers to any of a handful of regular shapes or patterns that form as a direct result of the primary structure, largely through a force called hydrogen bonding.

The most common secondary structure is an alpha helix (Figure 9). Think of it as a kind of spiraling staircase. Each turn of the spiral consists of 3.6 amino acids; in other words, four amino acids make up slightly more than one full turn. Typically an alpha helix contains about 10 amino acids, and thus roughly three turns, but helices also can be shorter or longer than this. As for their function, an alpha helix can provide shape as well as springy flexibility to the next level of protein structure, the tertiary structure. Consequently, alpha helices are present in many different proteins, even small ones like insulin.
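
The turn arithmetic is simple enough to check directly; a one-function sketch:

    # Each full turn of an alpha helix spans 3.6 amino acids, so a
    # 10-residue helix makes roughly three turns.
    def helix_turns(residue_count):
        return residue_count / 3.6

    print(round(helix_turns(10), 1))  # -> 2.8, i.e., roughly three turns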

Another common secondary structure is called a beta-sheet, which forms when hydrogen bonds pull various non-adjacent segments, or “beta strands,” of polypeptide chain close together so that the primary structure folds back on itself multiple times (see Figure 9). The result is a ribbon-shaped area, which, like a helix, tends to stiffen and strengthen the protein.

Big proteins typically contain both alpha helices and beta-sheets. The small protein insulin helps to regulate the movement of glucose from the blood into cells by controlling the activity of another protein, the enzyme hexokinase. Unlike insulin, however, hexokinase is huge. Built of more than 900 amino acids, hexokinase has a good mix of both alpha-helices and beta-sheets. Hemoglobin, on the other hand, is almost completely alpha-helical, and antibodies consist almost completely of beta-sheets.

The presence of alpha-helices and beta-sheets, plus interactions between various amino acids not adjacent to one another in the chain, causes the protein to fold and twist still more, but in unique and irregular ways. This is the tertiary structure and it is stabilized not only by the alpha-helices and beta sheets within it, but often also by disulfide (S-S) bridges between cysteines. In explaining the structure of insulin, Sanger found one such S-S bridge contributing to the tertiary structure by connecting two cysteines that are both in the A chain but are not next to one another in the primary amino acid sequence. He also found two other S-S bridges connecting the A chain with the B chain.

Over the years, researchers found that large proteins typically contain many disulfide bridges. Lysozyme, for instance, an enzyme that immune cells use to destroy bacteria, has four disulfide bridges, and antibodies have a different number, depending on the antibody subtype. During the early 1970s, Argentinian researcher César Milstein helped to determine that disulfide bridges in antibodies are arranged in a particular pattern, a pattern that enables each antibody to take on the unique antibody shape (Figure 10). (See our profile César Milstein: Hybridoma Cells to Create Monoclonal Antibodies for more information on Milstein's research.) Disulfide bridges, however, are not universal. Hemoglobin and a related protein called myoglobin, for instance, are famous for having no S-S bonds at all.

Figure 10: Schematic diagram of an antibody and antigens.

The final level of structure is quaternary structure (see Figure 9), which exists when two or more polypeptide chains come together. An example is hemoglobin, which consists of four chains. In addition to simply making the molecule big, the four chains of hemoglobin actually influence one another, causing an effect that helps the molecule to grab onto oxygen when blood circulates through the lungs and then give up the oxygen to tissues deep in the body where it is needed.

Not all proteins have a quaternary structure since many proteins consist of just one chain. Although Sanger found that insulin consisted of two chains, those two chains and the disulfide bonds connecting them are actually part of the tertiary structure, not quaternary. The reason is that insulin is made from a larger protein precursor called proinsulin in which chains A and B are connected by a third sequence, chain C. Rather than being made from separate chains, proinsulin is synthesized in cells as just one chain. The chain then bends on itself and the three disulfide bonds help in that process, but then the C chain is snipped out (Figure 11).

Figure 11: Structure of proinsulin showing C-peptide and the A and B chains of insulin. image © Zapyon

Comprehension Checkpoint

Common secondary protein structures are the

As in Frederick Sanger’s era, research on protein structure today has major implications in clinical medicine. Regarding secondary structure, for instance, researchers are developing the ability to detect and control certain diseases in their early stages. While beta-sheets are normal in many proteins, in some cases they are a sign of disease. A notable example is a substance called beta-amyloid, which is made when an otherwise normal protein in body cells develops beta-sheets that it’s not supposed to have. In Alzheimer disease, beta-amyloid accumulates within brain cells, leading to dementia and physical decline. It’s controversial, but scientists suspect that beta-amyloid also accumulates in the aging brain in the absence of Alzheimer disease.

Recently, scientists at the University of Washington learned to synthesize an alternative secondary structure within proteins, called an alpha-sheet. It is similar to the more common beta-sheet, except that it is flipped geometrically, rather like a mirror image. Being essentially the opposite of a beta-sheet, an alpha-sheet can act as a detector for beta-sheets, similar to how a right hand can be used to "detect" the presence of a left hand in the dark. The researchers expect that proteins synthesized to contain alpha-sheets can be used for early detection of amyloid diseases, and that alpha-sheets could also be used for treatment, in the form of drugs consisting of proteins with alpha-sheets. Once in contact with the pathological beta-amyloid, such alpha-sheet drugs should disrupt the hydrogen bonding of the abnormal beta-sheets, thereby causing the beta-amyloid to revert back into a normal protein. That could be extremely helpful for individuals plagued with degenerative conditions such as Alzheimer disease, and it may also open a new age of intervention against a milder, but nevertheless damaging, process that traditionally has been dismissed as inevitable for those reaching old age.

This module explores how proteins are polymers composed of building blocks called amino acids. Using the historic research of Frederick Sanger on insulin as a starting point, the complex structures of proteins, due to molecular bonds like the disulfide bridge and the peptide bond, are explained.

Key Concepts

  • Proteins are vital components of nearly every biological process.

  • Proteins are polymers composed of building blocks called amino acids, of which life on Earth uses just twenty.

  • Molecular bonds determine the structures of amino acids and proteins. Peptide bonds link amino acids together in a chain; disulfide bridge bonds hold proteins together.

  • Using techniques like electrophoresis and chromatography, Frederick Sanger discovered that proteins were built of specific amino acid sequences and that changing the sequence would make it a different protein.

  • Proteins can have four types of structures: (1) Primary, the sequence of amino acids; (2) Secondary, hydrogen bonds among the strands of amino acids form beta-sheets or alpha-helices; (3) Tertiary, the three-dimensional, twisted structure based on bonding interactions between amino acid strands; and (4) Quaternary, the complex structure made up of multiple folded subunits.

David Warmflash, MD, Nathan H Lents, Ph.D. “Biological Proteins” Visionlearning Vol. BIO-4 (9), 2016.




Biological Molecules

by David Warmflash, MD, Nathan H Lents, Ph.D.

If you’ve ever had blood drawn as part of a routine checkup, or for donation, you probably recall the procedure being very quick and simple. Today, it is routine to collect blood from people, to separate the blood into its various components, to store those components, and then to infuse them into other people. "Packed red blood cells," "platelets," and "fresh frozen plasma" are terms that you’d hear all day long if you were to volunteer on a medical ward. Along with saline, blood products are among the most common agents infused into patients. Each day, transfusion saves many lives, and one can hardly imagine modern medicine without it.

But it’s one of the most dangerous things that you can do to someone, if you don’t know what you’re doing.

In 1628, William Harvey, an English physician, discovered how blood moves through vessels in the body, circulating through arteries and veins, and within just a few years scientists were attempting transfusions. Their rationale was simple and still makes sense today. If somebody is ill, his or her blood could be deficient in something. Giving patients blood from someone else would replace the deficient component and allow them to get better. By extension, if the patient has a hemorrhage, the deficiency is the quantity of blood itself, so transfusion should also be helpful for this type of patient. It made perfect sense in the 17th century, given the assumption by anatomists of the time that all blood was the same.

All blood certainly looked the same and in 1665, another English physician, Richard Lower, was able to keep dogs alive with blood transfused from other dogs. In the years that followed, Lower and other researchers even succeeded in transfusing small amounts of blood between different animals, including from lambs to humans. But most transfusion attempts had fatal consequences. Sometimes the dogs, lambs, or humans died of a high fever. Other times, death followed other reactions that the researchers could not understand.

For two and a half centuries, doctors experimented occasionally with transfusion and continued finding that small amounts of transfused blood sometimes did not harm the recipient and other times was fatal. In rare cases humans could receive blood even from a non-human animal and live, while others would die after receiving blood from another human. Transfusion was like playing Russian roulette, so it was attempted only in desperation.

In 1881, for instance, the sister of William Stewart Halsted, a 29-year-old New York City surgeon, developed a severe hemorrhage after giving birth. She would have died except that Halsted drew his own blood and injected it immediately into his sister’s vein. The transfusion saved her life because she and her brother had compatible blood types, although he did not know about blood compatibility at the time. Halsted got lucky with his sister, but science was only years away from unraveling a mystery that would make transfusion safe.

That research happened at the turn of the 20th century, in connection with work on a phenomenon called hemagglutination. This is a clumping of blood cells that researchers were observing in the blood of victims of mismatched experimental transfusions, and it happens because not all blood is the same. Blood has thousands of different components, and slight differences in some of them can spell failure if blood or a blood product is given that is inappropriate for the recipient. On the other hand, all blood is similar in its basic components.

Comprehension Checkpoint

Early blood transfusions were safe provided that only a small amount of blood was transfused into the person.

If you know anyone who is diabetic, you may have heard something about that person’s blood sugar, or blood glucose. Glucose is a type of sugar (see our module Energy Metabolism I: An introduction). It’s the main source of energy in cells, and since its concentration in the blood should be neither too high nor too low, diabetics check their glucose levels frequently. Usually, they do this with a device that requires only a drop of blood, which an individual releases simply by pricking a finger. The test uses “whole blood”: nothing is separated out of the blood sample, so the machine reads the concentration of glucose in blood the way it exists within the body. For other blood tests, though, you may have heard your doctor or nurse mention plasma or serum levels. On routine exams, they tell you about your serum cholesterol or your serum triglycerides. On other occasions they may mention tests for plasma levels of certain chemicals, or you may have heard of somebody either donating plasma or receiving it.

In addition to water with numerous dissolved compounds such as glucose, blood contains cells. Physicians commonly talk about the blood cells collectively as a solid or cellular component of blood, because they can be easily separated from the liquid component. The liquid component is mostly water, but two different “versions” of this liquid can be prepared, depending on how the separation is performed.

The term plasma refers to everything in the blood without the cells. It is obtained by drawing a blood sample into a tube that has an agent that slows clotting, then spinning the tube in a centrifuge. During spinning, everything in the tube becomes many times heavier than its normal weight under Earth's gravity. Since blood cells, cell fragments, and very large molecules are denser than water, as they get heavier they move toward the bottom of the tube much faster than they would without spinning. What’s left on top is the plasma. (See Figure 1 for a diagram.) The percentage of whole blood volume that is packed cells is called the hematocrit and its value usually correlates with how well a person is making and maintaining hemoglobin and red blood cells (more on that below).
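
The hematocrit itself is simple arithmetic: the packed cell volume expressed as a percentage of the whole blood volume. A minimal sketch, with illustrative (not clinical) numbers:

    # Hematocrit: packed cell volume as a percentage of whole blood volume.
    def hematocrit(packed_cell_volume_ml, whole_blood_volume_ml):
        return 100 * packed_cell_volume_ml / whole_blood_volume_ml

    print(hematocrit(4.5, 10.0))  # -> 45.0, i.e., 45% of this sample is packed cells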

Figure 1: The average composition of blood. In this example, the blood is shown after spinning in a centrifuge so the different elements are separated: the heavier red blood cells at the bottom, then the white blood cells and platelets in the center, and the plasma at the top. The percentage of red blood cells is also known as hematocrit. image © Pirumbaut

Plasma includes not just water, but also numerous agents called clotting factors that are involved in forming blood clots. In contrast to plasma, serum lacks many of the clotting factors. Serum is obtained by drawing a blood sample into a tube that is not treated to prevent clotting, but rather designed to encourage clotting. The sample is allowed to sit while it clots over time, thereby consuming most of the clotting factors. Then, the sample is centrifuged, and the liquid that ends up on top of the tube, called serum, is free of most clotting factors. Thus, plasma minus clotting factors equals serum (Figure 2).

Figure 2: A researcher pipetting blood serum into a test tube. image © U.S. Air Force/Keenan Berry

In laboratory medicine, the decision on whether to use whole blood, plasma, or serum for a certain test often involves a tradeoff of various advantages and drawbacks of each. Serum takes longer than plasma to prepare, for instance, and this can be a problem during emergencies or when measuring the concentration of a blood chemical that changes rapidly over time. If clotting factors are what you’re trying to measure (in a patient with hemophilia, a disorder where the blood doesn't clot normally, for instance) then you must use plasma, not serum, because the latter lacks clotting factors.

There are several settings in which serum is preferable, for example when there is a need to measure antibodies in a patient’s blood. The term serology, though its literal meaning is the study of serum, often refers to the diagnostic assessment of serum for antibodies.

Comprehension Checkpoint

_____ contains clotting factors.

The age of serology began in Austria, at the University of Vienna, where physician-researcher Karl Landsteiner worked in forensic anatomy (Figure 3). In 1900, Landsteiner noticed that mixing blood samples from different patients could cause the blood cells to clump together, a phenomenon called agglutination. He started mixing blood from patients, not just with whole blood samples, but also with serum from other individuals. He observed that when mixed with serum from a different person, blood cells would either clump or not clump, and that the clumps could be either small or large. Because the clumping had to be the result of the cells reacting with something in the serum, Landsteiner wondered if perhaps blood might indeed differ between individuals, an idea that went against the common thinking of his era.

Figure 3: Karl Landsteiner (1868-1943), Austrian biologist at the University of Vienna. Landsteiner pioneered research on blood types. image © National Academy of Sciences

Landsteiner set up a series of experiments using blood from just six volunteers (himself included), but by mixing different blood components in various combinations and carefully repeating each mixing experiment, he turned those six volunteers into the basis of one of the greatest medical discoveries of the early 20th century. Based on his findings, Landsteiner proposed that there were three types or groups of blood, distinguished by factors that today we call antigens. Blood could be mixed between two people without agglutination, he said, so long as the people were of the same type. He named the three types "A," "B," and "C" (the latter was eventually changed to "O").

It was a watershed study that ushered in an era of experimental blood transfusions in hospital settings, leading to the first ABO-matched transfusion, carried out at New York’s Mount Sinai Hospital in 1907. Transfusion could become routine only after physicians gained some understanding of the complexity of serum and blood cells. Such an understanding began when Landsteiner defined his blood groups and began systematic experiments aimed at homing in on the causes of hemagglutination. That research would quickly enable a revolution in medicine and especially in surgery.

Comprehension Checkpoint

Blood mixed between two people formed clumps when the people had

The cellular components that are separated from plasma or serum include various types of cells and cell fragments. The main types are red blood cells (also called erythrocytes), white blood cells (also called leukocytes), and platelets (also called thrombocytes).

Platelets are actually cell fragments because they are pieces of precursor cells called megakaryocytes that break up during maturation. What is left are tiny pieces, generally 2 microns across, called platelets. Essentially, a platelet is a package enclosed by a membrane. Inside and on the surface of the platelet are various clotting factors and other proteins important to the stopping and prevention of bleeding. Clotting factors are also present in the cells that line blood vessels and, as noted earlier, clotting factors are also dissolved in the blood itself, outside of cells. The clotting factors from all three of these sources come together to stop the bleeding whenever a blood vessel breaks.

When a blood vessel wall is damaged, the injury exposes proteins, such as collagen, that stick to platelets. This attracts platelets to the injured vessel wall. Not only do platelets stick to the damaged area, but they also become sticky to other platelets. The result is called a platelet plug, which produces hemostasis, or the stopping of bleeding (Figure 4). The most obvious example of this is the scab that forms when you skin an elbow or knee.

Figure 4: The process of hemostasis, or the stopping of blood flow in the body. When a blood vessel wall is injured, platelets stick to the damaged area and they become sticky with other platelets. The result is called a platelet plug, which stops the bleeding. image © ttsz/istockphoto

As cells go, red blood cells (RBCs) are very small - 6-8 microns in diameter. Mature RBCs are filled entirely with the protein hemoglobin and their job is to transport oxygen in the blood. A person with an abnormally low number of RBCs, or a low concentration of hemoglobin in the blood or in RBCs, is said to have anemia. There are many different kinds of anemia, which can result either from reduced production of RBCs or accelerated destruction of RBCs.

Regardless of the condition that causes it, anemia can vary in its effects from very mild to very severe. When the number of RBCs or the amount of hemoglobin is just slightly below normal a person may feel totally normal, or may feel fatigue only with strenuous activity. But as the anemia worsens, a person will feel very sick and appear pale. With fewer RBCs and/or less hemoglobin, their hematocrit will be below normal and their muscles fatigue easily, because less oxygen is delivered to the muscle cells. To compensate for the decreased oxygen-carrying ability of the blood, the heart beats faster in order to move more blood, but the decreased ability to carry oxygen also can affect the heart itself.

RBCs are particularly relevant to Landsteiner’s work on transfusion research, since they comprise most of the cellular component of blood and account for much of what early transfusions provided to recipients. Transfusions in those early years consisted of whole blood, though today RBCs are stored and infused as packed red blood cells (PRBCs) in most transfusions.

White blood cells (WBCs) get their name because they are nearly colorless; unlike RBCs, they contain no hemoglobin and thus are not red. They are bigger than RBCs and are part of the immune system. WBCs are classified into two groups: granulocytes and agranulocytes. Each group consists of different subtypes (see Figure 5), and their numbers and proportions are what physicians want to see when they order a complete blood cell count with differential (often abbreviated as “CBC w/diff”).

Figure 5: Leukocytes, or white blood cells, are classified into two groups: granular and agranular. Each of these groups is further broken down into different subtypes. (Leukocyte images via "Blausen gallery 2014" in the Wikiversity Journal of Medicine, DOI:10.15347/wjm/2014.010.) image © BruceBlaus

Granulocytes are WBCs that show granules, little dots in their cytoplasm, when viewed under a microscope; they are thus called "granular." The dots are secretory vesicles filled with various enzymes and other compounds that vary among the three types of granulocytes found in blood:

Neutrophils are the most abundant type of WBC, accounting for 40-70 percent of all WBCs. They are much bigger than RBCs and also very short-lived. While RBCs live in the blood an average of 120 days, a typical neutrophil lives only 6-10 hours. The function of neutrophils is to eat up bacteria and damaged tissue. They do this by releasing from their granules both enzymes that break down the bacteria and cytokines, which amplify the antibacterial response, partly by telling the body to manufacture still more neutrophils. On account of this function, neutrophils are produced more rapidly than usual when the body is fighting a bacterial infection, and the number of neutrophils in the blood can rise quickly and dramatically. Because they are so short-lived, the neutrophil count also drops quickly when the infection is brought under control. Thus, because neutrophils are the most abundant WBC and because they are so short-lived, the neutrophil count is a very good indicator for determining whether a patient has an infection.

The other two types of granulocytes are called basophils and eosinophils. Basophils escalate the body’s inflammatory reaction and have been implicated in allergies. Their granules contain an anticoagulant called heparin and special compounds called histamines, which cause blood vessels to dilate (become wider). Eosinophils fight multicellular parasites such as hookworms and tapeworms, and so their granules contain enzymes that are particularly effective against these organisms.

Agranulocytes are WBCs that do not show granules when viewed under a microscope, and they come in two subtypes. (Hence, they are "agranular.")

Monocytes account for 2-10 percent of WBCs, making them the third-most abundant WBC after neutrophils and lymphocytes. Monocytes circulate in the bloodstream and then move into other tissues when an infection is detected. When they arrive at the infection site, they transform into another type of cell, usually a macrophage, and begin to engulf and digest bacteria, dead or dying cells, and other infectious material. Some monocytes migrate into bones where they transform into special bone cells called osteoclasts, whose function is to degrade calcified parts of the bone. This is important in the bone remodeling process by which the bone changes its shape in response to stress and exercise, but it also happens in certain bone diseases, such as osteoporosis.

Lymphocytes are the second most abundant type of WBCs, accounting for 20-50% of the WBC count. They are subdivided into B-lymphocytes and T-lymphocytes (aka, B-cells and T-cells), each of which is yet further divided into various subtypes. The role of B-cells is to produce antibodies, which attach to agents that the body’s immune system considers foreign. This helps to defend the body against infection. However, it can also lead to problems when antibodies are made against an individual’s own tissue, or against something else that benefits the individual, such as a tissue or organ transplant. T-cells are involved in cell-mediated immunity, fighting against infections from viruses and bacteria, and may help the body attack cancer.

Comprehension Checkpoint

To determine if a patient has an infection, a doctor may take a _____ count.

A good example of B-cells making antibodies against foreign tissue is the reaction of blood transfusion recipients to donor blood of a different type. Prior to the late 19th century, nobody had a clue as to why a transfusion would succeed or fail because, as noted earlier, they assumed all blood to be identical. But with improvements in the microscope and in the dyes used to stain cells, this view started to change. In the years prior to Landsteiner’s discovery, pathologists could see that RBCs were not always exactly the same. Sometimes RBCs would look slightly bigger or smaller than usual, or would stain darker or lighter. They wondered whether these observable differences might have something to do with transfusion outcomes, but they had not devised a way to test the idea.

Even with the hand-cranked centrifuges available in those days, Landsteiner could separate the cells from the liquid in blood fairly easily. That produced plasma, and after separating it from the cells, Landsteiner could keep the cells alive for short periods by suspending them in saline (salt water). He found that mixing cells with plasma would cause clotting, even when the plasma and cells were from the same volunteer. However, if a blood sample was left to clot prior to centrifugation, the resulting liquid extract did not cause clotting of fresh RBCs from the same volunteer. That’s because the liquid extract was serum; it lacked clotting factors because the clotting factors had been consumed before Landsteiner had separated out the liquid.

Landsteiner did not know about the clotting factors, but he could deduce that serum and plasma must be different, and this led him to ask a question: What would happen if serum from one volunteer were mixed with saline-suspended RBCs from other volunteers? As happens often in science, a simple question would prove to be the key, since it was a question that Landsteiner was equipped to answer. He needed only to take blood from himself and five other volunteers, extract several samples of serum and blood cells, mix the samples in various combinations, and observe the mixtures both with the naked eye and a microscope.

Serum from a given donor, mixed with blood cells from certain other donors, produced no hemagglutination; mixed with cells from the remaining donors, however, the same serum caused the cells to clump. Landsteiner also found that some of the volunteer samples could be mixed with one another with no agglutination. (See Figure 6 for a visual chart of the results.) The presence or absence of hemagglutination sorted the six subjects into three categories that Landsteiner called blood groups.

Figure 6: This table illustrates the results of Karl Landsteiner's 1901 experiment using his research group as subjects. Landsteiner mixed the blood cells and sera of his employees and, using a microscope, observed whether there was clumping. image © Biologie/Schulbuch-O-Mat

Further testing showed that one group differed from the other two more than those two differed from one another. When exposed to blood cells from the other two groups together, serum from one group of donors would form very big clumps, whereas serum from either of the other two groups would form only small clumps. To explain the results, Landsteiner reasoned that the cells from the volunteers differed in chemical agents present on the cell's surface. He called these chemical agents "haptens." Today they are called antigens, and we know that they’re present not just on RBCs, but also on the membranes of all our cells.

Landsteiner named the smaller clumping groups A and B and reasoned that the serum from each must be reacting to the presence of just one hapten that was not its own. When it came to the third group, however, which he first called C (later changed to O), Landsteiner reasoned that their serum must be reacting to the presence of two foreign haptens, thereby resulting in stronger hemagglutination. Group C donors, he suggested, must have no haptens on their RBCs. Thus, when serum from a type A donor is mixed with B and C cells, it reacts only to the cells of type B donors, whose RBCs have a type B hapten. Similarly, he said that group B serum reacted to cells from group A donors, because those cells possessed type A haptens. In contrast, he proposed, for type C individuals haptens A and B were both foreign, so their sera reacted more strongly.

Using multiple samples from blood drawn over several weeks from all six volunteers, Landsteiner repeated the experiments and found that the grouping pattern always came out the same. Once that was certain, he published the findings with his proposal that the success or failure of transfusion depended on the A/B haptens, but a study of just six subjects did not provide enough confidence for anyone to attempt a transfusion in humans based on the experimental results. More data were needed, and Landsteiner knew it.

He had two of his trainees/assistants recruit twenty-two additional blood donors and repeat the process that they had used on the original six. It’s fortunate that they did, because in 1902 analysis of the results from the expanded study led the team to define a fourth blood type. They called it "AB" since it consisted of people whose RBCs had both antigens; their cells would agglutinate if mixed with any serum but their own type, but their serum would not cause agglutination in cells of any type (Figure 7).

Figure 7: ABO blood groups and the antibodies and antigens present in each. This chart tells us that, for example, people with type A blood have the A antigen on the surface of their red cells and anti-B antibodies in their plasma. So if type B blood is mixed with this type A blood, the type A will attack the type B blood by agglutinating the introduced red cells. The same is true if type AB blood is added, but adding type O blood will not result in agglutination, since type O red cells carry neither the A nor the B antigen on their surface. image © InvictaHOG

After a few more years of testing samples from an increasing number of volunteers, Landsteiner and a growing association of colleagues were confident that all humans must fit into the A, B, O, or AB group, and this is what led to the successful 1907 transfusion at Mount Sinai in New York. Leading the Mount Sinai team was Landsteiner’s colleague, Reuben Ottenberg. Like Landsteiner, Ottenberg hailed from Vienna, and both were at the beginning of careers that would last a half-century and enable medicine and surgery to advance more in a few decades than they had in all the previous ages of human civilization.

Comprehension Checkpoint

What Landsteiner called haptens we now call

As for why the various sera reacted this way to Landsteiner’s "haptens," scientists eventually worked out that the reason was antibodies. Also known as immunoglobulins, antibodies are proteins produced by a type of B-lymphocyte called plasma cells. While some antibodies circulate attached to the surface of the cells that make them, other antibodies detach and float freely in the blood. Thus, they are present in serum.

A person in blood group A does not make antibodies against antigen A, but they do make antibodies against antigen B, and thus against RBCs from group B donors. With blood group B, the scenario is the opposite; they make antibodies against antigen A and thus against RBCs from group A donors. People in blood group O (what Landsteiner called group C) make antibodies against both A and B antigens, because both antigens are foreign to them, while people in group AB do not make antibodies against either antigen (Figure 8).

Figure 8: The compatibility of different blood types. image © InvictaHOG
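
The rules in Figures 7 and 8 can be written out as a small lookup table: each group’s red cells carry certain antigens, its serum carries antibodies against the antigens it lacks, and a mixture agglutinates whenever recipient antibodies meet matching donor antigens. A minimal sketch of that logic:

    # ABO logic from Figures 7 and 8: antigens on red cells, antibodies in
    # serum against whatever antigens the person lacks.
    RBC_ANTIGENS = {"A": {"A"}, "B": {"B"}, "AB": {"A", "B"}, "O": set()}
    SERUM_ANTIBODIES = {"A": {"B"}, "B": {"A"}, "AB": set(), "O": {"A", "B"}}

    def agglutinates(donor_cell_type, recipient_serum_type):
        """True if recipient antibodies react with antigens on donor red cells."""
        return bool(RBC_ANTIGENS[donor_cell_type] & SERUM_ANTIBODIES[recipient_serum_type])

    print(agglutinates("B", "A"))  # True: anti-B antibodies meet B antigens
    print(agglutinates("O", "A"))  # False: type O cells carry no ABO antigens
    print(agglutinates("A", "O"))  # True: type O serum reacts to A antigens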

Landsteiner’s discovery of the ABO groups eliminated the Russian roulette quality that had characterized blood transfusion over the centuries – up to and including Halsted’s courageous but lucky experience in transfusing his hemorrhaging sister. After Ottenberg’s transfusion milestone in 1907, surgeons knew that they could replace lost blood without killing the blood recipient. This allowed them to develop a multitude of new operations that otherwise would have been impossible. Medicine changed profoundly, and Landsteiner would be awarded the Nobel Prize in Physiology or Medicine in 1930.

This was not the end of the story, however, either for Landsteiner or his colleague Ottenberg. For the bulk of the population, ABO grouping alone worked well enough, but by the 1930s the understanding of blood was growing still more complex. One reason for this was another surface antigen on RBCs that also could come into play when blood mixed. It’s called the Rh factor and both Landsteiner and Ottenberg would be central to its discovery. Another reason was that scientists would also come to understand the genetic basis underlying the existence of the RBC antigens.

Knowledge of blood components brought about a revolution in surgery through safe transfusion. The module traces the development of our understanding of blood over centuries, beginning in 1628 with English physician William Harvey's breakthrough research on circulation. With a focus on early 20th-century experiments by Austrian researcher Karl Landsteiner, the module shows how the identification of clotting factors, blood types, and antigens was critical to medical science. Whole blood, plasma, serum, and different types of blood cells are defined.

Key Concepts

  • Blood is a complex fluid with many different components, but can be divided into solids (red blood cells, white blood cells, and platelets) and liquid (plasma).

  • Blood plasma includes clotting factors (agents that help to form blood clots) and when these are removed, the remaining liquid is known as serum.

  • The main cellular components of blood are: red blood cells (erythrocytes), white blood cells (leukocytes), and platelets (thrombocytes).

  • The Austrian researcher Karl Landsteiner studied agglutination, or clumping together of blood cells with certain antigens. Based on his findings, he proposed that there were three types of blood (A, B, O) and later added a fourth type (AB).

  • Antibodies are proteins produced by plasma cells, a type of B-cell lymphocyte, and are present in the blood serum. These antibodies are important for blood transfusion, since the antigens on the donor’s blood cells and the antibodies in the recipient’s serum determine whether or not the blood agglutinates, or clumps.

David Warmflash, MD, Nathan H Lents, Ph.D. “Blood Biology I” Visionlearning Vol. BIO-4 (8), 2016.




Cell Biology

by David Warmflash, MD, Nathan H Lents, Ph.D.

How do you discover something extraordinarily fundamental that nobody has ever known or seen before? If you have a pretty good idea of what you’re seeking, you might take Walther Flemming’s approach. In Cell Division I: The Cell Cycle, we learned that Flemming observed how chromosomes became visible in patterns that repeated each time the cells of fire salamanders divided. This important discovery was made possible by using various dyes, a technique that Flemming pioneered (Figure 1). This is a good example of how a new instrument or technique can facilitate a discovery, provided that the researcher already knows more or less what he or she might find.

Figure 1: Flemming's drawing of an insect cell treated with an aniline dye as he saw it under the microscope. image © Wikimedia Commons

This was the case with Flemming. Scientists in the preceding years had already been seeing faint structures in cells, but their dyes were not good enough to reveal what any of these structures did. Throughout the 19th century, as microscopes developed, scientists had been seeing clues of structures in dividing cells of eukaryotes. Like Flemming, earlier scientists had been experimenting with dyes. These were not as good as the aniline dyes that would facilitate Flemming’s discovery, but they helped the scientists to see something. Unfortunately, the dyes killed the cells, and since the structures under the microscope were difficult to see as it was, Flemming’s forerunners weren’t sure they were seeing anything characteristic of a live, functional cell. Were they simply artifacts, something that formed only after the cells died? If so, that would not explain how a cell replicates in a living organism, or in vivo.

Knowing what he wanted to find, Flemming set out to do a better job of staining the internal details of cells. By doing so, he realized that he could also determine whether the structures were artifacts or part of cellular function. Using the fire salamander embryos, through a long, painstaking process, he cut his samples into very thin slices and treated them with his new dyes. This killed the cells, just as the earlier dyes had killed the cells of other laboratory animals. However, Flemming repeated this technique with many embryos, arresting their life process at different points in time. This protocol was as much a novel technique as his utilization of the aniline dyes. By stopping the life process at different points, he could investigate whether the structures looked any different at Time A compared with Time B or Time C and so forth.

It turned out that they did look different, and this proved that the structures were not artifacts. They were part of the life process of the cells. Coupled with the improving resolution of microscopes of the era, the aniline dyes could make the differing structures clearly visible. This led Flemming to discover the cell process that we call mitosis: division of the eukaryotic cell nucleus that occurs just prior to cytokinesis, which is the division of the cell itself. So revealing were the new dyes and so meticulous was his technique that Flemming was able to define the phases of mitosis that we still talk about today (Figure 2).

Figure 2: Flemming's diagram of eukaryotic cell division (1888).

Flemming coined the term chromatin to describe the material of which chromosomes are made. When he observed cell division in the fire salamander embryos, he saw the same pattern of events occur in each cell, beginning with the appearance of visible chromosomes. He described the events as four periods of time, which he named prophase, metaphase, anaphase, and telophase. Today, we speak of five phases, since we split up Flemming’s prophase, the longest phase, into prophase and prometaphase.

It’s important to remember that the process of cell division is cyclical, with one phase feeding into the next. For example, telophase overlaps with cytokinesis, the splitting of the rest of the cell that generates the two new daughter cells. Following cytokinesis, the two new cells then go through a long period called interphase, during which each new cell carries out normal life functions and replicates its chromatin, eventually leading to prophase and another cycle of mitosis. Thus, as mitosis begins, the nucleus already contains a double set of chromatin. Since chromatin contains the genes that give organisms their characteristics, this means that a cell entering prophase contains two copies of what is called the genetic sequence, or the genome, of an organism. What happens from this point forward is simply a matter of repackaging and relocating the chromatin.

Taking the five phases of mitosis plus interphase, you can remember the entire cell cycle with the phrase “Please Pour Me Another Tea Instead!” (Figure 3)

Figure 3: An illustration of the phases of mitosis: interphase, prophase, prometaphase, metaphase, anaphase, and telophase. This process then leads to cytokinesis. image © NIGMS

Prophase is the time when we can first see the chromosomes under an optical microscope. As noted above, the cell’s genetic sequence replicates prior to prophase (during interphase). During interphase, the chromatin is relatively decondensed, bundled loosely, like spaghetti, and dispersed throughout the nucleus. With the onset of prophase, the chromatin folds up into a compact form that, when stained with a dye, can be seen as individual chromosomes, even with the primitive microscopes available in Flemming’s era. Each chromosome consists of a pair of sister chromatids, each containing the same genetic sequence that was duplicated during interphase, and these two chromatids are connected by a structure called a centromere. Also, during prophase, a prominent structure called a nucleolus disappears from the nucleus.

Prometaphase is marked by the breakdown of the membrane that surrounds the cell nucleus. Additionally, pairs of protein complexes called kinetochores bind to the centromere of each chromosome, one kinetochore for each chromatid. These two key events will allow for connections to form between the chromosomes and special structures located just outside of the nucleus.

Metaphase is characterized by a repositioning of the duplicated chromosomes so that they are ready to be pulled apart. During interphase, most animal cells contain a structure called a centrosome, located near the nucleus but outside of its membrane. Like the chromatin, the centrosome also replicates toward the end of interphase, and by the onset of metaphase each of the two daughter centrosomes has migrated to opposite ends of the nuclear membrane. Throughout the cell cycle, the centrosome acts as the control center for microtubules, a complex system of protein fibers that make up part of the cytoskeleton. Just as bones give shape to your body on a large scale, the cytoskeleton provides each cell with a shape, while also helping to transport materials. With the nuclear membrane now dissolved and the two centrosomes positioned on opposite sides of the cell, the condensed chromosomes line up along an imaginary line in the center of the cell called the metaphase plate. Microtubule fibers then begin to extend from each centrosome toward the centromere that connects the two sister chromatids of each chromosome. This cage-like structure of microtubules is called the mitotic spindle. Specifically, the microtubule fibers attach to the kinetochores; as noted above, there are two kinetochores per chromosome, one for each chromatid. This provides the setup for the chromatids to be pulled apart during the next phase.

Anaphase is characterized by the separation of the two identical chromatids of each chromosome. With the mitotic spindle complete, the two centrosomes start moving outward, pulling each chromatid away from its sister and toward opposite ends of the cell.

Telophase begins when the two sets of chromatids reach distinct regions of the cell and a new nuclear membrane starts to form around each set. Cytokinesis also begins during telophase, even before the new nuclear membranes are complete. Once formed, however, each new nuclear membrane encloses a full set of chromosomes. These then decondense into the ordinary chromatin of interphase, a nucleolus appears in each newly formed nucleus, and the cell cycle begins anew.

Interphase is not a part of mitosis, but is the cell's state between nuclear divisions when it is preparing for mitosis and cytokinesis. Interphase is discussed in more detail below.
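
As a compact recap of the walkthrough above, the five phases and their key events can be laid out in order; the one-line summaries below are condensed from the preceding paragraphs.

    # The five mitotic phases in order, each with its key event.
    MITOSIS_PHASES = {
        "prophase": "chromatin condenses into visible chromosomes; the nucleolus disappears",
        "prometaphase": "the nuclear membrane breaks down; kinetochores bind the centromeres",
        "metaphase": "chromosomes align on the metaphase plate; spindle fibers attach",
        "anaphase": "sister chromatids are pulled toward opposite ends of the cell",
        "telophase": "new nuclear membranes form and cytokinesis begins",
    }
    for phase, key_event in MITOSIS_PHASES.items():
        print(f"{phase}: {key_event}")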

Comprehension Checkpoint

What name did Flemming give to the material that forms chromosomes?

Chromatin consists of DNA and special proteins called histones. DNA is a long molecule consisting of two strands of repeating chemical units called nucleotides. There are four types of nucleotides, and the genetic sequence is based on the order in which these four types of nucleotide are connected, one after the other, over the length of the molecule. It’s like a language built of words composed of only four possible letters, but it works well, because the DNA molecule allows each word to be very long (learn more in our series on DNA, specifically DNA II: The Structure of DNA). The density of chromatin changes throughout the cell cycle; this depends on how tightly the DNA strand is wrapped and tethered to histones and other associated proteins (Figure 4).

Figure 4: The substance within chromosomes, chromatin, is made up of DNA (genetic information) and proteins (called histones). image © Darryl Leja, NHGRI & www.genome.gov

While chromosomes are a way of organizing the chromatin of eukaryotic organisms into individual packages, the number of chromosomes varies widely among eukaryotes. Humans have 46, cats and other felines have 38, dogs have 78, and wheat has 42, while the Jack Jumper ant has only 2, and a certain kind of protozoan is famous for having nearly 16,000.

It should be emphasized that mitosis occurs only in eukaryotic cells, since only eukaryotes have membrane-bound nuclei. Bacteria and Archaea, the other two domains of life, have chromosomes that are not separated from the rest of the cell; consequently, they can reproduce through a simpler process called binary fission (to learn more, see our module The Discovery and Structure of Cells).

Comprehension Checkpoint

All living organisms have the same number of chromosomes.

Just as DNA is a large molecule constructed of building blocks, microtubules are made of repeating units of protein called tubulin. In addition to playing a structural role akin to the skeleton of your body, large molecules built of tubulin subunits are vital to mitosis and several other dynamic cell functions. They actually move, which is why chromosomes can be pulled apart, and why the entire cell can be made to divide.

All of this takes a great deal of organization, and so eukaryotic cells depend on components known as microtubule organizing centers (MTOCs). In animal cells, the centrosome is one of the main types of MTOC. As we shall see in the next section, two centrosomes are needed during mitosis of an animal cell, each member of the pair using microtubules to pull a set of daughter chromosomes toward one end of the dividing cell. A centrosome consists of two centrioles that are made of tubulin. The two centrioles are arranged at right angles, or orthogonally, and are surrounded by other proteins that make the centrosome more than just a bent section of microtubule (Figure 5).

Figure 5: Illustration of a centrosome, which consists of two barrel-like centrioles (each made of tubulin) at right angles to each other. image © Darryl Leja, NHGRI & www.genome.gov

Although not part of mitosis, interphase is important to discuss because it places mitosis into context with respect to the cell cycle. For vertebrates (the subphylum of animals to which humans belong), the duration of the life cycle of each cell varies, depending on the cell type. Certain white blood cells may live and be replaced over a period lasting less than a day. Most other body cell types have life cycles ranging from days to months. Others, such as bone cells, typically are replaced in cycles measured in decades, while certain brain cells and muscle cells will endure for the entire lifespan of the organism. These cells are said to be in a permanent interphase; specifically, they are locked in a phase of interphase known as G1.

For cells that will be moving from interphase into a new round of mitosis, the G1 phase ends at what’s called the restriction point, when the cell commits to replication, and enters the phase of DNA synthesis, or S phase. Throughout G1, sections of the decondensed chromosomes are accessed as needed by enzymes using the DNA sequence to make proteins, but in the S phase the entire collection of genetic material is copied. Thus, by the end of the S phase, each decondensed chromosome exists in duplicate, the two copies destined to become the two sister chromatids when the chromosome condenses at prophase. Generally the S phase leads into a transitional phase known as G2, although the cells of some animal species proceed from the S phase directly into mitosis. During G2, proteins are synthesized that will support mitosis and cytokinesis. Additionally, many cell types undergo a kind of self-testing to make sure that everything is correct before mitosis begins, and certain cancers are thought to result from cells missing the G2 phase and thus avoiding the testing that would prevent mitosis in cases when all is not right. (You can learn about interphase in detail in our Cell Division I: The Cell Cycle module.) A representation of cell cycle phases is shown in Figure 6.

Figure 6: Relative lengths of the cell cycle phases, including the G1, S, and G2 phases that make up interphase. Mitosis, here noted by M, is a relatively short period.
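
The ordering just described, including the commitment decision at the restriction point, can be sketched as a simple sequence. This assumes a generic vertebrate cell and ignores the species mentioned above that skip G2.

    # The cell cycle as a sequence. The restriction point at the end of G1
    # decides between committing to division and a permanent interphase
    # (the fate of certain brain and muscle cells).
    MITOSIS = ["prophase", "prometaphase", "metaphase", "anaphase", "telophase"]

    def cell_cycle(commits_at_restriction_point):
        if not commits_at_restriction_point:
            return ["G1", "permanent interphase"]
        return ["G1", "S (genome copied)", "G2 (self-testing)"] + MITOSIS + ["cytokinesis"]

    print(" -> ".join(cell_cycle(True)))
    print(" -> ".join(cell_cycle(False)))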

Comprehension Checkpoint

____________ remain in a permanent interphase.

Painstaking, systematic work like Flemming’s is one way to make a discovery. Indeed, in modern science, it’s the most common way. But it’s not the only way. One major discovery very relevant to mitosis came unexpectedly, and from a surprising source: tea leaves. Not conventional tea, but leaves of the Madagascar Periwinkle, a plant known for its beautiful flowers.

In many parts of the world, people brew tea from leaves of the Periwinkle (Figure 7), previously called Vinca rosea and now designated as Catharanthus roseus (we'll use the older Vinca name here). This tea is used as a folk remedy for a plethora of ailments, but especially for diabetes when insulin and other conventional treatments are not available. It’s an ancient remedy whose potential in treating diabetes science has only recently begun to uncover, but it first met the scrutiny of modern research back in the 1950s. Fascinated to hear of the tradition, a Canadian endocrinologist from Toronto, Clark Noble, accepted a sample of 25 Periwinkle leaves from a patient who had acquired them in Jamaica. Noble, though recently retired from endocrinology research, had been a key player in the discovery of insulin 30 years earlier, but the Nobel Prize for this milestone medical advance had eluded him; others who had worked closely with him received the award, while he was remembered as merely a sideline figure. If diabetics in Jamaica and elsewhere really were benefiting from the Vinca plant, Noble wanted to know how it worked. Lacking a lab of his own, he sent the envelope to the lab of his younger brother, Robert.

Figure 7: The Madagascar Periwinkle plant (Catharanthus roseus, previously called Vinca rosea).

Also an endocrinology researcher, Robert Noble jumped at the opportunity to study the leaves. As noted above, insulin treatment had been available for only 30 years at this point. It was obtained from pigs, and supplies were not particularly abundant. Moreover, it didn’t work well for all diabetic patients. Today we know this has to do with the fact that there are two main types of diabetes, both of which manifest as an inability to absorb sugar from the blood into the body’s muscle cells, leading to a range of long-term complications in many body systems. Some diabetics are unable to produce insulin, so taking insulin works very well for them. In others, however, the problem is that their muscle cells do not respond well to insulin. They produce insulin, and yet their blood sugar levels are still high. Insulin may help them a little, but not completely, and for some it does not help at all. Today, we have drugs to make their muscle cells more sensitive to insulin, but the situation was very different back in the 1950s. And thus, Robert Noble happily set out to study the Vinca leaves that his older brother had sent him.

Noble started by formulating questions that could be answered through experiments on laboratory animals such as rabbits and mice. When injected, would an extract of the leaves lower an animal’s blood sugar? Would it prevent the development of diabetic symptoms like excessive urination? Would it prevent the development of blood circulation problems and blindness? Or, injected into an animal that already has full-blown diabetes, would it reverse the condition? A laboratory animal that has a certain medical condition, used to test an agent that might affect that condition, is known as an animal model. In this case, Noble was employing rabbit and mouse models of diabetes.

After running a series of experiments, the younger Noble found that the Vinca rosea extract actually had no effect on diabetes whatsoever. In fact, at very high doses, it made the animals really sick. They were dying of infections, because their white blood cell counts were too low. Something from the Vinca leaves was preventing the bone marrow from producing new white blood cells, which form the basis of the immune system.

Noble didn’t know why the Vinca extract killed the white blood cells of mice, but he wondered if this property could be useful for people who have too many white blood cells. In other words, he wondered whether the Vinca extract could be used to treat leukemia, a type of cancer characterized by excessively high numbers of white blood cells. To find out, Noble joined forces with chemist C.T. Beer to isolate the specific chemical compound from the Vinca extract that caused the effect.

The compound they isolated belongs to a class of chemicals known as alkaloids, and they named it vinblastine. Switching from an animal model of diabetes to one of leukemia, Robert Noble began a new series of experiments looking at the effects of vinblastine on leukemia and some other diseases that are caused by uncontrolled replication of cells.

Following success with the animal experiments, vinblastine proved to be very effective in clinical trials of cancer patients in Toronto. Soon, a related compound called vincristine was isolated by another investigator. A whole range of additional Vinca compounds followed, and each proved useful against various types of cancer, though vinblastine and vincristine are the most famous.

How well did they work? To give you an idea, Vinca drugs are still used today, often in combination with other chemotherapy drugs, and they have led to dramatic increases in cancer survival. Vincristine, for instance, is part of the combination cocktail used against acute lymphoblastic leukemia (ALL), the most common childhood leukemia. In 1950, an ALL diagnosis was a virtual death sentence for a child, with a survival rate of 5 percent. Today, the survival rate of ALL is up to 95 percent. Similarly, Hodgkin disease – a type of cancer of the lymph nodes that often affects young adults – had a pitiful survival rate in the 1950s, but by 1980 the death rate from Hodgkin disease had decreased by 75 percent, thanks in large part to vinblastine, the drug that Noble discovered in the Vinca leaves.

All of this came from two brothers who had not even set out to do cancer research. Unlike Walther Flemming, who had a plan and knew precisely what he was looking for, the Nobles stumbled onto their find; the discovery of vinblastine is a story of serendipity, or a fortunate accident.

Comprehension Checkpoint

The Vinca extract was effective in treating

How could a chemical drawn from a plant be so effective against leukemia? What does vinblastine do to the cells of rabbits, mice, and people with cancer? Today, when pathologists look at suspected cancer under a microscope, they pay a lot of attention to mitosis. Each time mitosis occurs, it leads to the parent cell splitting into two new daughter cells. While that formula is always the same, the rate at which mitosis occurs varies substantially. Just as with other cells in a body, the life cycles of different cancer cells can vary. Some have a very short life cycle, with mitosis occurring frequently, while in other cancer cells mitosis is infrequent. When cancer is suspected, the pathologist looks at how fast and how often mitosis occurs. Cancer cells that undergo mitosis more often tend to be more aggressive than cancer cells in which mitosis is less frequent. This means that if you slow down mitosis, you might then be able to slow down, or even reverse, the progression of cancer.

It turns out that this is exactly how the Vinca alkaloids work. When Robert Noble gave the Periwinkle tea to laboratory animals, and later when he gave the isolated vinblastine compound to human patients, cell division slowed down in the white blood cells. It was later discovered that the compound interferes with mitosis. In addition, the various Vinca compounds that were eventually discovered each interfere with mitosis at different phases and for different reasons. It turns out that the compounds disrupt the assembly of microtubules – the special fibers that provide structure in the cell. Vinblastine binds to the tubulin subunits, preventing them from coming together.
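
To picture that mechanism, here is a deliberately crude sketch in Python. It is not a kinetic model of tubulin chemistry: the pool sizes are invented, and the assumption that each drug molecule sequesters exactly one free subunit is a simplification used only to show why removing free subunits stalls microtubule assembly.

```python
# Toy illustration of vinblastine's mode of action as described above:
# the drug binds free tubulin subunits, so they cannot join a growing fiber.
# All quantities are invented for illustration.

def polymerize(free_tubulin, drug_molecules, steps=10):
    """Grow a microtubule one subunit per step while free subunits remain."""
    sequestered = min(free_tubulin, drug_molecules)  # drug-bound subunits
    available = free_tubulin - sequestered
    length = 0
    for _ in range(steps):
        if available == 0:
            break            # assembly stalls: no free subunits left
        available -= 1       # one subunit joins the growing microtubule
        length += 1
    return length

print(polymerize(free_tubulin=8, drug_molecules=0))  # 8: the fiber grows
print(polymerize(free_tubulin=8, drug_molecules=8))  # 0: assembly is blocked
```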

The questions that Robert Noble, and the generations of cancer researchers who stood on his shoulders, were inspired to ask were ultimately investigated in a very systematic way. Having an idea of what they were looking for, researchers isolated new drugs and homed in more closely on the workings of the microtubule system. So while it may start with a lucky find, ultimately scientific advancement requires a clear plan and long-lasting, painstaking work.

Beginning with the discovery of mitosis, the module details each phase of this cell process. It provides an overview of the structure of cell components that are critical to mitosis. The module describes Robert Noble’s experiments with the Madagascar Periwinkle, which led to the discovery of an effective cancer treatment drug. The relationship between mitosis and cancer is explored, as is the mechanism by which anti-cancer drugs work to slow down or prevent cell division.

Key Concepts

  • The term mitosis refers specifically to the process whereby the nucleus of a eukaryotic cell splits into two identical daughter nuclei prior to cell division.

  • Mitosis is a cyclical process consisting of five phases that feed into one another: prophase, prometaphase, metaphase, anaphase, and telophase.

  • The rate at which mitosis occurs depends on the cell type. Some cells replicate faster and others slower, and the entire process can be interrupted.

  • Chromosomes are made of a material called chromatin, which is dispersed throughout the cell nucleus during interphase. During mitosis, however, the chromatin condenses, making individual chromosomes visible under an ordinary light microscope.

  • HS-C6.2, HS-LS1.A1, HS-LS1.B1

David Warmflash, MD, Nathan H Lents, Ph.D. “Cell Division II” Visionlearning Vol. BIO-4 (2), 2015.




Cell Biology

by Nathan H Lents, Ph.D., Donna Hesterman

It’s hard to imagine, but the cells present in a tiny embryo ultimately generate all of the cells that make up the body of an adult human being.

That’s right, the trillions of cells that make up the bone and flesh of your body are products of thousands of generations of cell division that began when you were smaller than the period at the end of this sentence. It started when a single cell cleaved into two parts, then quickly reorganized and split again into four new cells (Figure 1). Four cells became eight; then eight became 16 individual cells with identical DNA. The cascade continued until, several weeks later, millions of cells were dividing – powering the exponential pattern of growth that eventually formed all of the organs and tissues of your body.

Figure 1: Most plant and animal cells replicate by splitting into two identical daughter cells.

Walther Flemming (Figure 2), a 19th century professor at the Institute for Anatomy in Kiel, Germany, was the first to document the details of cellular division. The use of microscopes to study biological tissues was an emerging technology in Flemming's day, and he was highly regarded as an innovator in the field.

Figure 2: Walther Flemming image © Wikimedia Commons

As a professor at Kiel, Flemming experimented with a technique for using dyes to color the specimens he wanted to examine under a microscope. Microscopes in the 1870s were not equipped with electric light sources as they are today, so dyeing the specimens allowed him to see them in greater detail. He found aniline dyes particularly useful because different types of tissues absorbed the dyes at varying intensities depending on their chemistry. Some parts of a cell would absorb more dye than others, in effect "highlighting" them, as in Figure 3, to reveal structures and processes that were invisible before.

Figure 3: Unstained (right) versus stained cells (left) image © Judith Beekman

Flemming used these dyes to study cells. In particular, he was interested in the process of cell division. He began a series of live observations under the microscope using dyed samples of animal tissues and found that a particular mass of material inside the nucleus of cells absorbed the dye quite well. He didn't have a name for it at the time, but later came to call the material "chromatin," from chroma, the Greek word for color (Zacharias, 2013). Flemming drew pictures of what he saw under his microscope to illustrate various publications he produced in his research (Figure 4).

Figure 4: Flemming's drawing of an insect cell treated with an aniline dye as he saw it under the microscope image © Wikimedia Commons

Flemming did many of his experiments with tissue samples from Fire salamanders, a common species in Northern European forests, because the chromatin in their nuclei was large in comparison to other available study organisms. After many hours of observation, Flemming began to see a pattern whereby cells would periodically transition from a resting stage to a period of frenzied activity that turned one nucleus into two, and then pulled the entire cell apart creating two identical cells – each with its own complement of chromatin enveloped within its nucleus.

Today we call the process by which the nucleus splits into two nuclei mitosis, and the splitting of the cell itself cytokinesis. The terms came into use years after Flemming's discovery, but he described the process fully in his book Zur Kenntniss der Zelle und ihrer Theilungs-Erscheinungen (On the knowledge of the cell and its phenomena of division) (Flemming, 1878).

The alternating patterns of activity and inactivity that Flemming saw in his samples are now commonly referred to as a cell's life cycle, or often just the cell cycle. Different types of animal cells – like bone, skin, heart, or nerve cells – have different life cycles, but all eukaryotic cell cycles can be broken down into four distinct phases: the G1 phase, when the cell grows in preparation for an eventual split; the S phase, when the DNA inside the nucleus makes a complete copy of itself; the G2 phase, when the cell checks and corrects any errors that may have occurred during DNA duplication; and the M phase (for mitosis), when the cell’s nucleus splits into two identical nuclei, immediately followed by cytokinesis – cell division. The length and frequency of these phases differ from one cell type to another.

At this point, it is necessary to point out that, while all living cells are remarkably similar, cell division is one of those areas where eukaryotic cells (plants, animals, fungi, and protists) are very different from bacteria and other prokaryotes. This is because bacteria and other simple cells do not have a nucleus, so the process can be much simpler. In effect, bacteria simply grow and divide continuously, with no distinguishable phases between one division and the next. The process by which prokaryotes divide is called binary fission, and the term “mitosis” never applies to them.

Another difference between prokaryotes and eukaryotes is that prokaryotes have one main circular chromosome, while eukaryotes typically have many linear chromosomes. When a prokaryote divides, it must copy its genetic material and separate the two copies between the two new cells that result from the division, just like eukaryotes (Figure 5). However, the process is different. In prokaryotes, the circular chromosome is physically attached to a certain point of the inside of the plasma membrane of the cell. As the cell copies the chromosome in preparation for cell division, it attaches the new copy in a separate place. This way, the two copies of the chromosome are attached away from each other. Then, when the cell splits into two, the bacterium is careful to ensure that each of the two new cells will have one copy of the chromosome.

Figure 5: Binary fission of bacterial cells

In the more complex eukaryotic cells, the G1, S, and G2 phases are collectively referred to as interphase, as these phases cannot be distinguished by just looking at the cells under the microscope. Even cells that are growing and dividing very quickly in our bodies spend approximately 78% of their lives in interphase. During interphase, eukaryotic cells double in size, synthesize new strands of DNA, and prepare for mitosis and cytokinesis.

Some cells, like human skin cells, will enter the mitotic phase and divide frequently throughout life in order to accommodate changes in size as an organism grows or to generate new cells to repair tissues damaged by illness or injuries. Other cells, like muscle, nerve, and red blood cells, will remain in a permanent G0 phase without ever re-entering the mitotic phase. Even cells that are busy reproducing constantly throughout their lives spend very little time in the actual mitotic phase (M phase) as compared to the other phases of their life cycle (Alberts, et al., 2002). Figure 6 illustrates how the various phases compare in length.

Figure 6: Relative lengths of the cell cycle phases

Comprehension Checkpoint

The process of cell division is more complex in __________ cells.

So what causes one cell to linger in G0 instead of launching into G1 and proceeding through S phase, G2, and on to mitosis? Arthur Pardee, an American biochemist working at Princeton University, was one of the first to examine that question. He experimented with live cultures of hamster cells to find what he called the "restriction point." Pardee hypothesized that there must be a single decision point in a cell's life cycle where a cell commits to one of two paths: one path that leads toward cell division and another that keeps the cell in a quiescent, or inactive, G0 state (Figure 7).

Figure 7: The restriction point, “R,” late in G1 phase

Pardee began by restricting the amount of nutrients and hormones available to the experimental cultures to see if he could stop the cells' progress toward cell division. He did this by removing the cell growth signals at different time intervals. After the cycles were stopped, he attempted to restart the cycle by adding back the growth signals. Throughout these experiments, Pardee was careful to time each culture to see how long it took to reenter S-phase and mitosis.

Pardee found that it made no difference at all as to when in the cycle he removed the growth signals. All of the samples took the same amount of time to re-enter mitosis. This result led Pardee to conclude that all of the cells must have ended up at the same point, regardless of where they were in their cycle when he first removed the growth factor.

Pardee called the point where the cells halted the "restriction point," and he hypothesized that it functioned as a “point of no return.” In other words, if growth signals are present, cells will proceed forward, and once they pass the restriction point, they will complete their current cycle – even if you remove the growth signals. Some people still refer to this restriction point found in the G1 phase of all mammalian cells as the "Pardee point." It is the point in the life cycle at which a cell either commits to a path toward division, or stops proliferation and enters the G0 phase. Scientists later found another checkpoint at G2 that halts cell division if DNA was not synthesized properly during S-phase.
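
Pardee's reasoning can be captured in a toy simulation. The Python sketch below runs on invented assumptions: a cycle of 10 abstract steps with the restriction point at step 3, and a rule that a cell halts only if it arrives at that step without growth signals. Cells already past the point finish their cycle, wrap around to G1, and park at the same place, which is why every culture then takes the same time to re-enter S phase once signals are restored.

```python
# Toy model of the restriction point ("Pardee point"). The cycle length
# and the position of R are invented; only the halting rule matters.

CYCLE_LENGTH = 10  # abstract steps per full cycle (G1 -> S -> G2 -> M)
R_POINT = 3        # the restriction point, late in G1

def advance(position, growth_signals):
    """Move a cell one step, halting at R if growth signals are absent."""
    if position == R_POINT and not growth_signals:
        return position                   # cannot commit; the cell waits
    return (position + 1) % CYCLE_LENGTH  # past R, the cycle runs to completion

# Four cells at different points in the cycle when growth signals are removed:
cells = [0, 2, 5, 8]
for _ in range(2 * CYCLE_LENGTH):
    cells = [advance(p, growth_signals=False) for p in cells]

print(cells)  # [3, 3, 3, 3] -- every cell ends up parked at the same point
```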

Pardee published his results in 1974 (Pardee, 1974). Around that same time, scientists at the University of Colorado Medical Center began experimenting with a special line of human cancer cells, called HeLa cells, to see if they could get cells to go backward in the cell cycle or jump from one stage to another out of order. They used HeLa cells because they proliferated quickly and could be kept alive indefinitely in a laboratory setting. In their experiments, the team fused together different HeLa cells that were at different phases of the cell cycle. They wanted to see if they could “trick” a nucleus in one phase of the cell cycle into entering another phase by fusing it with the cytoplasm of a cell in a different phase.

What they found was very interesting: when they fused a G1 cell together with an S-phase cell, the nucleus of the G1 cell quickly entered S-phase. They reasoned that something in the cytoplasm of the S-phase cell caused the G1 nucleus to begin DNA synthesis and enter S-phase. However, when they fused a G2 cell with an S-phase cell, the G2 nucleus would not enter S-phase. Because the G2 nucleus had already duplicated its DNA, it would not enter another S-phase and re-duplicate its DNA.

Because the nucleus could be tricked into moving forward in the cycle, but not backward, this clever experiment revealed that cells can proceed through the cell cycle in only one direction. In addition, their results confirmed what many scientists had suspected – there are factors in the cytoplasm of cells that control the progression through the phases of the cell cycle (Rao & Johnson, 1970). The hunt was on to find them.

Comprehension Checkpoint

What is the result of removing growth signals in a cell after it has passed the "restriction point"?

Several years after the experiments in Colorado, Tim Hunt, an English biochemist, began to look for the cellular factors that control cell division and other life cycle activities. He found his answers while conducting research as a visiting scientist at the Marine Biological Laboratory in Woods Hole, Massachusetts.

Hunt began by looking for a protein that might be responsible for triggering the various stages of cell division. He got the idea from research that showed cells would not enter the mitotic phase if treated with drugs that inhibit protein synthesis. This meant that the cells had to make some new proteins in order to begin mitosis. The question became, “What are these mitosis-causing proteins?” Proteins, however, cannot be seen under a microscope in the bustling environment of living cells. So Hunt, like Flemming, had to be an innovator and adapt a tool from biochemistry, called radioactive tagging, for use in his experiments.

Hunt injected radioactive amino acids into sea urchin eggs (Figure 8) to help him “see” proteins in much the same way that Flemming had used his dyes to highlight the chromatin he wanted to see. As the eggs used the radioactive amino acids to synthesize new proteins, the newly generated proteins would be tagged with radioactivity and could be detected by exposing the samples to X-ray film.

Figure 8: Eggs from sea urchins like this are often used in research because they are almost completely transparent. image © Wikimedia Commons

Using the bio-tagging technique, Hunt tracked the new proteins as they developed in the sea urchin eggs over time and found that levels of one protein in particular would rise and fall at regular intervals as the cells cycled through the mitotic phase. The levels would build dramatically just prior to mitosis and then fall suddenly just prior to cell division. It seemed that Hunt had found his mystery protein (Evans, et al., 1983).

Hunt called the protein "cyclin" – one that we now know to be an integral part of the cell cycle control system. Cyclins work in tandem with a family of enzymes called kinases to control the cell cycle. These kinases are found in a cell's cytoplasm; but unlike cyclins, kinases do not build up and disappear over time. The cell cycle kinases exist at relatively constant levels in a dormant state in a cell's cytoplasm until they are activated by cyclins. When activated, these cyclin-dependent kinases, or CDKs, trigger the chain reactions that initiate DNA replication, mitosis, and other events in the life cycle of a cell.
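
That rise-and-fall pattern, and the threshold-like activation of CDKs, can be sketched in a few lines of Python. The synthesis rate and threshold below are invented numbers; the sketch only aims to reproduce the shape of the cycle described above: cyclin accumulates steadily, the CDK switches on when cyclin is high, and the cyclin is then destroyed, resetting the system for the next round.

```python
# Minimal sketch of cyclin/CDK behavior as described in the text.
# Rates and thresholds are invented for illustration.

SYNTHESIS_RATE = 1.0  # cyclin made per time step
CDK_THRESHOLD = 10.0  # cyclin level at which the dormant CDK is activated

cyclin = 0.0
for t in range(30):
    cyclin += SYNTHESIS_RATE        # steady synthesis through the cycle
    if cyclin >= CDK_THRESHOLD:     # enough cyclin to activate the CDK
        print(f"t={t}: CDK activated -> mitosis triggered")
        cyclin = 0.0                # cyclin abruptly destroyed at division
# Output: the CDK fires at t=9, t=19, and t=29 -- regular rise-and-fall cycles.
```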

Comprehension Checkpoint

Hunt injected radioactive amino acids into sea urchin eggs in order to see

Although it is the cyclins and CDKs that manage when eukaryotic cells enter each phase, the system relies on checkpoints like the one discovered by Pardee to ensure that all systems are ready before launching into the most critical phases of the cycle – DNA synthesis, and following that, mitosis. The cell cycle control system keeps the life cycle moving forward in an orderly manner, sort of like the mechanical timer on a washing machine ensures that clothes are washed, rinsed, and spun dry in the correct order. The cell cycle control system, like a washing machine timer, is automatic, unidirectional, and dependent on signal inputs at certain checkpoints to keep the process moving forward (Figure 9).

Figure 9: Checkpoints ensure that the cell cycle can be halted if damage or errors are detected.

Tim Hunt, who discovered the cyclins, won the Nobel Prize in Physiology or Medicine in 2001, together with Paul Nurse, who discovered the cyclin-dependent kinases (CDKs). They shared the prize with Leland Hartwell, who pioneered the research into the checkpoints of the cell cycle.

The network of proteins that makes up the cell cycle control system manages an extremely complex series of operations that allow the cells in our bodies – and those in all the plants and animals around us – to grow and sustain life. From the careful replication of DNA that becomes the blueprint of life for new daughter cells to the final cleave that pinches one cell into two during cytokinesis, every phase must go off without a hitch – millions and millions of times during the life of an organism. Most of the time the process goes smoothly. However, occasionally errors occur or the cell cycle control systems get damaged. When this happens, the result can be disastrous for the cell and can even lead to cancer. In fact, because the main feature of a cancer cell is constant, unrestrained growth, cancer is often referred to as a disease of the cell cycle.

Cell division is an enormously complex process that must go on millions and millions of times during the life of an organism. This module explains the difference between binary fission and the cell division cycle. The stages of cell division are explored, and research that contributed to our understanding of the process is described.

Key Concepts

  • Most of the cells that make up higher organisms, like vertebrate animals and flowering plants, reproduce via a process called cell division.

  • In cell division, a cell makes a copy of its DNA and then separates itself into two identical cells – each with its own copy of DNA enveloped inside a nucleus.

  • The term mitosis refers specifically to the process whereby the nucleus of the parent cell splits into two identical nuclei prior to cell division.

  • Alberts, B., Johnson, A., Lewis, J., et al. (2002). Molecular Biology of the Cell, 4th edition. New York: Garland Science. Accessed online at http://www.ncbi.nlm.nih.gov/books/NBK26824/ on March 20, 2013.
  • Campbell, N. A., & Reece, J. B. (2005). Biology, 7th edition. Pearson Benjamin Cummings.
  • Evans, T., Rosenthal, E., Youngblom, J., Distel, D., & Hunt, T. (1983). Cyclin: A protein specified by maternal mRNA in sea urchin eggs that is destroyed at each cleavage division. Cell, 33, 389-396.
  • Flemming, W. (1878). Zur Kenntniss der Zelle und ihrer Theilungs-Erscheinungen. Kiel. Accessed online at http://www.schriften.uni-kiel.de/Band%203/Flemming%20(23-27).pdf on March 20, 2013.
  • Jackson, P. K. (2008). The hunt for cyclin. Cell, 134, 199-202. http://www.uam.es/personal_pdi/ciencias/jmsierra/documents/Jackson2008Cell.pdf
  • Pardee, A. (1974). A restriction point for control of normal animal cell proliferation. Proceedings of the National Academy of Sciences, 71, 1286-1290.
  • Rao, P., & Johnson, R. (1970). Mammalian cell fusion: Studies on the regulation of DNA synthesis and mitosis. Nature, 224, 159-164.
  • Zacharias, H. (2013). Famous scholars from Kiel. Accessed online at http://www.uni-kiel.de/grosse-forscher/index.php?nid=flemming&lang=e

Nathan H Lents, Ph.D., Donna Hesterman “Cell Division I” Visionlearning Vol. BIO-3 (5), 2013.




Evolutionary Biology

by Alfred L. Rosenberger, Ph.D.

Few people have changed the world with the power of an idea. Charles Darwin, the British naturalist who lived during the 1800s, was one of them. While we might equate the idea of evolution with other revolutionary scientific breakthroughs, such as Einstein's general theory of relativity, people seem to care less about what it means to live in a universe where the speed of light is fixed than in a world in which humans descended from hairy apes.

That is a tricky question because of its implications about the very nature of life, humanity, and religion. It is the reason why some greet Darwin's name with a gut-level sense of distrust even though his contributions to our understanding of life are as solidly confirmed as are Einstein's contributions to our understanding of the universe. So, it is no surprise that more people have an inkling – too often wrong – of what is meant by Darwin's concept of natural selection than by the terms of Einstein's famous equation E = mc2.

Darwin's legendary book, On the Origin of Species by Means of Natural Selection; or, the Preservation of Favoured Races in the Struggle for Life, is frequently listed as one of the greatest books ever written. The three critical ideas he developed in it are:

  • The fact that evolution occurs.
  • The theory that natural selection is the driving force or mechanism behind the process of evolution.
  • The concept of phylogeny, that all forms of life are related to one another genealogically, through their pedigree or "family's roots."

Darwin began developing these ideas as a result of his experiences during a five-year voyage on the British survey vessel H.M.S. Beagle, which sailed around the world on a mapping expedition during the early 1830s (Figure 1). Darwin was on board to work as the ship's naturalist, to record information about the geology, sea life, land animals and plants, and people that the Beagle would discover. When he set sail in 1831, Darwin was twenty-two years old, fresh out of college, fascinated with science, and deeply interested in geology and natural history. He was planning to become a clergyman, partly because he thought it would allow him enough free time to pursue his other interests.

Figure 1: The HMS Beagle, a 90.3 ft, 10 gun brig-sloop of the British Royal Navy. This file comes from Wellcome Images, a website operated by Wellcome Trust, a global charitable foundation based in the UK. image © Wellcome Images/CC-BY-4.0

Comprehension Checkpoint

One of the main ideas of Darwin's book On the Origin of Species was that all forms of life share family roots.

Darwin was keenly aware that the idea of evolution was in the air and was being hotly debated in some circles. Actually, it had been part of Western thought for more than 2,000 years, at least since the Greek philosopher Aristotle proposed there were natural laws that explained how the world came to be. These laws were meant to be alternatives to the usual myths and stories about the origins of the universe and of people that all native cultures seem to generate. Some of Aristotle's proposals were quite specific. He believed, for example, that there were "higher" species and also "lower" species, and the lower ones gave rise to the higher.

As Europe emerged from the Middle Ages, scientists interested in biology considered evolution an idea of historical importance. One of Darwin's own grandfathers, Dr. Erasmus Darwin, had even written extensively about evolution. But what changed the climate of Darwin's times was that the natural sciences were becoming modernized and professionalized, with their own societies, meetings, and publications. This allowed the fuzzy notion of evolution to rise to the level of a scientific hypothesis, which might be proven or disproven by research, evidence, and a method of reasoning.

As the mid-1800s approached, the idea of evolution posed a serious challenge to the then-popular view that species were unchanging fixtures of nature. This concept, called the Fixity of Species, was a perspective that European zoologists and botanists adopted as part of their culture, to reflect Western religion and the story of creation as laid out in the Bible. A key feature of the scientific argument for "fixity" was the notion that the structure of each species was based on a model, ideal form. In other words, botanists would make the case that all wild briar roses were supposed to look like replicas of one another because a wild briar rose was meant to be built in a precise, definite way or it would not be a wild briar rose. Why? Because each wild briar rose was a product of God's "perfect" acts of creation. And if each was meant to be perfect, there was no reason for any to change, and no possibility that they ever did.

The fixity idea, however, was not satisfactory to all. Some geologists and zoologists thought that species might actually change over time. In fact, the possibility of evolution being a fundamental feature of nature eventually became the crucial question of nineteenth-century science. One of the reasons why this happened was that fossils were slowly being discovered, some in highly "imperfect" environments that seemed not to follow the logic of creation – such as the occurrence of ocean seashells found buried on the tops of mountains such as the Alps and the Himalayas.

Darwin allowed himself to wonder if species were fixed or prone to evolution. With the intense experience of five years of living and working on the Beagle, collecting and describing a vast number and variety of natural history specimens, he developed into a first-rate naturalist – actually, the best in the world. He came to see species differently than those who saw perfection in them. Darwin did not focus on the sameness of individuals; rather, he thought it was important that individuals, like you and me, vary in spite of the fact that we belong to the same species. He realized that the variations could become the raw material for evolutionary change.

Comprehension Checkpoint

In Darwin's day, most people believed that

One of the clues that moved Darwin to totally accept the principle of evolution involved a group of small birds called mockingbirds. Mockingbirds are unspectacular animals with a wingspan of about 10 inches. They live in many habitats in North, Central, and South America, from southern Canada to Chile and Argentina. Darwin observed and collected them on the Galapagos, a cluster of small islands off the coast of Ecuador (Figure 2), and sent his specimens back to London for study.

Figure 2: The Beagle's route through the Galapagos in 1835. Red triangles indicate volcanic peaks on the islands. Darwin's observations of differences between animals inhabiting the different islands in the archipelago was instrumental to his development of his theory of evolution. image © Emory University

After the voyage, Darwin consulted one of the most experienced ornithologists (bird specialists) in England, John Gould, about their taxonomy (see our Taxonomy module). Darwin was surprised to learn that he had misclassified some of the birds because it was difficult for him to tell the species apart from the subspecies. The physical traits of mockingbird species and subspecies blended into one another. For Darwin, this meant that the guidelines he had been trained to use to identify and classify animal and plant species, based on the idea that each one ought to have an idealized "perfect" form – Fixity of Species – were arbitrary rules created by taxonomists, nothing more than untested assumptions. It logically followed that if species were not designed to be a series of perfect individual replicates, then evolutionary change – or "transmutation" of one species into another – was a possibility. Darwin saw immediately that some of Gould's species could have come into existence if one subspecies changed a little bit more than usual, perhaps as it got isolated on a separate island.

A second clue that led Darwin to embrace evolution had to do with fossils. Fossils are formed when an organism dies and its remains become hardened by absorbing minerals from the earth in which they were buried. Thus, fossils are direct evidence of life in the past and have great importance when considering a time-dependent concept such as evolution. In Argentina, Darwin collected fossils of gigantic armor-plated beasts, megatheres (Figure 3), which were unlike anything else anywhere in the world – nearly. Only the tank-like armadillos, which Darwin had also seen in South America, bore any resemblance to them. Considering these extinct and living forms together, Darwin theorized that megatheres and armadillos might be related. He thought they might be part of a large group of South American mammals that had evolved body armor as a protective adaptation. He speculated that an ancient "cousin" of the megatheres might have been the ancestor of the armadillo.

Figure 3: A fossil of the now-extinct Megatherium americanum, or giant ground sloth, that inhabited what is now South America from about 23 million years ago until around 12,000 years ago. image © LadyofHats

The Galapagos mockingbirds and the Argentine megatheres provided Darwin with two complementary views of evolution. One helped him picture biological change by comparing living animals. The other helped him see it by comparing an extinct species with one that was living. Darwin collected pieces of the evolutionary puzzle during his five years of sailing on the Beagle, but to solve the puzzle by putting the pieces together into a basic model for the public to see would take him two more decades of effort. His work was capped by publication of Origin in 1859, more than twenty years after he began his voyage on the Beagle.

Origin was immediately recognized as a major scientific success. In one of the quirkiest episodes in the history of science, this happened to be the second time that Darwin published his explanation of evolution. A year earlier, Darwin had learned that another naturalist, Alfred Russel Wallace, had also thought of evolution by natural selection, and they eventually wrote a joint paper on the subject in order to share the credit. But the Darwin-Wallace essay did not compare with Origin, which included examples and reasoning that Darwin had developed over a twenty-year period. Origin was much more than a statement on the controversial idea of evolution; it laid out a new system of thought, another way of asking scientific questions, assembling scientific evidence, and scientifically testing hypotheses.

Some people were less than happy with the book's publication. Since its central idea was that evolution is an ever-present, unstoppable, fundamental law of nature, Origin became an angry flashpoint for those who cared less about the biological history of animals and plants than about the deeper implications of the really big idea it represented – that in the middle 1800s there were new, logically sound, evidence-based ways of looking at life that challenged the religious ways of thinking that had been broadly accepted for centuries. (See Figure 4 for a parody of his theory of evolution.)

Figure 4: Darwin's theories received some strongly negative reactions when they were published. Here Punch's Almanack, a satirical publication, satirizes Darwin's theory of evolution. image © Punch's Almanack

This makes it all the more interesting that the "Question of Questions" was not at all touched on in Origin. Darwin knew all along that this new science of evolutionary biology could be applied to human beings precisely the way he had applied it to mockingbirds and armadillos. Like the mockingbirds, people vary in appearance across countries and continents, and from one island to another. Like the armadillos and megatheres, the skeletons of modern humans closely resembled fossil skeletons then being discovered in the Neander Valley of Germany, fossils that would come to be known as Neanderthal man. Darwin said nothing about this in Origin for, in his extraordinary thoroughness, he wasn't ready yet. He was also unprepared for the difficult personal battle that would have resulted if he had.

Comprehension Checkpoint

When Darwin's book On the Origin of Species was published,

About twelve years later, in 1871, Darwin did publish a book specifically about human evolution, The Descent of Man, and Selection in Relation to Sex. By then, the fury against his ideas had died down in England, and evolution was no longer a hotly contested issue. By that time, other highly accomplished scientists had written about human evolution, most notably Thomas Henry Huxley, in Evidence as to Man's Place in Nature, which appeared in 1863. The idea was slowly being absorbed by society. But nothing could match Darwin's brilliant thinking about the evolutionary process, so no one could match what Darwin would have to say about the subject of man.

Descent of Man was as much about bringing out the few facts then known about human evolution as it was about the meaning of evolution as a way of thinking about our ethics and personal values. Darwin knew that evolution was one of the most important ideas for the human species to comprehend. He knew that seeing us from an evolutionary perspective was more than peering through a telescope to look back at our own primitive origins. Evolution was also a mirror and a microscope for looking at ourselves as we are today.

The experiences and observations of Charles Darwin significantly contributed to his theory of evolution through natural selection. This module explores those influences and describes evolution as a force for biological change and diversification. The first in a series, it details how the theory challenged the cultural mindset of the time, including the effect of his major works: On the Origin of Species by Means of Natural Selection and The Descent of Man, and Selection in Relation to Sex.

Key Concepts

  • Charles Darwin played a key role in supporting and explaining the theory of evolution through natural selection.

  • Darwin's skills of observation and ability to record data accurately allowed him to create a comprehensive model of the mechanism by which evolution occurs.

  • The theory of evolution through natural selection explains how all forms of life are related to one another genealogically, and emphasizes that variation within a species is the root for evolutionary change.

Alfred L. Rosenberger, Ph.D. “Charles Darwin I” Visionlearning Vol. BIO (3), 2003.




Evolutionary Biology

by Alfred L. Rosenberger, Ph.D.

How Charles Darwin came to understand evolution is a fascinating and important story. In our Charles Darwin I module, we focused on how he arrived at an alternative to the idea that each species was uniquely created and unchangeable. Here we look more closely at how Darwin came to propose the mechanism of evolutionary change, which he called "natural selection." Natural selection is the force that promotes changes in a species over generations. It is also the force that produces new species from the changes that accumulate in a population over long periods of time.

Darwin learned the importance of natural selection in bits and pieces as he developed his scientific skills and credentials. He lived from 1809 to 1882, through one of the most interesting times in Great Britain, the heyday of the Victorian era, when the sciences, and openness to questioning the status quo, were growing cultural forces. He had a long, productive, brilliant career, and was almost famous even before returning from his five-year voyage around the world on the H. M. S. Beagle. Luckily, his correspondence, diaries, and personal workbooks, as well as the writings of his relatives, friends, colleagues, and rivals, document Darwin's adult life extensively. They tell us that every facet of the man reflected his passion for the patterns of evolution, its rules and consequences. Once he fully grasped how it worked, Darwin's life became so steeped in thinking about evolution that today we might call his fascination an obsession.

Figure 1: The title page of Darwin's most famous book, On the Origin of Species by Means of Natural Selection.

Although we properly credit Darwin with being the founding father of evolutionary theory, one of his own great gifts was being able to spot a good idea and synthesize information from many fields of knowledge. Darwin's success was due, in part, to having learned from others, just as the great physicist Isaac Newton claimed to have stood "on the shoulders of giants."

Thus, to develop the concept of evolution by natural selection, Darwin did not have to invent the idea that animals and plants were adapted to their environment because that was already recognized in the late 1700s. He did not have to buck the Biblical story of a seven-day creation because the father of modern geology, Charles Lyell, had already shown that Earth's history extended over at least millions of years, not the thousands implied by the Bible. Darwin did not even have to come up with the idea of natural selection by himself – it was inspired by someone else! Another Englishman, Thomas Robert Malthus, who was a clergyman and an economist, wrote Essay on the Principle of Population in 1798. Malthus argued (from an economic standpoint) that human population growth, if it were not reined in by disease, starvation, war, and other factors, would naturally expand beyond our capacity to produce the food we need to sustain it. In other words, societies of people also are locked in a "struggle for existence." In his autobiography, Charles Darwin acknowledges this thought as the beginnings of natural selection:

In October 1838, that is, fifteen months after I had begun my systematic inquiry, I happened to read for amusement Malthus on Population, and being well prepared to appreciate the struggle for existence which everywhere goes on from long-continued observation of the habits of animals and plants, it at once struck me that under these circumstances favourable variations would tend to be preserved, and unfavourable ones to be destroyed. The results of this would be the formation of a new species. Here, then I had at last got a theory by which to work.

Charles Darwin, 1876

Comprehension Checkpoint

Darwin is credited as being the first person to recognize that plants and animals adapt to their environment.

Figure 2: Down House - Charles Darwin's home and research laboratory.

After his famous five-year voyage around the world on the Beagle, most of Darwin's life was spent at his home on the outskirts of London, which he used as a base of scientific operations. His efforts involved much more than writing about big ideas like natural selection. He worked hard to build his knowledge of all manners of animals and plants from the ground up, learning lessons from many diverse research projects that were always underway in the Darwin household. Many of them might seem small and trivial, but they left him with enormous insight and they added up to a vast body of experience that earned Darwin a great reputation among the public, as well as among scientists from many different fields. Darwin pioneered studies of barnacles, coral reefs, hybridization between species, orchid fertilization, human origins, animal behavior, and other topics that are now basic to oceanography, botany, genetics, ecology, geology, and psychology.

Figure 3: Annie Darwin (March 2, 1841 to April 23, 1851), the second child and eldest daughter of Charles and Emma Darwin.

With his interest in the behaviors of organisms, even Darwin's family life provided lessons about evolution. While he was a doting father to all of his ten children, he also studied them carefully for clues about where nature left off and nurture began. From watching them he theorized that some human behaviors, such as a young child's selfishness, were based upon instincts that were adaptations, while other behaviors were learned, shaped by culture. The death of one of his daughters, Annie, at the age of ten was also a painful reminder to Darwin that all species are captives of their environment and undergo a "struggle for existence" during each generation. Disease was an environmental hazard to all individuals, a potential obstacle to their success. Some individuals were better able to cope with disease than others, just as some are better able to escape predation. Some won and some lost; some grew up and others did not; some lived to have many children while others had few or none. These natural differences that always exist among individuals are at the heart of the principle of natural selection as the engine of evolutionary change.

Comprehension Checkpoint

In Darwin's family life, he found

The idea of natural selection rests on several key points:

  • More individuals are born to a species in every generation than actually live to reproduce.
  • All individuals differ in structure and behavior, and many of these variations are inherited.
  • Some individuals have a greater ability to survive and reproduce than do others because their inherited traits are better adapted to the conditions of the environment than the other traits present in different individuals of the same population.
  • Because the rate at which offspring are produced in every species is greater than the rate at which the environment can provide food, shelter, and other needs, individuals who carry the advantageous traits will come to outnumber those without them, causing a shift in the common characteristics of the species over time.

That shift is a change, an evolutionary adjustment that takes place across generational time. The process behind it is natural selection. Darwin chose the term because the process works much like "artificial selection," the methods people have long used to produce and maintain the breeds of animals and plants that we live with. Both rely on differential reproduction to have an effect. That is, both promote the reproduction of certain members of a population with a desirable set of characteristics. For example, dogs are commonly bred to be protective but not overly aggressive. Eventually, those traits become established as key features of the breed or population.
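
Differential reproduction is easy to demonstrate with a toy simulation. The Python sketch below is illustrative only: a single trait with two variants, a made-up 10 percent reproductive edge for one of them, and a fixed population size. Even a modest edge, compounded over generations, shifts the common characteristics of the population, which is the core of the selection argument above.

```python
# Minimal sketch of differential reproduction. The trait names, the 10%
# fitness edge, and the population size are all invented for illustration.

import random

def next_generation(population, fitness, size=1000):
    """Sample the next generation, weighting parents by their fitness."""
    weights = [fitness[trait] for trait in population]
    return random.choices(population, weights=weights, k=size)

fitness = {"advantageous": 1.1, "ordinary": 1.0}  # a 10% reproductive edge
population = ["advantageous"] * 100 + ["ordinary"] * 900  # rare at first

for _ in range(60):  # sixty generations of selection
    population = next_generation(population, fitness)

# The once-rare advantageous variant now dominates the population.
print(population.count("advantageous") / len(population))
```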

Figure 4: Dogs with different traits. The collie (left) is frequently used to herd sheep and other livestock. They are bred for their thick coats, which protect them from injury and from intemperate weather while they work, and for their intelligence. The Dalmatian (middle) is similarly intelligent, and is bred for its distinctive black and white coat. The dachshund (right) is a short-legged, elongated dog, originally bred to chase rabbits and other small game living in burrows. image © Corel Corporation

Unlike artificial selection, natural selection is ever present, ongoing, long-term, and utterly beyond human control or prediction. After all, there is no telling what new disease might pop up to threaten a population, how severe a drought might be to limit the food supply during a bad summer, or if the predator from the next valley should decide to swim the river and hunt in a new territory just when vulnerable babies are being born. At the same time, there is no telling how well a species can resist the disease, how many nearly starved individuals are able to travel a long distance to the next food-rich plateau, or how clever some individuals might be in protecting their kids from the new carnivore that is tracking them.

Archaeologists have shown that artificial selection of animals and plants has been going on for at least 10,000 years. But Darwin knew that the Earth was far older than that – at least millions of years old – so a great deal of change could accumulate in a species through natural selection. In some years, food may be abundant and disease rates low, so the environment exerts less of a "pruning" effect on individuals. A species' total population size may then grow unchecked. However, this means that more individuals who are less fit for lean times will survive, and selective pressure, the forces that shape reproductive success, will be greater when conditions shift. So, it is difficult to tell what types of traits will be favored by natural selection in the long run.

Comprehension Checkpoint

Selective pressure weeds out traits that do not help a species to

The scientific method has its own ways of pruning, as lesser ideas are separated from good ones that explain the data in better ways. The idea of natural selection has survived many tests and challenges as progress in many fields leapt far beyond what was known in Darwin's day. One might have guessed, for example, that the principle of natural selection would fail when we finally learned the basics of heredity decades after Origin was published: Darwin didn't have a clue how traits were passed down across the generations. Yet the theory still stands. For every decade that passes, it only becomes stronger as genetics, molecular biology, geology, paleontology, and other disciplines continue to explain phenomena new and old without having to invent another evolutionary mechanism to replace natural selection.

The second in a series discussing the work of Charles Darwin, this module takes a deeper look into the processes that led to Darwin's theory of natural selection and examines specific mechanisms that drive evolutionary change. Key points on which the idea of natural selection rests are outlined. Examples from Darwin's personal life shed light on his thinking about change within a species and the "struggle for existence."

Key Concepts

  • Variation within a species increases the likelihood that at least some members of a population will survive under changed environmental conditions.

  • The common characteristics of individuals within a population will change over time, as those with advantageous traits will come to be most common or widespread.

  • While evidence of evolution by natural selection exists, its effects cannot be predicted.

  • HS-C7.2, HS-LS4.B1, HS-LS4.C1

Alfred L. Rosenberger, Ph.D. “Charles Darwin II” Visionlearning Vol. BIO (4), 2004.




Evolutionary Biology

by Alfred L. Rosenberger, Ph.D.

The first edition of Charles Darwin's groundbreaking book, On the Origin of Species by Means of Natural Selection, had only one illustration in it – a picture of a family tree, or descent, also called phylogeny. For the book publisher, this must have been an expensive investment and a somewhat worrying choice. The diagram was printed on oversized paper that had to be unfolded out of the volume to be seen, an expensive printing task. In addition, to have a single drawing in a book was unusual in the middle 1800s because realistic illustrations of plants and animals were considered to be highly artistic. Illustrations were an important selling point of popular books about natural history such as Origin, a non-technical work written for the general public to read. Yet, all 1,250 copies of Origin sold in a day. For Darwin to have placed only a single picture in the book, he must have considered it crucial to his discussion.

Figure 1: This was the only illustration that appeared in Origin of Species.

Darwin's single picture was a chart, not a portrait of exotic species or even a map. With it, Darwin sought to explain a new concept in science: how various pieces of biology fit together to explain the origins and evolution of species. This involved many details and a lot of ideas. His one drawing was meant to illustrate all of the following:

  • How natural selection works over generations to promote structural variation in the physical form or behavior of an organism.
  • How these variations accumulate to change a species over time.
  • How populations within a species tend to become different from one another.
  • How structural change eventually produces new species.
  • How several species can arise over time from a single ancestral species.
  • How a new genus can evolve from a line of new species.
  • How extinction is a natural part of the evolutionary process.
  • How all species are actually related to one another.
  • How clusters of similar species can form because they have a common origin.

Evolution is a complex, multi-faceted process, and this is a challenging set of ideas to relate. So it is no wonder Darwin focused on producing a graphic to help explain them to the world. He also wrote about five pages explaining how the diagram was to be read.

Darwin chose his words carefully. Here and elsewhere in Origin he used a certain phrase – "descent with modification" – over and over again as an expression for "evolution". Why this instead of the simple one-word term "evolution"?

Part of the answer must be that evolution was still a fuzzy concept, and it was Darwin's job to make it clear.

At that time, scientists were commonly using the term evolution in discussing physical growth, the changes an individual goes through as it matures. The other meaning of evolution referred to structural change in a species that took place over time, which some, including Darwin, also called transmutation. So, there was a reason for his preference for "descent with modification" over "evolution". First, he wanted to make clear that his discussion of evolution dealt with transmutation (modification), not growth and development. Second, Darwin meant to emphasize that the big picture of biological evolution was far more complicated than the image of the fur of a fox transmuting from reddish brown to white as an adaptation to life in the Arctic. It also involved the production of a pedigree linking species that are genetically related through the process of descent.

Figure 2: The beaks of four species of Galapagos finches, from Darwin's Journal of Researches, 1839. Darwin found that the beaks of finches on islands throughout the Galapagos were specialized to optimize the diet available to them. Thus, finches on islands where large, hard-shelled nuts were prevalent developed robust beaks (far left), and finches on islands where insects or flowers were available developed delicate, pointy beaks (far right).

The concept of phylogenetic descent was a new idea that made Darwin's theories of evolution more sensible than previous attempts to explain the patterns naturalists had observed. For many decades before Origin appeared, natural scientists had wrestled with a puzzling problem of biodiversity. While taxonomists who classified organisms never set out to find patterns, it was clear to all those who studied taxonomy that there was a "natural order" that grew out of the process of classifying animals and plants (see our Taxonomy I: What's in a name? module).

Scientists wondered why, when classified, groups of species seemed to form clusters, as if some sort of biodiversity magnet pulled them together and put them in one place. Within clusters, species tended to be similar to one another by different degrees. Surely it was no coincidence that all species of cats are alike, from the alley cat to the lion to the prehistoric saber tooth that roamed the western United States. Chance could not be the reason why dogs, wolves, and coyotes are all variations on the theme of "Dog." Similarly, chance could not explain the similarities and differences of the Galapagos finches that Darwin collected while he was with the Beagle expedition. What was behind the repeated pattern of species clusters that was so common in nature?

Comprehension Checkpoint

Darwin preferred the term "descent with modification" over "evolution" because

Before Darwin, there was only one available model that naturalists used to explain species' similarities and differences, and that would not work to solve this problem. Scientists had thought that the most important pattern of biodiversity was what they called the Scale of Nature. This was the notion that the vast range of living organisms – say, from snail to ant to fish to mouse to monkey to man – was a feature of divine creation meant to highlight our own superiority. Life, they thought, was arranged like a set of stairs, with "lower" forms situated on the bottom and "higher" forms, humans, appearing at the top. This idea can be traced as far back as Aristotle, more than 2,000 years ago, and it was popularized in the 1800s by writers like Robert Chambers. Of course, it was all based on assumption. No evidence was provided to support the model, but it was generally accepted by tradition.

Figure 3a: Scale of Nature Model
Figure 3b: How the Scale of Nature and Phylogenetic Models interpret the anatomical similarities and differences between apes and humans. The Scale of Nature model assumes a hierarchy of lower and higher organisms, while the Phylogenetic model does not. The current phylogenetic relationship among chimps, gorillas, and humans is different from that believed to be true in Darwin's day and is shown in the green inset.

The Scale of Nature, which was actually more of a "macro" view of life, would not work for Darwin because it did not relate to the clusters of similar species that he had observed. Why would so many types of cats or finches exist? The Scale of Nature suggested an unchanging, linear quality to evolution, but that surely could not explain the explosive variety of adaptations that Darwin saw among the finches he found on the Galapagos Islands during the Beagle voyage. Darwin observed finches that were adapted to feeding on different things: birds with beaks that were specialized to eat seeds, leaves, insects, or nectar. This list of food sources could not be interpreted as a scale-like linear climb from a worse-adapted food source to a better-adapted one, or from poor food to rich food.

Comprehension Checkpoint

The __________ model assumes a hierarchy of higher and lower life forms.

Instead of the macro view offered by the Scale, Darwin was focused on a "micro" view of biodiversity: What could explain the small variations distinguishing species that actually resembled one another? He came to see that evolutionary changes on the micro level would add up to the differences that were obvious at the macro level. So, instead of a stairway or ladder as a metaphor for understanding the cluster pattern of biodiversity, Darwin pictured a tree.

This was a brilliant insight. Rather than being arrow-like and linear, a tree has many elements that spread out in different directions. Rather than being static, it is dynamic. It grows over time, just as evolution is embedded in time. It sprouts branches, as if it were generating new varieties and new species. Or, it may have branches that do not subdivide. Some branches grow straight up, parallel to the trunk, while most head off in different directions as they develop, resembling alternative adaptations. Some branches grow into stumps and die out, becoming extinct. Others may grow long and last for generations, thousands of years, tens of thousands of years, and even longer. None of the branches of a tree is judged to be any better than others; none is superior and none is inferior. They are all simply different.

Crucially important is the fact that all the branches of a tree are interconnected. You can trace their origins from their endpoints to the parent shoots from which they grew, just as you might trace the roots of dogs, or cats, or Galapagos finches to their original ancestral species.

The Origin tree diagram illustrates how a branching pattern of evolution can produce a greater number of species over time than what was there to begin with. It shows how some lines of species, or lineages, split more frequently than others. It shows that some lineages do not split at all but evolve almost like a column. It shows that extinction is a basic property of descent: Many populations are left behind and do not reach the top because they have died out.
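To make the branching arithmetic concrete, here is a minimal sketch in Python of the kind of pattern the Origin diagram depicts. The per-generation probabilities are illustrative assumptions of ours, not values from Darwin; the point is only that repeated splitting, persistence, and extinction can turn one ancestral lineage into many.

```python
import random

# Toy branching model of descent with modification. The probabilities
# below are illustrative assumptions, not historical or measured values.
P_SPLIT = 0.2    # a lineage forks into two daughter species
P_EXTINCT = 0.1  # a lineage ends in a "stump" and dies out

def simulate(generations, seed=42):
    """Return the number of living lineages after each generation."""
    rng = random.Random(seed)
    lineages = 1             # a single ancestral species
    history = [lineages]
    for _ in range(generations):
        survivors = 0
        for _ in range(lineages):
            r = rng.random()
            if r < P_SPLIT:
                survivors += 2       # branch splits in two
            elif r < P_SPLIT + P_EXTINCT:
                pass                 # branch goes extinct
            else:
                survivors += 1       # branch grows on, column-like
        lineages = survivors
        history.append(lineages)
    return history

print(simulate(25))  # lineage counts per generation, starting from one
```

Run long enough, a single starting branch yields clusters of related lineages alongside extinct side branches, which is exactly the pattern Darwin's figure was drawn to convey.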

Comprehension Checkpoint

Darwin saw evolution as a

Figure 4: Darwin's earliest depiction of the tree of life, showing how many species, closely or distantly related, might evolve from a single ancestor.

Coming up with this tree-model of evolutionary process and pattern was not easy. In fact, Darwin's personal notebooks reveal how his own understanding grew over the years. In one notebook, which he began writing soon after returning from the Beagle expedition, he drew a crude stick-like diagram to show that many species could evolve from a single ancestral species by somehow splitting apart.

This early graphic, shown in Figure 4, is shaped like a cross between a tree and a starburst. It seems as if Darwin was trying to form an idea of how the great diversity of species could come about naturally from a single origin, rather than each species being specially created. It is a flat image, as if he were drawing the diversity of Galapagos finches on a map of the islands. That, in fact, was key to his figuring out that evolution had occurred on the Galapagos, that is, how the birds had evolved across space. But his 1859 drawing, clearly for the first time, provided the blueprint of evolution through time. It illustrated his notion of descent with modification: how natural selection produces change, and also a pedigree of connections between species that shows where they came from historically, meaning phylogenetically.

To better understand the one illustration that Darwin included in Origin, it is helpful to picture the finches that he studied on the Galapagos. What would this diagram look like if he had illustrated it with species of finch? At the bottom of the figure we would see the ancestral finches. As the lines diverge and branch out higher in the diagram, new species of finch would appear, leading to the array of modern birds at the top of the picture. To better illustrate this idea, work your way through "Darwin's Finches," the interactive animation linked below.


Interactive Animation: Darwin's Finches

With the tree of life as a metaphor for evolution, Darwin changed the way both scientists and the public view the origin of species. There would no longer be a need to interpret the biodiversity of nature as a ladder or scale, with some species better or worse than others due to the details of their size, fur, and teeth or as measured by their intelligence or ancestry. All are adapted to their specific environments, even though some might not survive. And all, at some level, share a common source of origin.

Our understanding of the term evolution has changed significantly since Darwin's time. This module explains how Darwin's work helped to give evolution the meaning it has today. It details the concept of "descent with modification" that Darwin described with the one figure originally included in Origin of Species. The module discusses how this model revolutionized scientific thinking about the similarities and differences between and within species, laying the foundation for our current understanding of biodiversity.

Key Concepts

  • Darwin's theory of Descent with Modification shows how, as organisms reproduce, slight changes create variation, which can lead to new species over time.

  • Darwin provided the first model that could logically account for biodiversity, explaining lineage and the small variations that distinguish one species from another, similar-looking one.

  • Darwin's work radically changed thinking regarding the Scale of Nature, a model that suggested that some species were naturally inferior to others, and showed that species evolved in response to environmental pressures, not because of some hierarchy of order.

  • HS-C2.1, HS-C2.2, HS-LS4.B1, HS-LS4.B2, HS-LS4.C1, HS-LS4.C2

Alfred L. Rosenberger, Ph.D. “Charles Darwin III” Visionlearning Vol. BIO-2 (5), 2004.



Page 20

Evolutionary Biology

by David Warmflash, MD, Nathan H Lents, Ph.D.

Beginning with Stanley Miller’s famous experiment in 1952, origins of life researchers have shown that all major classes of life’s chemical building blocks can form spontaneously under conditions found on the young Earth. While sugars, amino acids, and fatty acids are necessary for abiogenesis (the spontaneous emergence of life) to occur, they are not all that is needed for life to happen. Another big jump must occur: They must form long polymers and fold into complex shapes. These large molecules made of small building blocks are called macromolecules.

Some macromolecules are important for their catalytic abilities, which means they facilitate chemical reactions that cannot occur without help. These catalysts are called enzymes and are usually made of protein, although on early Earth the first catalysts may have been made of RNA. Other molecules are important because they store and process genetic information so that it can be retained and passed on for generations. Like letters of the alphabet linked into words, small building blocks such as amino acids and nitrogenous bases can hold information only when they are linked together to form large polymers.

While organic building blocks like sugars, amino acids, fatty acids, and nitrogen bases formed easily in multiple environments found on the primeval Earth, polymerization into functional macromolecules was a little trickier and certainly took more time. Experiments, like the one conducted by Tracey Lincoln and Gerald Joyce in 2009, tell us that groups of RNA molecules can grow, sustain one another, and even evolve, all in the complete absence of protein enzymes (see Origins of Life I: Early Ideas and Experiments for more details on this experiment). This is key to our understanding of how the chemistry of life may have started, and most scientists support the notion that RNA was the first large functional macromolecule to have appeared on early Earth.

There is just one problem, however. The experiments that show that catalytic RNA molecules can form, copy each other, and evolve all take place in test tubes. A confined space is crucial for these reactions to occur so that the molecules can accumulate to a high concentration, bump into each other frequently, and perform chemical reactions on each other. There were no test tubes on early Earth, so how could the RNA enzymes have ever formed in the first place? Once formed, the small building blocks would simply diffuse away from each other and be diluted into the vast depths of the ocean.

But nature can make its own version of a test tube: Our cells are tiny compartments, separated from the outside environment by membranes, which serve as barriers, like the glass around a tube. Membranes are made of lipids, which existed on the ancient Earth, alongside amino acids, sugars, and nitrogenous bases. For this reason, abiogenesis is linked to the question of whether membranes might have formed spontaneously from lipids in Earth’s primeval environments. If so, a drop of ancient water containing building blocks of proteins and RNA would also have contained trillions of tiny membrane-bound compartments. Holding the building blocks inside, those compartments would have acted as nature’s laboratories, and this scenario would have been vital to the origin of living cells.

In the early days of abiogenesis research, the emergence of membranes sounded like a chicken and egg problem. Scientists understood that modern cells build and maintain their membranes using protein enzymes; however, without membranes, there is no compartment in which enzymes can be built. Without enzymes, there is no way to make membrane lipids. Also, modern membranes include various proteins interspersed with the lipid molecules. These membrane proteins perform a variety of specialized functions, such as catalyzing biochemical reactions and acting as “gates” to permit specific molecules and ions into and out of the cell (read our modules Membranes I: Introduction to Biological Membranes and Membranes II: Passive and Active Transporters to learn more).

Early membranes must have been very different from modern ones. Unlike the complex systems of lipids and proteins that comprise living membranes, simple kinds of membranes can actually form spontaneously and may have been forming all over the primeval Earth.

Anybody who makes bubbles with soap has seen the tendency of lipid molecules to form spherical shapes when they are in contact with water (see our Lipids: An introduction module). Soap molecules are amphipathic, meaning they possess both a water-loving and a water-hating portion. Each molecule has a “head” section that is hydrophilic (water-loving), because it contains polar covalent bonds using atoms such as oxygen, nitrogen, and phosphorus. Each molecule also has a “tail” that is hydrophobic (water-hating), because it consists of nonpolar bonds of only carbon and hydrogen atoms (Figure 1; see our Membranes I: Introduction to Biological Membranes module to learn more).

Figure 1: A phospholipid. Note the hydrophilic, or water loving, "head" section and the hydrophobic, or water hating, "tail" section. image © OpenStax College

The simplest amphipathic molecules are a type of lipid called a fatty acid. Each fatty acid molecule is a hydrocarbon chain with a carboxyl (COO-) group attached to one end. This charged group is very soluble in water. However, the hydrocarbon tail, with its many nonpolar C-H bonds, is very hydrophobic. This is why fatty acids are useful in making soap. They can dissolve oils using their hydrocarbon tails, but are still soluble in water because of their COO- head groups. These two abilities let soap break up oil deposits and stains so that the oil can then be washed away with water.

It has been known since the 1960s that fatty acids in a watery solution will be attracted to each other and form spherical structures called micelles. The shape of the micelle tucks the hydrophobic tails together, away from the water solvent (Figure 2).

The COO- group of fatty acids can also serve as an attachment point for connecting with other molecules, leading to much more complex amphipathic molecules with extremely hydrophilic heads. Phospholipids are the complex amphipathic molecules that make up cell membranes today. Living cells use enzymes to build these phospholipids out of fatty acids, glycerol, and phosphate as substrates. Although all of these building blocks existed on Earth prior to life, it was not known if they could spontaneously form into phospholipids without the help of an enzyme.

Comprehension Checkpoint

The earliest membranes on Earth were most likely just as complex as modern membranes.

In 1977, chemist David Deamer mixed glycerol, phosphate, and several different fatty acids in a test tube. When he put this mixture under primeval conditions (a mixture of the gases thought to exist in Earth’s primeval atmosphere), he found that phospholipids emerged spontaneously.

This was a key discovery because phospholipids spontaneously self-assemble into three-dimensional membranes whenever they are placed in water. Just as fatty acids are attracted to each other and form micelles under most conditions, phospholipids are attracted to each other and form membranes. The difference is that the membrane is called a “bilayer” because the phospholipids arrange themselves in two rows facing back-to-back. Like the micelles, this tucks the hydrophobic tails inward, away from the water (Figure 2).

Figure 2: Three of the different structures phospholipids can form in an aqueous solution: micelle, liposome, and bilayer sheet. In this depiction, the hydrophilic heads are round and white and the hydrophobic tails are yellow wavy lines.

In his experiment, Deamer also included amino acids in the mixture. The amino acids didn’t contribute to the formation of membranes; they were included to see if some of them would be “trapped” inside the membrane compartments when they spontaneously formed. They were. This was a crucial discovery because it showed that the early building blocks found on primordial Earth could spontaneously give rise to membrane spheres and that these spheres would randomly trap nearby molecules inside them. These structures are called liposomes, and many believe that they were the ancient ancestors of living cells.

While this was a very encouraging finding, phospholipids are fairly complex molecules and Deamer needed to use the ingredients at very high concentrations to get them to form liposomes. Many have doubted that these molecules would have been found at such high concentrations on early Earth.

Hope of resolving this conundrum literally fell out of the sky in 1969 when a meteorite landed in the town of Murchison, Australia (Figure 3). A piece of this meteorite was delivered to NASA’s Ames Research Center in Mountain View, California, and scientists began to analyze its chemistry. One NASA scientist, George Cooper, discovered sugars in the meteorite, while others discovered amino acids – not just the 20 found in life on Earth, but 70 different kinds. As a professor at the nearby University of California, Davis, Deamer had colleagues at Ames, and he eventually obtained his own sample of the meteorite on which to conduct his own tests.

Figure 3: The Murchison meteorite, which landed in Australia in 1969, has been shown to contain many types of chemicals required by life on Earth. On the right is a pebble-sized fragment of the meteorite; when magnified 10 times and placed in polarized light, a slice of the meteorite reveals various minerals in different colors. image © NASA

Deamer began by grinding up a small part of the meteorite and performing an organic extraction to pull any hydrophobic compounds out of the dust. Then he put those organic molecules into water to see how the molecules would behave. This was a vital experiment, since the molecules extracted from the meteorite were as ancient as the meteorite itself. They were molecular remnants of the solar system when it was young, four billion years ago, when the same kinds of molecules were being delivered to a primordial Earth that was also accumulating water from comets.

Immediately, the material coalesced to form micelles and membrane liposomes. Analysis of the membrane-forming compounds showed that they consisted of various fatty acids, with hydrocarbon tails of varying lengths. While fatty acids are not as efficient as phospholipids at forming membranes, their hydrophobic tails do gather together spontaneously and create tiny compartments, either micelles (with hydrophobic tails filling the inside of the sphere) or liposomes (spheres surrounded by a lipid bilayer, like a cell).

Whether free fatty acids in water form micelles or liposomes depends on various chemical conditions in the aqueous solution where they are placed, most importantly the pH. COO- groups in fatty acids are usually unprotonated (that is, they do not have a hydrogen (H) atom attached). But under acidic conditions, a proton will join to form COOH instead. For any mixture of free fatty acid molecules, like those extracted from the Murchison meteorite, as the pH of the environment drops, more and more of the fatty acid molecules pick up a proton, so their carboxyl groups go from COO- to COOH. This makes the carboxyl less hydrophilic, making it harder to form a bilayer, so the fatty acids form micelles instead. On the other hand, free fatty acids form bilayers very easily to create liposomes at pH ranges between 7 and 9, depending on the exact type of fatty acid. By tweaking factors other than pH in his solutions, Deamer was able to get liposomes from the Murchison fatty acids, even at low pH.
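This pH behavior follows the standard Henderson-Hasselbalch relation for a weak acid. The short Python sketch below is illustrative only: the apparent pKa of 8.0 is our assumption, chosen to sit inside the pH 7-9 bilayer-forming window mentioned above, since the effective pKa of fatty acids in aggregates varies with chain length and conditions.

```python
# Fraction of fatty acid head groups in the charged COO- form as a
# function of pH, via the Henderson-Hasselbalch relation. The apparent
# pKa of 8.0 is an illustrative assumption, not a measured value.
def fraction_charged(pH, pKa=8.0):
    """Return the deprotonated (COO-) fraction of carboxyl groups."""
    return 1.0 / (1.0 + 10 ** (pKa - pH))

for pH in (3, 5, 7, 8, 9):
    print(f"pH {pH}: {fraction_charged(pH):.3%} of head groups are COO-")
# At pH 3 nearly every head group is COOH (favoring micelles); near
# pH 8-9 most are COO- and bilayer liposomes become possible.
```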

Comprehension Checkpoint

When phospholipids are placed in water, they spontaneously form

Given that organic solvents were abundant on early Earth, as was water, and that meteorites bombarded Earth much more often than they do now, it appears possible, likely even, that Earth received membrane-forming materials via meteorites. Whether the fatty acids formed naturally on Earth or came from a meteorite, the formation of membranes was a crucial step in the origin of living cells.

Without time-travel, we’ll probably never know the precise order of events and source of the biomolecules that gave rise to cells. However, Deamer did the next best thing to going back in time. He visited some volcanoes, which exhibit some peculiar chemical conditions that are rare on Earth now, but were more common four billion years ago.

In places such as Iceland, Hawaii, and Mount Mutnovsky in Kamchatka, Russia, volcanic eruptions heat the surrounding land to such an extent that the area is sterilized for years to come. Little ponds of hot, acidic water often form in the area, constituting an environment similar to what scientists think existed on Earth at the beginning of the Archean eon, just before the emergence of life (Figure 4). On one of his trips, in 2009, Deamer conducted an experiment. Into pools of hot, acidic (pH 3) volcanic water, he poured mixtures of the kinds of fatty acids, amino acids, and other compounds present in the Murchison meteorite. Just as had happened in the lab, lipid membranes spontaneously formed, producing tiny liposomes despite the low pH. And just as they had in his 1977 laboratory experiment, building blocks such as amino acids often were enclosed within them.

Figure 4: Artist's rendering of ancient hydrothermal pools. The inset picture shows a modern analogue, Bumpass Hell in California. Image from Damer, B. & Deamer, D. (2015). Coupled phases and combinatorial selection in fluctuating hydrothermal pools. Life, 5(1): 872-887. image © Bruce Damer and David Deamer

Comprehension Checkpoint

Volcanoes have chemical conditions similar to those of early Earth.

Though highly sensitive to pH and less efficient than phospholipids at forming membranes, free fatty acids on early Earth would have offered a distinct advantage to an emerging system of prebiotic liposomes. Genetics professor Jack Szostak of Harvard Medical School achieved celebrity status for his work conducted in the 1980s showing how telomeres protect chromosomes. The research earned him a Nobel Prize in 2009, but since the 1990s he has been working on the origins of life. Being a geneticist, Szostak started out from an RNA/DNA perspective, trying to understand how RNA could form on the early Earth from its building blocks. But like Deamer, Szostak recognized early on that primeval lipid compartments would be needed to make the whole thing happen. So he began studying fatty acids and phospholipids and the different kinds of liposomes that they could form under simulated primeval conditions.

Labeling both free fatty acids and phospholipids with special fluorescent molecules, Szostak was able to monitor and compare the physical behavior of liposomes formed from fatty acids with those formed from phospholipids. He found that both kinds of liposomes could form in a wide range of sizes, but that the liposomes made of free fatty acids were more dynamic. Their fatty acids jump around constantly, not only within the membrane of each liposome, but also between different liposomes.

Thus, liposomes are constantly exchanging fatty acids, and yet as a unit each liposome remains stable. On the pre-biotic Earth, this would allow individual liposomes to exist on a continuous basis, while “trying out” different types of membrane fatty acids. At the same time, anything inside the liposomes – RNA or DNA building blocks, for instance, and any polymers made from them – would also be exchanged between liposomes. Effectively, spontaneously forming primeval liposomes would be “protocells,” tiny laboratories for isolating and testing the stability and chemical abilities of macromolecules like RNA (Figure 5).

Figure 5: An artist's rendering of a protocell, showing the fatty acid membrane surrounding RNA ribozymes. image © Janet Iwasa, Szostak Laboratory, Harvard Medical School and Massachusetts General Hospital

Comprehension Checkpoint

Which would make a better primeval membrane?

The incorporation of building blocks of RNA, DNA, and proteins within spontaneously forming liposomes would not have been enough to trigger abiogenesis. To move from prebiotic chemistry to a truly living cell, a system of molecules would need to develop copying capabilities – and not just the ability to copy any molecule, but specifically to copy themselves. Such ability requires polymers because only polymers can store and manage information. This raises a question: How could a collection of building blocks polymerize while enclosed within a tiny compartment? RNA can have catalytic ability, and Lincoln and Joyce have shown that such catalytic ability can evolve. But the Lincoln-Joyce experiment started with RNA enzymes, not individual nucleotide building blocks.

A major insight toward unraveling this conundrum came when Szostak started working with montmorillonite clays. These clays are created when volcanic ash is subjected to weathering. Researchers think they were present on early Earth because of high levels of volcanic activity during that era. Montmorillonite clays are capable of facilitating chemical reactions on RNA nucleotides – indeed, on many different types of biological building blocks. For this reason, Szostak began working with montmorillonite both with RNA/DNA nucleotides and fatty acids. He found that not only do the clays catalyze nucleic acid polymerization, but they also help bring fatty acids together to form lipid bilayers. In other words, the same entity that helps form membranes also helps RNA nucleotides link into short chains, called oligomers. Oligomers are structurally similar to polymers, but smaller in size.

On top of this, both Szostak and Deamer found in slightly different ways that lipids themselves can help RNA nucleotides link up to form RNA polymers. The process is simple: fatty acids are placed on a microscope slide and allowed to dry out. Then, RNA nucleotides are added and the slide is dried out again. Then water is added and the slide is dried out yet again. As these steps are repeated in succession, gradually, nucleotides link up to form strands of RNA. The linking happens during the drying in a reaction called dehydration synthesis, and the shape of the fatty acid molecules helps the reaction occur.
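As a rough illustration of how repeated cycles could ratchet strand length upward, here is a toy Python model of our own construction. The joining probability is arbitrary and the chemistry is vastly simplified; the sketch shows only how repeated chances to condense let short strands grow into long ones.

```python
import random

# Toy model (our construction, not the real chemistry) of the wet-dry
# cycling described above. Strand lengths are integers; each drying
# step gives neighboring strands a chance to join by dehydration
# synthesis. The joining probability is an arbitrary assumption.
JOIN_PROB = 0.3

def wet_dry_cycle(strands, rng):
    rng.shuffle(strands)          # "wet" phase: strands mix freely
    merged, i = [], 0
    while i < len(strands):
        if i + 1 < len(strands) and rng.random() < JOIN_PROB:
            merged.append(strands[i] + strands[i + 1])  # condensation
            i += 2
        else:
            merged.append(strands[i])
            i += 1
    return merged

rng = random.Random(0)
strands = [1] * 1000              # start from single nucleotides
for _ in range(20):
    strands = wet_dry_cycle(strands, rng)
print(f"longest strand after 20 cycles: {max(strands)} nucleotides")
```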

The fact that both fatty acids and Montmorillonite clay can assist the formation of long RNA molecules shows that there are different possible pathways to the same result, which is a good thing in origins of life research. In order to go from molecules swirling in a soup to a living cell, many chance events are necessary, so the more ways that something is possible, the more likely it is to occur.

Furthermore, this result argues that, once RNA nucleotides formed and became enclosed within a primitive membrane, this would not only have concentrated the nucleotides, but also helped them unite into RNA molecules (Figure 6) that could then evolve into enzymes and/or store information. If this happened over and over, the RNA molecules would get longer and longer, and recombination of different sections would occur, as in the Lincoln-Joyce experiment. Effectively, the primeval liposomes would be natural laboratories for a wide variety of creative RNA chemistry.

Figure 6: A single strand of ribonucleic acid (RNA) showing the four bases: adenine (A), cytosine (C), guanine (G), and uracil (U).

Comprehension Checkpoint

During a reaction called dehydration synthesis, nucleotides link up to form

In prebiotic times, the lipid membrane-bound liposomes that formed spontaneously and enclosed biological building blocks and catalyzed their polymerization would have allowed nature to run multiple chemical “experiments.” When we talk about a “test tube,” for instance in connection with the Lincoln-Joyce experiment, we don’t really mean just one glass container. Usually biologists run multiple samples side by side in test tube-like wells on plates containing multiple wells. This allows for multiple copies of the same kind of sample and also for different conditions and mixtures that can be compared. When working manually, biochemists commonly use 96-well plates.

In certain areas of research, particularly in the pharmaceutical industry, robotic technologies are used to run thousands or hundreds of thousands of differing reactions side-by-side. The lipid-bound liposomes are nature’s way of doing the same thing, but on a scale no pharmaceutical company could ever match. Over the course of hundreds of millions of years, the entire world was like a giant laboratory with trillions, even quadrillions, of liposomes acting as tiny test tubes. When the number of “trials” is that high, even extremely rare and improbable events are sure to happen at least occasionally. You might think that shuffling a deck of cards and ending up with all four aces on top would be impossible. It’s not impossible; it’s improbable. Shuffle enough times and it will eventually happen.
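The card example is easy to check with a little arithmetic. The Python sketch below computes the exact odds that a fair shuffle leaves the four aces on top, and from that, roughly how many shuffles you would expect to need.

```python
from math import comb

# Probability that the top four cards of a well-shuffled 52-card deck
# are exactly the four aces (in any order): one favorable set of four
# cards out of all C(52, 4) equally likely top-four sets.
p = 1 / comb(52, 4)
print(f"P(four aces on top) = 1/{comb(52, 4):,} = {p:.2e}")
print(f"Expected shuffles before it happens: about {round(1 / p):,}")
# -> 1/270,725, so on average about 271,000 shuffles. Rare for a card
#    player, but trivial for trillions of liposome "trials".
```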

So it was with early Earth. Trillions of liposomes were forming, growing, splitting, and capturing inside them whatever molecules happened to be nearby. Once formed, the liposomes were helping the captured building blocks to polymerize. Since the liposomes were dynamic, exchanging fatty acids between their membranes along with various molecules from within their interiors, each liposome had a chance of capturing something novel and adding it to its interior. If a liposome then captured an RNA- or protein-based enzyme that aided its lipid metabolism, for example by converting the energy of sugars into the formation of lipids, that would be a further enhancement. It would allow the construction of more membrane material that could separate into a daughter liposome. Such liposomes would be protocells at that point. Not life, but well on the pathway to it.

Any protocell with an RNA enzyme that could make crude copies of itself, either acquired from outside, or built inside, would accumulate multiple copies of that RNA enzyme. The copies would recombine like the RNAs in the Lincoln-Joyce experiment and Darwinian selection would be in full swing. Selection would favor molecules, or systems of molecules, that could self-copy with ever increasing accuracy and efficiency.

If all of this happened in one protocell, it would also happen in many. And so, Darwinian selection would expand from the molecular level to the protocell level. Those protocells with the best copying molecules inside, and also with the best helping molecules, would have a competitive advantage. Gradually, the sieve of natural selection would favor protocells with ever-increasing abilities that could help the process of reproducing the self-replicating molecules. That would mean reproducing the entire system within the protocell's membranes.

By reproducing the entire system within such a protocell, by building new membranes to enclose new copies of the entire system, the protocells would, effectively, be reproducing just like tiny organisms. In such a system, chemical reactions and the molecules needed to catalyze them would all depend on one another. Unlike the earliest liposomes containing mere building blocks, evolution at the protocell level would work in favor of everything within the protocell membrane. Complexity would increase incrementally to improve the reproductive capabilities and all protocell functions needed to support those capabilities. Gradually, imperceptibly, the first living cells would have emerged.

Building on earlier experiments showing how life’s chemical building blocks could form from nonliving material on early Earth, this module explores theories on the next steps needed for life. These include the formation of long polymers, which then fold into complex macromolecules. The module describes experiments in an environment like that of primordial Earth, resulting in the spontaneous emergence of phospholipids, which could form into membranes, paving the way for RNA duplication and the eventual emergence of living cells.

Key Concepts

  • For life to occur, smaller molecules must join together and form polymers, which then fold into complex shapes. These large molecules are called macromolecules.

  • Simple membranes made of lipids may have served as nature’s test tubes, providing the enclosed environments necessary for RNA enzymes to develop.

  • Liposomes, the possible ancestors of living cells, may have been created from phospholipids formed from the gases of Earth’s primeval atmosphere or from free fatty acids delivered to ancient Earth via meteorites.

  • To trigger abiogenesis, a system of molecules would need to develop the ability to copy itself, an ability that requires polymers.

  • Protocells made of liposomes that exchanged fatty acids between their membranes possibly absorbed RNA enzymes and made copies of themselves, leading to the evolutionary development of living cells.

David Warmflash, MD, Nathan H Lents, Ph.D. “Origins of Life II” Visionlearning Vol. BIO-4 (7), 2016.



Page 21

Evolutionary Biology

by David Warmflash, MD, Nathan H Lents, Ph.D.

The work of Darwin and Wallace went a long way in answering the question of how species evolved over time. The theory of natural selection provided a mechanism by which complex life forms, including humans, could arise from simpler organisms. But that still left open a more difficult question, namely, what is the origin of life itself? It’s one of the most challenging questions in science, even today, when we can say with confidence when life appeared on Earth.

Layered fossil structures called stromatolites, built up by communities of microorganisms known as microbial mats, suggest that Earth harbored microorganisms 3.5 billion years ago (Figure 1). Also, the presence of particular carbon isotopes in certain metamorphic rocks in Greenland tells scientists that some kind of life may have been present as much as 3.8 billion years ago. This means that 700 million to one billion years after Earth had formed, life was here. It makes sense, because it corresponds to the time when the planet had reached a cool enough temperature for any life to survive. But homing in on the time when life appeared on this planet still does not tell us how life came to exist.

Figure 1: Stromatolites in the Soeginina Beds of Estonia showing the characteristic layered structure due to the accretion of microbial mats. image © Mark A. Wilson

From prehistoric times, people sought mostly spiritual answers to this question. Around campfires during the Stone Age, each budding culture told and retold tales of how the gods created life from some kind of nonliving material, be it mud, clay, rock, or straw. The details of the ancient creation stories changed noticeably over time, but religion was still the mode of thinking by Darwin’s era when it came to the initiation of life itself. Darwin did consider the origin of life and speculated that it had occurred in a warm pond. He suggested that phosphoric salts and ammonia in the pre-biotic pond somehow had been changed chemically by heat, light, and electricity, leading to the synthesis of the organic compounds needed to produce the first living cells. Darwin was not a chemist, and this was a very cursory speculation about Earth’s pre-biotic chemistry. It contrasted sharply with the detail and systematic approach of Darwin’s own theory of natural selection.

Even so, the pond idea was a start. Despite living in a society that almost universally assumed Earth had an intelligent creator, scientists in Darwin’s time were already comfortable with and accustomed to considering the possibility of life getting started without intervention from the gods. The idea was called spontaneous generation and, while it was already very well established by Darwin’s time, it dates all the way back to the time of the ancient Greeks.

About 2,600 years ago, in the Ionian city of Miletus (Figure 2), the natural philosopher Anaximander (c. 610–546 BCE) pondered how human babies were born utterly helpless. Without their parents, young humans had no chance to survive and the state of helplessness continued for years. That reality made for a dilemma when considering the first generation of humans, which, Anaximander assumed, must have begun as infants. To grow up and have their own babies, human ancestors in the very distant past must have been more independent as newborns, Anaximander reasoned. They must have been more like certain other animals whose young are born ready to survive on their own.

Figure 2: Location of Miletus on the western coast of Anatolia, what is now part of modern Turkey. Miletus was the home of three ancient philosophers: Thales, Anaximander, and Anaximenes.

Considering the various animals, Anaximander decided the ancestors of humans had to be fish. Unlike mammals, which need their mothers to get started in life, fish simply emerge from their eggs and either survive or die. This means that distant human ancestors could have survived as infants if they were more like fish than like humans.

Even in Anaximander’s time, people saw skeletons from long-dead creatures. Fossils of extinct life were found long before paleontologists went looking for them. Ancient Greeks lived by the sea, and often the sea washed up skeletons or eroded the ground to expose buried bones. Living in this environment, Anaximander had a general idea of skeletal anatomy and how it was similar and different between humans and other animals. Because of this, he decided that the transition from fish to humans must have been gradual. In other words, humans descended from fish through an evolutionary process.

Since Anaximander proposed no idea of how the apparent evolution from fish to human had taken place, it was not an early form of Darwin’s theory of natural selection. But it was the beginning of thinking that life on Earth began with small organisms. Anaximander’s idea quickly led to the idea that small organisms were generated through a natural process from nonliving matter, such as the mud at the bottom of the sea.

Over the next centuries, Greek thinkers such as Anaximenes (588–524 BCE), Xenophanes (576–480), Empedocles (495–435), Democritus (460–370), and finally Aristotle (384–322) developed and modified the spontaneous generation idea so that it corresponded to what people often observed on land. Farmers leaving grain in an open container noticed that pretty soon mice appeared, as if the grain generated the mice. People leaving meat untended returned to find maggots infesting the meat, as if the meat generated the maggots.

Comprehension Checkpoint

What observation prompted Anaximander to propose that humans came from fish?

Testing spontaneous generation

By the 18th and 19th centuries, the older Greek idea of spontaneous generation was well ingrained in the minds of everyone who ventured to think that the origin of life might not have required the gods. And living at a time when science was coming of age, some early modern thinkers started treating spontaneous generation less like a philosophy and more like a scientific hypothesis. Gradually, they began subjecting the idea to scientific experimentation.

An early attempt at testing spontaneous generation occurred in the 17th century, when the Italian scientist Francesco Redi (c. 1626–1697) looked carefully at the meat-maggot phenomenon. After leaving meat in an open jar, he observed that maggots did indeed appear, and that the maggots then developed into flies, which then flew away. However, when he left meat in a sealed jar, the maggots did not appear. Nor did maggots appear when he left the meat in a jar covered with a mesh screen, a precaution he took just in case spontaneous generation required fresh air for some reason. In the terminology of today’s science, we say that the mesh-covered jar “controlled for” the possibility that spontaneous generation required fresh air (Figure 3).

Figure 3: Francesco Redi's spontaneous generation experiment using jars of meat. In the first jar, with the meat sealed inside by a stopper, maggots did not appear on the meat; in the second jar, covered with mesh, maggots also did not appear on the meat; but in the third jar, without a cover, maggots did appear on the meat and developed into flies.

Since the mesh cover prevented the appearance of maggots, it meant that the maggots were not coming from spontaneous generation, but simply from eggs of adult flies. By the standards of experimental methods in contemporary science, it was a rudimentary experiment, but it was as good as it could be given the equipment available in Redi’s time.

Despite the result of his maggot experiment, Redi still believed that smaller creatures, called “gall insects,” came from spontaneous generation. At the same time, a developing invention, the microscope, allowed scientists to focus on creatures even smaller: microorganisms. Using his microscope, an English experimenter, John Needham, noticed that broths made from meat were teeming with microorganisms, so he put spontaneous generation to his own test (see our module Experimentation in Scientific Research). Needham heated a bottle of broth to kill any microorganisms and left the bottle for a few days. Then he looked at the broth under the microscope and found that, despite the earlier heating, the broth contained microorganisms again (Figure 4a).

Figure 4a: Needham's spontaneous generation experiment. Needham heated the broth, let it sit uncovered for several days, then observed microorganisms in the fluid.

In Needham’s mind, this finding suggested that the lifeless broth had given rise to life. But another scientist, an Italian named Lazzaro Spallanzani, thought that Needham must have done something wrong. Perhaps he hadn’t heated the broth to a high enough temperature or for a long enough time. To find out, Spallanzani performed his own experiment. He boiled broth in two bottles, left one bottle open and one closed, and found that new microorganisms appeared only in the open bottle. His conclusion: The microorganisms entered the bottle through the air; they were not generated spontaneously in the broth (Figure 4b).

Figure 4b: Spallanzani performed Needham's experiment, but also tested a bottle of broth that was closed after boiling. He found no microorganisms grew in the closed bottle.

Experiments seeming to prove or disprove spontaneous generation of life went on for another century. Because of the difference between closed and open vessels, arguments focused on the possibility that spontaneous generation of life might require fresh air. Thus, the lack of air in Spallanzani’s closed bottle could have been a confounding factor in the results. This possibility attracted the attention of the 19th century’s most famous microbiologist: Darwin’s contemporary, Louis Pasteur.

Pasteur was drawn to the issue, but once involved he knew that he needed to control for the possibility that air was needed to generate life from nonliving matter. To do this, he designed flasks with long, specially curved, swanlike necks. This allowed sterilized broth to be exposed to fresh air from the outside, but any microorganisms from the air would be trapped in a pool of water in the neck. (See our module Experimentation in Scientific Research for more information on designing experiments.)

The sterilized broths in Pasteur’s special flasks did not become infested with microorganisms despite being exposed to fresh air (Figure 5). And so, after a run of more than 24 centuries, the hypothesis of spontaneous generation was finally laid to rest.

Figure 5: Pasteur designed flasks with long, swan-like necks that allowed the sterilized broth to interact with fresh air, but trapped microorganisms in the flask's curved neck.

This meant that scientists no longer thought that microorganisms, or small animals, could suddenly emerge with no parents, but it didn’t stop people from thinking about life coming from nonliving matter. Pasteur’s publication of his experimental results disproving spontaneous generation of microorganisms came in the very same year as Darwin’s Origin of Species. This made for a paradox. Around the world, scientists were fairly certain that evolution really happened, that all modern species came ultimately from pre-existing, living forms. However, as for the question of how life started in the first place, scientists had just disproved the only explanation they had.

Darwin’s pond idea was completely speculative. There was no way to test it the way he tested natural selection through years of observation of numerous species. And so, when it came to the initiation of life itself, scientists of Darwin’s era were stumped. All they could do was to throw up their hands, or chalk it up to the creation stories of their religions.

Comprehension Checkpoint

The experiments of Spallanzani with broth in bottles showed that microorganisms

In addition to spontaneous generation, the ancient Greeks produced another idea for the origin of life on Earth: panspermia. An Ionian Greek named Anaxagoras (510–428 BCE) thought that life arrived on Earth as seedlings that came through space from other worlds. Often people think of panspermia as an alternative to the idea of life emerging from nonliving matter, but it’s actually not. Instead, panspermia only moves the origin of life off the Earth to another planet or moon, and further back in time. Thus, after Pasteur’s disproof of spontaneous generation, the motivation was stronger than ever to determine how life got started.

By the late 19th century, English biologist Thomas Henry Huxley (1825–1895) had coined the term abiogenesis to describe life forms emerging from non-living chemical systems. On first hearing, the term may sound like merely a more modern take on spontaneous generation, but there is a major difference. With spontaneous generation, the idea was that certain materials, be it meat, grain, or mud, were capable of constantly producing some kind of creature. What Huxley had in mind was the chemical reactions of life slowly emerging on the early Earth over a long period of time. Huxley knew that the mixture would have to be more complex than Darwin’s ammonia and phosphoric salts, but he did not attempt to work out the details. Somehow, though, he thought an optimal mixture of simple chemicals generated the complex chemicals needed for life, such as enzymes, and the earliest living cells.

Comprehension Checkpoint

Abiogenesis is just another name for spontaneous generation.

As for how abiogenesis could occur on the primordial Earth, serious thinking about this began in the 1920s with two scientists working entirely independently of one another.

In 1922, Aleksandr Oparin, a Russian biochemist, gave a lecture on the origins of life, which was published as a booklet in 1924. For several years, the booklet was not translated from Oparin’s native Russian, so his ideas were unknown outside of the USSR. Meanwhile, British biochemist John Burdon Sanderson Haldane (usually known by his initials, JBS Haldane) was working on similar ideas. Unlike his Russian counterpart, Haldane and his work were extremely visible. He was a great popularizer of science, doing for the early 20th century what astronomer Carl Sagan did later: making science understandable and fascinating for the masses. Haldane had his hands in numerous areas of life science. He was the author of dozens of scientific papers and spent a great deal of time explaining his work and its importance to people outside the scientific world.

In connection with other questions of biology, Haldane was working with enzymes, which he thought were on the border between living and nonliving chemistry. Consequently, he hypothesized that abiogenesis took place through a complex mechanism involving enzymes and viruses. By Haldane’s time, scientists had figured out that the atmosphere of the primordial Earth had been a reduced atmosphere. This means that it contained reduced carbon compounds, such as methane, in contrast to oxidized compounds, such as carbon dioxide (which could be present, but in much lower quantities compared with methane). It also contained hydrogen, ammonia, some water vapor, and, importantly, no oxygen.

Oxygen can come only from organisms that carry out photosynthesis to make their own food. Such organisms are called autotrophs. Haldane reasoned that the first cells must have been heterotrophs, organisms that take their food from the surrounding environment. Methane is a gas, but other simple, organic compounds made from it are liquid and would have rained down on the early Earth. They accumulated as pools of liquid on the surface, forming a kind of organic broth that became known as “Haldane’s soup” (Figure 6).

Figure 6: An image of the Grand Prismatic Spring in Yellowstone National Park, an environment similar to the organic “soup” Haldane proposed. image © Jim Peaco, National Park Service

Because there was no oxygen in the atmosphere, the early Earth lacked a layer of ozone to block out powerful ultraviolet radiation from space. Haldane hypothesized that the ultraviolet radiation from space, along with lightning constantly hitting the primordial organic soup, delivered energy to the various simple organic compounds. This caused chemical bonds between the atoms of the molecules to break and reform, creating new and different molecules, leading to extremely large, complex organic molecules. Haldane speculated that this happened over millions of years, until finally a molecule arose that could copy itself crudely using other molecules in the “soup” as building blocks.

Molecules that could copy better than their neighbors multiplied and gradually dominated the soup. Some of these self-copying molecules became surrounded by a kind of barrier, the precursor to what we call a membrane. This happened by accident, so it was very rare, but when it did happen, Haldane explained, the enclosed, self-copying molecules had an enormous survival advantage. So they came to dominate, ate up the soup, and life had begun.
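Haldane’s “better copiers dominate the soup” argument is, at its core, compounding growth at different rates. Here is a minimal Python sketch of our own (the copy rates and pool size are invented for illustration) showing how even a modest copying advantage lets one type of molecule take over a finite pool of building blocks.

```python
# Minimal sketch of competing self-copying molecules in a shared,
# finite "soup". The copy rates are invented for illustration; the
# point is only that a small advantage compounds into dominance.
copy_rate = {"slow": 1.00, "medium": 1.05, "fast": 1.10}
counts = {kind: 100.0 for kind in copy_rate}   # equal starting numbers

POOL = 1_000_000   # finite building blocks cap the total population

for generation in range(200):
    # each type multiplies by its own copy rate...
    counts = {k: n * copy_rate[k] for k, n in counts.items()}
    # ...but the soup can only sustain POOL molecules in total
    total = sum(counts.values())
    if total > POOL:
        counts = {k: n * POOL / total for k, n in counts.items()}

total = sum(counts.values())
for kind, n in counts.items():
    print(f"{kind:6s}: {n / total:6.1%} of the soup")
# The "fast" copier ends up as the overwhelming majority.
```

Of course, this compounding is only the logical skeleton of Haldane’s scenario, not evidence for it.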

Haldane’s idea was purely hypothetical. No one had tested it yet, but it was far more elaborate than Darwin’s phosphoric salt idea. Moreover, it was perfectly consistent with the state of science in the 1920s and 30s regarding the chemistry of the early Earth. Then, in 1936, Oparin’s work was finally translated from Russian. It turned out that he was proposing almost the same thing as Haldane, so the idea became known as the Oparin-Haldane hypothesis.

Comprehension Checkpoint

Primordial Earth lacked an ozone layer because there was no ________ in the atmosphere.

As for testing the Oparin-Haldane hypothesis, that role fell into the hands of a graduate student, Stanley Miller. In the early 1950s, Miller was looking for a thesis project in the Department of Chemistry at the University of Chicago. In 1952, his academic mentor, Professor and Nobel laureate Harold Urey, suggested that he try putting the origins of living molecules to a test. That meant recreating the kind of atmosphere that scientists thought had existed on primordial Earth: hydrogen, methane, ammonia, and water. It also meant providing what Haldane thought set the stage for creating more complicated molecules needed for life: lightning and ultraviolet light.

Once the ancient atmosphere was created and contained in a flask, Miller and Urey exposed the mixture to powerful ultraviolet light. They also put electrodes inside the flask and sent an electric current through the apparatus, creating sparks to simulate lightning, which interacted with the gases in the flask. After several days, they checked the contents of the liquid that accumulated at the bottom of the apparatus (Figure 7). They found that different molecules had been created, including various important biological molecules, such as the amino acids glycine, alanine, and valine. They ran the experiment over and over and, depending on how they changed around the gas mixture, different varieties of amino acids and other biological molecules were created. This showed that it was possible for biologically important molecules to form on a planet under abiotic conditions.

Figure 7: Miller and Urey's experiment to test the Oparin-Haldane hypothesis by recreating early Earth's atmosphere and adding ultraviolet light and electric currents.

Over the years, as Miller progressed through his career, scientists studying planetary atmospheres and the ancient Earth had second thoughts about Earth’s primordial atmosphere. Perhaps it had not been dominated by methane, hydrogen, and ammonia; it might have been more oxidized rather than reduced. But as theories about the ancient atmosphere were refined, Miller tried variations of his original experiment with the adjusted gas mixtures. Although chemical products changed with each new mixture, in each case they included compounds that were vital to life, such as amino acids or nitrogenous bases, the building blocks needed to make DNA and RNA. The emerging answer seemed to be that, almost regardless of what the precise mixture and conditions were, complex organic molecules would result.

Comprehension Checkpoint

What formed when electric sparks were created in a flask of hydrogen, methane, ammonia, and water?

While ideas about Earth’s primordial atmosphere were in flux from the 1970s onward, NASA’s exploration of the outer Solar System revealed some amazing things about the moons orbiting Jupiter and Saturn. In particular, the space probes Voyager 1, Voyager 2, and Cassini, along with the Huygens probe, which descended into the atmosphere of Saturn’s moon Titan, revealed the exact makeup of that atmosphere. This inspired other scientists, such as Carl Sagan, to redo Miller’s 1952 experiment with a Titan atmospheric mixture. This too produced important biological compounds. Thus, today, the moon Titan is a prime focus for astrobiology studies in the Solar System. It may have exotic life forms, or it may be a model of how Earth was prior to life.

Several years after the original Miller-Urey experiment, another investigator, Sidney Fox, ran experiments showing that some of the Miller-Urey compounds – the amino acids – could join together to form polymers, bigger molecules known as peptides, or small proteins. This happened when amino acids made through a Miller-Urey mechanism were splashed onto surfaces of clays and other materials, under hot, dry conditions. On the ancient Earth, such conditions would have occurred at the boundary between ancient ponds or seas and ancient land. Given enough time, complex proteins could arise.

Other researchers later found that spheres of lipids (the class of organic molecules that includes fats) also could form under conditions thought to exist on the ancient Earth. This would create a water environment inside the sphere that was separated from the outside. In other words, crude membranes can form spontaneously under the same conditions in which biological compounds like amino acids and small proteins can form. The fact that membranes can form spontaneously is key to origins of life research. This is because to move from non-living chemistry to biology, very complex networks of chemical reactions need to emerge. Like a car being made on an assembly line, biological molecules are put together section by section. They also are converted into different molecules section by section, so there is a series of intermediate chemicals in addition to a starting molecule (called a substrate) and final product of each reaction.

In an open environment like Haldane’s primordial soup, or in an ocean, the various intermediates would simply diffuse away before the chemical pathway had a chance to evolve. But a membrane would enclose all of the chemicals within a compartment. That compartment would then act as a chemical laboratory, holding inside any reactions that happened to emerge. Since we know that membrane spheres can spontaneously form, the primordial soup of early Earth likely held billions of these little chemical laboratories in which the chemistry of life could sputter along.

Comprehension Checkpoint

Why are membranes so important in origins of life research?

Demonstration that biological molecules and membranes can arise in an abiotic environment is not a demonstration of the emergence of life. It shows only what might have happened in the transition from non-living chemistry to the eventual formation of life. It does, however, show that a necessary step in abiogenesis – the spontaneous emergence of complex organic molecules – is not only possible, but likely under the right conditions.

Theoretically, continuous rearrangement and construction of larger and larger organic molecules from chemical building blocks that would form on the early Earth should eventually lead to molecules that can copy themselves. That’s because the bigger an organic molecule gets, the more functional chemical groups it has. Functional groups are sections of molecules with atoms other than carbon, such as oxygen, nitrogen, and phosphorus, which like to hold onto electrons. This allows for electrons to be moved around between parts of the molecule and between the molecule and other molecules. Also, the bigger a molecule gets, the more it’s able to bend and twist around. This flexibility, together with the capability to move around a lot of electrons, means it’s possible, simply by luck, for any random, very large organic molecule with a lot of nitrogen, oxygen, and phosphorus atoms to have some enzymatic capability – that is, to be able to catalyze chemical reactions.

Certain sets of reactions catalyzed by a molecule can result in the molecule making a copy of itself. Thus, with plenty of building materials in a Haldane soup, as time goes on, it is likely that self-replicating molecules would emerge. The first self-replicating molecule would have only crude copying ability. But, since it would not copy itself exactly, each new “copy” would be a little different from the “parent” molecule. Randomly, a newly copied molecule might have the ability to copy slightly better than the molecule that made it. Natural selection would then work for non-living chemical molecules much as Darwin described it working for living organisms. Those molecules copying better would make more copies using building blocks taken from the breakdown of other molecules that could not copy themselves so well.
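This selection-among-copiers logic can be illustrated with a toy simulation. The sketch below is not a chemical model: the starting efficiency, mutation size, and building-block pool are all invented values chosen for illustration. It shows only the core dynamic, that imperfect copying plus a limited supply of building blocks is enough to push average copying ability upward over time.

```python
import random

# Toy model of natural selection among self-copying molecules.
# All numbers here are invented for illustration, not chemistry.
random.seed(42)

population = [0.1] * 100   # each "molecule" is just its copying efficiency
POOL = 500                 # finite supply of building blocks

for generation in range(300):
    offspring = []
    for efficiency in population:
        if random.random() < efficiency:   # better copiers copy more often
            # Copying is imperfect: each "copy" differs a little from its parent.
            child = min(1.0, max(0.0, efficiency + random.gauss(0, 0.02)))
            offspring.append(child)
    population.extend(offspring)
    # Limited building blocks: only POOL molecules persist. Survivors are
    # drawn at random, but lineages that copy well are more numerous by now.
    if len(population) > POOL:
        population = random.sample(population, POOL)

# Average copying ability climbs well above the starting value of 0.1.
print(f"mean copying efficiency: {sum(population) / len(population):.2f}")
```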

Self-copying molecules enclosed in membranes would fare even better because they would be held close together with other chemicals. But for life to really begin, there had to have been a molecule whose copying ability was extremely good. Today, there is such a molecule: DNA. However, DNA is incredibly complex, and this makes for a chicken-and-egg kind of dilemma.

In the 1980s, scientists began to realize that not all enzymes are proteins. Researchers dissected cell components called ribosomes and found that they are made of protein and RNA. What was strange was that some of the RNA molecules actually work as enzymes. They can catalyze chemical changes in themselves and in other RNA molecules.

Like DNA, RNA can hold genetic information, but RNA is less complex than DNA (Figure 8). Consequently, a hypothesis called the “RNA world” was proposed independently by three different researchers: Leslie Orgel, Francis Crick, and Carl Woese. It’s a keystone in origins of life research today. The idea is that RNA emerged on Earth prior to DNA and was the genetic material in the first cells (or in the first cells on a different world, if life began somewhere else).

Figure 8: A comparison of ribonucleic acid (RNA) and deoxyribonucleic acid (DNA).

Today, no known bacterial cell or other fully-fledged life form uses RNA the way that we use DNA, as the storage molecule for genetic information. But there are RNA viruses. Not all viruses are RNA viruses; some use DNA to hold genetic instructions, just as our cells do. But if RNA is adequate as the only genetic material in some viruses, it’s easy to imagine RNA also being the only genetic material in an early bacterium, or other single-celled creature that could have existed on the early Earth.

It’s not hard to imagine how the transition from RNA to DNA might have occurred. As with the evolution of everything else, there would have been mistakes. In living organisms today, DNA stores genetic information over the long term, and DNA sequences are transcribed into RNA sequences, which then are used to put together sequences of amino acids into proteins (see our Gene Expression: An overview module). Essentially, DNA is an additional layer beyond RNA and the proteins that RNA makes. RNA sequences could have been the genes before a mistake created DNA. Being more stable chemically than RNA, DNA took over the job of storing genetic information. This gave RNA a chance to get better at translating genetic information into proteins.

That would have been an enormous step in life’s evolution. It also would mean that life did not appear all at once. Rather, abiogenesis occurred in increments or steps during prebiotic, chemical evolution. Thus, entities must have existed along a spectrum from nonliving to living, just as viruses today have characteristics of both living and nonliving entities. We don’t know the precise abiogenesis pathway, but scientists have worked out each of the major steps necessary to go from nonliving chemistry to self-sustaining cells. Importantly, scientists also have conducted laboratory experiments demonstrating that each step is possible. Unlike the days of Anaximander, Darwin, or even Haldane, there are no big holes or theoretical barriers to abiogenesis. Scientists have a good idea of how it probably happened. The details within each major step are where science is now focused on getting answers.

Since prehistoric times, people have pondered how life came to exist. This module describes investigations into the origins of life through history, including Louis Pasteur’s experiments that disproved the long-held idea of spontaneous generation, and later research showing that the emergence of biological molecules from a nonliving environment – or abiogenesis – is not only possible, but likely under the right conditions.

Key Concepts

  • Theories about the origins of life are as ancient as human culture. Greek thinkers like Anaximander thought life originated with spontaneous generation, the idea that small organisms are spontaneously generated from nonliving matter.

  • The theory of spontaneous generation was challenged in the 18th and 19th centuries by scientists conducting experiments on the growth of microorganisms. Louis Pasteur, by conducting experiments that showed exposure to fresh air was the cause of microorganism growth, effectively disproved the spontaneous generation theory.

  • Abiogenesis, the theory that life evolved from nonliving chemical systems, replaced spontaneous generation as the leading theory for the origin of life.

  • Haldane and Oparin theorized that a "soup" of organic molecules on ancient Earth was the source of life's building blocks. Experiments by Miller and Urey showed that likely conditions on early Earth could create the needed organic molecules for life to appear.

  • RNA, and later DNA and the diversity of life as we know it, likely formed through chemical reactions among the organic compounds in the “soup” of early Earth.


David Warmflash, MD, Nathan H Lents, Ph.D. “Origins of Life I” Visionlearning Vol. BIO-4 (6), 2016.




Energy in Living Systems

by David Warmflash, MD, Nathan H Lents, Ph.D.

The discovery of ATP, glycolysis, and the Krebs cycle during the first half of the 20th century went a long way in answering the question of how energy from food molecules, such as glucose, is harnessed by the cell. But a huge question remained – namely, how is the bulk of the energy of food molecules converted to ATP?

Chemical energy is contained in electrons. Since electrons can move between different molecules, that energy can travel as well. By harnessing high-energy electrons from each glucose molecule, glycolysis generates ATP from a precursor molecule called ADP (Figure 1). Similarly, the Krebs cycle and other energy pathways also generate ATP, which serves as a kind of energy currency for the cell.

Figure 1: A diagram of the glycolysis process that occurs in the cytoplasm of a cell. image © RegisFrey

Figuring out the ATP generation process took decades and involved many researchers, but it was off to a good start by the 1930s. At that time, Sir Hans Adolf Krebs was beginning his research. Using newly available instruments, such as the manometer (developed by Krebs’ mentor, Otto Warburg, one of the giants of biochemistry of that era, see Figure 2), biochemists were able to home in on specific quantities of ATP that are made in different biochemical reactions. They found that glycolysis (the splitting of a glucose molecule into two molecules of pyruvate) generates two or three molecules of ATP for each molecule of glucose that is consumed. The Krebs cycle also generates two ATP molecules for each glucose molecule that is broken down. Additionally, one other reaction – the breakdown of pyruvate to produce acetate that goes into the Krebs cycle – generates two ATP molecules for each glucose molecule that is consumed.

Figure 2: Dr. Otto Warburg and his manometer. The instrument, adapted from devices that measure gases dissolved in blood, determines the rate at which living cells produce oxygen. image © History of Medicine (NLM)

Adding up the ATP molecules generated for each glucose molecule during glycolysis (2-3 ATP), the Krebs cycle (2 ATP), and the conversion of pyruvate to acetate (2 ATP) yields 6-7 ATP molecules per glucose molecule. However, using the Warburg manometer in slightly different experiments (mostly involving liver and muscle tissue), Krebs and his colleagues realized that far more than 6-7 ATP molecules are actually generated from each molecule of glucose. Their measurements told them that each glucose molecule actually generates well over 30 ATP molecules, provided that oxygen is available to the cell.

To mid-20th century biochemists, the discrepancy between 30 molecules of ATP generated by the cell and just 6-7 ATP molecules generated by known reactions could mean only one thing. Clearly, the remainder of the ATP must be generated indirectly from other chemical products generated during glycolysis, the Krebs cycle, and the conversion of pyruvate to acetate.

These “other chemical products” are nicotinamide adenine dinucleotide (NADH) and flavin adenine dinucleotide (FADH2). For each glucose molecule consumed, Krebs worked out that his cycle generated six molecules of NADH and two molecules of FADH2 (Figure 3). Additionally, it was known that NADH also was produced during glycolysis and during the conversion of pyruvate to acetate. Scientists also observed that NADH and FADH2 are produced during the breakdown of fats (a process called beta oxidation). Just like ATP, both NADH and FADH2 seemed to be all over the cell and connected with energy reactions. This was the state of knowledge on cellular energy during the late 1940s, when Krebs was tweaking the details of his famous cycle.

Figure 3: The Krebs Cycle within the mitochondria, showing the generation of NADH and FADH2 molecules. image © RegisFrey
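The numbers at the heart of this puzzle can be written out as a short calculation. In the sketch below, the direct ATP counts and the six NADH and two FADH2 from the Krebs cycle follow this module’s tallies; the two NADH each from glycolysis and the pyruvate step are standard textbook counts, and the yields of roughly 2.5 ATP per NADH and 1.5 per FADH2 are modern consensus values that Krebs did not have. This is hindsight arithmetic for illustration, not a reconstruction of his reasoning.

```python
# Back-of-the-envelope ATP bookkeeping per glucose molecule.
# Direct (substrate-level) counts follow this module's tallies; the
# per-carrier yields are modern approximations, unknown in Krebs' era.

direct_atp = 2 + 2 + 2    # glycolysis + pyruvate-to-acetate + Krebs cycle

nadh = 2 + 2 + 6          # from glycolysis, the pyruvate step, and the cycle
fadh2 = 2                 # from the Krebs cycle

ATP_PER_NADH = 2.5        # approximate modern yield via electron transport
ATP_PER_FADH2 = 1.5       # lower, since FADH2 electrons carry less energy

indirect_atp = nadh * ATP_PER_NADH + fadh2 * ATP_PER_FADH2

print(f"direct:   {direct_atp} ATP")     # the puzzlingly small number
print(f"indirect: {indirect_atp} ATP")   # carried off by NADH and FADH2
print(f"total:    {direct_atp + indirect_atp} ATP")  # 'well over 30'
```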

But as for the specific role of NADH and FADH2 – how the cell could use them to obtain energy – that was a mystery. It was clear that they were carrying the bulk of the energy extracted from glucose, fats, and other body fuels, but it was not clear how that energy was harnessed to produce ATP.

Comprehension Checkpoint

Each molecule of glucose generates ______ molecules of ATP.

At the midpoint of the 20th century, Krebs and other biochemists knew that NADH and FADH2 disappear after their production and transform into slightly different molecules. Using very straightforward chemistry techniques, they saw that both NADH and FADH2 undergo a chemical process called oxidation. In the oxidation reaction, NADH is converted, or oxidized, to a compound called NAD+, and FADH2 is converted (oxidized) to a compound called FAD. In becoming NAD+ and FAD, NADH and FADH2 each give up hydrogen and a pair of electrons.

Thus, biochemists began describing NADH and FADH2 as carriers of a sort: carriers of electrons. In their oxidized forms, NAD+ and FAD, the electron carriers contain less energy than they do in what’s called their reduced forms, NADH and FADH2. The reduced forms of the molecules are more energetic than the oxidized forms, because the electrons physically hold the energy. By liberating a pair of electrons, NADH and FADH2 return to their oxidized forms. That much was clear to Krebs, but this raised the question of what happens to the electrons. They don’t just disappear into nothingness. Clearly, their energy must be used to generate all the rest of the ATP that is not made directly during glycolysis and the other reactions.

Figuring out exactly how this worked required a new researcher, and his name was Peter Mitchell. In the 1960s, Mitchell introduced an idea that he called “chemiosmosis.” It explains how the cell harnesses energy from electrons, and it depends on another phenomenon called electron transport, which had to be discovered first, by other researchers.

Born in England, September 29, 1920, Mitchell showed an interest in science throughout childhood and entered Cambridge University in 1939 at age 19. There, he was influenced especially by two instructors: biochemist Ernest Baldwin and nerve physiology instructor Edgar Adrian. As a child and throughout most of his undergraduate career, Mitchell often did not perform well on tests, but he improved enough for admission to graduate school where he pursued biochemistry.

As a graduate student, Mitchell conceived of numerous, novel experiments but often failed to complete them. Generally, it was his assistant, Jennifer Moyle, who completed the needed laboratory work. On top of that, his first PhD thesis idea was rejected and he was directed to spend an additional three years researching penicillin, a topic that didn’t excite him very much. Mitchell lacked the patience for the nitty-gritty work that goes along with experimental science. But he put in the needed laboratory work to earn his PhD and later to show the world that his chemiosmosis idea was correct.

Comprehension Checkpoint

Which form of the molecules has more energy?

Tracking the pathways of NADH and FADH2 over the next few years led to the discovery that these two compounds were actually electron carriers. Like ATP, they were a kind of energy currency, but the currency of NADH and FADH2 is less versatile than that of ATP. Imagine ATP as a universal energy currency for the cell; it can be used in many different ways, and thus each molecule of ATP can be likened to a dollar bill. In contrast, NADH and FADH2 can be likened to a store-specific gift card. They carry energy value, but that value can be used only for a special purpose. That purpose is the generation of ATP, and the way that ATP is generated begins with the high-energy electrons that NADH and FADH2 acquired during glycolysis, the Krebs cycle, and other pathways.

Once scientists realized that energy could be carried through the cell by transferring electrons between different molecules, researchers began contemplating how this could happen. Harnessing the energy from the high-energy electrons of NADH and FADH2 occurs in two interconnected processes: electron transport and oxidative phosphorylation.

In the 1930s, two Soviet researchers, Vladimir Aleksandrovitch Belitser and Elena Tsybakova, identified the movement of electrons through a series of special enzymes. This transfer of electrons from one enzyme molecule to another was called “electron transport.” In 1953, Dutch researcher Edward Charles Slater identified the various enzymes of the chain and began researching how they operate. The enzymes are embedded in the inner of two membranes that surround each mitochondrion, the powerhouse organelle of eukaryotic cells. They are also in the membranes of certain microorganisms. Within the membrane, the enzymes are lined up, forming a chain, known as the electron transport chain (ETC) (Figure 4). Like NADH and FADH2, each electron carrier enzyme of the ETC is capable of accepting electrons from other molecules, holding those electrons temporarily, and then releasing them to a different electron carrier.

Arriving at the membrane of each mitochondrion, both NADH and FADH2 easily unload their high-energy electrons to the ETC. In addition to being lined up, the electron carriers of the mitochondrial ETC are arranged into four groups known as complexes. Today, we know that NADH “drops off” its electrons at complex I, while FADH2 drops off its electrons at complex II. This is because the electrons donated by NADH actually have more energy than the electrons donated by FADH2. NADH is like a high-energy package, whereas FADH2 is like a lower-energy package.

Figure 4: The electron transport chain (ETC) lies in the inner mitochondrial membrane, at the boundary of the intermembrane space, where electrons move through a series of special enzymes. Both NADH and FADH2 unload their high-energy electrons to the ETC. image © RegisFrey

Comprehension Checkpoint

Electrons can be transferred from one enzyme molecule to another.

By the mid 20th century, biochemists had an idea that electrons give off their energy gradually while moving through the ETC. The generation of ATP from ADP is called “phosphorylation,” which refers to the addition of a group of atoms known chemically as a phosphate (PO₄³⁻) group. Biochemists observed that electrons move through the ETC as ADP is phosphorylated into ATP. Since it was observed to happen in the presence of oxygen, researchers began using the term “oxidative phosphorylation” to describe the generation of ATP connected to electron transport. This contrasts with ATP production that occurs directly in biochemical reactions, such as glycolysis and the Krebs cycle, which is called “substrate level phosphorylation.”

In certain types of human cells, glycolysis (but not the breakdown of fats) can proceed in the absence of oxygen. This means that substrate level phosphorylation of glycolysis can occur in the absence of oxygen. This is called anaerobic glycolysis and when it happens, small amounts of ATP are produced along with pyruvate. This keeps cells alive and working, but there are consequences. Without oxygen powering the ETC to draw the high-energy electrons from NADH, the cell needs a different way to oxidize NADH back to NAD+. That’s because the supply of NAD+ is limited. If NAD+ runs out because it has all been converted to NADH, glycolysis will stop.

To solve the problem, when oxygen supplies are low, the cell converts pyruvate (made during glycolysis) into lactic acid. In being converted to lactic acid, pyruvate receives electrons from NADH. Thus, NADH is oxidized, converted back to NAD+, which then is available to glycolysis. But it’s only a temporary solution. During anaerobic glycolysis in muscle cells, lactic acid builds up, causing pain and cramps. That’s why you feel pain if you start exercising too quickly. But as you exercise more, oxidative phosphorylation kicks in gradually as mitochondria start working harder. NADH from glycolysis moves into mitochondria and delivers electrons to the ETC. Large amounts of ATP are then generated. By giving up electrons, NADH is converted back to NAD+, which then is available to glycolysis, so the production of lactic acid stops.

Figure 5: During exercise, muscle cells build up lactic acid during anaerobic glycolysis, leading to pain and cramps. image © Jan-Otto, iStockPhoto
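The NAD+ bottleneck just described is simple enough to capture in a toy model. The pool size and step count below are arbitrary illustration values; the sketch demonstrates only the constraint itself: glycolysis consumes NAD+, so without fermentation to re-oxidize NADH, it grinds to a halt.

```python
# Toy model of the NAD+ bottleneck in anaerobic glycolysis.
# Pool sizes and step counts are arbitrary illustration values.

def run_glycolysis(nad_pool: int, steps: int, ferment: bool) -> int:
    """Return how many glucose molecules are processed before NAD+ runs out."""
    nad_plus, nadh, processed = nad_pool, 0, 0
    for _ in range(steps):
        if nad_plus < 2:      # glycolysis needs 2 NAD+ per glucose...
            break             # ...so it stalls once the pool is spent
        nad_plus -= 2
        nadh += 2
        processed += 1
        if ferment:           # pyruvate -> lactic acid re-oxidizes NADH
            nadh -= 2
            nad_plus += 2
    return processed

print(run_glycolysis(nad_pool=10, steps=100, ferment=False))  # 5: stalls early
print(run_glycolysis(nad_pool=10, steps=100, ferment=True))   # 100: keeps going
```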

The big question in the mid 20th century, though, was how does oxidative phosphorylation work? How is energy of the electrons that are delivered to the ETC harnessed for the production of ATP? That’s the question that Peter Mitchell set out to answer.

Comprehension Checkpoint

Lactic acid is produced when oxygen is

By the early 1960s, a chemist named Robert Joseph Paton Williams proposed a new idea: The energy from electrons delivered to the ETC is converted to ATP using protons (hydrogen atoms without their electrons) as intermediates. To explain how protons might act as intermediates for ATP production, Williams proposed a very complex chemical mechanism. At the same time as Williams developed these ideas, however, Mitchell also independently proposed that protons couple electron transport with ATP production, but through an entirely different mechanism. Mitchell came up with something simpler called the “proton motive force.”

Imagine blowing up a balloon: forcing air into the balloon stores up energy. And, once forced into the balloon, air will flow out through any hole with great force. Similarly, Mitchell imagined energy being stored, not with air, but with protons forced into the space between the two membranes of a mitochondrion, using energy obtained during electron transport. If there is an opening in the membrane, then the protons will stream out, like air from a balloon, with force. In chemistry, this is known as a proton gradient, but Mitchell used the term proton motive force, because he imagined cells using it for power.
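Mitchell’s proton motive force can also be put in numbers. The standard relation adds the electrical part of the gradient (the membrane voltage) to the chemical part (the pH difference). The input values below, a membrane potential of about -160 mV and a pH difference of about 0.5 units, are typical textbook figures for respiring mitochondria, assumed here purely for illustration.

```python
# Proton motive force: delta_p = delta_psi - 2.303 * (R*T/F) * delta_pH.
# The input values are typical textbook figures, assumed for illustration.

R = 8.314     # gas constant, J/(mol*K)
F = 96485.0   # Faraday constant, C/mol
T = 310.0     # roughly body temperature, K

def proton_motive_force(delta_psi_mv: float, delta_ph: float) -> float:
    """delta_p in mV; delta_ph = pH(matrix) - pH(intermembrane space)."""
    mv_per_ph_unit = 2.303 * R * T / F * 1000   # ~61.5 mV per pH unit at 310 K
    return delta_psi_mv - mv_per_ph_unit * delta_ph

# The matrix is electrically negative and alkaline relative to the
# intermembrane space, so both terms pull protons inward:
print(f"{proton_motive_force(delta_psi_mv=-160.0, delta_ph=0.5):.0f} mV")
# prints about -191 mV, the total inward 'pull' on each proton
```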

Mitchell used the term “chemiosmosis” to describe the overall mechanism of ATP generation that he imagined taking place within mitochondria, and within microorganisms that thrive in oxygen. He hypothesized his proton motive force being harnessed by enzymes to convert ADP to ATP, and also to power other cell processes. For instance, in chloroplasts (organelles that use sunlight to make food in cells of plants and certain other eukaryotes) and photosynthetic bacteria, he imagined the proton gradient transferring sunlight energy to energize various molecules. He also hypothesized proton gradients being used to transform chemical energy into mechanical energy. Many bacteria and other microorganisms move around with a tail-like structure called a flagellum, and Mitchell imagined the tiny protons causing a flagellum to move.

When Mitchell proposed chemiosmosis in 1961, his colleagues thought the idea was crazy. Mocking the idea of the proton motive force, which Mitchell abbreviated PMF, his colleagues joked that PMF stood for the “Peter Mitchell Force.” This was mostly because Mitchell lacked evidence to support the idea at the time, but also because he looked and acted rather unorthodox.

Holding fast to chemiosmosis and ignoring those who mocked him, Mitchell did the needed lab work and also watched carefully for discoveries by others that could be relevant to his idea. During the 1960s and 1970s, such relevant discoveries revolved around how the ETC could transfer energy from electrons into a gradient of protons. While the physics is beyond the scope of this module, the take-home message is this: As electrons move along the chain, giving up their energy gradually, special enzymes take that energy and literally pump protons into the intermembrane space (see Figure 4). Recall that the ETC enzymes are lined up, embedded in the membrane. As electrons move along the ETC in a conveyor belt fashion, protons are pumped in a direction perpendicular to the movement of the electrons.

The other research relevant for Mitchell revolved around an enzyme called ATP synthase. Located in the inner mitochondrial membrane, this enzyme proved to be a key component, since it acts as a doorway for protons, which tend to move out from between the two membranes (Figure 6).

Figure 6: The world's smallest motor, the enzyme ATP synthase, generates energy for the cell. image © National Institute of General Medical Sciences

Thinking of the balloon analogy from earlier, imagine a wind turbine capturing mechanical energy from the air moving out of the balloon. Like a mini-turbine, the enzyme ATP synthase harnesses the power of the protons streaming out from between the membranes and uses the energy to generate ATP. Vindicated by the growing understanding of ATP synthase and other discoveries related to membranes, Mitchell was awarded the Nobel Prize in Chemistry in 1978.

ATP is the main energy currency of living cells. This module answers the question of how most ATP is generated. A look at two important compounds, NADH and FADH2, reveals their important role in the production of ATP. The module explains the workings of the electron transport chain, which provides high-energy electrons to fuel the ATP-producing process called oxidative phosphorylation.

Key Concepts

  • Adenosine triphosphate (ATP) is the main energy currency of the cell. It is generated from a similar compound, ADP, using energy harnessed from cellular fuels, such as sugars, fats, and proteins.
  • The amount of ATP generated directly during glycolysis (the breakdown of the sugar glucose) is small compared with the amount of energy contained within glucose.
  • The energy held by ATP and other energy-holding chemical compounds is contained in electrons. By moving electrons, different molecules move energy around the cell.
  • Two specialized energy currency compounds, NADH and FADH2, are vital to the movement of high-energy electrons from cellular fuels like glucose to an assembly-line system of enzymes called the electron transport chain.
  • Located inside mitochondria, the electron transport chain harnesses energy from NADH and FADH2 to power a process called oxidative phosphorylation, which generates large amounts of ATP. Oxidative phosphorylation requires oxygen.

David Warmflash, MD, Nathan H Lents, Ph.D. “Energy Metabolism II” Visionlearning Vol. BIO-4 (5), 2016.




Energy in Living Systems

by David Warmflash, MD, Nathan H Lents, Ph.D.

Inspiration can come from many places. Sometimes, inventors are inspired by new discoveries in science, and sometimes it’s the other way around – scientists are inspired by new developments in industry. This is what happened in the early 20th century after the moving assembly line came of age.

First introduced by Henry Ford in 1913, the assembly line was not just a single stream of automobile parts flowing from one worker to the next. Instead, it was a multi-path system of many assembly groups. Of course, there was a main assembly line that began with the wheels and the bottom of each car and ended with the completed vehicle, but there were also additional tributary lines feeding into the main line at different points. These tributary lines developed components that needed to be pre-assembled individually before they could go into each car. There was a special line for the engine, for the car body, and for the seats and doors, and the movement of parts through each was timed so as to provide the components to the main assembly line in a coordinated fashion (Figure 1).

Figure 1: Workers on the first moving assembly line put together magnetos and flywheels for 1913 Ford automobiles. image © NARA

If anything slowed down one group – a shortage of parts, for instance – the entire system would slow. In such cases, the completed components from the other groups would accumulate, since they could not be put into new cars on the main line. But when all sections operated on schedule, the new Model T cars took shape very rapidly. In fact, operating like this, Ford could produce thousands of cars per day, which was a striking advance over earlier, custom-made cars that were hugely expensive and available only to the wealthy. Other industries quickly adopted the assembly line approach.

By the start of the 20th century, scientists in various fields were already realizing that nature works in cycles. Geologists knew that water must cycle through the ground, oceans, and clouds; astronomers were figuring out that giant clouds of gas were giving birth to stars one by one; and chemists and biologists were starting to think in this way too. But with scientists now seeing how efficiently assembly lines could produce cars and other big machines of the era, the cycling aspect of nature moved to center stage. Might the assembly of stars from gas, clouds from water, or rocks from lava work like a kind of natural assembly line? Moreover, at the microscopic level within cells, might the processing of molecules also proceed in an organized fashion, as if moving through a tiny factory?

In the 1920s and 1930s, biochemists began discovering enzymes – proteins in our cells that catalyze chemical reactions. Simple reactions were worked out rather quickly, but more complicated chemical reactions were difficult to study. For example, if a person consumed compound A in the diet and then excreted compound E in the urine, how exactly did that happen? Was compound A transformed into E directly? Or did the process occur in steps, like on an assembly line, with compounds B, C, and D created along the way as intermediates? Were there tributary lines generating various components that were needed at different points?

In many cases, the assembly line idea seemed to be the only one that made sense conceptually. Consider glucose, for example, commonly known as blood sugar. (See the structure of a glucose molecule in Figure 2.) By the turn of the 20th century, scientists knew that glucose was one of the main fuels, or sources of energy, for animals, bacteria, and yeast. Setting glucose on fire in the laboratory produced carbon dioxide (CO2) and water (H2O), the same compounds that animals produced when they exercised. However, no one believed that cells could have tiny fires inside. Observations under the microscope certainly did not show any flames. Nevertheless, people do feel a burning sensation in their muscles during heavy exercise.

Figure 2: D-glucose with the formula C6H12O6.

Realizing that the breakdown of body fuels probably took place in a controlled series of steps, researchers imagined enzymes working like factory workers, modifying different parts of a particular chemical compound. Like the workers on an assembly line, each enzyme would make one special change to each molecule. The altered molecule would then be further modified, step by step, by different enzymes, and this could happen not only during the breakdown of fuels; it also could happen during the production, or synthesis, of needed biological molecules using simpler chemicals as building material.
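This workstation picture maps directly onto code: each enzyme is a function that makes one specific change, and a pathway is just those functions applied in order. The compound name and “modifications” below are invented placeholders for illustration, not real biochemistry.

```python
# A metabolic pathway as an assembly line: each enzyme makes exactly one
# specific change. Compound names and changes are invented placeholders.

def enzyme_1(compound: str) -> str:
    return compound + " +phosphate"    # step A -> B: attaches a group

def enzyme_2(compound: str) -> str:
    return compound + " +rearranged"   # step B -> C: reshapes the molecule

def enzyme_3(compound: str) -> str:
    return compound.replace(" +phosphate", "")   # step C -> D: removes the group

pathway = [enzyme_1, enzyme_2, enzyme_3]

molecule = "compound_A"
for enzyme in pathway:      # the molecule moves from workstation to workstation
    molecule = enzyme(molecule)
    print(molecule)         # prints each intermediate along the way
```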

Glucose and other sugars belong to the class of macromolecules called carbohydrates (see our Carbohydrates module). Along with lipids and proteins, carbohydrates play a variety of roles in organisms, and one role is providing cells with energy. While glucose and fats (a class of lipid) are the preferred energy compounds, proteins also can be used as fuel (see our modules Lipids and Fats and Proteins to learn more). Just as a cabin is built from logs, proteins are made from building blocks called amino acids, which can be used in multiple ways. They can be put together to give the cabin structure, but, if needed, they can also be burned as firewood to keep the cabin warm.

Comprehension Checkpoint

Which of the following works to break down or build up chemicals in the body?

Within structures called mitochondria, microscopic power plants in the cells of eukaryotes (Figure 3), broken-down bits of carbohydrates, fats, and proteins all come together, feeding into a kind of reverse assembly line that goes around and around in a cycle. As the cycle goes around, the various energy-rich bits are incorporated at different stations. At the same time, the cycle sends other products away to other areas of the power plant. The pathway has many names, including the citric acid cycle and the tricarboxylic acid cycle (TCA), because of the compounds that cycle within it. However, it’s also known as the Krebs cycle, for its discoverer Sir Hans Adolf Krebs.

Figure 3: A diagram of a typical animal cell. Number 9 indicates the mitochondria. image © Kelvinsong

Born August 25, 1900, in Germany, Krebs earned his MD and began his research career working with Otto Heinrich Warburg. A pioneer in biochemistry, Warburg was the inventor of the manometer, an instrument that could measure oxygen and other gases in blood and other fluids. Warburg was one of the leading biochemists worldwide, and in the early 20th century his country was the best place for emerging researchers like Krebs to get an education. Germany in this era was the global center of scientific research, especially in all areas of chemistry. So frequent were German publications in research journals that students aspiring to science worldwide would learn German just to be prepared to read the new articles. This was the world in which Krebs came of age.

Using the Warburg manometer, Krebs made his first big discovery, the urea cycle (also called the ornithine cycle). By the late 1920s, it was well known that the breakdown of amino acids in animals must release ammonia (NH3). Krebs knew that ammonia is toxic, yet somehow the body is able to convert it to urea, a chemical that is easily excreted in urine. Thinking assembly line style, Krebs and his student, Kurt Henseleit, came up with a hypothetical set of reactions, beginning with ornithine receiving the ammonia-containing piece of an amino acid and being converted into another chemical. The manometer allowed Krebs to analyze samples of animal liver exposed to the intermediary chemicals that they suspected were made from ornithine. Krebs and Henseleit were able to test and tweak their hypothesis, reaction by reaction. The pathway of reactions was a cycle because, after a series of steps, ornithine was re-created. As this happened, more and more ammonia was converted to urea. Thus, as long as amino acids were continuously broken down in the liver, the urea cycle would spin around and around, removing ammonia so that it did not accumulate and kill the organism. It was a milestone discovery that made Krebs world famous when he published his findings in 1932 (Figure 4).

Figure 4: Hans Adolf Krebs

Soon after that he was fired. Like many other academics in Germany, Krebs was dismissed from his position when the Nazis came to power in 1933, either because they were Jewish, as Krebs was, or because they opposed the Nazis. Prior to 1933, Germany was a powerhouse in all areas of science with a plethora of Nobel prizes going to Germans. That abruptly ended with the rise of Adolf Hitler.

Krebs relocated to England, along with many other academics escaping from Nazi controlled lands. Although he was unable to bring most of his personal possessions, he did take most of his lab equipment, including the Warburg manometer that had proven so useful in unlocking the secrets of the urea cycle.

Comprehension Checkpoint

Krebs lost his job because

With the urea cycle behind him, Krebs wanted to focus on tracking what happened to carbohydrates in the cell. While at the University of Sheffield, Krebs set up the manometer and started working out the chemistry. One of his major goals was to map out the ultimate fate of glucose in the presence of oxygen. By this time, the initial breakdown of glucose was already understood, step-by-step. Known as glycolysis, this initial process splits each glucose molecule into two smaller molecules called pyruvate.

The steps of glycolysis were worked out by two biochemists, Gustav Embden and Otto Fritz Meyerhof. (A few years after Krebs, Meyerhof also fled Nazi Germany for being Jewish.) Unlike burning glucose to a crisp in the laboratory, the conversion of glucose to pyruvate in cells is carefully controlled by enzymes. Each step in the Embden-Meyerhof glycolysis pathway has its own enzyme that performs a specialized procedure on one molecule after another, like the factory worker at a particular workstation.

In the course of breaking down glucose into pyruvate, glycolysis provides the cell with some energy, and does not require oxygen (Figure 5). This is good, since many organisms live in environments where oxygen is not even available. In fact, today we know that the enzymes controlling glycolysis emerged extremely early in the history of life, before there was any oxygen gas in Earth’s oceans or atmosphere.

Figure 5: A diagram of the glycolysis process that occurs in the cytoplasm of a cell. image © RegisFrey

The understanding of glycolysis left a big question: What happens to the pyruvate after it is produced from the breakdown of glucose? By Krebs’ time, it was known that the answer depended on whether or not oxygen was available. It was also known that certain microorganisms, as well as animal muscles, produce a chemical compound called lactic acid. The reason, it turns out, is that lactic acid is very similar to pyruvate. When no oxygen is available – or in organisms that don't have the ability to use oxygen even if it is available – pyruvate is converted to lactic acid as a waste product. This is what happens in muscle cells during intensive exercise, especially in an individual who has not warmed up sufficiently.

However, as Krebs knew, something bigger must have been happening in cells when oxygen was available. One reason warming up helps muscles is that it brings more oxygen into the muscle cells, allowing for conversion of pyruvate to something other than lactic acid. Oxygen, it turns out, allows cells to activate a highly efficient system to break down fuel to the ultimate end product: carbon dioxide (CO2).

Comprehension Checkpoint

Pyruvate is similar to

When 19th century researchers burned sugar in the lab, they knew that oxygen was required to fuel the fire. This suggested that the metabolism of glucose also required oxygen, at least when glucose was broken down all the way to CO2 and H2O. Krebs knew that the key to understanding how most of the energy was extracted from glucose was to understand what happened to pyruvate when oxygen was present. Clearly, it was something different from what happened in the absence of oxygen. Think of a fork in the road at the point that pyruvate is created from the breakdown of glucose. Without oxygen, pyruvate is converted to lactic acid, but the presence of oxygen opens the gate to an alternate route that ends, not with lactic acid, but with CO2. All that Krebs needed to do was figure out the various steps that occurred along the way. Luckily, he still had his handy manometer, and luckily, he didn't need to start from scratch. A few reactions that Krebs was about to discover as steps in his new cycle were already known as independent reactions from the research of an older biochemist, Albert Szent-Györgyi. It was Krebs who postulated that the reactions might be connected in a cycle, just like the reactions of the urea cycle that he'd discovered back in Germany.

Krebs’ research method was to let slices of beef liver soak in solutions of various chemicals. Using the Warburg manometer, Krebs could then see how the unidentified liver enzymes would change the different chemicals in the solutions. Testing the reactions one by one, he discovered that the breakdown of carbohydrates, lipids, and proteins did indeed proceed in a cyclic fashion. Bigger and more complex than the urea cycle, this cycle turned out to be the central route of all metabolic activity in the cell. Krebs identified the cycle’s reactions by 1937, although he tweaked it over the course of the following decade. Part of that tweaking led him to discover yet one more cycle, a little one called the glyoxylate cycle that acted as a bypass route for a section of the Krebs cycle.

Comprehension Checkpoint

Krebs is famous for discovering

Thinking about the Krebs cycle in terms of workstations is a way to remember broadly what types of chemical compounds enter the cycle at certain points, what they are changed into as a result of entering the cycle, and what compounds then leave the cycle at different points. Since it is a circular pathway, there is no beginning or end. For the sake of learning the Krebs cycle, however, the “first” reaction is the conversion of oxaloacetate into citrate. While there are several differences between oxaloacetate and citrate, the most important difference is that citrate is the bigger molecule. Its “backbone” is built of six carbon atoms, while oxaloacetate has just four. What is the source of the two extra carbon atoms? The chemical equations that Krebs wrote out told him that the source of the two carbon atoms could be acetate, which had to come from outside the cycle. Mixing oxaloacetate with his liver specimens, Krebs could test his hypothesis. Using the Warburg manometer to measure changes in oxygen and CO2 in his mixture, Krebs could tell when the cycle was turning around. The liver specimens supplied the enzymes that controlled the reactions, including the enzyme that adds two carbons to oxaloacetate, forming citrate. This meant that Krebs could add different carbon sources, one by one, to the mixture, and see which, if any, allowed the cycle to go around. Doing this, he confirmed that acetate was the needed substrate. In a test tube, as in the cells of his liver specimens, acetate had to be supplied from outside the cycle. Otherwise, the cycle would come to a halt.

The discovery that acetate joined a cycle of reactions that led to the extraction of energy from food was a major insight, because both sugars and fats – the major sources of dietary energy for all organisms – can be broken down to acetate, as can some amino acids. Krebs realized that pyruvate, the product of the initial breakdown of glucose, is very similar to acetate, except that pyruvate has one additional carbon. If the extra carbon were removed from pyruvate, the remaining molecule could easily be converted to acetate.

Comprehension Checkpoint

Most of our energy comes from the _____ in our diet.

The acetate molecule itself is small and highly diffusible, so it must be chaperoned around the cell by a much larger carrier molecule called co-enzyme A (Co-A). Once the two-carbon acetate is linked up with the four-carbon oxaloacetate, however, the Co-A is free to pick up another acetate and repeat the process. Meanwhile, the cell has a new molecule of citrate with its six carbon atoms.

Realizing that he was dealing with a cyclic pathway, Krebs discovered that citrate does not remain for very long. After its shape is rearranged, it is cut down to a five-carbon molecule and then again to a four-carbon molecule, which then is modified several times until oxaloacetate is produced, all ready to be combined with a new acetate to produce more citrate, and the cycle goes around another time. It’s a true cycle, because the product of the cycle – oxaloacetate – is also the first ingredient for the next cycle. (See Figure 6.)

Figure 6: The detailed Krebs cycle. image © Agrotman

Where do the carbon atoms go when they get cut off as the cycle goes from six to five and back to four-carbon units? The cycle occurs only in aerobic organisms, life forms that use oxygen, and using the Warburg manometer Krebs discovered that a portion of the carbon removed from the main compound was combining with oxygen atoms to generate CO2.
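The carbon bookkeeping in the last few paragraphs reduces to simple arithmetic, which the short check below makes explicit: a 2-carbon acetate joins 4-carbon oxaloacetate to form 6-carbon citrate, and two carbons leave as CO2 on the way back to oxaloacetate.

```python
# Carbon bookkeeping for one turn of the Krebs cycle.

oxaloacetate_c = 4      # carbons in oxaloacetate
acetate_c = 2           # carbons in the incoming acetate

citrate_c = oxaloacetate_c + acetate_c   # 6-carbon citrate
after_first_cut = citrate_c - 1          # 5 carbons; one carbon leaves as CO2
after_second_cut = after_first_cut - 1   # 4 carbons; a second CO2 departs

# The 4-carbon product is remodeled back into oxaloacetate, closing the loop.
assert after_second_cut == oxaloacetate_c

co2_per_turn = citrate_c - after_second_cut
print(f"CO2 released per turn: {co2_per_turn}")            # 2
print(f"CO2 per glucose (two turns): {2 * co2_per_turn}")  # 4
```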

Comprehension Checkpoint

The Krebs cycle occurs in

The energy contained in fatty acids, glucose, and amino acids is held in the various chemical bonds that keep the individual atoms together. The most common way that we store the energy harvested from those chemical bonds is within a molecule called ATP. ATP is often called the cellular currency of energy. Just like economic currency, such as a dollar bill, the energy currency of ATP can be used to “purchase” whatever reactions or activities the cell needs to perform. Energy metabolism also depends on a chemical compound called GTP, which is nearly the same as ATP. Continuing with the dollar bill analogy, one can imagine GTP as a silver dollar coin. It’s not encountered as often as the normal paper dollar, but it has the same value and the two can be exchanged easily. As biological fuel such as glucose is broken down, ATP and GTP molecules are produced at different points in the assembly line.

Krebs found that the breakdown of sugars and fats into CO2, through a cycle of many chemical reactions, produced ATP and GTP that the cell could use to drive all sorts of reactions and activities. However, something was clearly missing. For one thing, the amount of ATP and GTP was much smaller than he predicted. Scientists knew how many calories sugars and fats provided, and most of that energy was still unaccounted for in the reactions that Krebs discovered. For another, oxygen was not directly required for any of the reactions that Krebs discovered. Why was oxygen so important for harvesting the energy of sugars and fats if it wasn't required in their breakdown?

Looking more closely at the chemical bonds of food molecules, scientists realized that the energy in those bonds is actually carried by the electrons that form them. Each bond is made by a pair of electrons, and depending on how the various atoms are arranged, the electron pairs can hold different amounts of energy. In the course of chemical reactions that harvest the energy from the food molecules to form ATP and GTP, the energy-holding electron pairs are physically transferred from one chemical compound to another, and the various compounds that carry the electrons around are called electron carriers.

During the course of the Krebs cycle, two compounds are produced that do not feed back into the cycle. They are not ATP or GTP, and their function was not obvious to Krebs. These compounds were NADH and FADH2, made from their precursors NAD+ and FAD, as shown in Figure 7 (learn more about these compounds in our Energy Metabolism II module). Krebs knew that NADH was also made in glycolysis, so he suspected that finding out what they do would probably answer the question of where the rest of the ATP in cellular respiration comes from.

Figure 7: A diagram of the action inside a mitochondrion, showing the Krebs cycle (also called the citric acid cycle) and the electron transport chain. image © RegisFrey

The discovery of the Krebs cycle would earn Hans Adolf Krebs the Nobel Prize in Physiology or Medicine in 1953, and five years later a knighthood. Even though Krebs did not discover the next phase of cellular respiration – oxidative phosphorylation – his work with the urea cycle and the Krebs cycle probably helped to inspire those discoveries, because, as it turned out, oxidative phosphorylation was also a kind of assembly line.

Food fuels our bodies, but how does our body convert food molecules into usable energy? This module looks at glycolysis and the Krebs cycle, two important stages of cellular respiration, the process by which cells harvest energy from food. It highlights the work of Sir Hans Adolf Krebs and his focus on cyclic pathways as he discovered the main biochemical pathway for breaking down fuel to produce energy.

Key Concepts

  • In a cell, chemical compounds are put together, taken apart, and moved around through pathways that resemble moving assembly lines.

  • The main types of biological macromolecules that cells use for fuel are sugars, fats, and proteins.

  • The main biochemical pathway where the breakdown of biological fuels comes together is called the Krebs cycle. Named for its discoverer, Sir Hans Adolf Krebs, this pathway is like a circular assembly line.

David Warmflash, MD, Nathan H Lents, Ph.D. “Energy Metabolism I” Visionlearning Vol. BIO-4 (3), 2015.




Genetics

by David Warmflash, MD, Nathan H Lents, Ph.D., Bonnie Denmark, M.A./M.S.

In science, people often have great insights, but they lead to important advances only if science has already laid the foundation for them to be tested. Just as Leonardo da Vinci designed a helicopter-like machine more than 400 years before there would be engines that could make it fly, so was the work of early geneticists like Gregor Mendel and Archibald Garrod too revolutionary to be accepted when it was first shared in the scientific community. Mendel’s ideas on the laws of inheritance were not recognized as truly groundbreaking until after his death. Likewise, when Archibald Garrod posited that certain diseases were inherited from parents, science had no way to understand or test his hypothesis.

In Garrod's time, the genetic work of Gregor Mendel had only recently been rediscovered (see our Mendel and Inheritance module for more information). Through painstaking research, Mendel had shown that traits were passed down from parent to offspring (Figure 1), with some traits being dominant (showing up in the offspring, even if only one parent carried them) and others being recessive (able to remain hidden and skip generations), but nobody knew why this happened. How could someone inherit blue eyes when both parents had brown eyes? Even Mendel was clueless and proposed an almost spiritual mechanism.

Figure 1: A Punnett square showing the F1 cross of two plants with alleles Tt. As Mendel observed, 3/4ths of the offspring possess at least one copy of the dominant tall allele T, while 1/4th of the offspring possess two copies of the recessive short allele t.
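A Punnett square like the one in Figure 1 amounts to enumerating allele combinations, which takes only a few lines of code. The sketch below crosses two Tt parents, following the allele letters in the figure, and recovers the 3:1 tall-to-short ratio.

```python
from collections import Counter
from itertools import product

# Enumerate a Tt x Tt cross, as in Figure 1. 'T' (tall) is dominant;
# 't' (short) is recessive.
parent1 = ["T", "t"]
parent2 = ["T", "t"]

# Each offspring receives one allele from each parent.
offspring = ["".join(sorted(pair)) for pair in product(parent1, parent2)]
print(Counter(offspring))    # Counter({'Tt': 2, 'TT': 1, 'tt': 1})

# One dominant allele is enough to show the tall trait: the 3:1 ratio.
phenotypes = Counter("tall" if "T" in g else "short" for g in offspring)
print(phenotypes)            # Counter({'tall': 3, 'short': 1})
```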

It wasn’t until 1941 that George Beadle and Edward Tatum figured out the mechanism by which genes are translated into physical traits. The process, known as “gene expression,” is the chemical pathway leading to the particular enzyme that each type of gene makes, resulting in physical characteristics. Beadle and Tatum won the Nobel Prize for their work in 1958, nearly a century after Mendel published his research on the inheritance of genetic traits.

Although genes were still completely abstract in the late 19th and early 20th centuries, researchers were starting to recognize that certain diseases ran in families. One particularly devastating condition manifested itself with a range of symptoms in the central nervous system. Afflicted infants looked normal at birth, but gradually developed mental and physical retardation, leading to paralysis, blindness, deafness, and ultimately death, usually by age three. The disease has been around for ages, but only in the 19th century had medicine advanced enough to recognize it. Various technological advances by 19th century lens grinders allowed for major improvements in telescopes and microscopes, leading to some well-known discoveries in astronomy and biology. Alongside those improving telescopes and microscopes came a new invention: the ophthalmoscope. That’s the instrument that doctors use to examine the retinas of your eyes. It was invented in 1851, and by the 1880s was already the most important tool for ophthalmologists. Using one to examine a child with mental and physical retardation whose vision was also deteriorating, Waren Tay, a British ophthalmologist in London, noticed something in the retina that was not supposed to be there. He called it a “cherry red spot” (Figure 2), and in his report for a medical journal he noted that the child was Jewish.

An ocean away from Tay’s London practice, a New York pediatric neurologist, Bernard Sachs, was being sent all of the unusual neurologic cases in the city. Many of the patients were part of a new wave of immigrants to the city that included massive numbers of Jews from Central and Eastern Europe. After seeing a few cases of this progressive physical and mental deterioration, Sachs began looking at the brains of children who had died. Observing the same kind of swelling in the nerve cells from autopsy samples, Sachs came to realize that the patients were afflicted by the same disease. By questioning the parents of the children to see if they recalled stories of similar cases in their villages back in the old country, he also figured out that the condition ran in families of Jews. Calling the condition infantile amaurotic familial idiocy, Sachs noted that it skipped generations, usually more than one generation at a time, before showing up in another infant in a later generation of the same family.

Figure 2: The "cherry red spot" as observed by Tay in his work as an ophthalmologist. image © Jonathan Trobe, M.D., U. Michigan

Eventually, Tay and Sachs (Figure 3) realized that they were studying the same condition. Today, it's called Tay-Sachs disease, and the cherry red spot that Tay saw in his ophthalmoscope is a telltale sign. In reporting that the disease skipped generations, Sachs was implying that it displayed what Mendel termed a "recessive factor." Just like the recessive shapes and colors of Mendel's pea pods, and just like blue eye color or straight hair, Tay-Sachs disease is caused by a specific version of a gene, but it appears only if two copies of that version are present. While Sachs did not express his observations in Mendel's terminology, this was around the time when Mendel's laws were being rediscovered. Along with new instruments and methods shaping early 20th century science, those rediscovered laws beckoned to a new generation of geneticists.

Figure 3: The two scientists behind the discovery of Tay-Sachs disease: Bernard Sachs (l), a New York pediatric neurologist, and Waren Tay (r), a British ophthalmologist.

But Mendel's laws do not explain how dominance and recessivity work. How could it be that an infant gets a terrible disease, dies in childhood, and therefore never grows up to have children, yet the same disease later reappears in a nephew or niece, or in a grandchild of the infant's sibling? What is the path from a particular gene to the manifestation of a certain disease, condition, or trait? This process would not be understood until researchers had spent several more decades investigating the mechanism of gene expression.

Comprehension Checkpoint

Tay-Sachs disease

Not far from Tay’s ophthalmology clinic in London, another medical doctor was conducting research that would lead to a major breakthrough in our understanding of gene expression. Archibald Garrod was studying people with a handful of medical conditions. Much more benign than Tay-Sachs disease, the conditions that piqued Garrod’s interest did not kill the patients as toddlers. At the same time, each condition came with a telltale trait. One condition that Garrod studied, called albinism, leaves children with no pigment in their hair, eyes, or skin. He was also fascinated by cystinuria, a condition characterized by frequent urinary stones beginning in early adulthood, and another condition that produces urine that darkens when left standing. Known as alkaptonuria, it typically is discovered after a parent notices dark stains in an infant’s diapers. Realizing that urine provided an easy way to study the chemistry of the body, Garrod also took urine samples from people whose health seemed perfectly normal. In doing so, he discovered another condition, called pentosuria, whose only sign is the presence of a certain kind of sugar in the urine.

Some of Garrod's conditions also produce other effects that were not so easily recognized in those days. People with alkaptonuria, for instance, often develop trouble in the large joints, the disks of the spine, and the heart valves as they age. But most of these problems appear only in adulthood, after patients have grown up and had children of their own, and in the case of pentosuria there are no known detrimental effects on health at all. These features made investigating family connections much easier for Garrod than for Sachs.

Knowledgeable of the newly rediscovered Mendelian laws, Garrod hypothesized that a single recessive gene was the cause of each condition, and that the gene was passed down in particular family lines (Figure 4). For instance, though pentosuria was the most benign of the four conditions that Garrod studied, it had something in common with the deadly Tay-Sachs disease; namely, it ran in Jewish families. Going beyond Mendel and Sachs, however, Garrod also suggested that for each condition, a recessive gene caused a deficiency of an enzyme whose normal role was to create, break down, or modify a particular chemical.

Figure 4: Garrod theorized that the diseases he studied were inherited from the parents. He correctly believed that a recessive gene in the affected children was causing an enzyme deficiency. image © Cburnett

It was an amazing stroke of insight, for Garrod was correct. Manufactured in all plant and animal cells, enzymes are catalysts that speed up biochemical reactions that would otherwise proceed far more slowly. Enzymes are vital to many of the body's critical functions; without them, organisms would not be able to survive and function. These predominantly protein-based molecules perform very specific tasks within the body (Figure 5). Understanding their role was key to a new understanding of gene expression.

Each of the four conditions that Garrod studied really does result from a problem with a single enzyme. The same enzyme problem that causes dark urine in alkaptonuria also affects cartilage and other connective tissues throughout the body, thereby affecting the joints, spinal discs, and heart valves. This happens because the one enzyme that’s affected in alkaptonuria happens to control the breakdown of two of the 20 amino acids that life-forms use for just about everything. Garrod didn’t work out this amino acid chemistry, but studying the families of patients with alkaptonuria and the other abnormalities, he developed a concept called inborn errors of metabolism. The hypothesis was way ahead of its time, yet Garrod had no way to test it, so it did not catch on during his lifetime. Unlike da Vinci’s helicopter, though, Garrod’s vindication lay not four centuries into the future, but a mere four decades.

Figure 5: This diagram shows how enzymes enable biochemical reactions to move forward by catalyzing a single reaction.

Comprehension Checkpoint

Garrod proposed that

During the late 1930s, the final years of Garrod's life, George Beadle was a young geneticist at Stanford University, where he was doing research on the fruit fly Drosophila melanogaster. Fruit flies, like humans, have noticeable differences in eye color that follow Mendelian inheritance. Using radiation to damage the Drosophila genes – whatever they were, for nobody yet knew their physical basis – Beadle was able to show that genes were related to eye color through a series of chemical reactions. Still, he couldn't be sure whether the idea applied to a wide range of traits and to life in general, or merely to eye color in fruit flies.

Teaming up with biochemist Edward Tatum in 1940, Beadle set aside the fruit flies in favor of Neurospora crassa, a type of bread mold (Figure 6). Like peas and people, fruit flies have two sets of chromosomes that carry genes for different characteristics. Thus, two genes encode the information for each trait, which is what gives rise to dominance and recessivity. Unlike fruit flies, N. crassa can produce little reproductive structures called spores that carry just one set of chromosomes, so dominance and recessivity do not come into play. N. crassa offered another advantage as well. Studying fruit fly genetics, Beadle had to look at physical effects, like eye color, and picking up flies with tweezers is time-consuming. N. crassa spores, on the other hand, could be placed on top of a solidified nutrient-filled gel, and Beadle could simply observe whether or not the spores grew.

Since the answer was either "growing" or "not growing," they could screen hundreds of gel-filled plates, each with a spore. The nutrient gel contained only the minimal set of essential compounds that the spores normally needed to grow (sugar, certain salts, and a vitamin called biotin). All other important chemical compounds the spores could make themselves, using the essential nutrients supplied in the gel as starting compounds. By using different nutrient mixtures, Beadle could observe whether a particular spore needed an extra nutrient, an ingredient not usually included in the gel since normal spores can make it themselves. Any spore needing an extra nutrient in order to grow could be considered abnormal. In genetics, these spores are called mutants, while the others (those able to grow with no extra ingredients) are called the wild type. If Garrod was right and each gene produced a certain enzyme, then damaging the gene for the enzyme that an organism used to make nutrient X would create a mutant organism that could grow only if nutrient X was supplied from the outside.

Are not kinked, and thus pack closely together, making animal fats solid at room temperature.
Figure 6: Neurospora crassa, a type of red bread mold studied by Beadle and Tatum. image © Jamie Cate

Although Beadle and Tatum did not know the physical basis of the genes, they were certain that each organism carried a great many genes, probably thousands. In that case, how could they hope to create a mutant that depended on one and only one particular nutrient simply by zapping the organism with radiation? Actually, they weren't sure that they could, but they understood the power in numbers. Like hoping to draw the queen of hearts from a shuffled deck of cards, you might get lucky if you try enough times. Likewise, when you can spread mold spores on a series of culture plates, you have more chances than when you're picking up flies with tweezers. Even so, the scientists thought it could be a long shot, and so they made a deal. They would irradiate sample after sample and check mutant after mutant to see whether they could grow with the addition or absence of particular nutrients. But they would set a limit of 5,000 attempts. If they got to that point without creating the mutant they needed, they would give up.
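
To get a feel for the power in numbers, consider a back-of-the-envelope calculation. If each irradiated spore has some small chance m of carrying a usable single-nutrient mutation, the chance of finding at least one mutant in n attempts is 1 - (1 - m)^n. A minimal Python sketch (the per-spore rates below are purely hypothetical; Beadle and Tatum did not know the true rate):

    def chance_of_at_least_one(m, n):
        """Probability of at least one success in n independent tries,
        each with per-try success probability m."""
        return 1 - (1 - m) ** n

    # Illustrative rates only -- the true per-spore rate was unknown.
    for m in (0.001, 0.01):
        print(m, round(chance_of_at_least_one(m, 5000), 4))
    # Even a 0.1% per-spore chance makes success within 5,000 tries
    # near-certain (about 0.99).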

But they never had to give up, because after just a few hundred attempts they found a mutant that needed just one ingredient added to the usual growth mixture. That extra nutrient was arginine, one of the 20 amino acids that life-forms use as building blocks to make proteins. Normal N. crassa can make its own arginine, but Beadle and Tatum were able to create four different molds that could survive only when given arginine in their food. Using these strains, they were able to trace the chemical pathways connected with the mutated genes of the strains, ultimately demonstrating that each enzyme was made by one particular gene. Published in 1941, it was a milestone discovery that eventually would earn Beadle and Tatum the Nobel Prize. Their discovery was not limited to bread molds, for gradually it became clear that Tay-Sachs disease, all four of the conditions that Garrod studied, and a host of other familial diseases were due to recessive gene mutations.

Table 1: Neurospora crassa Experiment Growth Data. Normal N. crassa (aka, the "wild type") can make its own arginine, but Beadle and Tatum were able to create four different molds (ARG-E, ARG-F, ARG-G, and ARG-H) that could only survive when given arginine in their food.
Mutant strain   No supplement   Ornithine   Citrulline   Arginino-succinate   Arginine
Wild type       +               +           +            +                    +
ARG-E           -               +           +            +                    +
ARG-F           -               -           +            +                    +
ARG-G           -               -           -            +                    +
ARG-H           -               -           -            -                    +
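
The reasoning behind Table 1 can be made mechanical. In the minimal Python sketch below (the data structures and names are ours, purely for illustration), a strain's defective enzyme sits just before the first pathway intermediate that rescues its growth, so strains rescued by more intermediates are blocked earlier in the pathway:

    # Growth data from Table 1: True = the strain grows when the given
    # compound is supplied. Columns follow the pathway order.
    supplements = ["nothing", "ornithine", "citrulline",
                   "argininosuccinate", "arginine"]
    growth = {
        "ARG-E": [False, True,  True,  True,  True],
        "ARG-F": [False, False, True,  True,  True],
        "ARG-G": [False, False, False, True,  True],
        "ARG-H": [False, False, False, False, True],
    }

    # The first compound that rescues a strain is the product of that
    # strain's broken enzyme.
    for strain, row in growth.items():
        rescue = supplements[row.index(True)]
        print(f"{strain}: its mutated gene's enzyme makes {rescue}")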

Comprehension Checkpoint

Beadle and Tatum worked with the Neurospora crassa bread mold instead of fruit flies because

How does it work? Usually, for converting and breaking down chemicals in the body, enzymes are in fairly good supply. In organisms like humans and peas, having two genes for everything, including enzymes, means that you have a backup. If one gene of a pair is mutated and produces a defective enzyme or no enzyme at all, the individual still has the other gene, which makes enough enzyme to break down the chemical, convert the chemical to something else, or do whatever the enzyme does. Only an individual with two genes for the defective enzyme of alkaptonuria actually has the disease, just as two genes for a defective pigment are needed for a person to be an albino. A similar thing happens with human eye color. The gene for brown irises (the colored part of the eye) produces a dark pigment which, if absent, leaves the iris blue. The dark color shows up, eliminating the blue, even if the individual has only one gene for brown eyes, which is why brown eyes are dominant. Blue eyes are recessive, because having them means you have no brown pigment at all, which only happens if both of your pigment genes are defective (Figure 7).

Figure 7: A Punnett square showing how eye color is inherited. Here, a brown-eyed parent and a blue-eyed parent produce 50% children with brown eyes (a dominant trait) and 50% children with blue eyes (a recessive trait). image © Purpy Pupple
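
The cross in Figure 7 can be reproduced by simple enumeration. Here is a minimal Python sketch (the allele symbols B and b follow the figure; the code itself is ours, purely for illustration):

    from itertools import product

    # Cross a heterozygous brown-eyed parent (Bb) with a
    # homozygous blue-eyed parent (bb).
    offspring = ["".join(sorted(pair)) for pair in product("Bb", "bb")]

    for genotype in sorted(set(offspring)):
        share = offspring.count(genotype) / len(offspring)
        phenotype = "brown" if "B" in genotype else "blue"
        print(f"{genotype}: {share:.0%} -> {phenotype} eyes")
    # Bb: 50% -> brown eyes; bb: 50% -> blue eyes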

The principle also carries over to Tay-Sachs disease. Today, we know the disease is caused by an inability to break down a category of chemicals called lipids, specifically a special type of lipid called GM2 ganglioside. Extremely important in the membranes of nerve cells, or neurons, GM2 ganglioside is broken down by an enzyme called hexosaminidase A (HexA). GM2 ganglioside gradually accumulates in the neurons of a child who makes no HexA, but the accumulation takes time, which is why newborns with Tay-Sachs appear normal. Over months, however, the accumulating GM2 ganglioside causes the neurons to swell. Since the retina of the eye is made of the same kind of neurons that are in the brain, the retina swells in a particular pattern, and that's what causes the cherry red spot that's characteristic of Tay-Sachs disease, and eventually blindness. Similar swelling throughout the brain causes all of the other symptoms, and finally death.

Comprehension Checkpoint

A person who is an albino or who has the disease alkaptonuria must have _________ for the defective enzyme that causes these conditions.

Beadle and Tatum's demonstration that a defective gene leads to a defective enzyme proved that enzymes were made as a consequence of genes. Published in 1941, this was a watershed discovery in genetics that allowed other researchers to home in on the physical basis of genes and on gene expression, the chemical pathways from genes to protein enzymes. The particular arrangement of atoms within a gene allows for storage of information; when that stored genetic information is used to make enzymes, the gene is expressed. Beadle and Tatum thus set the stage for later researchers to discover how the gene expression process worked. The stories of those other researchers are recounted in the modules that focus on the genes and on each phase of the process leading to the manufacture of their products.

Gene products include more than just enzymes, but enzymes were the first gene products to be understood. All of the enzymes affected in genetic diseases, like those studied by Garrod, Tay, and Sachs, are proteins. Immensely versatile and complex, proteins are built from amino acids, which are linked in a chain called a polypeptide. When properly folded, one or more polypeptides form a protein. In addition to serving as enzymes, proteins take on a variety of roles, from providing structure for biological tissues to carrying important molecules around the body and in and out of cells.

Using the Beadle-Tatum discovery as a starting point, biologists during the 1940s and 50s figured out not just that DNA carried the genes, but also how those genes were replicated and passed from generation to generation. Soon after that, they learned that amino acids were put together into polypeptides using a set of rules called the genetic code, which is nearly the same for all life-forms on Earth. They also learned that cells do not translate the genetic information in DNA directly into chains of amino acids. Instead, molecules called RNA must be made as intermediaries along the way from DNA to the polypeptides that fold into proteins. The process of using DNA to make RNA, and then RNA to make polypeptides, is one-directional. Never is the sequence of amino acids of a polypeptide used as a message for making either RNA or DNA, and only in certain viral infections is RNA ever used to make DNA. Known as the Central Dogma of molecular biology, this one-way process is universal to all organisms.

The one-directional nature of the movement of genetic information, which scientists came to understand from the 1950s-1970s, rests upon Beadle and Tatum’s 1941 watershed discovery. Before anyone could identify DNA as the physical basis of genes, and before anyone could reveal the chemistry carrying the genetic messages from DNA to RNA, and finally to the amino acids that make a polypeptide, somebody had to show what chemical product genes were actually affecting. And in the case of the various inherited diseases, it was enzymes. This supported the hypothesis of “one gene, one enzyme,” which was expanded to “one gene, one polypeptide” after it was realized that non-enzyme proteins were also made using genes, and eventually it had to be expanded again. Although many RNA molecules carry actual genetic messages from DNA that are used to make polypeptides, the job of some other RNA molecules is to help with the process. Since the helper RNA molecules also are made from genes, it’s not accurate to say that all genes code for some kind of protein product. Thus, today we say "one gene, one RNA."

Even before Beadle and Tatum could prove "one gene, one enzyme" with their painstaking bread mold experiments, somebody had to imagine a connection between genes and enzymes in the first place. Poor Archibald Garrod had died in 1936, just five years short of the Beadle and Tatum publication that vindicated the idea of inborn errors of metabolism. But Beadle and Tatum did remember Garrod: Beadle named him in his 1958 Nobel Prize acceptance as the ultimate inspiration for their work.

Through a look at the devastating Tay-Sachs disease and other hereditary conditions, this module explores the connection between genes and enzymes. The role of dominance vs. recessivity is examined. The module traces developments in our understanding of gene expression, starting with a rediscovery of Mendel’s laws of inheritance and built upon by the pioneering work of later scientists. The module introduces the Central Dogma of molecular biology, which is the one-way process of using DNA to make RNA and RNA to make proteins.

Key Concepts

  • Genes cannot be used directly by organisms. The information stored in genes must be used to make products, such as enzymes, that cells need to perform different functions. Gene expression is the chemical pathway from genes to the gene products, such as proteins, that organisms can use.

  • Since organisms have two genes for everything, even if one gene of a pair produces a defective enzyme or no enzyme at all, the other gene in the pair will make enough enzyme to do its job. Only an individual with two genes for a defective enzyme will actually show the recessive trait, such as an inherited disease or condition, blue eyes, or a recessive pea pod shape.

  • In the mid-1900s, George Beadle and Edward Tatum showed that a defective gene leads to a defective enzyme. Their “one gene, one enzyme” hypothesis was later expanded to “one gene, one RNA."

  • The genetic code is the set of rules that combines amino acids to form polypeptides and is nearly the same for all life-forms on Earth.

  • The genetic code is not a way for cells to translate genetic information in DNA directly into chains of amino acids to make proteins. Rather, RNA molecules must be made as intermediaries along the way from DNA to the polypeptides that fold into proteins.

  • Genetic information moves in one direction, from DNA to RNA to protein. This is known as the Central Dogma of molecular biology.

  • HS-C1.5, HS-LS1.A2, HS-LS3.A1, HS-LS3.B1

David Warmflash, MD, Nathan H Lents, Ph.D., Bonnie Denmark, M.A./M.S. “Gene Expression” Visionlearning Vol. BIO-4 (4), 2015.




Population Genetics

by David Warmflash, MD, Nathan H Lents, Ph.D.

Throughout a region in the US called Pennsylvania Dutch country, where there is a large Amish population, there is also an unusually high proportion of people with a condition called Ellis-van Creveld syndrome. Bearers of this condition are short in stature and have extra fingers (Figure 1), poorly formed teeth and nails, and heart defects that can shorten their lives significantly. Although Ellis-van Creveld syndrome is extremely rare globally, affecting less than 0.1 percent of people, it afflicts over seven percent of Amish people in the United States. Rates in the specific Amish communities in Pennsylvania Dutch country are even higher.

Figure 1: People with Ellis-van Creveld Syndrome often have shorter forearms and lower legs, plus extra fingers and toes (polydactyly), malformed fingernails and toenails, and dental abnormalities. image © Darryl Leja, NHGRI

Variant forms of genes (Figure 2) are called alleles. An allele that produces a defective gene product, such as a faulty enzyme, can lead to a genetic disease that is recessive. This means that only individuals who receive defective copies from both parents are affected. Individuals with only one copy of the abnormal allele often experience no symptoms whatsoever. Occasionally, a disease-causing allele can actually confer a benefit. The classic example is the allele for sickle cell disease, which is devastating in individuals with two copies, but protective against malaria in individuals with only one copy. In human populations plagued by malaria, the sickle cell allele has thus persisted in the gene pool, a term that describes the collection of genes in the population. (For more on alleles and genes, see our module Gene Expression: An Overview.)

Some disease-causing alleles have persisted in the human population because they provide some benefit, but the allele that causes Ellis-van Creveld syndrome helps nobody. It only kills. Why, then, is it present in 7% of Amish people when it is so rare in the general population? Is there something unusual in the environment of central Pennsylvania that gives carriers of Ellis-van Creveld some advantage, like those with one sickle cell allele who are protected against malaria?

Figure 2: Illustration of genes, chromosomes, and DNA components. image © National Institute on Aging/National Institutes of Health

Questions like this puzzled early geneticists, particularly Reginald Punnett, a British researcher in the early 20th century. Charles Darwin had explained how natural selection worked as an evolutionary force, but Darwin had thought children were simply a blend of their parents. By Punnett’s time, the rediscovery of Gregor Mendel’s work (see our module Mendel and Independent Assortment) had led to the understanding that genes are the carriers of inherited traits (Figure 3).

Figure 3: Reginald Punnett developed a visual method, called a Punnett square, for understanding inherited traits. This example shows the probabilities for eye color: a brown-eyed parent and a blue-eyed parent produce 50% children with brown eyes (a dominant trait) and 50% children with blue eyes (a recessive trait). image © Purpy Pupple

Nobody knew what genes actually were, but during a lecture in 1908, Punnett was asked why a harmful recessive trait would not simply disappear over time. If a healthy gene were dominant over the version that caused a genetic disease, why would those diseases still be present in the population? Punnett realized that the answer must have something to do with genes, but he was unable to give a comprehensive answer.

Stumped, he explained the problem to his friend and cricket partner, Godfrey Harold Hardy, a mathematician who in June of 1908 published what came to be called Hardy’s Law. Decades later, it was realized that German physician Wilhelm Weinberg had figured out and published the same rule in January 1908, five months before Hardy. It thus became the Hardy-Weinberg Principle, although it turns out that an American, William Castle, actually had figured out the same thing, even earlier, in 1903. Calling it the Castle-Weinberg-Hardy Principle might be more accurate, but it’s quite a mouthful, so today biologists usually say the Hardy-Weinberg Principle, or the Hardy-Weinberg Equilibrium.

Comprehension Checkpoint

The rediscovery of Gregor Mendel's work led to the understanding that:

The Hardy-Weinberg Equilibrium describes how alleles behave in a given population, meaning a population’s gene pool. It’s called an equilibrium because the idea is that the frequencies of alleles (the variations of genes), genotypes (the alleles an individual possesses), and phenotypes (the characteristics an individual expresses due to the alleles, see Figure 4) in a population will remain constant unless the population is acted upon by a force. If this reminds you of Newton’s First Law of Motion, you have the right idea.

Figure 4: Using the example of the eye color Punnett square, the alleles, or variations of genes, are B for the dominant brown color and b for the recessive blue color. These combine to form the genotypes, the alleles an individual possesses, in BB, Bb, or bb combinations. Those with at least one dominant allele, B, have the phenotype, or expressed characteristic, of brown eyes; those with two recessive b alleles have the phenotype of blue eyes. image © Based on Punnett square image by Purpy Pupple

The Hardy-Weinberg Equilibrium is usually understood in reference to one specific gene at a time. To understand the rules of Hardy-Weinberg, it is easiest to begin by considering the case of a gene with only two possible alleles in the population, a dominant one and a recessive one. Each individual can be homozygous for either the dominant or the recessive allele (i.e., carrying two copies of the same allele), or can be heterozygous, having one copy of each.

The Hardy-Weinberg equation requires that we consider the abundance of each allele as a frequency, expressed as a decimal rather than a percentage. By convention, the dominant allele is called p and the recessive allele is called q. If the p allele has an abundance of 35% in the population, it is expressed as 0.35. Because p and q are the only two alleles, their frequencies must add up to 100%. Therefore, p + q = 1. For example, imagine an island with 50 dogs; that's 100 alleles (two alleles per dog) for a certain gene, say one that determines whether their tails are short or long. If 30 of those alleles are the recessive type (a short tail), then q = 0.3, which means that the dominant type (a long tail) has p = 0.7.

The equation p + q = 1 speaks only about the frequencies of the individual alleles. However, each individual carries two alleles, one from each parent, and the frequency of a genotype is found by multiplying the frequencies of its two alleles. For an individual to end up with two dominant alleles, like a dog with two of the long tail alleles from the example above, we multiply p x p. This gives us p2. The frequency of the homozygous recessive genotype, the dog with two short tail alleles from the example, would likewise be q x q, or q2.

Calculating the frequency of heterozygotes (those having a copy of each allele, like a dog with one long and one short tail allele) requires one extra step. We must multiply the frequency of the dominant allele, p, by that of the recessive allele, q, but we must also multiply this by a factor of 2. Why? Because there are two possible ways to become heterozygous: An individual can receive the p allele from one parent and the q from the other, or receive q from the first parent and p from the other. Therefore, the frequency of heterozygotes in the population is p x q x 2, or simply 2pq.

This leads us to the Hardy-Weinberg Equilibrium equation. If we add up all of the homozygous dominants, plus all the homozygous recessives, plus all the heterozygotes, we should get 100%. Therefore:

p2 + 2pq + q2 = 1
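
This bookkeeping is easy to verify in code. Below is a minimal Python sketch (the function name is ours) that turns a dominant-allele frequency into the three genotype frequencies, using the island-dog numbers from above:

    def genotype_frequencies(p):
        """Return (homozygous dominant, heterozygous, homozygous
        recessive) frequencies for a dominant-allele frequency p."""
        q = 1 - p                  # allele frequencies sum to 1
        return p * p, 2 * p * q, q * q

    # Island dogs: p = 0.7 (long tail), q = 0.3 (short tail).
    p2, two_pq, q2 = genotype_frequencies(0.7)
    print(f"{p2:.2f} {two_pq:.2f} {q2:.2f}")   # 0.49 0.42 0.09
    print(f"{p2 + two_pq + q2:.2f}")           # 1.00, as required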

Comprehension Checkpoint

An individual with both types of alleles, a dominant and a recessive one, is called:

Saying that the Hardy-Weinberg principle describes an “equilibrium” is misleading, however, because the values remain constant only in a population that is not evolving. But real-life populations are always evolving. The frequencies of alleles, and thus genotypes and phenotypes, do not stay the same for long because there are always forces acting upon them. Some of the forces acting on the allele frequencies are mutation and natural selection, along with two other phenomena: gene flow and genetic drift.

Now let’s consider some of the interesting things that can happen to gene frequencies in a population.

Natural Selection occurs when one allele confers some benefit to the individuals that bear it, so that the allele is favored over time. This violates the Hardy-Weinberg Equilibrium because the frequency of the beneficial allele will increase over time. The opposite is true for an allele that harms the individuals that carry it: Its frequency will decline over time until the allele is eliminated.

Gene Flow refers to the movement of genes or alleles into or out of a gene pool. This can happen when members of a population migrate out, or when members of another population migrate in and interbreed.

Genetic Drift refers to changes in gene frequencies due to random events, which can happen very quickly, producing dramatic and sudden effects. Drift can occur when a small group becomes isolated from the larger population; this is often called the founder effect. Drift can also occur when a catastrophic event reduces a large population to a very small size. Genetic drift often shrinks the gene pool and makes it less diverse, the opposite of what happens during gene flow, when interbreeding expands the gene pool and increases genetic diversity.

Comprehension Checkpoint

When an allele confers some benefit to the individuals and is passed on over time, the genetic force is called:

Genetic drift is faster and more powerful in small populations, and this is best explained by considering the statistics of coin flipping. For each toss, you know that the chances of getting heads or tails are 50:50, but if you perform only ten flips, you probably won't get exactly five heads and five tails. It might come out 4:6 or 3:7, simply due to the randomness of how the coin lands. You could also get 2:8, 1:9, or even 0:10. The odds are against this, but it's certainly possible.

However, if you increase to 100 flips, you will probably get very close to a 50:50 ratio, even closer if you go up to 200, 400, or 1,000 flips. This is because the random factors causing heads or tails increasingly cancel each other out. The larger the number of coin flips, the more accurate the ideal prediction of 50:50 becomes. The lower the number of flips, the higher the chance of getting a strange ratio like 2:8 or 1:9.

For essentially the same reason, the frequencies of alleles are subject to wide swings when a population gets very small. Consequently, if we use the Hardy-Weinberg Equilibrium equation to calculate allele frequencies in a large population at one moment in time, the answer will be pretty accurate and will hold over several generations. However, when a population gets very small, little differences can have big impacts on the population frequencies after a few generations. This is the essence of genetic drift: The gene frequencies change over time because of random effects due to small population size. One allele may become way more frequent than another one for no other reason other than chance, like flipping 8 heads out of 10 flips.
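
The coin-flip reasoning translates directly into a simulation. The Python sketch below is a simplified model of random mating (the population sizes, generation count, and seed are arbitrary choices of ours): each generation's gene pool is built by drawing allele copies at random from the previous generation, with no selection at all.

    import random

    def drift(pop_size, p=0.5, generations=50, seed=42):
        """Follow one allele's frequency in a population of pop_size
        diploid individuals (2 * pop_size allele copies)."""
        rng = random.Random(seed)
        n_copies = 2 * pop_size
        for _ in range(generations):
            # Each copy in the next generation is drawn at random
            # from the current gene pool -- chance alone decides.
            drawn = sum(rng.random() < p for _ in range(n_copies))
            p = drawn / n_copies
        return p

    print(drift(pop_size=10))     # small: expect wide swings from 0.5
    print(drift(pop_size=5000))   # large: expect p to stay near 0.5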

In nature, it's tempting to assume that some alleles become more frequent because of natural selection, that is, because they bring some benefit to survival or reproduction, but that may not be the case. It could be pure genetic drift. While the Hardy-Weinberg Equilibrium equation can help us detect that a population has undergone some kind of change, such as genetic drift, it cannot say how or why. For that, we have to look closer.

Comprehension Checkpoint

Gene frequencies change over time because of _____ due to small population size.

The two main types of genetic drift are "bottleneck events" and "founder effects," each referring to a different mechanism by which a small population becomes reproductively isolated. Simply picturing how the neck of a bottle allows just a small fraction of the bottle's contents through in a finite amount of time gives you a clue to what bottleneck means in genetics. If you imagine that the bottle contains a gene pool, you get a still better idea.

When a population suffers a sudden catastrophic decline and is then repopulated by a small group of survivors, that’s a bottleneck event (Figure 5). The gene pool shrinks and the new frequency of alleles for each gene is different from what it was in the larger population prior to the event.

Figure 5: When a catastrophic event kills off a large portion of a population and a small group of survivors is left to repopulate, it is a type of genetic drift known as the bottleneck effect. It results in a smaller gene pool and a different mix of allele frequencies. image © OpenStax, Rice University

In this way, previously rare alleles can suddenly become common, purely by chance. This happens in nature all the time. A good example is the Northern elephant seal, which thrived in the Northern Pacific Ocean along the coasts and islands from Mexico to Alaska, but was hunted to near extinction by the 1880s. In the 1920s, however, Mexico designated an island called Guadalupe as a sanctuary for the animals. Beginning with fewer than 100 seals, the population started expanding again, so that today it numbers more than 127,000. While the population rebound is great news, this is an extreme bottleneck effect, and the seals have lost a great deal of their original genetic diversity.

Furthermore, these Northern elephant seals are now very different from their counterparts in the South Pacific that did not undergo a bottleneck. For example, Northern elephant seals have an asymmetric-looking face that is extremely rare among Southern elephant seals. This facial anomaly in the Northern seals might remind you of the Ellis-van Creveld syndrome seen in America's Amish. Genetically, the two cases are similar; both are examples of genetic drift. Neither the asymmetric face nor the allele for Ellis-van Creveld syndrome offers a survival benefit, yet both have increased in abundance in these specific populations.

The type of genetic drift experienced by North American Amish communities is not a bottleneck but a founder effect, because the Amish are not rare survivors of a large population that was mostly destroyed. Rather, they are descended from a small group of founders (Figure 6), people who left their roots in German lands and crossed the ocean. By pure chance, the small group that left Europe had a higher frequency of the Ellis-van Creveld allele than the larger population from which they came. When they became the founders of the new population of Amish in America, their descendants also exhibited that higher frequency.

Figure 6: When a portion of a population is separated, as when settlers leave for a new location, a type of genetic drift called the founder effect occurs. The separated population's genetic makeup starts to change and, over time, comes to match that of the founding men and women. image © Tsaneda

Over time, we would expect the Ellis-van Creveld condition to reduce the survival rates of those who bear it, mostly because of the heart defects. This would result in a reduction in the allele frequency, a change that could be detected using the Hardy-Weinberg Equilibrium equation. However, natural selection removes such an allele only slowly, because most copies of the allele are hidden in healthy heterozygous carriers, who pass it on before it hurts anyone.

Founder effects have been implicated in numerous recessive diseases, such as Tay-Sachs in Ashkenazi Jews, who didn't abandon Europe but became reproductively isolated due to anti-Semitism in the Middle Ages. Similar to the sickle cell allele, it's also possible that the Tay-Sachs allele conferred some health benefit in heterozygous individuals many centuries ago. This makes matters more complex, but that's a common characteristic of nature. Evolution results from the combined effect of many forces. Genetic drift is an important one, but it does not operate in a vacuum.

Comprehension Checkpoint

When a population suffers a sudden catastrophic decline and is then repopulated by a small group of survivors, it is called a:

You might be asking, "So if Hardy-Weinberg Equilibrium only holds for populations that are not evolving, and all populations are always evolving, what is it good for?" The value of the equation is twofold. First, it is useful for calculating allele and genotype frequencies for a population at a certain point in time. It may not predict the future, but it can at least help describe the present. Second, the Hardy-Weinberg principle helps us discover when a certain gene is being subjected to natural selection or some other evolutionary force. If the Hardy-Weinberg predictions do not hold, then we know that something interesting is happening to that gene in the population.

Those who remember their basic algebra will recognize the equation p2 + 2pq + q2 = 1 as the expansion of a squared binomial, more often written as x2 + 2xy + y2 = 1. It comes from the equation (x + y) = 1. When you square both sides, you get (x + y)2 = 12, and since 1 squared is just 1, this expands to x2 + 2xy + y2 = 1.

When it comes to gene frequencies, the squaring of both sides of the equation represents fertilization – the fusion of sperm and egg. The sperm and egg cells are gametes, reproductive cells having half the number of chromosomes of a body cell. When two gametes join in the creation of a new organism, the frequencies of their alleles are multiplied: The genotype frequency of the resulting individual is the frequency of the maternal allele times the frequency of the paternal allele.

Frequency of the two alleles in the population: p + q = 1

Fertilization brings two alleles together: (p + q)2 = 12

Performing the square: p2 + 2pq + q2 = 1

Remember that p and q represent the allele frequencies – the proportion of all the gene copies in the population that carry each allele. For example, earlier we noted that in the population of 50 dogs (100 allele copies), 70 copies were the long tail allele (p = 0.7) and 30 were the short tail allele (q = 0.3). Meanwhile, p2, q2, and 2pq represent the genotype frequencies – the proportions of individuals in the population with each type of genotype (homozygous dominant, homozygous recessive, or heterozygous).

This quadratic relationship is also the mathematical expression of a monohybrid cross (a cross between two parents that differ by one pair of alleles), but applied to a whole, randomly mating population. It's the same concept: The fusion of gametes brings together two alleles, and so their individual frequencies are multiplied together. (For more on Mendelian crosses, see our module Mendel and Independent Assortment.)

Because p = the frequency of the dominant allele, p2 represents the frequency of homozygous dominant individuals. In the dog example above, the frequency of p, the allele for a long tail, equals 0.7. Therefore the frequency of homozygous dominant dogs would be (0.7)2 = 0.7 x 0.7 = 0.49. And because q = the frequency of the recessive allele, a short tail, q2 represents the frequency of homozygous recessive dogs, which is (0.3)2 = 0.3 x 0.3 = 0.09.

Finally, 2pq represents the frequency of heterozygous dogs, those with both a long and a short tail allele, which is 2 x 0.7 x 0.3 = 0.42. We can check our math by making sure the three genotype frequencies add up to one: 0.49 + 0.09 + 0.42 = 1.

Comprehension Checkpoint

In the Hardy-Weinberg Equilibrium equation, the symbol q represents the:

Let's look at an example. Suppose that we have 100 rabbits, 88 of which have an agouti fur coat, a kind of blended color, which is a dominant trait. The other 12 have black fur, which is a recessive trait, so we know that those 12 are homozygous for the black fur allele. As before, q represents the frequency of the recessive allele, and we can calculate q because we know q2: Since 12 rabbits out of 100 have black fur, q2 = 12/100 = 0.12. To find q, we take the square root of 0.12, which is 0.35 (rounding to two significant figures).

Using this knowledge, we can calculate the other frequencies with the Hardy-Weinberg Equilibrium equation. If the frequency of the black fur allele is 0.35, and the frequencies of both alleles must add up to one, then p = 1 - 0.35 = 0.65. That's the frequency of the dominant allele that produces agouti fur, and by squaring that frequency we can get the proportion of homozygous dominant rabbits: p2 = (0.65)2 = 0.42. Since there are 100 rabbits total, that means 42 of them are homozygous dominant for an agouti coat.

If 42 are homozygous dominant (agouti fur) and 12 are homozygous recessive (black fur), how many are heterozygous (agouti fur with only one agouti allele)? That's 100 - (12 + 42) = 100 - 54 = 46, or 0.46 of the population of 100. The Hardy-Weinberg Equilibrium equation predicts that this frequency should equal 2pq. Let's make sure that it does: 2 x 0.65 x 0.35 = 0.455, which rounds to 0.46. The math checks out!
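
The rabbit arithmetic can also be checked step by step in code. Here is a minimal Python sketch mirroring the calculation above (rounding q to 0.35, exactly as the text does):

    import math

    total = 100
    black = 12                               # homozygous recessive (bb)

    q = round(math.sqrt(black / total), 2)   # q2 = 0.12, so q ~ 0.35
    p = 1 - q                                # p = 0.65

    agouti_homozygous = round(p * p * total)           # p2 -> 42 rabbits
    heterozygous = total - black - agouti_homozygous   # 46 rabbits
    print(agouti_homozygous, heterozygous)             # 42 46

    # Cross-check against 2pq, as the text does:
    print(f"2pq = {2 * p * q:.3f}")          # 0.455, rounds to 0.46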

Changes in the genetic makeup of a population affect the incidence of certain traits and diseases within the population. Beginning with a look at the abnormally high rate of a dangerous health condition in US Amish communities, this module explores forces that affect a population's gene pool. Among them are natural selection, gene flow, and two types of genetic drift: founder effects and bottleneck events. The Hardy-Weinberg Equilibrium equation is presented along with sample problems that show how to calculate the frequency of specific alleles in a population.

Key Concepts

  • Variant forms of genes are called alleles. Alleles can be dominant, meaning they are expressed whenever at least one copy is present, or recessive, meaning they are expressed only in individuals that receive a copy from both parents.

  • The work of Gregor Mendel on genes and inherited traits was important in the development of early genetic theories of traits.

  • In a population, the frequencies of alleles (the variations of genes), genotypes (the alleles an individual possesses), and phenotypes (the characteristics an individual expresses due to the alleles) will remain constant, or at equilibrium, unless acted upon by a force.

  • The Hardy-Weinberg Equilibrium equation (p2 + 2pq + q2 = 1) describes how alleles behave in a given population, also known as a population’s gene pool.

  • Genetic drift refers to changes in gene frequencies due to random events, which can happen very quickly, producing dramatic and sudden effects.

  • There are two main types of genetic drift: bottleneck events (when a population suffers a sudden catastrophic decline and is repopulated by a small group of survivors) and founder effects (when a new population is started by just a few members of the original population).

David Warmflash, MD, Nathan H Lents, Ph.D. “Population Genetics” Visionlearning Vol. BIO-5 (1), 2016.




Energy in Living Systems

by Nathan H Lents, Ph.D., John Nishan

Before scientists understood the process of photosynthesis, they were at a loss to explain how plants could grow and increase their mass so dramatically from what appeared to be a steady diet of water. A 17th century Flemish chemist named Jan Baptista van Helmont thought plants "extracted" the bulk of their food from soil (Van Helmont, 1841). Some later scientists assumed plants gained their weight and size from carbon dioxide, while others believed that water alone gave plants their heft.

None of these explanations, however, held up when tested experimentally. In test after test, mass lost by soil, water, and even carbon dioxide didn’t measure up to the mass gained by a growing plant. It wasn’t until Joseph Priestley’s experiments a century later that scientists began to suspect sunlight as the major contributor to a plant’s growth.

Priestley, partially credited with the discovery of elemental oxygen, found that when he placed fresh sprigs of mint leaves inside a sealed glass container, a candle would burn longer than if the leaves were not there (Figure 1). He also found that a previously extinguished candle would reignite inside a sealed jar – sometimes days after it had ceased to burn – if mint leaves were present. This caused him to suspect that the leaves were somehow “refreshing” the air inside the container.

Figure 1: Priestley’s experiments suggested leaves “refreshed” the air inside a closed container.

Several years later, a Dutch scientist named Jan Ingenhousz, having heard of Priestley's experiments, began to conduct experiments of his own. He submerged willow plants in water and saw that bubbles formed on the surface of the leaves. The bubbles, however, formed only when the experiment was conducted in the presence of sunlight. Ingenhousz later determined that the gas bubbles were oxygen, but he never fully understood the significance of what he had observed regarding sunlight.

Collectively, these chemists established the products and reactants of photosynthesis – water, oxygen, carbon dioxide, and light. But it took the musings of a German physicist named Julius von Mayer to put the pieces together. Von Mayer, the first to propose that "energy is neither created nor destroyed," was also the first to suggest that plants derive their energy for growth from sunlight.

Von Mayer's understanding of photosynthesis implied that the sun was the basis for all life on Earth. The sun's light energy, he said, feeds the plants that in turn feed almost every living thing on the planet. He explained photosynthesis as a process that creates organic molecules – sugars – from the inorganic molecules carbon dioxide and water (Liebig, 1841). He first articulated the equation as:

CO2 + H2O + light energy → O2 + organic matter + chemical energy

Work by other scientists helped to establish the chemical formula of the organic products of photosynthesis, which is usually simplified as a glucose molecule: C6H12O6. The properly balanced general formula for photosynthesis thus becomes:

6CO2 + 6H2O + light energy → C6H12O6 + 6O2
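
One way to see that the general formula is balanced is to count the atoms on each side. The short Python sketch below (our own illustration; the tiny parser handles only simple formulas like these, with no parentheses) confirms that carbon, hydrogen, and oxygen all come out even:

    import re
    from collections import Counter

    def count_atoms(formula, coefficient=1):
        """Tally atoms in a simple formula such as 'C6H12O6'."""
        atoms = Counter()
        for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
            atoms[element] += coefficient * int(number or 1)
        return atoms

    # 6CO2 + 6H2O -> C6H12O6 + 6O2
    reactants = count_atoms("CO2", 6) + count_atoms("H2O", 6)
    products = count_atoms("C6H12O6") + count_atoms("O2", 6)
    print(dict(reactants))            # {'C': 6, 'O': 18, 'H': 12}
    print(reactants == products)      # True: the equation balances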

Comprehension Checkpoint

Plants take energy in the form of _____ and convert it to the form of _____.

The principal product of photosynthesis (sugar) is a high-energy molecule, but the reactants (carbon dioxide and water) are low-energy molecules, so the process of photosynthesis needs an energy source to drive it. Molecules called pigments absorb energy from light. The main pigment in photosynthesis is called chlorophyll, and it exists in several different forms in different organisms. Chlorophyll a is the main photosynthetic pigment found in land plants and algae. It absorbs light mostly in the blue/violet range of the light spectrum (wavelengths of 400-450nm), as you can see in Figure 2, and to a lesser degree in the red range (wavelengths of 650-700nm). Green light is almost completely reflected by chlorophyll, giving plants their greenish hue.

Figure 2: The absorption spectrum of chlorophyll a and b.

Plants do not make equal use of all the wavelengths present in the full range of visible light – a fact first demonstrated by German plant physiologist T. W. Engelmann in 1882. He used a simple experiment to demonstrate that the blue and red wavelengths of light, in particular, were the biggest drivers of photosynthesis.

Comprehension Checkpoint

______ molecules drive photosynthesis by absorbing energy from light.

Engelmann split white light into its spectral components using a prism and shone it on a dish of liquid solution containing a photosynthetic green alga called Cladophora. He then released bacteria into the solution. The bacteria, which need oxygen to survive, migrated toward those areas in the dish where blue and red wavelengths of light were shining. Why? Because where the red and blue light was shining, the photosynthetic algae produced more oxygen due to increased photosynthetic activity. With this demonstration, Engelmann had established the first action spectrum of photosynthesis.

The absorption spectrum of chlorophyll a does not perfectly overlap with the action spectrum of photosynthesis identified by Engelmann (see Table 1). This led scientists to suspect that there are additional pigments in plants that absorb light at different wavelengths. Land plants have pigments such as chlorophyll b and carotene, while other photosynthetic organisms, like protists, have chlorophyll c alongside chlorophyll a.

Pigment                                      Peak absorbance                             Reflects
Chlorophyll a                                400-450nm                                   green
Chlorophyll b                                450-500nm                                   yellow
Carotenoids (α and β forms)                  425-475nm                                   red, orange
Phycobilins (in red algae & cyanobacteria)   wavelengths not absorbed by chlorophyll a   blue
Table 1: Three basic classes of photosynthetic pigments give plants and other photosynthetic organisms their color.

Plant pigments are classified as either chlorophylls or carotenoids. Chlorophylls reflect green light, while carotenoids reflect light in the red, orange, and yellow range; carotenoids are what give carrots their color. Carotenoids are considered accessory pigments because they cannot transfer sunlight energy directly to the photosynthetic pathway. Instead, they pass their absorbed energy to chlorophyll, which in turn transfers energy to the photosynthetic pathway.

Photosynthetic pigments are large, hydrophobic molecules embedded in protein pigment complexes called photosystems that work like antennas to collect the sun’s energy. In plants, the photosystems are embedded in the thylakoid membranes inside chloroplasts (Figure 3).

Figure 3: Chlorophyll pigments are found in thylakoid membranes inside plant cell organelles called chloroplasts.

Comprehension Checkpoint

Celery and carrots are different colors because their pigments absorb light at different wavelengths.

Photosynthesis occurs in two phases: the light-dependent reactions and the Calvin-Benson Cycle (see the Photosynthesis 1 video below). The light-dependent reactions are the first phase, in which pigments like chlorophyll harvest light energy. The Calvin-Benson Cycle then uses that energy to synthesize high-energy sugar molecules from carbon dioxide. In plants and algae, the light reactions occur within the thylakoid membranes of chloroplasts. The animation below provides an overview of photosynthesis.


Photosynthesis 1

When a photon of light (see Light I: Particle or Wave? module) strikes a pigment molecule, its energy is transferred to the pigment and one of the pigment’s electrons becomes “excited.” When excitation of an electron occurs, it “jumps” to a higher energy state. Thus, the energy of light is “captured” by the pigment in the form of an excited electron. The excited electron can hold on to this energy only for a brief time, though. If it cannot pass the energy quickly, the electron will fall back down to a low-energy state and the energy will be given off as heat.

Within a chloroplast of a leaf, however, there are many pigment molecules packed together very tightly in structures called light-harvesting complexes, which are combinations of proteins, cofactors, and pigment molecules. The pigment molecules are constantly moving in random, Brownian motion, colliding with one another. Excited pigments transfer energy to their neighboring pigments until it reaches the reaction center, as shown in Figure 4.

Figure 4: Electron excitement and energy transfer inside a light-harvesting complex.

Like the light-harvesting complexes, the reaction centers are also made of proteins, cofactors, and pigments, but there are two types of reaction centers: photosystem I and photosystem II. Photosystem I, so named because it was discovered first, is also referred to as P700 because the special chlorophyll a pigment molecules that form it best absorb light at a wavelength of 700nm. Photosystem II is also referred to as P680, because the chlorophyll molecules that form it best absorb light at the 680nm wavelength. In both cases, after either P700 or P680 becomes excited, whether by a photon or by another excited pigment molecule, one of its electrons moves to a higher energy state. The difference between these two photosystems lies in what happens next with this harnessed energy. View a video of photosystems I and II below.


Photosynthesis 2

Even though it was discovered and named second, photosystem II is actually where the story begins. When a photon of light strikes the reaction center of photosystem II, it excites an electron that leaves and begins its journey through a series of high-energy electron acceptors and donors collectively known as the electron transport chain (ETC) as shown in Figure 5. (This particular ETC is called the cytochrome ETC, after one of the members of the chain that was discovered first.)

Figure 5: Photosystem II initiates the electron transport chain and primes the proton pump for ATP synthesis.

At the same time, two water molecules bind to a water-splitting enzyme at the reaction center of photosystem II, as seen in Figure 6. When the water molecules split, ionized hydrogen atoms (H+) enter the thylakoid space. An enzyme called cytochrome b6f, the next stop in the chain after photosystem II, generates more ions for the proton pump and sends the excited electrons along toward photosystem I. As the hydrogen ions accumulate within the thylakoid space, they create the H+ gradient that drives ATP synthesis. ATP will be used for sugar synthesis later, in the Calvin-Benson Cycle.

Figure 6: Formation of O2 by photosystem II.

Oxygen atoms from the split water molecules also accumulate within the thylakoid space. Lone oxygen atoms are very reactive and rapidly combine to form molecular oxygen (O2) that is released as a waste product of photosynthesis. Yes, each molecule of oxygen that we breathe was formed in a chloroplast somewhere as an accidental by-product of the splitting of water. Electrons are at a much lower energy state at the end of the ETC than they were at the beginning of the process. They get a badly needed boost at the reaction centers in photosystem I.

Photosystem I also consists of light-harvesting complexes with many pigment molecules for capturing light energy. Light energy harvested from photons, along with intermediate-energy electrons from photosystem II, flows to the special chlorophyll a structure called P700 in photosystem I. Electrons jump up to a high-energy state when a photon arrives at P700, either directly from sunlight or through a collision with an already excited pigment.

Once re-excited to a high energy level, the electrons don’t stay for long. Excited electrons leave photosystem I and flow through another ETC, but this one, called the Ferredoxin ETC, is much shorter and does not drive ATP synthesis. The Ferredoxin ETC passes the excited electrons to the high-energy electron acceptor NADP+, which then combines with a proton (H+) from the surrounding solution and forms NADPH. NADPH then delivers high-energy electrons to the Calvin Cycle for long-term energy storage in the form of sugar (Figure 7).

Figure 7: Photosynthesis proteins embedded in a thylakoid membrane deliver high energy electrons to the Calvin Cycle and send hydrogen ions into the lumen to generate a proton gradient.
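
The last step in this path can also be written as a standard reduction equation – again, well-established chemistry rather than something stated in the module:

    NADP+ + 2 e− + H+ → NADPH

Two high-energy electrons from the Ferredoxin ETC and one proton from the surrounding solution are packaged into each NADPH molecule.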


After the energy of light is harvested as high-energy electrons held by NADPH, these electrons are then used to synthesize high-energy sugar molecules from the low-energy starting material of carbon dioxide. The Calvin-Benson Cycle used to be called “the dark reactions” because light is not directly involved. However, this name is misleading because the products of the light reactions are required to drive the Calvin Cycle. Thus, light is required, just not directly.

So far, we’ve seen how the flow of electrons in the light reactions goes like this (note: PSI and PSII stand for photosystem I and II):

PSII → cytochrome ETC → PSI → ferredoxin ETC → NADP+ → NADPH

This linear path is called noncyclic electron transport. However, not all electrons follow this linear path. Some electrons leaving PSI double back and re-enter the cytochrome ETC rather than moving on to NADP+. This is called cyclic electron flow.

PSII → cytochrome ETC → PSI → cytochrome ETC → PSI → … (cyclic electron flow)

Why would some electrons take this redundant path, flowing through the cytochrome ETC a second time after being re-energized by PSI? The answer is found by thinking about what the ETC produces – ATP. The simple noncyclic flow of electrons produces ATP and NADPH in roughly equal amounts. However, the Calvin Cycle needs more ATP than NADPH. Thus, the extra trip through the ETC that occurs in cyclic electron flow provides a little “boost” of ATP so that the Calvin Cycle has what it needs to synthesize sugars.
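
Textbook bookkeeping makes the imbalance concrete; these values are assumed from standard treatments of the Calvin Cycle rather than given in this module:

    per CO2 fixed, the Calvin Cycle consumes 3 ATP and 2 NADPH (a 3:2 ratio)
    noncyclic flow alone supplies roughly 1 ATP per NADPH (a 1:1 ratio)

Cyclic electron flow closes that gap, topping up ATP without producing additional NADPH.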

Over roughly 300 years, our understanding of photosynthesis has progressed from the mere identification of its basic products and reactants to a detailed picture of the molecular processes involved. We have summarized in this module how electrons are harvested, energized, and stored in the covalent bonds of NADPH, a process called the light reactions. In the next module, we explore the Calvin-Benson Cycle, where high-energy electrons from NADPH drive the synthesis of carbohydrates – the sugars that provide sustenance to nearly every living thing on Earth.

Through photosynthesis, plants harvest energy from the sun to produce oxygen and sugar, the basic energy source for all living things. This module introduces photosynthesis, beginning with experiments leading to its discovery. The stages of photosynthesis are explained. Topics include the role of chlorophyll, the action spectrum of photosynthesis, the wavelengths of light that drive photosynthesis, light-harvesting complexes, and the electron transport chain.

Key Concepts

  • Photosynthesis is a process by which an organism converts light energy from the sun into chemical energy for its sustenance.

  • Photosynthesis occurs in plants, algae, and some species of bacteria.

  • In plants, chloroplasts contain chlorophyll that absorbs light in the red and blue-violet regions of the spectrum.

  • Photosynthesis occurs in two stages: the light-dependent stage that occurs in the thylakoid membrane of the chloroplast and harvests solar energy, and the light-independent stage that takes that energy and makes sugar from carbon dioxide.

Nathan H Lents, Ph.D., John Nishan “Photosynthesis I” Visionlearning Vol. BIO-3 (6), 2014.
