Using Technology to Study Cellular and Molecular Biology

Teacher's Guide

Information about Using Technology to Study Cellular and Molecular Biology

1 Introduction

For society to gain the most from technology, the public must be able to understand scientific issues and consider them rationally. This point is made in the National Science Education Standards: “Because molecular biology will continue into the 21st century as a major frontier of science, students should understand the chemical basis of life, not only for its own sake, but because of the need to take informed positions on some of the practical and ethical implications of humankind’s capacity to tinker with the fundamental nature of life.”

A molecular genetic perspective affords teachers an opportunity to help integrate many of biology’s subdisciplines. This integrative process began with the advent of recombinant-DNA technology and is now being propelled by the new areas of bioinformatics and genomic biology. According to the National Science Education Standards, molecular and evolutionary biology are among the “small number of general principles that can serve as the basis for teachers and students to develop further understanding of biology.” A similar point is made in a medical context by the new Standards for Technological Literacy, which recognize that “the use of technology has made numerous contributions to medicine over the years. Scientific and technological breakthroughs are at the core of most diagnostic and treatment practices.”12

When teachers try to relate advances in technology to biology, they may be frustrated by the fact that there is a lag, measured in years, between scientific advance and its inclusion in the curriculum. For example, the polymerase chain reaction (PCR) was introduced to the scientific community in 1985. Biology teachers became aware of the technique through stories in the media and wanted to learn more about it. It was not until 1990, however, when PCR inventor Kary Mullis published an article about the technique in Scientific American, that teachers found an accessible treatment of this important technology. It took another few years for PCR to be mentioned in most high school biology textbooks. This curriculum supplement, Using Technology to Study Cellular and Molecular Biology, will help short-circuit the usually lengthy process by which technology makes its way to the classroom.

2 Major Preconceptions

Preconception 1. Study in one field proceeds without contributions from, or connections to, other fields.

This belief occurs, in part, because scientific disciplines are treated as isolated subjects in most schools. Most science educators, however, recognize the many connections among biology, chemistry, and physics, and understand the need for an integrated approach to science teaching. For example, molecular biology is a hybrid discipline, drawing upon concepts and techniques from physics, chemistry, and biology. This hybrid nature explains in part why high school students may find the study of molecular biology challenging. They are confronted by a science that is abstract and seems far removed from classical biology. Moreover, many students are introduced to the subject at a point in their education where they have yet to take a formal course in either chemistry or physics. Without this scientific foundation, they are ill-prepared to undertake the study of life at its most fundamental level.

Preconception 2. Most of what students are exposed to in science classes is about science, not technology.

Additionally, students may believe that technology means computers, rather than a way of adapting to needs or a process for solving problems. It is important for students to learn that each of the technologies covered in this supplement is a tool applied to a specific task. The supplement will help students recognize the kind of scientific information each technique can provide and gain an appreciation for the role technology has played in advancing our understanding of biological systems.


Preconception 3. Students are likely to have preconceptions about the contributions that a range of technologies has made to science and medicine, that is, about the problem-solving capacity of technology.

For example, students have probably looked at a specimen with a light microscope, and they have seen photomicrographs in textbooks. However, students have limited experience evaluating the information conveyed at the microscopic level and placing it in the proper context. Consequently, it will be important in this supplement to help students gain a perspective of the relative sizes of cellular and molecular structures. The concepts of resolution and scale can help students appreciate that structures invisible to the unaided eye, such as mitochondria, ribosomes, viruses, and protein molecules, have vastly different sizes and require different technologies for study. It is important that this supplement help students understand the need to obtain information from more than one technique to solve a problem.

Preconception 4. Structure and function are independent and unrelated concepts.

This supplement can build a foundation to address this preconception and help students understand the interdependence of structure and function. Students will explore concepts showing that technologies provide scientists with essential information about structure. The relationship between structure and function may be easier for students to grasp at the macroscopic level; they may struggle with it at the abstract level of molecules. Inquiry-based activities will allow students to learn what structure is and the many levels at which it can be defined. Through these activities, students will learn how structural information gathered at successive tiers provides progressively more information about function. Structure-function relationships are critical to understanding normal cellular processes, as well as those associated with disease. Such intimate knowledge of biomolecules promises to expand the range of drug targets, shift the discovery effort from direct screening programs to rational target-based drug design, and usher in a new era of personalized medicine. One of the activities that follow (Lesson 3, Putting Technology to Work) gives students insight into these scientific developments.

3 Scale and Resolution

3.1 Scale

How big is “big”? How small is “small”? It depends, of course, on one’s point of reference. An insect such as a bee (about 12 mm in length) is very small compared with a human (perhaps 1.7 to 2 meters in height). However, a bee is very large compared with one of the pollen grains it gathers (about 30 µm, or 0.03 mm, in diameter). While it may be easy to discern the relative sizes of some objects, such as those we can see with the naked eye, it is far more difficult to imagine the size of things that are very large or very small. For instance, how large is a light-year? Can we conceive of the difference between 10 light-years and 100 light-years? What is the distance across a cell? A virus? A protein molecule? How much larger are these than the distance between two adjacent carbon atoms in a sugar molecule? And where do we humans fit into the picture?

Figure 1. Size of some familiar objects and energy waves on a logarithmic scale.

To understand the continuum from small to large, we need a way to represent the relationship between the actual size of an object (for example, its length or mass) and how that size is characterized either numerically or visually. We need a scale, a series of ascending and descending steps to assess the relative or absolute size of some property of an object. Scales can have upper and lower values, as required. They may be linear, or, when the distance between upper and lower values is very large, they may be logarithmic. Figure 1 presents the size of some familiar objects and energy waves on a logarithmic scale.
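To make the continuum concrete, the short Python sketch below places a few objects on a base-10 logarithmic scale. The sizes are typical textbook values assumed for illustration; they are not taken from Figure 1.

```python
import math

# Illustrative sizes in meters (typical values, assumed for this sketch).
sizes = {
    "human":            1.7,
    "bee":              1.2e-2,
    "pollen grain":     3.0e-5,
    "typical cell":     1.0e-5,
    "virus":            1.0e-7,
    "protein molecule": 5.0e-9,
    "C-C bond":         1.5e-10,
}

# On a logarithmic scale, each step of 1 is a tenfold change in size,
# which keeps a ten-order-of-magnitude range readable.
for name, meters in sorted(sizes.items(), key=lambda kv: -kv[1]):
    print(f"{name:>16}: {meters:9.2e} m  (log10 = {math.log10(meters):5.1f})")
```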

Without some notion of scale, a water molecule might appear to be as large as a house if both are drawn to occupy the same physical space on a piece of paper.

3.2 Resolution

In cellular and molecular biology, we are interested in resolving the structural details of organs and tissues at the cellular level, of the intricacies of the intracellular environment, of the molecules that make up living systems, and of molecular interactions. We are interested in understanding how living systems function. How do muscles contract? How do enzyme reactions occur? How are metabolic pathways regulated? How are molecules transported from one site to another? How do antibodies recognize antigens? Answering such questions requires that we first understand molecular structure, because a molecule’s function is determined by its structure. So, how do we get information about the structure of biological molecules? Consider the following:

As we look down a street in a residential neighborhood, we note individual houses because we are capable of distinguishing the space between them. We accomplish this feat using our visual system to detect visible light. In other words, visible light is the probe we use to resolve these discrete structures. In a general sense, we can think of resolving power as a measure of the ability of a system to form separate and distinct images of two objects of a given angular separation. This relationship is derived from the laws of optics. What does this mean for the study of cellular and molecular biology? According to the laws of optics, two objects can be resolved if they are illuminated with radiation whose wavelength is no larger than the distance separating them. Visible light has a wavelength of 4,000 to 7,000 angstroms (Å; 1 Å = 10⁻⁸ cm = 10⁻¹⁰ m), or 4 to 7 × 10⁻⁷ m, and is a great probe for viewing a portion of our world. We can resolve much with the naked eye, and even more, such as cells and cell organelles, with a light microscope. However, visible light’s wavelength makes it unusable as a probe for resolving much smaller objects, such as molecules and atoms. Other probes with smaller wavelengths are required for this task.
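A minimal sketch of the rule just stated: a probe resolves two objects only if its wavelength does not exceed their separation. The wavelengths and separations below are rough assumed values for illustration.

```python
# All values in meters; rough, assumed figures for illustration only.
PROBES = {"visible light": 5.5e-7, "electron beam": 5.0e-12, "X-rays": 1.5e-10}
OBJECTS = {"adjacent houses": 1.0e1, "cell organelle": 1.0e-6,
           "protein molecule": 5.0e-9, "adjacent C atoms": 1.5e-10}

# A probe can resolve two objects only if its wavelength is no larger
# than the distance separating them.
for probe, wavelength in PROBES.items():
    for obj, separation in OBJECTS.items():
        verdict = "resolvable" if wavelength <= separation else "too coarse"
        print(f"{probe:>13} vs {obj:<16}: {verdict}")
```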

4 Major Techniques in the Study of Cellular and Molecular Biology

There is a reciprocal relationship between technology and the process of science. Improvements in technology enable scientists to investigate questions that were previously difficult, or even impossible, to address. At the same time, scientific curiosity often provides the impetus for refining an existing technology or developing a new one. This section provides a brief survey of some technologies important to the study of cellular and molecular biology. It presents a sampling of current research in cellular and molecular biology, showing that techniques that have been around for decades continue to be refined and put to new uses, sometimes in combination with other techniques.

4.1 Microscopy

The development of the microscope allowed us to extend our view to things not visible to the naked eye. Consider what our view of biological systems would be if we had no knowledge of cells and cell structure. Figure 2 depicts the development of three major types of microscopy over time.

The line for each type of microscopy shows how improvements in technology have increased the resolution available with each technique. Higher resolution means being able to see smaller objects.

Figure 2. Development and resolution of three major types of microscopy over time.

Figure 3. A tabletop optical microscope.

Optical microscopy. The first microscopes were optical microscopes, which used glass lenses to focus and magnify light. The first optical microscope was constructed around 1595 by Hans and Zacharias Janssen, but it wasn’t until 60 to 80 years later that major discoveries were made with this technology. By viewing capillaries under a microscope in 1660, Marcello Malpighi confirmed the controversial theory that blood circulates from the heart around the body and back to the heart. Also about this time, Robert Hooke is credited with discovering the cell, the basic unit of life. Antonie van Leeuwenhoek improved the lenses used in microscopes, increasing the maximum magnification from 50× to 200×. As a result, Leeuwenhoek was the first scientist to view bacteria, protozoa, and sperm cells. Additional improvements to optical microscopy over the next 200 to 300 years ultimately allowed optical microscopes to distinguish objects as small as 200 nanometers (nm; 2 × 10⁻⁷ m). This resolution is a physical limit dictated by the wavelength of light (see section 3.2).

Electron microscopy. The first electron microscope was built in 1933 by Ernst Ruska, who was awarded the 1986 Nobel Prize in Physics for his achievements in electron optics. To break the 200-nm optical-resolution barrier, Ruska used accelerated electrons instead of light and magnetic coils instead of glass lenses to make an image. Electrons have a wavelength that is 10⁴ to 10⁵ times smaller than the wavelength of light. This allows electron microscopes to resolve objects that are 10³ times smaller than the smallest resolvable object in a light microscope.
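That wavelength ratio can be checked with the standard de Broglie relation for an accelerated electron. This is textbook physics, not a formula from the supplement, and the non-relativistic form used here slightly overestimates the wavelength at the highest voltages.

```python
import math

# De Broglie wavelength of an electron accelerated through V volts
# (non-relativistic approximation): lambda = h / sqrt(2 * m_e * e * V).
h = 6.626e-34     # Planck's constant, J*s
m_e = 9.109e-31   # electron rest mass, kg
e = 1.602e-19     # elementary charge, C

def electron_wavelength_m(volts: float) -> float:
    return h / math.sqrt(2 * m_e * e * volts)

green_light = 5.5e-7  # wavelength of green light, m
for v in (100, 10_000, 100_000):  # spanning typical accelerating voltages
    lam = electron_wavelength_m(v)
    print(f"{v:>7} V -> {lam*1e12:6.2f} pm, about {green_light/lam:,.0f}x "
          f"shorter than green light")
```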

Figure 4. Resolution of three major types of microscopes.

Interestingly, although the design and physical appearance of electron microscopes have changed over the years, the essential characteristics remain the same. All electron microscopes require a high vacuum in which to form an electron beam and high voltage to control this beam. Electromagnetic lenses then focus the electron beam onto the specimen and viewing screen.

Figure 5 shows a typical transmission electron microscope (TEM). Note the much larger physical size compared with a standard light microscope, which fits comfortably on a laboratory bench. TEMs are patterned after standard transmission light microscopes and yield similar information about the size, shape, and arrangement of particles that make up a specimen, albeit at much higher resolution and with a magnification range of about 1,000× to 300,000×.

Figure 5. A typical transmission electron microscope (TEM).

The state-of-the-art TEM is the high-resolution TEM (Figure 6), which can magnify a sample up to 50,000,000 times and provide a resolution of 0.1 nm. It can produce information that complements data obtained from X-ray techniques (see section 4.2).

Figure 6. High-resolution TEM.

Figure 7. Scanning electron microscope.

In addition to the TEM, the other most common electron microscope is the scanning electron microscope (SEM; Figure 7). The SEM provides information about the surface features of an object. We learn about an object’s appearance, texture, and detectable features to within a resolution of several nanometers. Interestingly, we do not learn this information by viewing biological specimens directly. Biological specimens have low contrast and are difficult to see in the SEM. Consequently, high-contrast heavy atoms, such as osmium, are used to stain specimens and provide an indirect image of the underlying biological structures.

Resolution can be improved by modifications of the sample-preparation procedure. In a technique called cryo-electron microscopy (cryo-EM), specimens are rapidly frozen without formation of ice crystals that can distort the specimen’s structure. It is then possible to construct two- and three-dimensional models of the sample by using a computer program that averages many electron micrographs taken from different angles. When the technique was first applied to the structure of the ribosome in 1991, the resolution was just 45 Å. Still, it was possible to see the two ribosome subunits and the triangular space between them. In recent years, scientists have used cryo-EM techniques to image the ribosome to 4 Å.6–8 Studies with these techniques have revealed the surface topography of the ribosome for the first time and helped crystallographers interpret the ribosome’s diffraction patterns.
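A toy simulation (not real reconstruction software) of why the averaging step works: random noise shrinks roughly as one over the square root of the number of images combined, while the underlying signal stays put.

```python
import random

random.seed(0)
TRUE_DENSITY = 1.0  # the "real" value at one pixel of the structure

def noisy_measurement() -> float:
    # One simulated micrograph pixel: signal plus heavy random noise.
    return TRUE_DENSITY + random.gauss(0.0, 0.8)

for n in (1, 10, 100, 10_000):
    estimate = sum(noisy_measurement() for _ in range(n)) / n
    print(f"averaging {n:>6} images -> estimated density {estimate:5.2f}")
```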

Other microscopic techniques. Despite the long history of light microscopy, it is still being improved. For example, a new way to image living cells without disturbing their biochemistry has been developed. Called coherent anti-Stokes Raman scattering, the technique directs two laser beams into the cell. The frequencies of the lasers differ by exactly the frequency at which a particular chemical bond in the cell vibrates. The lasers cause the chemical bond to vibrate and emit its own characteristic optical signal. The lasers can focus on tiny volumes and, by moving through the cell interior, can create a chemical map of the cell. One disadvantage of the technique is that it takes many minutes to produce an image, which limits its ability to visualize rapid changes within the cell.

Another technique, called Fourier transform infrared (FTIR) microspectroscopy, combines microscopy with spectroscopy to provide chemical information about the sample being visualized. Samples can be analyzed wet or dry, in air, at room temperature, and at normal pressure. FTIR is of limited use for analyzing living specimens because samples must be very thin. It has proven useful in studies of pathogenesis, however. Biochemical studies of disease often fail to detect chemical compounds associated with pathology because the chemicals are diluted during analysis. FTIR can be used to pinpoint areas of disease and identify compounds in individual cells, providing insights into disease progression. The technique is currently being developed for objective evaluations of Pap smears.

Figure 8. Laser confocal microscopes produce optical sections of biological specimens one plane at a time.

Laser confocal microscopy is a valuable tool for obtaining high-resolution images and three-dimensional reconstructions of biological specimens. This technique’s major value is its ability to produce optical sections of a biological specimen that contain information from only one focal plane. By moving the focal plane of the microscope step by step through the thickness of a specimen, a series of optical sections can be obtained. The source of light for this technique is a laser, because it can produce very high intensities. The biological specimens are stained with a fluorescent probe to make a specific structure or structures visible in the presence of the laser light.

Figure 9. Laser confocal microscope.

Confocal microscopes are not large instruments. They consist of a standard microscope fitted with a confocal attachment, which houses the complicated optics package. In the example in Figure 9, the confocal attachment is mounted on top of the upright microscope. Also necessary are a large box of electronics, a laser, and a computer for collecting and analyzing data.

Laser confocal microscopy is being used now to study the spatial and temporal organization of the DNA-transcription apparatus. Three-dimensional reconstructions suggest that splicing factors are stored in specific areas of the nucleus. When DNA templates are introduced, these factors are recruited to sites of transcription in an intron-dependent fashion. The movement of proteins within the nucleus is also being studied using confocal microscopy.20

Research indicates that proteins move rapidly throughout the nucleus in an energy-independent manner. Studies such as these are helping scientists understand nuclear architecture and how nuclear processes are organized in the cell.

While electron microscopes require that samples be carefully prepared and examined in a vacuum, a new family of microscopes can achieve electron microscope resolution in air or even liquid, and they require much less sample preparation. They have even been used to study living cells. These are called scanning probe microscopes (SPMs). These instruments use a microscopic needle-like probe (3 to 50 nm at the tip) that is scanned back and forth across a surface. A three-dimensional image is constructed from the recorded interactions between the probe and the atoms in the sample. The SPM has the ability to operate on a scale from micrometers to nanometers. It can magnify an object up to 10,000,000 times. In the laboratory under ideal conditions, the SPM can be used to look at individual atoms. Furthermore, SPMs can measure properties that other microscopes cannot, such as thermal properties, friction, hardness, magnetic properties, and extent of chemical binding.
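As a rough schematic of how a raster-scanned probe builds an image, the sketch below samples an invented surface over an (x, y) grid and renders the recorded heights as characters. The surface function and all values are made up for illustration.

```python
import math

def surface_height(x: float, y: float) -> float:
    # Invented sample: a single molecule-sized bump centered on the grid.
    return math.exp(-((x - 5) ** 2 + (y - 5) ** 2) / 4)

# Raster the "tip" over an 11 x 11 grid, recording one height per point.
image = [[surface_height(x, y) for x in range(11)] for y in range(11)]

SHADES = " .:-=+*#@"  # taller points render as denser characters
for row in image:
    print("".join(SHADES[int(h * (len(SHADES) - 1))] for h in row))
```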

Figure 10. Molecules of the protein GroEL viewed with a scanning probe microscope (SPM). (Reprinted with permission from Zhifeng Shao, University of Virginia. Posted at http://www.people.virginia.edu/~zs9q/zsfig/random.html.)

Figure 11. Myosin molecules viewed with an SPM. (Reprinted with permission from Zhifeng Shao, University of Virginia. Posted at http://www.people.virginia.edu/~zs9q/zsfig/myosin.html.)

4.2 X-ray crystallography

X-ray crystallography of proteins is a prime example of the multidisciplinary approach to technology development, combining as it does chemistry, physics, and biology. It was designed to determine protein structure and, in so doing, provide information about how proteins actually function in cells. This technology, like the microscopic techniques described above, continues to evolve. Beyond providing detailed information about protein structure, X-ray crystallography is also being used to design better medicines for treating serious diseases.

Resolving the structure of biomolecules requires visualizing individual atoms, which are only 1 to 3 Å apart when joined to form molecules. Resolving carbon, oxygen, and nitrogen atoms therefore requires a probe with a wavelength of less than 2 Å. Light, with a wavelength of 4,000 to 7,000 Å, cannot be used for this task. The wavelengths of X-rays, like those of electrons, are short enough that they are scattered by the electron clouds of molecules and can thus reveal a molecule’s shape. Furthermore, X-ray techniques have some advantages over electron microscopy for determining the structure of biomolecules such as proteins. For instance, an electron beam damages its target after a short exposure because it is powerful enough to break chemical bonds. Electron microscopy is limited to resolving biomolecules to about 7 Å at best, whereas X-ray crystallography can resolve biomolecular structures to better than 1 Å in some cases.

In X-ray crystallography, X-rays, with wavelengths of the same order of magnitude as the spacing between atoms, are directed through a crystal of the substance under study (Figure 12). The X-rays are bent (or diffracted) by the electrons surrounding the atoms in the crystal. Each diffracted X-ray is represented as a spot, whether recorded on film or electronically by a detector.

A single molecule will not produce a detectable diffraction pattern, so crystals containing many millions of identical molecules in a regular pattern are used to amplify the signal. Once the positions and intensities of the diffraction spots have been measured, these data can be used to calculate an electron-density map. Because there are thousands of spots to analyze, sophisticated computer programs and high-speed computers are needed to convert the patterns of spots into electron-density maps. The maps display contour lines of electron density, producing an image of the electron clouds of the molecule being studied. Because electrons surround atoms more or less uniformly, it is possible to determine where atoms are located by examining these maps. By rotating the crystal and generating an electron-density map for each angle of rotation, it is possible to produce a three-dimensional model of the molecule. If the amino acid sequence of a protein is known, an accurate model of the protein can be generated by fitting the atoms of the known sequence into the electron-density map.
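The geometry behind those spot positions is usually summarized by Bragg's law, n·λ = 2d·sin θ. The law is standard crystallography rather than something spelled out in the supplement; given the X-ray wavelength and a diffraction angle, it recovers the spacing d between repeating planes of atoms in the crystal.

```python
import math

def bragg_spacing_A(wavelength_A: float, two_theta_deg: float, n: int = 1) -> float:
    # Bragg's law: n * lambda = 2 * d * sin(theta), solved for d.
    theta = math.radians(two_theta_deg / 2)
    return n * wavelength_A / (2 * math.sin(theta))

WAVELENGTH = 1.54  # angstroms; Cu K-alpha, a common laboratory X-ray source
for two_theta in (20, 40, 60):  # example diffraction angles, degrees
    d = bragg_spacing_A(WAVELENGTH, two_theta)
    print(f"2-theta = {two_theta:2d} deg -> plane spacing d = {d:4.2f} Å")
```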

Figure 12. The X-ray crystallography process.

Figure 13 shows a typical diffraction pattern for a single orientation of a protein crystal through which an X-ray beam has been passed. Note the different positions and intensities of the spots, which mark the locations where scattered X-rays have struck the detector. The image is divided into quadrants because the detector was composed of four separate, adjacent modules. The white circle to the right of center with the white line extending to the left is a shadow resulting from a “beamstop.” The beamstop is a small piece of lead mounted on a metal arm. It prevents the intense beam of unscattered X-rays from impinging on and damaging the detector.

Figure 14 shows a three-dimensional model of a protein that was crystallized and then analyzed by X-ray crystallography.

Equipment used in X-ray crystallography continues to undergo development and refinement.

One of the most striking advances has been the use of synchrotron X-rays, which are produced by bending the particle beams generated by large accelerators. In a synchrotron, charged particles such as electrons or positrons circulate around a path nearly a mile in circumference, which must be maintained under vacuum. Understandably, synchrotrons are quite expensive to build and maintain, and there are fewer than 20 in the world. Because synchrotron X-ray beams are many orders of magnitude brighter than the usual laboratory X-ray sources, data for single crystal orientations can be collected with exposures of a minute or less, rather than exposures of several minutes to an hour.

Figure 13. A typical X-ray diffraction pattern for a single orientation of a protein crystal through which an X-ray beam has been passed.

Figure 14. Three-dimensional structure of the DNA-repair protein MutY as determined by X-ray crystallography. (Graphic produced from information available at http://www.rcsb.org/pdb/.)

The completion of the Human Genome Project has provided the foundation for explosive growth in structural biology. Technological advances in X-ray crystallography have greatly reduced the time and effort required to solve structures. In addition to synchrotron X-rays, advances include faster X-ray detectors, improved computational methods for processing data, and robotics for growing and handling crystals. Structure determinations that used to involve a 20-person, yearlong effort now constitute a single chapter in a graduate student’s thesis. The Protein Structure Initiative, reminiscent of the Human Genome Project, aims to produce the three-dimensional structures for the estimated 1,000 to 5,000 distinct spatial arrangements assumed by polypeptides found in nature. Such high-throughput data collection is best suited to X-ray crystallography using synchrotron radiation. A modern synchrotron source can reduce total data collection to just 30 minutes, as compared with weeks using earlier X-ray–diffraction equipment.

Determining structures by X-ray diffraction continues to add to our understanding of DNA replication and protein synthesis. For example, scientists recently studied the crystal structures of a bacterial DNA polymerase I that had DNA primer templates bound to its active site.13

The enzyme was catalytically active, which allowed for direct observation of the products of several rounds of nucleotide incorporation. The polymerase was able to retain its ability to distinguish between correctly and incorrectly paired nucleotides in the crystal. By comparing the structures of successive complexes, it was possible to determine the structural basis for sequence-independent recognition of correctly formed base pairs.13

Ribosomes are the largest asymmetric structures solved by X-ray crystallography so far. Results, with resolutions as high as 2.4 Å, have helped establish the locations of the 27 proteins and the 2,833 bases of ribosomal RNA (rRNA) found within the ribosome.4 The structure also shows that contacts between the two ribosome subunits are limited, which helps explain why the subunits dissociate so readily.

Some biomolecules or biomolecular complexes are not suitable for diffraction analysis because they cannot be crystallized. Scientists, however, are optimistic about developing techniques to deal effectively with noncrystalline materials.18 This will make it possible to image everything from cells to individual protein molecules.

4.3 Nuclear magnetic resonance (NMR) spectroscopy

Most people know of magnetic resonance imaging (MRI) as an important diagnostic tool in medicine that can produce incredible images of soft tissues. Less well known is that MRI represents only a limited area of NMR. NMR depends on the fact that atomic nuclei having an odd number of protons, neutrons, or both have an intrinsic spin. When such a nucleus is placed in a magnetic field, it can align either in the same direction as the field or in the opposite direction. A nucleus aligned with the field has a lower energy than one aligned against it. NMR spectroscopy refers to the absorption of radiofrequency radiation by nuclei in a strong magnetic field. Absorption of energy causes the nuclei to realign in the higher-energy direction. The nuclei then emit radiation and return to the lower-energy state. The local environment around each nucleus will distort the magnetic field slightly and affect its transition energy. This relationship between transition energy and an atom’s position within a molecule allows NMR to provide structural information.
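The field-dependent absorption just described follows the Larmor relation, ν = γB/2π. The relation and the gyromagnetic ratios in the sketch below are standard physics values, assumed here; the supplement itself quotes no numbers.

```python
import math

# Gyromagnetic ratios in rad s^-1 T^-1 (standard physical constants).
GAMMA = {"1H": 267.522e6, "13C": 67.283e6}

def larmor_mhz(nucleus: str, field_tesla: float) -> float:
    # Larmor relation: resonance frequency nu = gamma * B / (2 * pi).
    return GAMMA[nucleus] * field_tesla / (2 * math.pi) / 1e6

for b in (11.7, 18.8):  # field strengths of typical high-resolution magnets
    print(f"B = {b:4.1f} T: 1H resonates near {larmor_mhz('1H', b):3.0f} MHz, "
          f"13C near {larmor_mhz('13C', b):3.0f} MHz")
```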

One advantage of NMR spectroscopy over X-ray crystallography and electron microscopy is that it can be applied to the study of movement at the molecular level. NMR studies are providing a growing list of cases where conformational dynamics correlate with protein-protein interactions at surfaces. For example, the enzyme ATP synthase catalyzes the formation of ATP from ADP and phosphate during oxidative phosphorylation in animals and photophosphorylation in plants. This enzyme functions as a molecular motor that uses an internal rotary mechanism. NMR has been used to reveal structural changes in a protein subunit of the enzyme that may explain how the rotation is driven.20

Figure 15. Equipment for high-resolution nuclear magnetic resonance (NMR) spectroscopy.

Many see the successful Human Genome Project as providing a foundation for a major initiative in structural biology in which NMR will play a critical role.5 Informal groups of scientists in the United States are proposing the creation of 10 regional “collaboratories,” each with powerful new-generation NMR spectrometers to assist with high-throughput structure determinations. Universities, too, are interested in establishing collaborative centers in genomics and proteomics.9 At Stanford, Nobel Prize–winning physicist Steven Chu and biochemist James Spudich are leading an effort to create an interdisciplinary research center housing 50 faculty members, while Princeton University is planning to add an interdisciplinary genomics institute to its molecular biology department.

4.4 Laser technology

When the laser made its first appearance in 1960, it was a tool without a task. Since then, the laser has been put to myriad uses in our everyday lives, from scanning prices at the supermarket to playing music and printing text. Similarly, in scientific research, the laser has found many applications. It is like a Swiss Army knife, having many blades with a variety of uses.

Combining lasers and microscopy has greatly expanded our ability to image cellular and molecular structures. Cells, or parts of cells, can be exposed to antibodies or nucleic acid probes labeled with fluorescent dyes. When excited by laser light of the appropriate wavelength, specific areas of the cell, or regions of a chromosome, can be visualized. The resolution of optical microscopy is limited by physical laws. Diffraction prevents the laser beam (and therefore the spot of fluorescence) from being focused any finer than about 200 nm. However, a new approach is overcoming this limit. It uses a combination of two laser beams, one to illuminate and image the sample, and a second that shapes the first beam and reduces the effects of diffraction. The technique has been used to distinguish crystals only 100 nm apart and is still undergoing improvement.
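The 200-nm figure follows from the Abbe diffraction limit, d = λ/(2·NA). The formula is standard optics, and the numerical aperture below is a typical value for an oil-immersion objective, not a number given in the supplement.

```python
def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    # Abbe diffraction limit: smallest resolvable spacing d = lambda / (2 * NA).
    return wavelength_nm / (2 * numerical_aperture)

NA = 1.4  # typical high-quality oil-immersion objective (assumed)
for wavelength in (400, 550, 700):  # across the visible range, nm
    print(f"lambda = {wavelength} nm -> resolution limit ~ "
          f"{abbe_limit_nm(wavelength, NA):3.0f} nm")
```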

Lasers, together with magnets, are being used to develop technologies for manipulating single molecules. Investigators are now able to examine how DNA interacts with the various protein molecules that cut, paste, and copy it. DNA is an ideal choice for single-molecule studies: it is a very large molecule (the DNA of the longest human chromosome, fully extended, stretches to about 9 centimeters) and quite robust. For example, scientists have succeeded in using lasers as optical tweezers to tie knots in single DNA molecules.2 Results indicate that knotted DNA is stronger than actin, a major muscle protein. Although tying DNA into knots may not seem particularly useful, it provides insight into the molecule’s mechanical properties, which are critical to understanding how enzymes interact with it.

4.5 Simulations and computations

The explosion of data produced by the Human Genome Project led to the creation of a new discipline, bioinformatics, whose focus is on the acquisition, storage, analysis, modeling, and distribution of the many types of information embedded in DNA and protein-sequence data.14

Biologists are familiar with the terms in vivo and in vitro, used to describe processes that occur in the body and in the test tube, respectively. Now they are becoming acquainted with a new term, in silico, used to describe a new branch of biology that requires little more than a computer and a connection to the Internet. As more and more DNA and protein sequence data find their way into computer databases, the ability of bioinformatics to address biological questions becomes more powerful. The amount of genetic data available and the rate of acquisition are astonishing by any measure. According to Francis Collins, head of the National Human Genome Research Institute, it took four years to obtain the first 1 billion base pairs of human sequence and just four months to get the second billion.16
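As a small taste of what in silico work looks like, the snippet below computes the GC content of a DNA string. The sequence is invented for illustration, not a database entry.

```python
def gc_content(seq: str) -> float:
    # Fraction of bases that are G or C, a basic sequence statistic.
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

fragment = "ATGGCGTACCTGAAGGGATCCGTTAACTAG"  # invented 30-bp example
print(f"length = {len(fragment)} bp, GC content = {gc_content(fragment):.1%}")
```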


The use of computers to model protein folding is one of the primary efforts in the postsequencing phase of the Human Genome Project. In the 1970s, when the first proteins were modeled, the structures generated were in vacuo (in a vacuum), with no other molecules interacting with the protein. Of course, each protein in a living cell is surrounded by thousands of water molecules, and these have an important effect on the protein’s conformation. Indeed, research has demonstrated that the water-containing models of proteins are much better predictors of how the proteins look and function within a cell.10

The importance of protein folding was recently recognized by IBM, which announced that it would spend $100 million to build a supercomputer called Blue Gene. The five-year IBM initiative will involve modeling how proteins take on their three-dimensional shapes. A major aim is to help drug researchers identify drug targets for treating diseases. Protein folding is a daunting problem. Even Blue Gene, which will be 500 times faster than the current fastest computer, will require about one year to simulate the complete folding of a typical protein. The stakes, however, are huge. Approximately one-third of the genes identified in the newly sequenced human genome are of unknown function and are therefore of particular academic and commercial interest. New companies are formed on a monthly basis to take part in this genetics sweepstakes.

5 Technology and the Origins of Molecular Biology

This section provides a brief history of the origins of molecular biology. It addresses the gene’s chemical nature, organization, and behavior. Despite molecular biology’s narrow focus on DNA, it is readily apparent that many of the most important advances in the field have relied heavily on technology-based contributions from chemistry and physics. This is addressed in the National Science Education Standards. The History and Nature of Science Content Standard G states, “As a result of activities in grades 9 to 12, all students should develop understanding of…historical perspectives.” It further states, “Occasionally, there are advances in science and technology that have important and long-lasting effects on science and society.”

Science historians often attribute the origins of molecular biology to the Phage Group, which first met in 1940 at Cold Spring Harbor Laboratory on Long Island, N.Y. At the center of the group were three scientists. Max Delbrück, a German physicist working at Vanderbilt University, and Salvador Luria, an Italian biologist working at Indiana University, had fled to the United States from Nazi Europe. They were joined at Cold Spring Harbor by Alfred Hershey, an American biologist working for the Carnegie Institution’s Department of Genetics.

Bacteriophage, also called phage, are viruses that infect bacteria.1 They were discovered in 1915 by the English microbiologist F.W. Twort and, independently, two years later by the French-Canadian F. d’Herelle. It was d’Herelle who coined the name bacteriophage. Phage became an important area of research in the 1920s, when scientists hoped they could be used to treat bacterial diseases. When this hope failed to materialize, phage research fell out of favor until the Phage Group resurrected it.22

In 1945, Delbrück organized a summer course at Cold Spring Harbor Laboratory to introduce other scientists to the quantitative methods for studying phage that he and Luria had developed. A year earlier, the great Austrian physicist Erwin Schrödinger had published a book titled What Is Life? that discussed heredity from a physics perspective.19 Schrödinger reasoned that although living things obey the laws of physics, they also might be governed by undiscovered physical laws. Although biologists of the time regarded Schrödinger’s book as romantic and a bit naive (for example, he seemed unaware of the important one-gene–one-enzyme work of George Beadle and Edward Tatum from the early 1940s), the book has been credited with influencing a generation of physicists to consider biological questions.

Soon, the ranks of the Phage Group began to grow. It included other physicists, such as Leo Szilard, holder of the patent for the nuclear chain reaction and a participant in the Manhattan Project, and Thomas Anderson, one of the first American electron microscopists. Micrographs obtained by Anderson and Roger Herriott showed that phage begin the infection process by attaching to bacteria by their tails. Later, empty phage “ghosts” could be seen on the bacterial surface.

Hershey and his colleague Martha Chase used phage to examine the molecular nature of the gene.11 They took advantage of radioactive isotopes that became available as a consequence of work on the atomic bomb. Despite the earlier work of Oswald Avery and his colleagues demonstrating that DNA was the hereditary substance,3 many scientists continued to believe that genes could only be made of protein. Hershey and Chase began their experiment by using radioactive phosphorus to label phage DNA and radioactive sulfur to label phage protein. They then looked to see which radiolabel went inside the bacterium to direct the synthesis of new phage particles after infection. At first, they could not effectively detach the phage particles from the surfaces of the bacterial cells, but then an unexpected technology came to their aid: a Waring blender, originally designed to mix cocktails, disrupted the attachments of the phage to the bacterial cells. The radioactive phosphorus went into the bacterial cells, while the radioactive sulfur remained outside with the phage ghosts, confirming that DNA, and not protein, carries the genetic information. This work set the stage for the contribution of the youngest member of the Phage Group, James Watson.

Watson came to the Cavendish Laboratory at Cambridge University in 1951, ostensibly to study the three-dimensional structures of proteins. He quickly fell in with Francis Crick, a British physicist who had developed an interest in heredity after reading Schrödinger’s What Is Life? The pair formed a collaboration that resulted two years later in the proposal of the double helix model of DNA.23 Although Watson and Crick relied on model building to solve DNA’s structure, they could not have succeeded without help from two scientists at King’s College London, Maurice Wilkins and Rosalind Franklin. Wilkins first, and then Franklin, used X-ray diffraction to study the structure of DNA. In the case of DNA fibers, the diffraction patterns suggested that the molecule was some type of helix with a diameter of 20 Å and a repeat of 34 Å. Near the end of the paper describing the double helix, Watson and Crick included the statement, “It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.”

Experimental support for the copying mechanism suggested by the double helix structure came in 1958 from Matthew Meselson and Franklin Stahl, then working at the California Institute of Technology. In what some have called “the most elegant experiment in molecular biology,” they demonstrated that DNA replicates in a semiconservative fashion, in which each parental DNA strand serves as the template for the synthesis of a new complementary strand.17 Their ingenious approach relied on a heavy isotope of nitrogen and the ability of density-gradient centrifugation to distinguish this heavy form (¹⁵N) from the normal light form (¹⁴N).

Meselson and Stahl grew Escherichia coli in a nutrient medium containing only ¹⁵N as a source of nitrogen. DNA replication introduced the heavy isotope of nitrogen into the bacterial DNA. After 14 generations, the bacteria were transferred to a medium that contained only ¹⁴N as a nitrogen source. During subsequent replication, the light isotope was incorporated into the bacterial DNA.

Samples of cells were removed before the switch to the light-isotope growth medium (generation 1) and from the first two generations following the switch (generations 2 and 3). DNA extracted from the cell samples was centrifuged through a solution of cesium chloride, which forms a density gradient during centrifugation (20 hours at 40,000 revolutions per minute). DNA molecules form a discrete band at the position where their density equals that of the cesium chloride gradient. The DNA from generation 1 produced a single heavy band, since both DNA strands contained the ¹⁵N isotope. Samples from generation 2 displayed a single band of medium density, since each DNA molecule consisted of one heavy (¹⁵N) parental strand and one light (¹⁴N) complementary strand. Finally, samples from generation 3 displayed bands of two different densities. One band of medium density again consisted of a heavy parental strand and a new light complementary strand. A second band of light density consisted of two strands of light DNA: one an inherited light parental strand and the other a new light complementary strand.
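The banding pattern follows mechanically from semiconservative copying, as the small simulation below shows. The generation numbering matches the text (generation 1 is the sample taken before the switch to the ¹⁴N medium).

```python
from collections import Counter

def replicate(duplexes):
    # Semiconservative copying: each old strand pairs with a new 14N strand.
    return [(strand, "14N") for duplex in duplexes for strand in duplex]

def band(duplex):
    heavy_strands = duplex.count("15N")
    return {2: "heavy", 1: "medium", 0: "light"}[heavy_strands]

population = [("15N", "15N")]  # generation 1: grown only on 15N
for generation in (1, 2, 3):
    counts = Counter(band(d) for d in population)
    total = len(population)
    summary = ", ".join(f"{n}/{total} {name}" for name, n in counts.items())
    print(f"generation {generation}: {summary}")
    population = replicate(population)
```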

Around the time that Meselson and Stahl were performing their experiments, Crick theorized that genetic information resides in DNA, passes through an RNA intermediate, and is expressed as a sequence of amino acids. Using the electron microscope, it was possible to visualize DNA and RNA molecules that had first been stained with heavy metals. Using extracts from bacteria, scientists were able to glimpse Crick’s “central dogma” in action. Micrographs showed newly synthesized RNA molecules branching off from a transcribed region of DNA. Furthermore, ribosomes could be seen already attaching to the growing RNA chains. Not only did electron microscopy provide this comprehensive view of gene expression, it was also about to produce critical insight into gene organization.

In 1977, the laboratories of Phillip Sharp and Richard Roberts independently used the electron microscope to make a fundamental discovery about gene organization and expression. First in adenovirus, and later in eukaryotic DNA, it was shown that some genes are interrupted by stretches of DNA that are not represented in the messenger RNA (mRNA). For example, DNA containing the gene for ovalbumin was denatured and hybridized to ovalbumin mRNA. Electron micrographs of the hybrid revealed regions of heteroduplex formation alternating with a series of seven loops, which corresponded to regions of genomic DNA that have no complementary sequences in the mRNA. The regions of a gene found in the mRNA are called exons, because they are expressed in the gene product. Regions not found in the mRNA are called introns, because they are located in between the exons.
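A toy model of the exon/intron layout just described (sequences and coordinates are invented): splicing removes the introns, so only exon sequence appears in the mRNA, and the intron stretches are what loop out in the hybrids.

```python
# Exons in uppercase, introns in lowercase; an invented miniature gene.
gene = "AAATTT" + "ccccc" + "GGGCCC" + "ggggg" + "TTTAAA"
exon_spans = [(0, 6), (11, 17), (22, 28)]  # (start, end) of each exon

# The mature mRNA keeps only the exon sequence, in order.
mrna = "".join(gene[start:end] for start, end in exon_spans)
print("gene:", gene)
print("mRNA:", mrna)  # AAATTTGGGCCCTTTAAA
```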

The origins and early development of molecular biology would not have been possible without biophysical techniques such as X-ray diffraction, electron microscopy, and isotope labeling. These techniques, along with others, continue to be refined and extended to new areas of biology. As biology becomes more data intensive, it relies increasingly on biophysical techniques.

The completion of the Human Genome Project marks the end of the effort to decode the entire set of human genes. It also marks the unofficial start of the next phase of our continuing quest to understand how genetics contributes to human health and well-being. Biology underwent a paradigm shift more than 30 years ago after the discovery of restriction enzymes. These enzymes are just tools, yet they helped shift biology from a largely descriptive science to a manipulative one. In a similar way, the rise of structural biology is helping propel biology toward another paradigm shift. Currently, over 500,000 human DNA sequences are contained in genetic databases. It is estimated that these may give rise to 160,000 targets for drug development.

6 The Goal of This Supplement

The goal of this curriculum supplement is to help prepare high school biology students for the technological world they will inherit. This is consistent with the National Science Education Standards. For example, Science and Technology Content Standard E states, “As a result of activities in grades 9 to 12, all students should develop…understandings about science and technology.” A fundamental concept that underlies this standard is that science advances with the introduction of new technologies, and solving technological problems results in new scientific knowledge. New technologies also extend scientific understandings and introduce new areas of research.

The technologies presented in this supplement are new to most high school students. Very few students will have had much exposure to chemistry or physics, and students in your classes will be spending only about a week with this supplement. A detailed understanding of each technique should not be the primary objective of the supplement. Rather, students should come away from it with an appreciation of some of the applications and implications of technology in the study of cellular and molecular biology.
