Britannica LaunchPacks | The Treatment and Prevention of Disease
© 2020 Encyclopædia Britannica, Inc.

The Treatment and Prevention of Disease
For Grades 9-12

This Pack contains: 5 ARTICLES, 1 IMAGE, 5 VIDEOS


infectious disease

Britannica Note:

Infectious diseases are caused by microorganisms and spread in many ways: through food, air, and water; through contact with animals; and through person-to-person contact.

in medicine, a process caused by an agent, often a type of microorganism, that impairs a person’s health. In

many cases, infectious disease can be spread from person to person, either directly (e.g., via skin contact) or

indirectly (e.g., via contaminated food or water).

An infectious disease can differ from simple infection, which is the invasion of and replication in the body by any

of various agents—including bacteria, viruses, fungi, protozoans, and worms—as well as the reaction of tissues

to their presence or to the toxins that they produce. When health is not altered, the process is called a

subclinical infection. Thus, a person may be infected but not have an infectious disease. This principle is

illustrated by the use of vaccines for the prevention of infectious diseases. For example, a virus such as that

which causes measles may be attenuated (weakened) and used as an immunizing agent. The immunization is

designed to produce a measles infection in the recipient but generally causes no discernible alteration in the

state of health. It produces immunity to measles without producing a clinical illness (an infectious disease).

The most important barriers to invasion of the human host by infectious agents are the skin and mucous

membranes (the tissues that line the nose, mouth, and upper respiratory tract). When these tissues have been

broken or affected by earlier disease, invasion by infectious agents may occur. These infectious agents may

produce a local infectious disease, such as boils, or may invade the bloodstream and be carried throughout the

body, producing generalized bloodstream infection (septicemia) or localized infection at a distant site, such as

meningitis (an infection of the coverings of the brain and spinal cord). Infectious agents swallowed in food and

drink can attack the wall of the intestinal tract and cause local or general disease. The conjunctiva, which covers

the front of the eye, may be penetrated by viruses that cause a local inflammation of the eye or that pass into

the bloodstream and cause a severe general disease, such as smallpox. Infectious agents can enter the body

through the genital tract, causing the acute inflammatory reaction of gonorrhea in the genital and pelvic organs

or spreading out to attack almost any organ of the body with the more chronic and destructive lesions of

syphilis. Even before birth, viruses and other infectious agents can pass through the placenta and attack

developing cells, so that an infant may be diseased or deformed at birth.

From conception to death, humans are targets for attack by multitudes of other living organisms, all of them

competing for a place in the common environment. The air people breathe, the soil they walk on, the waters and

vegetation around them, the buildings they inhabit and work in, all can be populated with forms of life that are

potentially dangerous. Domestic animals may harbour organisms that are a threat, and wildlife teems with

agents of infection that can afflict humans with serious disease. However, the human body is not without

defenses against these threats, for it is equipped with a comprehensive immune system that reacts quickly and

specifically against disease organisms when they attack. Survival throughout the ages has depended largely on

these reactions, which today are supplemented and strengthened through the use of medical drugs.


Infectious agents

Categories of organisms

The agents of infection can be divided into different groups on the basis of their size, biochemical

characteristics, or manner in which they interact with the human host. The groups of organisms that cause

infectious diseases are categorized as bacteria, viruses, fungi, and parasites.

Bacteria

Bacteria can survive within the body but outside individual cells. Some bacteria, classified as aerobes, require

oxygen for growth, while others, such as those normally found in the small intestine of healthy persons, grow

only in the absence of oxygen and, therefore, are called anaerobes. Most bacteria are surrounded by a capsule

that appears to play an important role in their ability to produce disease. Also, a number of bacterial species

give off toxins that in turn may damage tissues. Bacteria are generally large enough to be seen under a light

microscope. Streptococci, the bacteria that cause scarlet fever, are about 0.75 micrometre (0.00003 inch) in

diameter. The spirochetes, which cause syphilis, leptospirosis, and rat-bite fever, are 5 to 15 micrometres long.

Bacterial infections can be treated with antibiotics.

Bacterial infections are commonly caused by pneumococci, staphylococci, and streptococci, all of which are

often commensals (that is, organisms living harmlessly on their hosts) in the upper respiratory tract but that can

become virulent and cause serious conditions, such as pneumonia, septicemia (blood poisoning), and meningitis.

The pneumococcus is the most common cause of lobar pneumonia, the disease in which one or more lobes, or

segments, of the lung become solid and airless as a result of inflammation. Staphylococci affect the lungs either

in the course of staphylococcal septicemia—when bacteria in the circulating blood cause scattered abscesses in

the lungs—or as a complication of a viral infection, commonly influenza—when these organisms invade the

damaged lung cells and cause a life-threatening form of pneumonia. Streptococcal pneumonia is the least

common of the three and occurs usually as a complication of influenza or other lung disease.

Pneumococci often enter the bloodstream from inflamed lungs and cause septicemia, with continued fever but

no other special symptoms. Staphylococci produce a type of septicemia with high spiking fever; the bacteria can

reach almost any organ of the body—including the brain, the bones, and especially the lungs—and destructive

abscesses form in the infected areas. Streptococci also cause septicemia with fever, but the organisms tend to

cause inflammation of surface lining cells rather than abscesses—for example, pleurisy (inflammation of the

chest lining) rather than lung abscess, and peritonitis (inflammation of the membrane lining the abdomen)

rather than liver abscess. In the course of either of the last two forms of septicemia, organisms may enter the

nervous system and cause streptococcal or staphylococcal meningitis, but these are rare conditions.

Pneumococci, on the other hand, often spread directly into the central nervous system, causing one of the

common forms of meningitis.

Staphylococci and streptococci are common causes of skin diseases. Boils and impetigo (in which the skin is

covered with blisters, pustules, and yellow crusts) may be caused by either. Staphylococci also can cause a

severe skin infection that strips the outer skin layers off the body and leaves the underlayers exposed, as in

severe burns, a condition known as toxic epidermal necrolysis. Streptococcal organisms can cause a severe

condition known as necrotizing fasciitis, commonly referred to as flesh-eating disease, which has a fatality rate

between 25 and 75 percent. Streptococci can be the cause of the red cellulitis of the skin known as erysipelas.


Some staphylococci produce an intestinal toxin and cause food poisoning. Certain streptococci settling in the

throat produce a reddening toxin that speeds through the bloodstream and produces the symptoms of scarlet

fever. Streptococci and staphylococci also can cause toxic shock syndrome, a potentially fatal disease.

Streptococcal toxic shock syndrome (STSS) is fatal in some 35 percent of cases.

Meningococci are fairly common inhabitants of the throat, in most cases causing no illness at all. As the number

of healthy carriers increases in any population, however, there is a tendency for the meningococcus to become

more invasive. When an opportunity is presented, it can gain access to the bloodstream, invade the central

nervous system, and cause meningococcal meningitis (formerly called cerebrospinal meningitis or spotted

fever). Meningococcal meningitis, at one time a dreaded and still a very serious disease, usually responds to

treatment with penicillin if diagnosed early enough. When meningococci invade the bloodstream, some gain

access to the skin and cause bloodstained spots, or purpura. If the condition is diagnosed early enough,

antibiotics can clear the bloodstream of the bacterium and prevent any from getting far enough to cause

meningitis. Sometimes the septicemia takes a mild, chronic, relapsing form with no tendency toward meningitis;

this is curable once it is diagnosed. The meningococcus also can cause one of the most fulminating of all forms

of septicemia, meningococcemia, in which the body is rapidly covered with a purple rash, purpura fulminans; in

this form the blood pressure becomes dangerously low, the heart and blood vessels are affected by shock, and

the infected person dies within a matter of hours. Few are saved, despite treatment with appropriate drugs.

Haemophilus influenzae is a microorganism named for its occurrence in the sputum of patients with influenza—

an occurrence so common that it was at one time thought to be the cause of the disease. It is now known to be a

common inhabitant of the nose and throat that may invade the bloodstream, producing meningitis, pneumonia,

and various other diseases. In children it is the most common cause of acute epiglottitis, an infection in which

tissue at the back of the tongue becomes rapidly swollen and obstructs the airway, creating a potentially fatal

condition. H. influenzae also is the most common cause of meningitis and pneumonia in children under five years of age, and it is known to cause bronchitis in adults. The diagnosis is established by cultures of blood,

cerebrospinal fluid, or other tissue from sites of infection. Antibiotic therapy is generally effective, although

death from sepsis or meningitis is still common. In developed countries where H. influenzae vaccine is used, there has been a great decrease in serious infections and deaths.

Chlamydial organisms

Chlamydia are intracellular organisms found in many vertebrates, including birds and humans and other

mammals. Clinical illnesses are caused by the species C. trachomatis, which is a frequent cause of genital infections in women. If an infant passes through an infected birth canal, the organism can produce disease of the eye

(conjunctivitis) and pneumonia in the newborn. Young children sometimes develop ear infections, laryngitis, and

upper respiratory tract disease from Chlamydia. Such infections can be treated with erythromycin.

Another chlamydial organism, Chlamydophila psittaci, produces psittacosis, a disease that results from exposure to the discharges of infected birds. The illness is characterized by high fever with chills, a slow heart rate,

pneumonia, headache, weakness, fatigue, muscle pains, anorexia, nausea, and vomiting. The diagnosis is

usually suspected if the patient has a history of exposure to birds. It is confirmed by blood tests. Mortality is rare,

and specific antibiotic treatment is available.

Rickettsias

The rickettsias are a family of microorganisms named for American pathologist Howard T. Ricketts, who died of

typhus in 1910 while investigating the spread of the disease. The rickettsias, which range in size from 250

nanometres to more than 1 micrometre and have no cell wall but are surrounded by a cell membrane, cause a


group of diseases characterized by fever and a rash. Except for Coxiella burnetii, the cause of Q fever, they are intracellular parasites, most of which are transmitted to humans by an arthropod carrier such as a louse or tick.

C. burnetii, however, can survive in milk, sewage, and aerosols and can be transmitted to humans by a tick or by inhalation, causing pneumonia in the latter case. Rickettsial diseases can be treated with antibiotics.

Humans contract most rickettsial diseases only when they break into a cycle in nature in which the rickettsias

live. In murine typhus, for example, Rickettsia mooseri is a parasite of rats conveyed from rat to rat by the Oriental rat flea, Xenopsylla cheopis; it bites humans if they intrude into its environment. Scrub typhus is caused by R. tsutsugamushi, but it normally parasitizes only rats and mice and other rodents, being carried from one to the other by a small mite, Leptotrombidium (previously known as Trombicula). This mite is fastidious in matters of temperature, humidity, and food and finds everything suitable in restricted areas, or “mite islands,” in South

Asia and the western Pacific. It rarely bites humans in their normal environment, but if people invade its territory

en masse it will attack, and outbreaks of scrub typhus will follow.

The spotted fevers are caused by rickettsias that spend their normal life cycles in a variety of small animals,

spreading from one to the other inside ticks; these bite human intruders and cause African, North Asian, and

Queensland tick typhus, as well as Rocky Mountain spotted fever. One other spotted fever, rickettsialpox, is

caused by R. akari, which lives in the body of the ordinary house mouse, Mus musculus, and spreads from one to another inside the house mite Liponyssoides sanguineus (formerly Allodermanyssus sanguineus). This rickettsia is probably a parasite of wild field mice, and it is perhaps only when cities push out into the countryside that

house mice catch the infection.

Mycoplasmas and ureaplasmas

Mycoplasmas and ureaplasmas, which range in size from 150 to 850 nanometres, are among the smallest known

free-living microorganisms. They are ubiquitous in nature and capable of causing widespread disease, but the

illnesses they produce in humans are generally milder than those caused by bacteria. Diseases due to

mycoplasmas and ureaplasmas can be treated with antibiotics.

Mycoplasma pneumoniae is the most important member of its genus. M. pneumoniae is associated with 20 percent of all cases of pneumonia in adults and children over five years of age. Patients have fever, cough,

headache, and malaise and, upon physical examination, may be found to have pharyngitis (inflamed throat),

enlarged lymph nodes, ear or sinus infection, bronchitis, or croup. Diagnosis is established by chest X-rays and

blood tests. Although treatment with erythromycin or tetracycline may shorten the illness, it can last for weeks.

Mycoplasmas may also cause a red, bumpy rash—usually on the trunk or back—that is occasionally vesicular

(with blisters). Inflammation of the heart muscle and the covering of the heart (pericardium) is rare but can be

caused by mycoplasmas. About one-fourth of the people infected with these organisms experience nausea,

vomiting, diarrhea, and cramping abdominal pain. Inflammation of the pancreas (pancreatitis) or the liver

(hepatitis) may occur, and infection of the brain and spinal cord is a serious complication.

Ureaplasmas can be recovered frequently from the genital areas of healthy persons. The organism can cause

inflammation of the urethra and has been associated with infertility, low birth weight of infants, and repeated

stillbirths. In general, however, ureaplasma infections are mild. Tetracycline is the preferred treatment once the

organism has been established as the cause of infection by microscopic examination of urethral secretions.

Viruses

Viruses are not, strictly speaking, living organisms. Instead, they are nucleic acid fragments packaged within

protein coats that require the machinery of living cells to replicate. Viruses are visible by electron microscopy;


they vary in size from about 25 nanometres for poliovirus to 250 nanometres for smallpox virus. Vaccination has

been the most successful weapon against viral infection; some infections may be treated with antiviral drugs or

interferon (proteins that interfere with viral proliferation).

Viruses of the Herpesviridae family cause a multiplicity of diseases. Those causing infections in humans are the

varicella-zoster virus (VZV), which causes chickenpox and herpes zoster (shingles); the Epstein-Barr virus, which

causes infectious mononucleosis; the cytomegalovirus, which is most often associated with infections of

newborn infants and immunocompromised people; and herpes simplex virus, which causes cold sores and

herpetic venereal (sexually transmitted) diseases.

There are two serotypes of herpes simplex virus, HSV-1 and HSV-2. HSV-1 is the common cause of cold sores.

The primary infection usually occurs in childhood and is without symptoms in 50 to 80 percent of cases. Between

10 and 20 percent of infected individuals have recurrences precipitated by emotional stress or by other illness.

HSV-1 can also cause infections of the eye, central nervous system, and skin. Serious infections leading to death

may occur in immunocompromised persons. HSV-2 is associated most often with herpetic lesions of the genital

area. The involved area includes the vagina, cervix, vulva, and, occasionally, the urethra in females and the

head of the penis in males; it may also cause an infection at the site of an abrasion. The disease is usually

transmitted by sexual contact. In herpetic sexually transmitted diseases, the lesions are small, red, painful spots

that quickly vesiculate, become filled with fluid, and quickly rupture, leaving eroded areas that eventually

become scabbed. These primary lesions occur from two to eight days after exposure and may be present for up

to three weeks. Viral shedding and pain usually resolve in two weeks. When infections recur, the duration of the

pain, lesions, and viral shedding is approximately 10 days.

There are numerous other viruses that are transmitted between humans and that are significant causes of

illness and death. Seasonal influenza viruses, for example, circulate globally every year, causing illness in tens of

millions of people worldwide; an estimated 290,000 to nearly 650,000 people die from seasonal influenza each

year.

In addition, new types of infectious viruses emerge periodically. In many instances, these viruses “jump” to

humans from an animal reservoir, such as bats, pigs, or primates; this occurs when a human is in close contact

with an animal that carries the virus. Often the virus then evolves to become transmissible between humans.

Examples of infectious viruses that originated from animal reservoirs in the mid-20th or early 21st century and

went on to cause epidemics or pandemics of disease in humans include ebolaviruses, SARS coronavirus,

influenza A H1N1, human immunodeficiency virus (HIV/AIDS), and SARS-CoV-2 coronavirus.

Fungi

Fungi are large organisms that usually live on dead and rotting animal and plant matter. They are found mostly

in soil, on objects contaminated with soil, on plants and animals, and on skin, and they may also be airborne.

Fungi may exist as yeasts or molds and may alternate between the two forms, depending on environmental

conditions. Yeasts are simple cells, 3 to 5 micrometres (0.0001 to 0.0002 inch) in diameter. Molds consist of

filamentous branching structures (called hyphae), 2 to 10 micrometres in diameter, that are formed of several

cells lying end to end. Fungal diseases in humans are called mycoses; they include such disorders as

histoplasmosis, coccidioidomycosis, and blastomycosis. These diseases can be mild, characterized by an upper

respiratory infection, or severe, involving the bloodstream and every organ system. Fungi may cause

devastating disease in persons whose defenses against infection have been weakened by malnutrition, cancer,

or the use of immunosuppressive drugs. Specific types of antibiotics known as antifungals are effective in their

treatment.


Parasites

Among the infectious parasites are the protozoans, unicellular organisms that have no cell wall and that cause such

diseases as malaria. The various species of malarial parasites are about 4 micrometres (0.0002 inch) in

diameter. At the other extreme, the tapeworm can grow to several metres in length; treatment is designed

either to kill the worm or to dislodge it from its host.

The worm Ascaris lumbricoides causes ascariasis, one of the most prevalent infections in the world. Ascaris lives in the soil, and its eggs are ingested with contaminated food. The eggs hatch in the human intestine, and the

worms then travel through the bloodstream to the liver, heart, and lungs. They can cause pneumonia,

perforations of the intestine, or blockage of the bile ducts, but infected people usually have no symptoms

beyond the passage of worms in the stool. Specific treatment is available and prognosis is excellent.

Infections are also caused by whipworms, genus Trichuris, and pinworms, Enterobius vermicularis, each popularly named for its shape. The former is parasitic in the human large intestine and may cause chronic

diarrhea. The latter can be found throughout the gastrointestinal tract, especially in children, and can cause poor

appetite, loss of weight, anemia, and itching in the anal area (where it lays its eggs). Both conditions are easily

diagnosed and treated with drugs.

Modes of survival

Infectious agents have various methods of survival. Some depend on rapid multiplication and rapid spread from

one host to another. For example, when the measles virus enters the body, it multiplies for a week or two and

then enters the bloodstream and spreads to every organ. For several days before a rash appears, the surface

cells of the respiratory tract are bursting with measles virus, and vast quantities are shed every time the

infected person coughs or sneezes. A day or two after the rash appears, the amount of antibody (protein

produced in response to a pathogen) rises in the bloodstream, neutralizing the virus and stopping further

shedding. The patient rapidly becomes noninfectious but already may have spread the virus to others. In this

way an epidemic can rapidly occur. Many other infectious agents—for example, influenza virus—survive in this

manner. How such viruses exist between epidemics is, in some cases, less clear.

Video: Learn why tuberculosis is still a threat to the human population. © American Chemical Society

The picture is different in more-chronic infections. In tuberculosis there is neither overwhelming multiplication

nor rapid shedding of the tubercle bacillus. Rather, the bacilli remain in the infected person’s body for a long

period, slowly forming areas of chronic inflammation that may from time to time break down and allow them to

escape.

Some organisms form spores, a resting or dormant stage that is resistant to heat, cold, drying, and chemical

action. Spore-forming organisms can survive for months or years under the most adverse conditions and may


not, in fact, be highly infectious. The bacterium that causes tetanus, Clostridium tetani, is present everywhere in the environment—in soil, in dust, on window ledges and floors—and yet tetanus is an uncommon disease,

especially in developed countries. The same is true of the anthrax bacterium, Bacillus anthracis. Although usually present in abundance in factories in which rawhides and animal wool and hair are handled, it rarely

causes anthrax in employees. Clostridium botulinum, the cause of botulism, produces one of the most lethal toxins that can afflict humans, and yet the disease is one of the rarest because the microorganism depends for

its survival on its resistant spore.

In contrast to these relatively independent organisms, there are others that cannot exist at all outside the

human body. The germs of syphilis and gonorrhea, for example, depend for survival on their ability to infect and

their adaptation to the human environment.

Some organisms have complicated life cycles and depend on more than one host. The malarial parasite must

spend a portion of its life cycle inside a mosquito, while the liver fluke Fasciola hepatica, an occasional human parasite, spends part of its life in the body of a land animal such as a sheep, part in a water snail, and part in the

open air as a cyst attached to grass.

Commensal organisms

All of the outer surfaces of the human body are covered with agents that normally do no harm and may, in fact,

be beneficial. Those commensal organisms on the skin help to break down dying skin cells or to destroy debris

secreted by the many minute glands and pores that open on the skin. Many of the organisms in the

intestinal tract break down complex waste products into simple substances, and others help in the manufacture

of chemical compounds that are essential to human life.

The gastrointestinal tract is considered in this regard to be one of these “outer” surfaces since it is formed by

the intucking, or invagination, of the ectoderm, or outer surface, of the body. The mouth, nose, and sinuses

(spaces inside the bones of the face) are also considered to be external structures because of their direct

contact with the outside environment. Both the gastrointestinal tract and the mouth, nose, and sinuses are

heavily populated with microorganisms, some of which are true commensals—living in humans and deriving

their sustenance from the surface cells of the body without doing any harm—and others of which are

indistinguishable from disease germs. The latter may live like true commensals in a particular tract in a human

and never cause disease, despite their potential to do so. When the environment is altered, however, they are

capable of causing severe illness in their host, or, without harming their host, they may infect another person

with a serious disease.

It is not known why, for example, the hemolytic streptococcus bacterium can live for months in the throat

without causing harm and then suddenly cause an acute attack of tonsillitis or how an apparently harmless

pneumococcus gives rise to pneumonia. Similarly, it is not understood how a person can harmlessly carry

Haemophilus influenzae type B in the throat but then become ill when the organism invades the body and causes one of the most severe forms of meningitis. It may be that external influences, such as changes in

temperature or humidity, are enough to upset the balance between host and parasite or that a new microbial

invader enters and, by competing for some element in the environment, forces the original parasite to react

more violently with its host. The term “lowered resistance,” often used to describe conditions at the onset of infectious disease, is not specific and simply implies any change in the immune system of the host.

A microorganism’s environment can be changed radically, of course. If antibiotics are administered, the body’s

commensal organisms can be killed, and other, less-innocuous organisms may take their place. In the mouth and

throat, penicillin may eradicate pneumococci, streptococci, and other bacteria that are sensitive to the drug,


while microorganisms that are insensitive, such as Candida albicans, may then proliferate and cause thrush (an inflammatory condition of the mouth and throat). In the intestinal tract, an antibiotic may kill most of the

bacterial microorganisms that are normally present and allow dangerous bacteria, such as Pseudomonas aeruginosa, to multiply and perhaps to invade the bloodstream and the tissues of the body. If an infectious

agent—for example, Salmonella—reaches the intestinal tract, treatment with an antibiotic may have an effect that differs from what was intended. Instead of attacking and destroying the salmonella, it may kill the normal

inhabitants of the bowel and allow the salmonella to flourish and persist in the absence of competition from

other bacterial microorganisms.

Effects of environment on human disease

Human activity

Family patterns

Humans are social animals. As a result, human social habits and circumstances influence the spread of infectious

agents. Crowded family living conditions facilitate the passage of disease-causing organisms from one person to

another. This is true whether the germs pass through the air from one respiratory tract to another or whether

they are bowel organisms that depend for their passage on close personal hand-to-mouth contact or on lapses of

sanitation and hygiene.

The composition of the family unit is also important. In families with infants and preschool children, infection

spreads more readily, for children of this age are both more susceptible to infection and, because of their

undeveloped hygiene habits, more likely to share their microbes with other family members. Because of this

close and confined contact, infectious agents are spread more rapidly.

Distinction must be made between disease and infection. The virus of poliomyelitis, for example, spreads easily

in conditions of close contact (infection), but it usually causes no active disease. When it does cause active

disease, it attacks older people much more severely than the young. Children in more-crowded homes, for

example, are likely to be infected at an early age and, if illness results, it is usually mild. In less-crowded

conditions, young children are exposed less often to infection; when they first encounter the virus at an older

age, they tend to suffer more severely. The difference between infection and disease is seen even more clearly

in early childhood, when infection leads more often to immunity than to illness. Under high standards of hygiene,

young children are exposed less frequently, and fewer develop immunity in early life, with the result that

paralytic illness, a rarity under the former conditions, is seen frequently in older children and adults. The pattern

of infection and disease, however, can be changed. In the case of the poliomyelitis virus, only immunization can

abolish both infection and disease.

Population density

Density of population does not of itself determine the ease with which infection spreads through a population.

Problems tend to arise primarily when populations become so dense as to cause overcrowding. Overcrowding is

often associated with decreases in quality of living conditions and sanitation, and hence the rate of agent

transmission is typically very high in such areas. Thus, overcrowded cities or densely populated areas of cities

can potentially serve as breeding grounds for infectious agents, which may facilitate their evolution, particularly

in the case of viruses and bacteria. Rapid cycling between humans and other hosts, such as rats or mice, can

result in the emergence of new strains capable of causing serious disease.

Britannica LaunchPacks | The Treatment and Prevention of Disease

10 of 123 © 2020 Encyclopædia Britannica, Inc.

Social habits

The vampire bats of Brazil, which transmit paralytic rabies, bite cattle but not ranchers, presumably because

ranchers are few but cattle are plentiful on the plains of Brazil. Bat-transmitted rabies, however, does occur in

humans in Trinidad, where herdsmen sleep in shacks near their animals. The mechanism of infection is the same

in Brazil and Trinidad, but the difference in social habits affects the incidence of the disease.

During the early 20th century in Malta, goats were milked at the customers’ doors, and a species of Brucella in the

milk caused a disease that was common enough to be called Malta fever. When the pasteurization of milk

became compulsory, Malta fever almost disappeared from the island. (It continued to occur in rural areas where

people still drank their milk raw and were in daily contact with their infected animals.)

Important alterations in environment also occur when children in a modern community first go to school. Colds,

coughs, sore throats, and swollen neck glands can occur one after the other. In a nursery school, with young

children whose hygiene habits are undeveloped, outbreaks of dysentery and other bowel infections may occur,

and among children who take their midday meal at school, foodborne infection caused by a breakdown in

hygiene can sweep through entire classes of students. These are dangers against which the children are

protected to some extent at home but against which they have no defense when they move to the school

environment.

Changing food habits among the general population also affect the environment for humans and microbes.

Meals served in restaurants, for example, offer a greater danger of food poisoning if the standard of hygiene for

food preparation is flawed. The purchase and preparation of poultry—which is often heavily infected with

Salmonella—present a particular danger. If chickens are bought fresh from a farm or shop and cooked in an oven

at home, food poisoning from eating them is rare. If poultry is purchased while it is deep-frozen and then not

fully thawed before it is cooked, there is a good chance that insufficient heat penetration will allow the

Salmonella—which thrive in the cold—to survive in the meat’s centre and infect the people who eat it.

Temperature and humidity

At a social gathering, the human density per square yard may be much greater than in any home, and humidity

and temperature may rise to levels uncomfortable for humans but ideal for microbes. Virus-containing droplets

pass easily from one person to another, and an outbreak of the common cold may result.

In contrast, members of scientific expeditions have spent whole winters in the Arctic or Antarctic without any

respiratory illness, only to catch severe colds upon the arrival of a supply ship in the early summer. This is

because viruses, not cold temperatures, cause colds. During polar expeditions, the members rapidly develop

immunity to the viruses they bring with them, and, throughout the long winter, they encounter no new ones.

Their colds in the summer are caused by viruses imported by the crew of the supply ship. When the members of

the expedition return on the ship to temperate zones, they again come down with colds, this time caught from

friends and relatives who have spent the winter at home.


Movement

Refugees from Kyrgyzstan waiting in a refugee camp near the village of Erkishlok, Uzbekistan, June…

Anvar Ilyasov—AP/Shutterstock.com

Movement into a new environment often is followed by an outbreak of infectious disease. On pilgrimages and in

wars, improvised feeding and sanitation lead to outbreaks of such intestinal infections as dysentery, cholera, and

typhoid fever, and sometimes more have died in war from these diseases than have been killed in the fighting.

People entering isolated communities may carry a disease such as measles with them, and the disease may

then spread with astonishing rapidity and often with enhanced virulence. A traveler from Copenhagen carried

measles virus with him to the Faroe Islands in 1846, and 6,000 of the 8,000 inhabitants caught the disease. Most

of those who escaped were old enough to have acquired immunity during a measles outbreak 65 years earlier.

In Fiji a disastrous epidemic of measles in 1875 killed one-fourth of the population. In these cases, the change of

environment favoured the virus. Nearly every person in such “virgin” populations is susceptible to infection, so

that a virus can multiply and spread unhindered. In a modern city population, by contrast, measles virus mainly

affects susceptible young children. When it has run through them, the epidemic must die down because of a lack

of susceptible people, and the virus does not spread again until a new generation of children is on hand. With

the use of measles vaccine, the supply of susceptible young children is reduced, and the virus cannot spread

and multiply and must die out.
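The contrast described above (explosive spread through a wholly susceptible "virgin" population versus an epidemic that dies out for lack of susceptible people) can be sketched with a toy SIR-style simulation. Every number below is an illustrative assumption, not a figure from the article:

```python
# Minimal discrete-time SIR-style sketch of a measles-like epidemic.
# All numbers (population, transmission and recovery rates) are
# illustrative assumptions, not figures from the article.

def run_epidemic(population, initially_immune, beta=0.9, recovery=0.5, steps=200):
    """Return the total number infected over the whole epidemic."""
    s = float(population - initially_immune - 1)  # susceptible people
    i = 1.0                                       # one imported case
    total_infected = 1.0
    for _ in range(steps):
        new_cases = beta * s * i / population
        s -= new_cases
        i += new_cases - recovery * i
        total_infected += new_cases
    return total_infected

# A "virgin" population (no prior immunity) versus one that is 80% immune:
print(f"no prior immunity:  {run_epidemic(8000, 0):.0f} infected")
print(f"80% already immune: {run_epidemic(8000, 6400):.0f} infected")
```

With 80 percent of the population immune, each case infects fewer than one other person on average, so the outbreak never takes hold; this is the herd effect that vaccination exploits.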

An innocent change in environment such as that experienced during camping can lead to infection if it brings a

person into contact with sources of infection that are absent at home. Picnicking in a wood, a person may be

bitten by a tick carrying the virus of one of several forms of encephalitis; as he swims in a canal or river, his skin

may be penetrated by the organisms that cause leptospirosis. He may come upon some watercress growing wild

in the damp corner of a field and may swallow with the cress almost invisible specks of life that will grow into

liver flukes in his body, giving him fascioliasis, an illness that is common in cattle and sheep but that can spread

to humans when circumstances are in its favour.

Occupation and commerce

In occupational and commercial undertakings, people often manipulate their environment and, in so doing,

expose themselves to infection. A farmer in his fields is exposed to damp conditions in which disease

microorganisms flourish. While clearing out a ditch, he may be infected with leptospires passed into the water in

rats’ urine. In his barns he may be exposed to brucellosis if his herd of cattle is infected or to salmonellosis or Q

fever. Slaughterhouse workers run similar risks, as do veterinarians. A worker in a dock or tannery may get

anthrax from imported hides; an upholsterer may get the disease from wool and hair; and a worker mending

sacks that have contained bone meal may contract the disease from germs still clinging to the sack.


Workers in packing plants and shops often are infected from the raw meat that they handle; they are sometimes

regarded as Salmonella carriers and causes of outbreaks of food poisoning, but as often as not they are victims

rather than causes. Workers in poultry plants can contract salmonellosis, more rarely psittacosis or a viral

infection of the eye, from the birds that they handle. Forestry workers who enter a reserve may upset the

balance of nature of the area and expose themselves to attack from the undergrowth or the trees by insect

vectors of disease that, if undisturbed, would never come into contact with humans. Whenever people

manipulate the environment—by herding animals, by importing goods from abroad, by draining a lake, or by

laying a pipe through swampy land, and in many other seemingly innocent ways—they run the chance of

interfering with microbial life and attracting into their own environment agents of disease that they might not

otherwise ever encounter.

The inanimate environment

Dust cannot cause infectious disease unless it contains the living agents of the infection. Yet the term inanimate

is a convenient one to use when infectious disease arises from contact with an environment in which there is no

obvious direct living contact between the source and the victim of an infection. A pencil is an inanimate object,

but if it is sucked by a child with scarlet fever and then by a second child, the organisms causing the disease can

be conveyed to the second child. Many such objects—a handkerchief or a towel, for example—may convey

infection under favourable conditions, and, when they do so, they are known as fomites.

Dust is perhaps one of the most common inanimate elements capable of conveying disease. Organisms present

in dust may get on food and be swallowed, settle on the skin and infect it, or be breathed into the respiratory

passages. Some germs—the causative agents of anthrax and tetanus, of Q fever, brucellosis, and psittacosis, for

example—can live for long periods dried in dust. Under certain conditions all may be dangerous, but under

different conditions there may be no danger. The germ that causes tetanus, Clostridium tetani, is one of the

most common germs in dust, but the incidence of tetanus varies greatly in different parts of the world. In many

countries it is rare, in others common, and the difference may be related to slight differences in human

behaviour or custom. The wearing of shoes in temperate climates protects the wearer against many wounds,

while the barefoot child in the tropics sustains many puncture wounds of the feet that, although minor, may yet

carry tetanus spores into the tissues of the foot, where conditions may be ideal for germination and the

production of toxin. Obstetrical practices in some parts of the world can lead to infection of the newborn child.

Other, more subtle influences, such as changes of temperature or humidity, may affect the spread of diseases

by dust. A long wet winter followed by a dry summer may encourage the growth of molds in hay, and the dust,

when the hay is disturbed, may lead to infection of the farmer’s lungs.

Water is not a favourable medium for the growth and multiplication of microorganisms, and yet many can

survive in it long enough to carry infection to humans. Cholera and typhoid fever are both waterborne diseases,

and the virus of hepatitis A also can survive in water. But the mere presence of a microorganism in water does

not necessarily lead to the spread of disease. People have swum in water polluted with Salmonella typhi without

getting typhoid fever, while others eating shellfish from the same water have developed the disease. The same

may be true of hepatitis; the eating of clams from infected water has often caused the disease, whereas

swimming has not proved to be a hazard. Shellfish concentrate germs in their tissues, which is probably why

they can transmit diseases.

Water is carried to humans in pipes for drinking, but it is also carried away in pipes as sewage. Defects in these

systems may permit water to pass from one to the other, and microorganisms from sewage may get into

drinking water. Wells and other water sources can be contaminated by badly sited septic tanks, manure heaps,

or garbage dumps. Properly treated, such water is safe to drink; if drunk untreated, disease may follow.


The soil of gardens and farmlands can harbour disease organisms, as can food. Smoke, fog, and tobacco can

affect the human respiratory tract and render it more vulnerable to disease. Air is probably the most common

source of infectious agents. The whole of the environment is, in fact, filled with organisms and objects that can

transmit infectious agents to humans.

Immune response to infection

When a pathogenic (disease-causing) microorganism invades the body for the first time, the clinical (observable)

response may range from nothing at all, through various degrees of nonspecific reactions, to specific infectious

disease. Immunologically, however, there is always a response, the purpose of which is defense. If the defense is

completely successful, there is no obvious bodily reaction; if it is partially successful, the affected person

exhibits symptoms but recovers from an infectious disease; if unsuccessful, the person may be overwhelmed by

the infectious process and die.

The two responses—the clinical and the immunologic—can be illustrated by the natural history of the disease

poliomyelitis. When the virus of this disease enters the body for the first time, it multiplies in the throat and in

the intestinal tract. In some people, it gets no farther; virus is shed to the outside from the throat and the bowel

for a few weeks, and then the shedding ceases and the infection is over. The host, however, has responded and

has developed circulating antibodies to a type of poliovirus. These antibodies are specific antipoliovirus proteins

in the blood and body fluids that subsequently prevent disease should the poliovirus again be encountered. In

addition, the infected individual develops an antipoliovirus response in a subset of white blood cells known as T

cells. Antipoliovirus T cells persist throughout the individual’s lifetime.

In other people, the same process occurs but some virus also gets into the bloodstream, where it circulates for a

short time before being eliminated. Finally, in a few individuals, the virus passes from the bloodstream into the

central nervous system, where it may enter and destroy some of the

nerve cells that control movement in the body and so cause paralysis. Such paralysis is the least-common result

of infection with poliomyelitis virus; most infected persons have no symptoms at all. Those whose bloodstream

contains virus often have a mild illness, consisting of no more than malaise, slight headache, and possibly a sore

throat. This so-called minor illness of poliomyelitis is unlikely to be recognized except by those in close contact

with someone later paralyzed by the disease.

If the nervous system becomes invaded by the virus, the infected person has a severe headache and other

symptoms suggesting meningitis. Such persons are acutely ill, but most recover their normal health after about

one week. Only a few of those with this type of infection have paralysis. Of all the people infected with

poliomyelitis virus, not more than 1 in 100, possibly as few as 1 in 1,000, has paralysis, though paralysis is the

dominant feature of the fully developed clinical picture of poliomyelitis. Paralytic poliomyelitis is thus an

uncommon complication of poliovirus infection.
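The quoted proportions (at most 1 in 100, possibly as few as 1 in 1,000) translate directly into expected case counts. A minimal sketch, with an arbitrary infection count of 100,000:

```python
# Expected paralytic cases among poliovirus infections, using the
# article's bounds: at most 1 in 100, possibly as few as 1 in 1,000.
# The infection count below is an arbitrary illustration.

def expected_paralytic(infections, rate_low=1 / 1000, rate_high=1 / 100):
    """Return the (low, high) range of expected paralytic cases."""
    return infections * rate_low, infections * rate_high

low, high = expected_paralytic(100_000)
print(f"among 100,000 infections: {low:.0f} to {high:.0f} paralytic cases")
```

The overwhelming majority of the remaining infections are silent or produce only the "minor illness" the passage describes.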

This wide range of response to the poliomyelitis virus is characteristic of most infections, though the proportions

may vary. Influenza virus, for example, may cause symptoms ranging from a mild cold to a feverish illness,

severe laryngitis (inflammation of the larynx, or voice box) or bronchitis, or an overwhelming and fatal

pneumonia. The proportions of a population with these differing outcomes may vary from one epidemic to

another. There is perhaps more uniformity of pattern in the operation of the defense mechanisms of the body,

called the immune response.


Natural and acquired immunity

Every animal species possesses some natural resistance to disease. Humans have a high degree of resistance to

foot-and-mouth disease, for example, while the cattle and sheep with which they may be in close contact suffer

in the thousands from it. Rats are highly resistant to diphtheria, whereas unimmunized children readily contract

the disease.

What such resistance depends on is not always well understood. In the case of many viruses, resistance is

related to the presence on the cell surface of protein receptors that bind to the virus, allowing it to gain entry

into the cell and thus cause infection. Presumably, most causes of absolute resistance are genetically

determined; it is possible, for example, to produce by selective breeding two strains of rabbits, one highly

susceptible to tuberculosis, the other highly resistant. In humans there may be apparent racial differences, but it

is always important to disentangle such factors as climate, nutrition, and economics from those that might be

genetically determined. In some tropical and subtropical countries, for example, poliomyelitis is a rare clinical

disease, though a common infection, but unimmunized visitors to such countries often contract serious clinical

forms of the disease. The absence of serious disease in the residents is due not to natural resistance, however,

but to resistance acquired after repeated exposure to poliovirus from infancy onward. Unimmunized visitors from

other countries, with perhaps stricter standards of hygiene, are protected from such immunizing exposures and

have no acquired resistance to the virus when they encounter it as adults.

Natural resistance, in contrast to acquired immunity, does not depend upon such exposures. The human skin

obviously has great inherent powers of resistance to infection, for most cuts and abrasions heal quickly, though

often they are smothered with potentially pathogenic microorganisms. If an equal number of typhoid bacteria

are spread on a person’s skin and on a glass plate, those on the skin die much more quickly than do those on

the plate, suggesting that the skin has some bactericidal property against typhoid germs. The skin also varies in

its resistance to infectious organisms at different ages: impetigo is a common bacterial infection of children’s

skin but is rarer in adults, and acne is a common infection of the skin of adolescents but is uncommon in

childhood or in older adults. The phenomenon of natural immunity can be illustrated equally well with examples

from the respiratory, intestinal, or genital tracts, where large surface areas are exposed to potentially infective

agents and yet infection does not occur.

If an organism causes local infection or gains entry into the bloodstream, a complicated series of events ensues.

These events are described in detail in the article immune system, but they can be summarized as follows:

special types of white blood cells called polymorphonuclear leukocytes or granulocytes, which are normally

manufactured in the bone marrow and which circulate in the blood, move to the site of the infection. Some of

these cells reach the site by chance, in a process called random migration, since almost every body site is

supplied constantly with the blood in which these cells circulate. Additional granulocytes are attracted and

directed to the sites of infection in a process called directed migration, or chemotaxis.

When a granulocyte reaches the invading organism, it attempts to ingest the invader. Ingestion of bacteria may

require the help of still other components of the blood, called opsonins, which act to coat the bacterial cell wall

and prepare it for ingestion. An opsonin generally is a protein substance, such as one of the circulating

immunoglobulins or complement components.

Once a prepared bacterium has been taken inside the white blood cell, a complex series of biochemical events

occurs. A bacterium-containing vacuole (phagosome) may combine with another vacuole (a lysosome) that contains

bacteria-degrading enzymes. The bacterium may be killed, but its products pass into the bloodstream, where

they come in contact with other circulating white blood cells called lymphocytes. Two general types of


lymphocytes—T cells and B cells—are of great importance in protecting the human host. When a T cell

encounters bacterial products, either directly or via presentation by a special antigen-presenting cell, it is

sensitized to recognize the material as foreign, and, once sensitized, it possesses an immunologic memory. If the

T cell encounters the same bacterial product again, it immediately recognizes it and sets up an appropriate

defense more rapidly than it did on the first encounter. The ability of a T cell to function normally, providing what

is generally referred to as cellular immunity, is dependent on the thymus gland. The lack of a thymus, therefore,

impairs the body’s ability to defend itself against various types of infections.

After a T cell has encountered and responded to a foreign bacterium, it interacts with B cells, which are

responsible for producing circulating proteins called immunoglobulins or antibodies. There are various types of B

cells, each of which can produce only one of the five known forms of immunoglobulin (Ig). The first

immunoglobulin to be produced is IgM. Later, during recovery from infection, the immunoglobulin IgG, which can

specifically kill the invading microorganism, is produced. If the same microorganism invades the host again, the

B cell immediately responds with a dramatic production of IgG specific for that organism, rapidly killing it and

preventing disease.

In many cases, acquired immunity is lifelong, as with measles or rubella. In other instances, it can be short-lived,

lasting not more than a few months. The persistence of acquired immunity is related not only to the level of

circulating antibody but also to sensitized T cells (cell-mediated immunity). Although both cell-mediated

immunity and humoral (B-cell) immunity are important, their relative significance in protecting a person against

disease varies with particular microorganisms. For example, antibody is of great importance in protection

against common bacterial infections such as pneumococcal pneumonia or streptococcal disease and against

bacterial toxins, whereas cell-mediated immunity is of greater importance in protection against viruses such as

measles or against the bacteria that cause tuberculosis.

Immunization

Antibodies are produced in the body in response to either infection with an organism or, through vaccination, the

administration of a live or inactivated organism or its toxin by mouth or by injection. When given alive, the

organisms are weakened, or attenuated, by some laboratory means so that they still stimulate antibodies but do

not produce their characteristic disease. However stimulated, the antibody-producing cells of the body remain

sensitized to the infectious agent and can respond to it again, pouring out more antibody. One attack of a

disease, therefore, often renders a person immune to a second attack, providing the theoretical basis for active

immunization by vaccines.

Antibody can be passed from one person to another, conferring protection on the antibody recipient. In such a

case, however, the antibody has not been produced in the body of the second person, nor have his antibody-

producing cells been stimulated. The antibody is then a foreign substance and is eventually eliminated from the

body, and protection is short-lived. The most common form of this type of passive immunity is the transference

of antibodies from a mother through the placenta to her unborn child. This is why a disease such as measles is

uncommon in babies younger than one year. After that age, the infant has lost all of its maternal antibody and

becomes susceptible to the disease unless protective measures, such as measles vaccination, are taken.

Sometimes antibody is extracted in the form of immunoglobulin from blood taken from immune persons and is

injected into susceptible persons to give them temporary protection against a disease, such as measles or

hepatitis A.
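The short-lived nature of passive immunity, such as maternal antibody, can be pictured as simple exponential decay. The starting titre, half-life, and protective threshold below are hypothetical illustration values, not figures from the article:

```python
# Hypothetical sketch of passive (maternal) antibody waning by
# exponential decay. Starting titre, half-life, and threshold are
# assumed values chosen only for illustration.

def titre(initial, half_life_days, day):
    """Antibody titre remaining after `day` days of exponential decay."""
    return initial * 0.5 ** (day / half_life_days)

start = 640        # assumed titre at birth
half_life = 40     # assumed half-life in days
threshold = 10     # assumed minimum protective titre

for day in (0, 120, 240, 360):
    status = "protected" if titre(start, half_life, day) >= threshold else "susceptible"
    print(f"day {day:3d}: titre {titre(start, half_life, day):7.2f} ({status})")
```

Under these assumed numbers the titre falls below the protective level well before the end of the first year, consistent with the passage's point that maternal protection wanes and active vaccination then becomes necessary.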

Generally, active immunization is offered before the anticipated time of exposure to an infectious disease. When

unvaccinated people are exposed to an infectious disease, two alternatives are available: active immunization

may be initiated immediately in the expectation that immunity can be developed during the incubation period of


the disease, or passive immunity can be provided for the interim period and then active immunization given at

an appropriate time. The antigens (foreign substances in the body that stimulate the immune defense system)

introduced in the process of active immunization can be live attenuated viruses or bacteria, killed

microorganisms or inactivated toxins (toxoids), or purified cell wall products (polysaccharide capsules, protein

antigens).

There are five basic requirements for an ideal vaccine. The agents used for immunization should not in

themselves produce disease. The immunizing agent should induce long-lasting, ideally permanent, immunity.

The agent used for immunization should not be transmissible to susceptible contacts of the person being

vaccinated. The vaccine should be easy to produce, its potency easy to assess, and the antibody response to it

measurable with common and inexpensive techniques. Finally, the agent in the vaccine should be free of

contaminating substances. It is also recognized, however, that vaccine transmissibility can be helpful—e.g., in

the case of live polio vaccine, which can be spread from vaccinated children to others who have not been

vaccinated.

The route by which an antigen is administered frequently determines the type and duration of antibody

response. For example, intramuscular injection of inactivated poliomyelitis virus (Salk vaccine) generates less

production of serum antibody and induces only a temporary systemic immunity; it may not produce substantial

local gastrointestinal immunity and, therefore, may not prevent the carrying of the virus in the gastrointestinal

tract. Live, attenuated, oral poliomyelitis virus (Sabin vaccine) induces both local gastrointestinal and systemic

antibody production; thus, immunization by mouth is preferred.

The schedule by which a vaccine is given depends upon the epidemiology of the naturally occurring disease, the

duration of immunity that can be induced, the immunologic status of the host, and, in some cases, the

availability of the patient. Measles, for example, is present in many communities and poses a potential threat to

many children over 5 months of age. A substantial number of infants, however, are born with measles antibody

from their mothers, and this maternal antibody interferes with an adequate antibody response until they are

between 12 and 15 months of age. Generally, the immunization of infants after the age of 15 months benefits

the community at large. In measles outbreaks, however, it may be advisable to alter this schedule and immunize

all infants between 6 and 15 months of age.

Diphtheria toxoid

The introduction of diphtheria toxoid in the early 20th century led to a dramatic reduction in the incidence of the

disease in many parts of the world. Primary prevention programs consisting of communitywide routine

immunization of infants and children have largely eliminated the morbidity and mortality previously associated

with diphtheria. Although the reported annual incidence of diphtheria has been relatively constant since the

1960s, local epidemics continue to occur. A complacent attitude toward immunization in some nations largely

reflects a lack of awareness of the public health hazard that can arise if the proportion of susceptible individuals

is significant enough to allow renewed outbreaks. Also, adequate immunization does not completely eliminate

the potential for transmission of the bacterium Corynebacterium diphtheriae. Carriage of C. diphtheriae in the

nose or throat has been well documented in fully immunized persons who clearly may transmit the disease to

susceptible individuals.

The vaccine—prepared by the treatment of C. diphtheriae toxin with formaldehyde—is available in both fluid and

adsorbed forms, the latter being recommended. Diphtheria toxoid is also available combined with tetanus toxoid

and pertussis vaccine (DPT), combined with tetanus toxoid alone (DT), and combined with tetanus toxoid for

adults (Td). The Td preparation contains only 15 to 20 percent of the diphtheria toxoid present in the DPT

vaccine and is more suitable for use in older children and adults.

Pertussis vaccine


The number of cases of pertussis (whooping cough), a serious disease that is frequently fatal in infancy, can be

dramatically reduced by the use of the pertussis vaccine. The pertussis immunizing agent is included in the DPT

vaccine. Active immunity can be induced by three injections given eight weeks apart.
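The three-dose, eight-weeks-apart schedule can be worked out with ordinary date arithmetic; the start date below is an arbitrary example, not a clinical recommendation:

```python
from datetime import date, timedelta

# Three pertussis (DPT) doses given eight weeks apart.
# The first-dose date is an arbitrary example.
first_dose = date(2020, 1, 6)
doses = [first_dose + timedelta(weeks=8 * n) for n in range(3)]

for n, d in enumerate(doses, start=1):
    print(f"dose {n}: {d.isoformat()}")
```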

Tetanus toxoid

The efficacy of active immunization against tetanus was illustrated most dramatically during World War II, when

the introduction of tetanus toxoid among military personnel virtually eliminated the occurrence of the disease as

a result of war-related injuries. Since then, the routine immunization of civilian populations with tetanus toxoid

has resulted in the decreased incidence of tetanus. In the United States, for example, 50 or fewer cases of

tetanus are reported each year, the majority of deaths occurring in persons more than 60 years of age. In

virtually all cases, the disease has been reported in unimmunized or inadequately immunized individuals.

Because it provides long-lasting protection and relative safety in humans, tetanus toxoid has proved to be an

ideal vaccine. Tetanus toxoid is available in both fluid and alum-precipitated preparations.

Commercially, tetanus toxoid is available in DPT, Td, and T (tetanus toxoid, adsorbed) preparations. DPT is

recommended for infants, while the Td form is recommended at 12 and again at 18 years of age and only once

every 10 years thereafter. If a person sustains a wound prone to tetanus (such as a puncture wound or a wound

contaminated with animal excreta), Td is given along with tetanus immune globulin (TIG) to prevent occurrence

of the disease.

Polio vaccine

A health care worker giving a polio vaccine to a child in Katsina state, Nigeria, 2014.

Alford Williams/Centers for Disease Control and Prevention (CDC)

The value of primary prevention of disease through active immunization programs has been most convincingly

demonstrated in the case of poliomyelitis. Before the vaccine was known, more than 20,000 cases of paralytic

disease occurred in the United States alone every year. With use of the vaccine after 1961, the last case of

transmission of wild-type poliomyelitis was in 1979. However, such achievement toward eliminating the clinical

disease does not justify a casual attitude toward compliance with the recommended polio vaccine schedules.

Low immunization rates are still evident in children in certain disadvantaged urban and rural groups, among

which most of the cases of paralytic disease continue to occur. This is all the more important as international

attempts to eradicate poliomyelitis are achieving remarkable success, with most nations of the world now polio-

free.


Live trivalent oral poliovirus vaccine (OPV) is used for routine mass immunization but is not recommended for

patients with altered states of immunity (for example, those with cancer or an immune deficiency disease or

those receiving immunosuppressive therapy) or for children whose siblings are known to have an immune

deficiency disease. Inactivated poliovirus vaccine (IPV) is used for immunodeficient or immunosuppressed

patients and for the primary immunization of adults because of their greater susceptibility to paralytic disease.

In 2000, as poliomyelitis eradication appeared imminent, the United States switched from OPV to IPV as a

routine recommended infant vaccine.

Rubella (German measles) vaccine

A major epidemic in the United States in 1964 resulted in more than 20,000 cases of congenital rubella. In

consequence, active immunization programs with attenuated rubella vaccine were initiated in 1969 in an

attempt to prevent an expected epidemic in the early 1970s. The immunization of all children from 1 to 12 years

of age was aimed at reducing the reservoir and transmission of wild rubella virus and, secondarily, at diminishing

the risk of rubella infection in susceptible pregnant women. This national policy contrasts with that of the United

Kingdom, where only girls from 10 to 14 years of age who do not have detectable antibody levels to rubella virus

are immunized. Proponents of the British policy have argued that natural rubella infection, which confers lifelong

immunity, is more effective than vaccine-induced protection of uncertain duration and that continued outbreaks

of rubella in the United States (largely in older children and young adults) since the introduction of rubella

vaccines attest to the difficulty of achieving herd (group) immunity for this disease.

Live attenuated rubella vaccine is available in combination with measles vaccine (MR) and in combination with

measles and mumps vaccines (MMR). For routine infant immunization, MMR is given one time at about 15

months of age. Rubella vaccination can be accompanied by mild joint pain and fever in 5 percent of those who

receive it. Vaccination is recommended for all children between the ages of 12 months and puberty. Vaccination

is not recommended for pregnant women. A number of women, however, have inadvertently received rubella

vaccine during pregnancy with no harm to their fetuses being noted.

Measles (rubeola) vaccine

The licensing and distribution of killed measles vaccine in 1963, followed by the development and widespread

use of live attenuated vaccine, have sharply reduced the prevalence of measles and its related morbidity and

mortality in many parts of the world. Additional attenuated vaccine preparations have been developed; though

not fully evaluated, they appear to be safe and highly effective and to confer prolonged, if not lifelong,

protection.

Despite the introduction of effective immunizing agents, measles continues to be a major public health concern.

The continued occurrence of outbreaks of measles, especially among young adults, emphasizes the probable

failure of herd immunity to eliminate measles transmission, despite high local immunization rates in young

children. The outbreaks also indicate the possibility that a small number of appropriately immunized individuals

may not develop solid immunity. It is estimated that about 700,000 to 800,000 people, mostly children, die of

measles each year.

Measles vaccine is commercially available in live attenuated form and is used routinely in the MMR preparation

for infant immunization at about 15 months of age. Susceptible older children or adults who have not had

measles or have not previously received measles vaccine also should receive a single dose.


Mumps vaccine

Mumps is generally a self-limited disease in children but occasionally is moderately debilitating. A live

attenuated mumps vaccine is available alone or in combination with measles and rubella vaccines. No serious

adverse reactions have been reported following mumps immunization.

Pneumococcal vaccine

Streptococcus pneumoniae (pneumococcus) is the most frequent cause of bloodstream infection, pneumonia,

and ear infection and is the third most common cause of bacterial meningitis in children. Pneumococcal infection

is particularly serious among the elderly and among children with sickle-cell anemia, with congenital or acquired

defects in immunity, without spleens, or with abnormally functioning spleens.

Immunity after pneumococcal disease is type-specific and lifelong. The pneumococcal vaccine now available

consists of polysaccharide antigens from many of the most common types of pathogenic pneumococci. It can be

given to children two years of age or older or to adults in a single intramuscular injection. The duration of

protection is unknown.

Meningococcal vaccine

Neisseria meningitidis can cause meningitis (infection of the coverings of the brain and spinal cord) or severe

bloodstream infection known as meningococcemia. In the general population, fewer than 1 in 400,000 persons are

attacked by the bacterium, while among those younger than one year, the rate rises to 1 per 100,000. In a day-

care centre in which a primary case of meningococcal disease has occurred, the rate has been reported at 2 to

100 per 100,000, and the danger from household contact with an infected person is believed to be 200 to 1,000

times greater than that of the general population. Because of these statistics, anyone who has had contact with

the disease in a home, day-care centre, or nursery school should receive prophylactic antibiotic treatment as

soon as possible, preferably within 24 hours of the diagnosis of the primary case of the disease.
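The rates quoted above can be put on a common footing with a little arithmetic. The sketch below is illustrative only; the variable names are our own, and the figures are simply those given in the text, expressed per 100,000 persons:

```python
# Meningococcal attack rates quoted in the text, expressed per 100,000 persons.
general_population = 1 / 4      # "fewer than 1 in 400,000" = 0.25 per 100,000
infants_under_one = 1.0         # 1 per 100,000 among those younger than one year
day_care_low, day_care_high = 2.0, 100.0  # after a primary case in a day-care centre

# Relative risk of day-care contacts versus the general population.
rr_low = day_care_low / general_population
rr_high = day_care_high / general_population
print(rr_low, rr_high)  # 8.0 400.0
```

On these figures, contacts in a day-care centre with a primary case face roughly 8 to 400 times the background attack rate, which is the quantitative basis for the prompt prophylaxis recommended above.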

Group-specific meningococcal polysaccharide vaccines can be used to control outbreaks of disease and may

benefit travelers to countries where these diseases are endemic. Certain of the vaccines are administered

routinely to military recruits in the United States.

Hepatitis B vaccine

Hepatitis B virus (HBV) produces an illness characterized by jaundice, poor appetite, malaise, and nausea.

Chronic liver disease may follow the infection. Hepatitis B vaccine is recommended for infants and for persons

who are at a greater risk of contracting the disease because of their lifestyles or jobs. These include health care

personnel who are exposed to blood products, hemodialysis patients, institutionalized patients and their staffs,

patients receiving multiple transfusions, prostitutes and the sexual partners of individuals with the disease,

users of illicit intravenous drugs, and homosexual males.

Hepatitis B vaccine consists of recombinant viral surface antigen particles and is given in a three-dose series.


Influenza vaccine

Searching for a universal flu vaccine.

© American Chemical Society

The manufacture of influenza vaccine is complicated by the many influenza viruses and by the major changes in

antigenic composition that these viruses continually undergo. Routine immunization against influenza viruses is

recommended for all healthy individuals before the respiratory disease season commences (in the fall) and may

be recommended throughout the year for travelers to a different hemisphere (e.g., from North to South America,

because their winter seasons are reversed).

Haemophilus influenzae type B vaccine

The bacterium Haemophilus influenzae is a major cause of morbidity and mortality in children, particularly in those under six years of age. Because it is highly contagious among people in close contact with one another,

antibiotics were traditionally used to prevent infection. In 1990 a powerful vaccine called a conjugate vaccine

was licensed, and it has caused a dramatic decrease in H. influenzae disease in many countries.

Chickenpox (varicella) vaccine

The low morbidity of chickenpox in healthy children arguably does not support the universal use of the vaccine. In

certain persons, especially those with immunodeficiency disease or cancer, however, chickenpox can be

devastating. A live attenuated vaccine has been found to be safe and immunogenic in healthy children and has

recently been licensed, although its use has been questioned.

Passive immunity

Passive immunity is the administration of antibodies to an unimmunized person from an immune subject to

provide temporary protection against a microbial agent or toxin. This type of immunity can be conferred on

persons who are exposed to measles, mumps, whooping cough, poliomyelitis, rabies, rubella (German measles),

tetanus, chickenpox, and herpes zoster (shingles). The process is also used in the treatment of certain disorders

associated with bites (snake and spider) and as a specific (Rho-GAM) or nonspecific (antilymphocyte serum)

immunosuppressant. Other antibody preparations are available under specific conditions for specific disorders.

Passive immunization is not always effective; the duration of immunity provided is brief and variable, and

undesirable reactions may occur, especially if the antiserum is of nonhuman origin. Several preparations are

available for use as passive immunizing agents.


HISG

Human immune serum globulin (HISG) is prepared from human serum. Special treatment of the serum removes

various undesirable proteins and infectious viruses, thus providing a safe product for intramuscular injection.

HISG is used for the treatment of antibody deficiency conditions and for the prevention of hepatitis A and

hepatitis B viral infections, measles, chickenpox, rubella, and poliomyelitis.

The most widespread use of HISG is in the prevention of hepatitis A infection, a disease for which active

immunization has only recently become available, in individuals known to have had intimate exposure to the

disease. Hepatitis B immunoglobulin should be given immediately to susceptible persons who are exposed to

contaminated blood or who have had intimate physical contact with a person who has hepatitis B infection.

Because of the scarcity of the product, hepatitis B immune globulin is not recommended routinely for those who

are continuously at high risk of exposure to hepatitis B. It should be given, however, in conjunction with

vaccination, to infants born to mothers who have serological evidence of hepatitis B viral infection.

Several investigators have claimed a beneficial effect of HISG in persons with HIV infection and AIDS, as well as

in persons with asthma and other allergic disorders; evidence confirming its efficacy in these conditions is

lacking, however. Monthly HISG is not beneficial in the prevention of upper respiratory infections, otitis, skin

infections, gastrointestinal disorders, or fever of undetermined cause. HISG has been used inconclusively in the

treatment of infants with significantly low levels of immunoglobulins and patients with severe burns who are at

an increased risk of infection. Antivenoms derived from horses are used effectively to treat snake or spider bites,

but not without significant risk of reaction to the equine antibody preparation.

Rho-GAM

Rho-GAM is a human anti-RhD immune serum globulin used in the prevention of Rh hemolytic disease of the

newborn. Rho-GAM is given to Rh-negative mothers after the delivery of Rh-positive infants or after miscarriage

or abortion to prevent the development of anti-Rh antibodies, which could cause hemolysis (red blood cell

destruction) in the infant of a subsequent pregnancy.

Antitoxins

Botulism, a severe paralytic poisoning, results from the ingestion or absorption of the toxin of the bacterium

Clostridium botulinum. As a preventive measure, antitoxin can be given to individuals known to have ingested contaminated food and to patients with symptoms as soon as possible after exposure.

Most of the damaging effect of diphtheria results from the toxin produced by the bacterium Corynebacterium diphtheriae. This toxin not only has local effects but also is distributed through the blood to the heart, nervous

system, kidneys, and other organs. Diphtheria antitoxin of animal origin remains the principal treatment, along

with antibiotics.

Gas gangrene is caused by infection with clostridial organisms, usually following a traumatic injury that has

caused extensive local tissue damage. An antitoxin derived from horses is available as an adjunct to surgical and

other treatment of these infections.

Ralph D. Feigin, Renu Garg, Andrew Barnett Christie


Additional Reading

General information

GERALD L. MANDELL, JOHN E. BENNETT, and RAPHAEL DOLIN (eds.), Mandell, Douglas, and Bennett’s Principles and Practice of Infectious Diseases, 7th ed. (2010), is a modern textbook that the general reader may find informative and useful. H.A.K. ROWLAND and PHILIP D. WELSBY, A Color Atlas of Infectious Diseases, 4th ed. (2003), reveals through the use of colour photographs the actual clinical features associated with various infectious diseases. LAURIE GARRETT, The Coming Plague: Newly Emerging Diseases in a World out of Balance (1994), is a Pulitzer Prize-winning account that focuses on emerging infectious diseases. RICHARD M. KRAUSE (ed.), Emerging Infections: Biomedical Research Reports (1998, reissued 2000), is an excellent collection of articles edited by the former director of the U.S. National Institute of Allergy and Infectious Diseases. KING K. HOLMES (ed.), Sexually Transmitted Diseases, 4th ed. (2008), is a well-written text on a facet of infectious disease that is assuming increasing importance in society. PAUL W. EWALD, Evolution of Infectious Disease (1993, reissued 1996), an accessible work, was the first to present a Darwinian perspective on infectious disease.

History of infectious disease

ANDRÉ SIEGFRIED, Routes of Contagion (also published as Germs and Ideas: Routes of Epidemics and Ideologies, 1965; originally published in French, 1960); and HENRY E. SIGERIST, Civilization and Disease (1943, reissued 1970), deal with the effect of disease on human life and history; both are nontechnical. WILLIAM H. MCNEILL, Plagues and Peoples (1976, reissued 1998), is a modern classic that looks at the role of epidemics in human history from an interdisciplinary perspective. Two other excellent books on the history of infectious disease are CHARLES-EDWARD AMORY WINSLOW, The Conquest of Epidemic Disease: A Chapter in the History of Ideas (1943, reprinted 1980); and H. ZINSSER, Rats, Lice, and History (1935, reissued 2000).

Citation (MLA style):

"Infectious disease." Britannica LaunchPacks: The Treatment and Prevention of Disease, Encyclopædia Britannica, 1 May 2020. packs-ngl.eb.com. Accessed 6 Apr. 2022.

While every effort has been made to follow citation style rules, there may be some discrepancies. Please refer to the appropriate style manual or other sources if you have any questions.

genetic testing

any of a group of procedures used to identify gene variations associated with health, disease, and ancestry and

to diagnose inherited diseases and disorders. A genetic test is typically issued only after a medical history, a

physical examination, and the construction of a family pedigree documenting the genetic diseases present in the

past three generations have been considered. The pedigree is especially important, since it aids in determining

whether a disease or disorder is inherited and likely to be passed on to subsequent generations. Genetic testing

is increasingly being used in genealogy, the study of family origins and history.

Genetic mutations


A genetic disorder can occur in a child with parents who are not affected by the disorder. This situation arises

when a gene mutation occurs in the egg or sperm (germinal mutation) or following conception, when

chromosomes from the egg and sperm combine. Mutations can occur spontaneously or be stimulated by

environmental factors, such as radiation or carcinogens (cancer-causing agents). Mutations occur with increasing

frequency as people age. In men this may result from errors that occur throughout a lifetime as DNA

(deoxyribonucleic acid) replicates to produce sperm. In women nondisjunction of chromosomes becomes more

common later in life, increasing the risk of aneuploidy (too many or too few chromosomes). Long-term exposure

to ambient ionizing radiation may cause genetic mutations in either gender. In addition to these exposure

mutations, there also exist two broad classes of genes that are prone to mutations that give rise to cancer.

These classes include oncogenes, which promote tumour growth, and tumour-suppressor genes, which suppress

tumour growth.

Types of diagnostic genetic tests

Chemical, radiological, histopathologic, and electrodiagnostic procedures can diagnose basic defects in patients

suspected of genetic disease. Genetic tests may involve cytogenetic analyses to investigate chromosomes,

molecular assays to investigate genes and DNA, or biochemical assays to investigate enzymes, hormones, or

amino acids. Tests such as amino acid chromatography of blood and urine, in which the amino acids present in

these fluids are separated on the basis of certain chemical affinities, can be used to identify specific hereditary

or acquired gene defects. There also exist numerous genetic tests for blood and blood typing and antibody

determination. These tests are used to isolate blood or antibody abnormalities that can be traced to genes

involved in the generation of these substances. Various electrodiagnostic procedures such as electromyography

are useful for identifying defects in muscle and nerve function, which often result from inherited gene mutations.

Prenatal diagnosis

Prenatal screening is performed if there is a family history of inherited disease, the mother is at an advanced

age, a previous child had a chromosomal abnormality, or there is an ethnic indication of risk. Parents can be

tested before or after conception to determine whether they are carriers.

A common prenatal test involves screening for alpha-fetoprotein (AFP) in maternal serum. Elevated levels of AFP

are associated with neural tube defects in the fetus, including spina bifida (defective closure of the spine) and

anencephaly (absence of brain tissue). When AFP levels are elevated, a more specific diagnosis is attempted,

using ultrasound and amniocentesis to analyze the amniotic fluid for the presence of AFP. Fetal cells contained in

the amniotic fluid also can be cultured and the karyotype (chromosome morphology) determined to identify

chromosomal abnormality. Cells for chromosome analysis also can be obtained by chorionic villus sampling, the

direct needle aspiration of cells from the chorionic villus (future placenta).

Women who have had repeated in vitro fertilization failures may undergo preimplantation genetic diagnosis

(PGD). PGD is used to detect the presence of embryonic genetic abnormalities that have a high likelihood of

causing implantation failure or miscarriage. In PGD a single cell is extracted from the embryo and is analyzed by

fluorescence in situ hybridization (FISH), a technique used to identify structural abnormalities in chromosomes

that standard tests such as karyotyping cannot detect. In some cases DNA is isolated from the cell and analyzed

by polymerase chain reaction (PCR) for the detection of gene mutations that can give rise to certain disorders

such as Tay-Sachs disease. Another technique, known as comparative genomic hybridization (CGH), may be

used alongside PGD to identify chromosomal abnormalities.


Advances in DNA sequencing technologies have enabled scientists to reconstruct the human fetal genome from

genetic material found in maternal blood and paternal saliva. This in turn has raised the possibility for

development of prenatal diagnostic tests that are noninvasive to the fetus but capable of accurately detecting

genetic defects in fetal DNA. Such tests are desirable because they would significantly reduce the risk of

miscarriage that is associated with procedures requiring cell sampling from the fetus or chorionic villus.

Karyotyping

Chromosomal karyotyping, in which chromosomes are arranged according to a standard classification scheme, is

one of the most commonly used genetic tests. To obtain a person’s karyotype, laboratory technicians grow

human cells in tissue culture media. After being stained and sorted, the chromosomes are counted and

displayed. The cells are obtained from the blood, skin, or bone marrow or by amniocentesis or chorionic villus

sampling, as noted above. The standard karyotype has approximately 400 visible bands, and each band contains

up to several hundred genes.

When a chromosomal aberration is identified, it allows for a more accurate prediction of the risk of its recurrence

in future offspring. Karyotyping can be used not only to diagnose aneuploidy, which is responsible for Down

syndrome, Turner syndrome, and Klinefelter syndrome, but also to identify the chromosomal aberrations

associated with solid tumours such as nephroblastoma, meningioma, neuroblastoma, retinoblastoma, renal-cell

carcinoma, small-cell lung cancer, and certain leukemias and lymphomas.

Karyotyping requires a great deal of time and effort and may not always provide conclusive information. It is

most useful in identifying very large defects involving hundreds or even thousands of genes.

DNA tests

Techniques such as FISH, CGH, and PCR have high rates of sensitivity and specificity. These procedures provide

results more quickly than traditional karyotyping because no cell culture is required. FISH can detect genetic

deletions involving one to five genes. It is also useful in detecting moderate-sized deletions, such as those

causing Prader-Willi syndrome. CGH is more sensitive than FISH and is capable of detecting a variety of small

chromosomal rearrangements, deletions, and duplications. The analysis of individual genes also has been

greatly enhanced by the development of PCR and recombinant DNA technology. In recombinant DNA

technology, small DNA fragments are isolated and copied, thereby producing unlimited amounts of cloned

material. Once cloned, the various genes and gene products can be used to study gene function both in healthy

individuals and those with disease. Recombinant DNA and PCR methods can detect any change in DNA, down to

a one-base-pair change, such as a point mutation or a single nucleotide polymorphism, out of the three billion

base pairs in the human genome. The detection of these changes is facilitated by DNA probes that are labeled

with radioactive isotopes or fluorescent dyes. Such methods can be used to identify persons who are carriers for

inherited conditions, such as hemophilia A, polycystic kidney disease, sickle cell anemia, Huntington disease,

cystic fibrosis, and hemochromatosis.
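The idea of detecting a change "down to a one-base-pair change" can be illustrated with a toy sequence comparison. The function and the short sequences below are invented for illustration and are not real gene sequences; the comparison simply stands in for what probe- or PCR-based detection reveals:

```python
def point_mutations(reference: str, sample: str):
    """Return (position, reference_base, sample_base) for each single-base
    difference between two aligned sequences of equal length."""
    if len(reference) != len(sample):
        raise ValueError("sequences must be aligned and of equal length")
    return [(i, r, s) for i, (r, s) in enumerate(zip(reference, sample)) if r != s]

# Illustrative (invented) sequences differing by one A->T point mutation,
# the same kind of single-base change responsible for sickle cell anemia.
reference = "GTGCACCTGACTCCTGAGGAG"
sample    = "GTGCACCTGACTCCTGTGGAG"
print(point_mutations(reference, sample))  # [(16, 'A', 'T')]
```

A single differing position out of billions of base pairs is exactly what labeled DNA probes make visible in practice.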

Biochemical tests

Biochemical tests primarily detect enzymatic defects such as phenylketonuria, porphyria, and glycogen-storage

disease. Although testing of newborns for all these abnormalities is possible, it is not cost-effective, because

some of these conditions are quite rare. Screening requirements for these disorders vary and depend on

whether the disease is sufficiently common, has severe consequences, and can be treated or prevented if

diagnosed early and whether the test can be applied to the entire population at risk.

Genetic testing and genealogy


Once the domain of oral traditions and written pedigrees, genealogy in the modern era has become grounded in

the science of genetics. Increased rigour in the field has been made possible by the development and ongoing

refinement of methods to accurately trace genes and genetic variations through generations. Genetic tests used

in genealogy are mainly intended to identify similarities and differences in DNA between living humans and their

ancestors. In some instances, however, in the process of tracing genetic lineages, gene variations associated

with disease may be detected.

Methods used in genealogical genetics analysis include Y chromosome testing, mitochondrial DNA (mtDNA)

testing, and detection of ancestry-associated genetic variants that occur as single nucleotide polymorphisms

(SNPs) in the human genome. Y chromosome testing is based on genetic comparison of the Y chromosomes of

males. Because males with a common male ancestor have matching Y chromosomes, scientists are able to trace

paternal lineages and thereby determine distant relationships between males. Such analyses allow genealogists

to confirm whether males with the same surname are related. Likewise, maternal lineages can be traced

genetically through mtDNA testing, since the mitochondrial genome is inherited only from the mother. Maternal

lineage tests typically involve analysis of a segment in mtDNA known as hypervariable region 1; comparison of

this segment against reference mtDNA sequences (e.g., Cambridge Reference Sequence) enables scientists to

reconstruct an individual’s maternal genetic lineage.
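Comparing an HVR1 segment against a reference reduces to listing the positions at which the sample differs. A minimal sketch, assuming pre-aligned sequences of equal length; the sequences and function name are invented, and only the numbering convention (HVR1 begins near position 16024 of the real Cambridge Reference Sequence) reflects actual practice:

```python
def hvr1_differences(reference: str, sample: str, offset: int = 16024):
    """List sample-vs-reference differences in the conventional 'positionBase'
    notation, numbering positions from `offset` along the reference."""
    return [f"{offset + i}{s}"
            for i, (r, s) in enumerate(zip(reference, sample)) if r != s]

# Invented 20-base segments; not actual mitochondrial DNA.
reference = "CTGTTCTTTCATGGGGAAGC"
sample    = "CTGTTCTTCCATGGGGAAGT"
print(hvr1_differences(reference, sample))  # ['16032C', '16043T']
```

Two individuals sharing the same list of differences are inferred to share a maternal lineage.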

Following the completion of the Human Genome Project in 2003, it became possible to more efficiently scan the

human genome for SNPs and to compare SNPs occurring in the genomes of human populations in different

geographical regions of the world. The analysis of this information for genetic testing and genealogical purposes

forms the basis of biogeographical ancestry testing. These tests typically make use of panels of ancestry

informative markers (AIMs), which are SNPs specific to human populations and their geographical areas that can

be used to infer ancestry. In 2010 a study using genome-wide SNP analysis incorporating ancestral information

successfully traced persons in Europe to the villages in which their grandparents lived. The technique was

expected to advance genetic testing intended to map an individual’s geographical ancestry.
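The inference behind an AIM panel can be sketched as a simple likelihood comparison. Everything below is invented for illustration: the allele frequencies, marker count, and population labels are hypothetical, and real panels use many more markers and admixture models rather than a hard best-population call:

```python
from math import prod

# Invented allele frequencies for three hypothetical AIMs in two reference
# populations: the probability of observing allele "A" at each marker.
freq = {
    "pop1": [0.90, 0.15, 0.80],
    "pop2": [0.20, 0.85, 0.10],
}

def likelihood(observed, freqs):
    """Probability of the observed alleles ('A' or 'a') under one population,
    treating the markers as independent."""
    return prod(p if allele == "A" else 1 - p for allele, p in zip(observed, freqs))

observed = ["A", "a", "A"]  # a hypothetical individual's alleles at the three AIMs
scores = {pop: likelihood(observed, f) for pop, f in freq.items()}
best = max(scores, key=scores.get)
print(best)  # pop1 -- roughly 200 times more likely than pop2 for this genotype
```

Because each marker's frequencies differ sharply between populations, even a small panel concentrates the likelihood on one ancestry; genome-wide SNP panels sharpen this to the village-level resolution described above.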

Citation (MLA style):

"Genetic testing." Britannica LaunchPacks: The Treatment and Prevention of Disease, Encyclopædia Britannica, 8 Jun. 2012. packs-ngl.eb.com. Accessed 6 Apr. 2022.


medicine

the practice concerned with the maintenance of health and the prevention, alleviation, or cure of disease.


Surgeons performing a laparoscopy.

© s4svisuals/Shutterstock.com

The World Health Organization at its 1978 international conference held in the Soviet Union produced the Alma-

Ata Health Declaration, which was designed to serve governments as a basis for planning health care that would

reach people at all levels of society. The declaration reaffirmed that

health, which is a state of complete physical, mental and social well-being, and not merely the absence of

disease or infirmity, is a fundamental human right and that the attainment of the highest possible level of health

is a most important world-wide social goal whose realization requires the action of many other social and

economic sectors in addition to the health sector.

In its widest form, the practice of medicine—that is to say, the promotion and care of health—is concerned with

this ideal.

Organization of health services

It is generally the goal of most countries to have their health services organized in such a way as to ensure that

individuals, families, and communities obtain the maximum benefit from current knowledge and technology

available for the promotion, maintenance, and restoration of health. In order to play their part in this process,

governments and other agencies are faced with numerous tasks, including the following: (1) They must obtain as

much information as is possible on the size, extent, and urgency of their needs; without accurate information,

planning can be misdirected. (2) These needs must then be revised against the resources likely to be available

in terms of money, manpower, and materials; developing countries may well require external aid to supplement

their own resources. (3) Based on their assessments, countries then need to determine realistic objectives and

draw up plans. (4) Finally, a process of evaluation needs to be built into the program; the lack of reliable

information and accurate assessment can lead to confusion, waste, and inefficiency.

Health services of any nature reflect a number of interrelated characteristics, among which the most obvious,

but not necessarily the most important from a national point of view, is the curative function; that is to say,

caring for those already ill. Others include special services that deal with particular groups (such as children or

pregnant women) and with specific needs such as nutrition or immunization; preventive services, the protection

of the health both of individuals and of communities; health education; and, as mentioned above, the collection

and analysis of information.

Levels of health care

In the curative domain there are various forms of medical practice. They may be thought of generally as forming

a pyramidal structure, with three tiers representing increasing degrees of specialization and technical


sophistication but catering to diminishing numbers of patients as they are filtered out of the system at a lower

level. Only those patients who require special attention either for diagnosis or treatment should reach the

second (advisory) or third (specialized treatment) tiers where the cost per item of service becomes increasingly

higher. The first level represents primary health care, or first contact care, at which patients have their initial

contact with the health-care system.

Primary health care is an integral part of a country’s health maintenance system, of which it forms the largest

and most important part. As described in the declaration of Alma-Ata, primary health care should be “based on

practical, scientifically sound and socially acceptable methods and technology made universally accessible to

individuals and families in the community through their full participation and at a cost that the community and

country can afford to maintain at every stage of their development.” Primary health care in the developed

countries is usually the province of a medically qualified physician; in the developing countries first contact care

is often provided by nonmedically qualified personnel.

The vast majority of patients can be fully dealt with at the primary level. Those who cannot are referred to the

second tier (secondary health care, or the referral services) for the opinion of a consultant with specialized

knowledge or for X-ray examinations and special tests. Secondary health care often requires the technology

offered by a local or regional hospital. Increasingly, however, the radiological and laboratory services provided

by hospitals are available directly to the family doctor, thus improving his service to patients and increasing its

range. The third tier of health care, employing specialist services, is offered by institutions such as teaching

hospitals and units devoted to the care of particular groups—women, children, patients with mental disorders,

and so on. The dramatic differences in the cost of treatment at the various levels are a matter of particular

importance in developing countries, where the cost of treatment for patients at the primary health-care level is

usually only a small fraction of that at the third level; medical costs at any level in such countries, however, are

usually borne by the government.

Ideally, provision of health care at all levels will be available to all patients; such health care may be said to be

universal. The well-off, both in relatively wealthy industrialized countries and in the poorer developing world,

may be able to get medical attention from sources they prefer and can pay for in the private sector. The vast

majority of people in most countries, however, are dependent in various ways upon health services provided by

the state, to which they may contribute comparatively little or, in the case of poor countries, nothing at all.

Costs of health care

The costs to national economies of providing health care are considerable and have been growing at a rapidly

increasing rate, especially in countries such as the United States, Germany, and Sweden; the rise in Britain has

been less rapid. This trend has been the cause of major concerns in both developed and developing countries.

Some of this concern is based upon the lack of any consistent evidence to show that more spending on health

care produces better health. There is a movement in developing countries to replace the type of organization of

health-care services that evolved during European colonial times with some less expensive, and for them, more

appropriate, health-care system.

In the industrialized world the growing cost of health services has caused both private and public health-care

delivery systems to question current policies and to seek more economical methods of achieving their goals.

Despite expenditures, health services are not always used effectively by those who need them, and results can

vary widely from community to community. In Britain, for example, between 1951 and 1971 the death rate fell

by 24 percent in the wealthier sections of the population but by only half that in the most underprivileged


sections of society. The achievement of good health is reliant upon more than just the quality of health care.

Health entails such factors as good education, safe working conditions, a favourable environment, amenities in

the home, well-integrated social services, and reasonable standards of living.

In the developing countries

The developing countries differ from one another culturally, socially, and economically, but what they have in

common is a low average income per person, with large percentages of their populations living at or below the

poverty level. Although most have a small elite class, living mainly in the cities, the largest part of their

populations live in rural areas. In the mid- and late 20th century, urban regions in developing and some

developed countries saw the growth of pockets of slums, fed by an influx of rural peoples and

immigrants. Slums persisted into the 21st century, though newer, cheaper construction techniques greatly

facilitated the development of affordable housing, alleviating some housing problems in densely populated

places such as New York City and cities in India. Nonetheless, many urban areas are simply overcrowded, which

is problematic for maintaining public health. For lack of even the simplest measures, vast numbers of urban and

rural poor die each year of preventable and curable diseases, often associated with poor hygiene and sanitation,

impure water supplies, malnutrition, vitamin deficiencies, and chronic preventable infections. The effect of these

and other deprivations is reflected by the finding that in the 1980s the life expectancy at birth for men and

women was about one-third less in Africa than it was in Europe; similarly, infant mortality in Africa was about

eight times greater than in Europe. The extension of primary health-care services is therefore a high priority in

the developing countries.

The developing countries themselves, lacking the proper resources, have often been unable to generate or

implement the plans necessary to provide required services at the village or urban poor level. It has, however,

become clear that the system of health care that is appropriate for one country is often unsuitable for another.

Research has established that effective health care is related to the special circumstances of the individual

country, its people, culture, ideology, and economic and natural resources.

The rising costs of providing health care have influenced a trend, especially among the developing nations, to

promote services that employ less highly trained primary health-care personnel who can be distributed more

widely in order to reach the largest possible proportion of the community. The principal medical problems to be

dealt with in the developing world include undernutrition, infection, gastrointestinal disorders, and respiratory

complaints, which themselves may be the result of poverty, ignorance, and poor hygiene. For the most part,

these are easy to identify and to treat. Furthermore, preventive measures are usually simple and cheap. Neither

treatment nor prevention requires extensive professional training: in most cases they can be dealt with

adequately by the “primary health worker,” a term that includes all nonprofessional health personnel.

In the developed countries

Those concerned with providing health care in the developed countries face a different set of problems. The

diseases so prevalent in the Third World have, for the most part, been eliminated or are readily treatable. Many

of the adverse environmental conditions and public health hazards have been conquered. Social services of

varying degrees of adequacy have been provided. Public funds can be called upon to support the cost of medical

care, and there are a variety of private insurance plans available to the consumer. Nevertheless, the funds that a

government can devote to health care are limited and the cost of modern medicine continues to increase, thus

putting adequate medical services beyond the reach of many. Adding to the expense of modern medical

practices is the increasing demand for greater funding of health education and preventive measures specifically

directed toward the poor.

Harold Scarborough


Administration of primary health care

A health care worker in surgical scrubs.

Creatas/Getty Images Plus

In many parts of the world, particularly in developing countries, people get their primary health care, or first-

contact care, where available at all, from nonmedically qualified personnel; these cadres of medical auxiliaries

are being trained in increasing numbers to meet overwhelming needs among rapidly growing populations. Even

among the comparatively wealthy countries of the world, containing in all a much smaller percentage of the

world’s population, escalation in the costs of health services and in the cost of training a physician has

precipitated some movement toward reappraisal of the role of the medical doctor in the delivery of first-contact

care.

In advanced industrial countries, however, it is usually a trained physician who is called upon to provide the first-

contact care. The patient seeking first-contact care can either go to a general practitioner or turn directly to a

specialist. Which is the wisest choice has become a subject of some controversy. The general practitioner,

however, is becoming rather rare in some developed countries. In countries where he does still exist, he is being

increasingly regarded as an obsolescent figure, because medicine covers an immense, rapidly changing, and

complex field of which no physician can possibly master more than a small fraction. The very concept of the

general practitioner, it is thus argued, may be absurd.

The obvious alternative to general practice is the direct access of a patient to a specialist. If a patient has

problems with vision, he goes to an eye specialist, and if he has a pain in his chest (which he fears is due to his

heart), he goes to a heart specialist. One objection to this plan is that the patient often cannot know which organ

is responsible for his symptoms, and the most careful physician, after doing many investigations, may remain

uncertain as to the cause. Breathlessness—a common symptom—may be due to heart disease, to lung disease,

to anemia, or to emotional upset. Another common symptom is general malaise—feeling run-down or always

tired; others are headache, chronic low backache, rheumatism, abdominal discomfort, poor appetite, and

constipation. Some patients may also be overtly anxious or depressed. Among the most subtle medical skills is

the ability to assess people with such symptoms and to distinguish between symptoms that are caused

predominantly by emotional upset and those that are predominantly of bodily origin. A specialist may be capable

of such a general assessment, but, often, with emphasis on his own subject, he fails at this point. The generalist

with his broader training is often the better choice for a first diagnosis, with referral to a specialist as the next

option.

It is often felt that there are also practical advantages for the patient in having his own doctor, who knows about

his background, who has seen him through various illnesses, and who has often looked after his family as well.


This personal physician, often a generalist, is in the best position to decide when the patient should be referred

to a consultant.

The advantages of general practice and specialization are combined when the physician of first contact is a

pediatrician. Although he sees only children and thus acquires a special knowledge of childhood maladies, he

remains a generalist who looks at the whole patient. Another combination of general practice and specialization

is represented by group practice, the members of which partially or fully specialize. One or more may be general

practitioners, and one may be a surgeon, a second an obstetrician, a third a pediatrician, and a fourth an

internist. In isolated communities group practice may be a satisfactory compromise, but in urban regions, where

nearly everyone can be sent quickly to a hospital, the specialist surgeon working in a fully equipped hospital can

usually provide better treatment than a general practitioner surgeon in a small clinic hospital.

Medical practice in developed countries

Britain

Before 1948, general practitioners in Britain settled where they could make a living. Patients fell into two main

groups: weekly wage earners, who were compulsorily insured, were on a doctor’s “panel” and were given free

medical attention (for which the doctor was paid quarterly by the government); most of the remainder paid the

doctor a fee for service at the time of the illness. In 1948 the National Health Service began operation. Under its

provisions, everyone is entitled to free medical attention with a general practitioner with whom he is registered.

Though general practitioners in the National Health Service are not debarred from also having private patients,

these must be people who are not registered with them under the National Health Service. Any physician is free

to work as a general practitioner entirely independent of the National Health Service, though there are few who

do so. Almost the entire population is registered with a National Health Service general practitioner, and the vast

majority automatically sees this physician, or one of his partners, when they require medical attention. A few

people, mostly wealthy, while registered with a National Health Service general practitioner, regularly see

another physician privately; and a few may occasionally seek a private consultation because they are

dissatisfied with their National Health Service physician.

A general practitioner under the National Health Service remains an independent contractor, paid by a capitation

fee; that is, according to the number of people registered with him. He may work entirely from his own office,

and he provides and pays his own receptionist, secretary, and other ancillary staff. Most general practitioners

have one or more partners and work more and more in premises built for the purpose. Some of these structures

are erected by the physicians themselves, but many are provided by the local authority, the physicians paying

rent for using them. Health centres, in which groups of general practitioners work, have become common.

In Britain only a small minority of general practitioners can admit patients to a hospital and look after them

personally. Most of this minority are in country districts, where, before the days of the National Health Service,

there were cottage hospitals run by general practitioners; many of these hospitals continued to function in a

similar manner. All general practitioners use such hospital facilities as X-ray departments and laboratories, and

many general practitioners work in hospitals in emergency rooms (casualty departments) or as clinical assistants

to consultants, or specialists.

General practitioners are spread more evenly over the country than formerly, when there were many in the

richer areas and few in the industrial towns. The maximum allowed list of National Health Service patients per

doctor is 3,500; the average is about 2,500. Patients have free choice of the physician with whom they register,


with the proviso that they cannot be accepted by one who already has a full list and that a physician can refuse

to accept them (though such refusals are rare). In remote rural places there may be only one physician within a

reasonable distance.

Until the mid-20th century it was not unusual for the doctor in Britain to visit patients in their own homes. A

general practitioner might make 15 or 20 such house calls in a day, as well as seeing patients in the office or

“surgery,” often in the evenings. This enabled him to become a family doctor in fact as well as in name. In

modern practice, however, a home visit is quite exceptional and is paid only to the severely disabled or seriously

ill when other recourses are ruled out. All patients are normally required to go to the doctor.

It has also become unusual for a personal doctor to be available during weekends or holidays. His place may be

taken by one of his partners in a group practice, a provision that is reasonably satisfactory. General

practitioners, however, may now use one of several commercial deputizing services that employ young doctors

to be on call. Although some of these young doctors may be well experienced, patients do not generally

appreciate this kind of arrangement.

John Walford Todd

Harold Scarborough

United States

Whereas in Britain the doctor of first contact is regularly a general practitioner, in the United States the nature of

first-contact care is less consistent. General practice in the United States was in a state of decline in the second

half of the 20th century, especially in metropolitan areas. Increasingly, the general practitioner was replaced by

the growing field of family practice. In 1969 family practice was recognized as a medical specialty after the

American Academy of General Practice (later the American Academy of Family Physicians) and the American

Medical Association created the American Board of General Practice (later American Board of Family Medicine).

Since that time the field has become one of the larger medical specialties in the United States. The family

physicians were the first group of medical specialists in the United States for whom recertification was required.

There is no national health service, as such, in the United States. Most physicians in the country have

traditionally been in some form of private practice, whether seeing patients in their own offices, clinics, medical

centres, or another type of facility and regardless of the patients’ income. Doctors are usually compensated by

such state and federally supported agencies as Medicaid (for treating the poor) and Medicare (for treating the

elderly); not all doctors, however, accept poor patients. There are also some state-supported clinics and

hospitals where the poor and elderly may receive free or low-cost treatment, and some doctors devote a small

percentage of their time to treatment of the indigent. Veterans may receive free treatment at Veterans

Administration hospitals, and the federal government through its Indian Health Service provides medical services

to American Indians and Alaskan natives, sometimes using trained auxiliaries for first-contact care.

In the rural United States first-contact care is likely to come from a generalist. The middle- and upper-income

groups living in urban areas, however, have access to a larger number of primary medical care options. Children

are often taken to pediatricians, who may oversee the child’s health needs until adulthood. Adults frequently

make their initial contact with an internist, whose field is mainly that of medical (as opposed to surgical)

illnesses; the internist often becomes the family physician. Other adults choose to go directly to physicians with

narrower specialties, including dermatologists, allergists, gynecologists, orthopedists, and ophthalmologists.

Patients in the United States may also choose to be treated by doctors of osteopathy. These doctors are fully

qualified, but they make up only a small percentage of the country’s physicians. They may also branch off into

specialties, but general practice is much more common in their group than among M.D.’s.


It used to be more common in the United States for physicians providing primary care to work independently,

providing their own equipment and paying their own ancillary staff. In smaller cities they mostly had full hospital

privileges, but in larger cities these privileges were more likely to be restricted. Physicians, often sharing the

same specialties, are increasingly entering into group associations, where the expenses of office space, staff,

and equipment may be shared; such associations may work out of suites of offices, clinics, or medical centres.

The increasing competition and risks of private practice have caused many physicians to join Health

Maintenance Organizations (HMOs), which provide comprehensive medical care and hospital care on a prepaid

basis. The cost savings to patients are considerable, but they must use only the HMO doctors and facilities.

HMOs stress preventive medicine and out-patient treatment as opposed to hospitalization as a means of

reducing costs, a policy that has caused an increased number of empty hospital beds in the United States.

While the number of doctors per 100,000 population in the United States has been steadily increasing, there has

been a trend among physicians toward the use of trained medical personnel to handle some of the basic

services normally performed by the doctor. So-called physician extender services are commonly divided into

nurse practitioners and physician’s assistants, both of whom provide similar ancillary services for the general

practitioner or specialist. Such personnel do not replace the doctor. Almost all American physicians have

systems for taking each other’s calls when they become unavailable. House calls in the United States, as in

Britain, have become exceedingly rare.

EB Editors

Russia

In Russia general practitioners are prevalent in the thinly populated rural areas. Pediatricians deal with children

up to about age 15. Internists look after the medical ills of adults, and occupational physicians deal with the

workers, sharing care with internists.

Teams of physicians with experience in varying specialties work from polyclinics or outpatient units, where many

types of diseases are treated. Small towns usually have one polyclinic to serve all purposes. Large cities

commonly have separate polyclinics for children and adults, as well as clinics with specializations such as

women’s health care, mental illnesses, and sexually transmitted diseases. Polyclinics usually have X-ray

apparatus and facilities for examination of tissue specimens, facilities associated with the departments of the

district hospital. Beginning in the late 1970s there was a trend toward the development of large, multipurpose

treatment centres, first-aid hospitals, and specialized medicine and health care centres.

Home visits have traditionally been common, and much of the physician’s time is spent in performing routine

checkups for preventive purposes. Some patients in sparsely populated rural areas may be seen first by

feldshers (auxiliary health workers), nurses, or midwives who work under the supervision of a polyclinic or

hospital physician. The feldsher was once a lower-grade physician in the army or peasant communities, but

feldshers are now regarded as paramedical workers.

Japan

In Japan, with less rigid legal restriction of the sale of pharmaceuticals than in the West, there was formerly a

strong tradition of self-medication and self-treatment. This was modified in 1961 by the institution of health

insurance programs that covered a large proportion of the population; there was then a great increase in visits

to the outpatient clinics of hospitals and to private clinics and individual physicians.

When Japan shifted from traditional Chinese medicine with the adoption of Western medical practices in the

1870s, Germany became the chief model. As a result of German influence and of their own traditions, Japanese


physicians tended to prefer professorial status and scholarly research opportunities at the universities or

positions in the national or prefectural hospitals to private practice. There were some pioneering physicians,

however, who brought medical care to the ordinary people.

Physicians in Japan have tended to cluster in the urban areas. The Medical Service Law of 1963 was amended to

empower the Ministry of Health and Welfare to control the planning and distribution of future public and

nonprofit medical facilities, partly to redress the urban-rural imbalance. Meanwhile, mobile services were

expanded.

The influx of patients into hospitals and private clinics after the passage of the national health insurance acts of

1961 had, as one effect, a severe reduction in the amount of time available for any one patient. Perhaps in

reaction to this situation, there has been a modest resurgence in the popularity of traditional Chinese medicine,

with its leisurely interview, its dependence on herbal and other “natural” medicines, and its other traditional

diagnostic and therapeutic practices. The rapid aging of the Japanese population as a result of the sharply

decreasing death rate and birth rate has created an urgent need for expanded health care services for the

elderly. There has also been an increasing need for centres to treat health problems resulting from

environmental causes.

Other developed countries

On the continent of Europe there are great differences both within single countries and between countries in the

kinds of first-contact medical care. General practice, while declining in Europe as elsewhere, is still rather

common even in some large cities, as well as in remote country areas.

In The Netherlands, departments of general practice are administered by general practitioners in all the medical

schools—an exceptional state of affairs—and general practice flourishes. In the larger cities of Denmark, general

practice on an individual basis is usual and popular, because the physician works only during office hours. In

addition, there is a duty doctor service for nights and weekends. In the cities of Sweden, primary care is given by

specialists. In the remote regions of northern Sweden, district doctors act as general practitioners to patients

spread over huge areas; the district doctors delegate much of their home visiting to nurses.

In France there are still general practitioners, but their number is declining. Many medical practitioners advertise

themselves directly to the public as specialists in internal medicine, ophthalmology, gynecology, and other

fields. Even when patients have a general practitioner, they may still go directly to a specialist.

Attempts to stem the decline in general practice are being made by the development of group practice and of

small rural hospitals equipped to deal with less serious illnesses, where general practitioners can look after their

patients.

Although Israel has a high ratio of physicians to population, there is a shortage of general practitioners, and only

in rural areas is general practice common. In the towns many people go directly to pediatricians, gynecologists,

and other specialists, but there has been a reaction against this direct access to the specialist. More general

practitioners have been trained, and the Israel Medical Association has recommended that no patient should be

referred to a specialist except by the family physician or on instructions given by the family nurse. At Tel Aviv

University there is a department of family medicine. In some newly developing areas, where the doctor shortage

is greatest, there are medical centres at which all patients are initially interviewed by a nurse. The nurse may

deal with many minor ailments, thus freeing the physician to treat the more seriously ill.


Nearly half the medical doctors in Australia are general practitioners—a far higher proportion than in most other

advanced countries—though, as elsewhere, their numbers are declining. They tend to do far more for their

patients than in Britain, many performing such operations as removal of the appendix, gallbladder, or uterus,

operations that elsewhere would be carried out by a specialist surgeon. Group practices are common.

Medical practice in developing countries

China

Health services in China since the Cultural Revolution have been characterized by decentralization and

dependence on personnel chosen locally and trained for short periods. Emphasis is given to selfless motivation,

self-reliance, and the involvement of everyone in the community. Campaigns stressing the importance of

preventive measures and their implementation have served to create new social attitudes as well as to break

down divisions between different categories of health workers. Health care is regarded as a local matter that

should not require the intervention of any higher authority; it is based upon a highly organized and well-

disciplined system that is egalitarian rather than hierarchical, as in Western societies, and which is well suited to

the rural areas where about two-thirds of the population live. In the large and crowded cities an important

constituent of the health-care system is the residents’ committees, each for a population of 1,000 to 5,000

people. Care is provided by part-time personnel with periodic visits by a doctor. A number of residents’

committees are grouped together into neighbourhoods of some 50,000 people where there are clinics and

general hospitals staffed by doctors as well as health auxiliaries trained in both traditional and Westernized

medicine. Specialized care is provided at the district level (over 100,000 people), in district hospitals and in

epidemic and preventive medicine centres. In many rural districts people’s communes have organized

cooperative medical services that provide primary care for a small annual fee.

Throughout China the value of traditional medicine is stressed, especially in the rural areas. All medical schools

are encouraged to teach traditional medicine as part of their curriculum, and efforts are made to link colleges of

Chinese medicine with Western-type medical schools. Medical education is of shorter duration than it is in

Europe, and there is greater emphasis on practical work. Students spend part of their time away from the

medical school working in factories or in communes; they are encouraged to question what they are taught and

to participate in the educational process at all stages. One well-known form of traditional medicine is

acupuncture, which is used as a therapeutic and pain-relieving technique; requiring the insertion of brass-

handled needles at various points on the body, acupuncture has become quite prominent as a form of

anesthesia.

The vast number of nonmedically qualified health staff, upon whom the health-care system greatly depends,

includes both full-time and part-time workers. The latter include so-called barefoot doctors, who work mainly in

rural areas, worker doctors in factories, and medical workers in residential communities. None of these groups is

medically qualified. They have had only a three-month period of formal training, part of which is done in a

hospital, fairly evenly divided between theoretical and practical work. This is followed by a varying period of on-

the-job experience under supervision.

India

Āyurvedic medicine is an example of a well-organized system of traditional health care, both preventive and

curative, that is widely practiced in parts of Asia. Āyurvedic medicine has a long tradition behind it, having

originated in India perhaps as long as 3,000 years ago. It is still a favoured form of health care in large parts of

the Eastern world, especially in India, where a large percentage of the population use this system exclusively or


combined with modern medicine. The Indian Medical Council was set up in 1971 by the Indian government to

establish and maintain standards of undergraduate and postgraduate education. It establishes suitable

qualifications in Indian medicine and recognizes various forms of traditional practice including Āyurvedic, Unani,

and Siddha. Projects have been undertaken to integrate the indigenous Indian and Western forms of medicine.

Most Āyurvedic practitioners work in rural areas, providing health care to at least 500,000,000 people in India

alone. They therefore represent a major force for primary health care, and their training and deployment are

important to the government of India.

Like scientific medicine, Āyurvedic medicine has both preventive and curative aspects. The preventive

component emphasizes the need for a strict code of personal and social hygiene, the details of which depend

upon individual, climatic, and environmental needs. Bodily exercises, the use of herbal preparations, and Yoga

form a part of the remedial measures. The curative aspects of Āyurvedic medicine involve the use of herbal

medicines, external preparations, physiotherapy, and diet. It is a principle of Āyurvedic medicine that the

preventive and therapeutic measures be adapted to the personal requirements of each patient.

Other developing countries

A main goal of the World Health Organization (WHO), as expressed in the Alma-Ata Declaration of 1978, is to

provide to all the citizens of the world a level of health that will allow them to lead socially and economically

productive lives by the year 2000. By the late 1980s, however, vast disparities in health care still existed

between the rich and poor countries of the world. In developing countries such as Ethiopia, Guinea, Mali, and

Mozambique, for instance, governments in the late 1980s spent less than $5 per person per year on public

health, while in most western European countries several hundred dollars per year was spent on each person.

The disproportion of the number of physicians available between developing and developed countries is similarly

wide.

Along with the shortage of physicians, there is a shortage of everything else needed to provide medical care—of

equipment, drugs, and suitable buildings, and of nurses, technicians, and all other grades of staff, whose

presence is taken for granted in the affluent societies. Yet a greater percentage of people fall sick in the poor

countries than in the rich countries. In the poor countries a high proportion of people are young, and all are

liable to many infections, including tuberculosis, syphilis, typhoid, and cholera (which, with the possible

exception of syphilis, are now rare in the rich countries), and also malaria, yaws, worm infestations, and many

other conditions occurring primarily in the warmer climates. Nearly all of these infections respond to the

antibiotics and other drugs that have been discovered since the 1920s. There is also much malnutrition and

anemia, which can be cured if funding is available. Disorders remediable by surgery are also prevalent.

Preventive medicine can ensure clean water supplies, destroy insects that carry infections, teach hygiene, and

show how to make the best use of resources.

In most poor countries there are a few people, usually living in the cities, who can afford to pay for medical care,

and in a free market system the physicians tend to go where they can make the best living; this situation causes

the doctor–patient ratio to be much higher in the towns than in country districts. A physician in Bombay or in Rio

de Janeiro, for example, may have equipment as lavish as that of a physician in the United States and can earn

an excellent income. The poor, however, both in the cities and in the country, can get medical attention only if it

is paid for by the state, by some supranational body, or by a mission or other charitable organization. Moreover,

the quality of the care they receive is often poor, and in remote regions it may be lacking altogether. In practice,

hospitals run by a mission may cooperate closely with state-run health centres.

Because physicians are scarce, their skills must be used to best advantage, and much of the work normally done

by physicians in the rich countries has to be delegated to auxiliaries or nurses, who have to diagnose the

common conditions, give treatment, take blood samples, help with operations, supply simple posters containing

health advice, and carry out other tasks. In such places the doctor has time only to perform major operations

and deal with the more difficult medical problems. People are treated as far as possible on an outpatient basis

from health centres housed in simple buildings; few can travel except on foot, and, if they are more than a few

miles from a health centre, they tend not to go there. Health centres also may be used for health education.

Although primary health-care service differs from country to country, that developed in Tanzania is

representative of many that have been devised in largely rural developing countries. The most important feature

of the Tanzanian rural health service is the rural health centre, which, with its related dispensaries, is intended

to provide comprehensive health services for the community. The staff is headed by the assistant medical officer

and the medical assistant. The assistant medical officer has at least four years of experience, which is then

followed by further training for 18 months. He is not a doctor but serves to bridge the gap between medical

assistant and physician. The medical assistant has three years of general medical education. The work of the

rural health centres and dispensaries is mainly of three kinds: diagnosis and treatment, maternal and child

health, and environmental health. The main categories of primary health workers also include medical aides,

maternal and child health aides, and health auxiliaries. Nurses and midwives form another category of worker. In

the villages there are village health posts staffed by village medical helpers working under supervision from the

rural health centre.

Among some traditional segments of society in developing countries, and in some developed countries, there

exists the belief that illness comes from the displeasure of ancestral gods and evil spirits, from the malign

influence of evilly disposed persons, or from natural phenomena that can neither be forecast nor controlled. To

deal with such causes there are many varieties of indigenous healers who practice elaborate rituals on behalf of

both the physically ill and the mentally afflicted. If it is understood that such beliefs, and other forms of

shamanism, may provide a foundation upon which health care can be built, then primary health care may be said to

exist almost everywhere. It is not only easily available but also readily accepted, and often preferred to more

rational methods of diagnosis and treatment. Although such methods may sometimes be harmful, they may

often be effective, especially where the cause is psychosomatic. Other patients, however, may suffer from a

disease for which there is a cure in modern medicine.

In order to improve the coverage of primary health-care services and to spread more widely some of the benefits

of Western medicine, attempts have sometimes been made to find a means of cooperation, or even integration,

between traditional and modern medicine (see above Medical practice in developing countries: India). In Africa,

for example, some such attempts are officially sponsored by ministries of health, state governments,

universities, and the like, and they have the approval of WHO, which often takes the lead in this activity. In view,

however, of the historical relationships between these two systems of medicine, their different basic concepts,

and the fact that their methods cannot readily be combined, successful merging has been limited.

Alternative or complementary medicine

Persons dissatisfied with the methods of modern medicine or with its results sometimes seek help from those

professing expertise in other, less conventional, and sometimes controversial, forms of health care. Such

practitioners are not medically qualified unless they combine such treatments with a regular (allopathic)

practice, which includes osteopathy. In many countries the use of some forms, such as chiropractic, requires

licensing and a degree from an approved college. The treatments afforded in these various practices are not

always subjected to objective assessment, yet they provide services that are alternative, and sometimes

complementary, to conventional practice. This group includes practitioners of homeopathy, naturopathy,

acupuncture, hypnotism, and various meditative and quasi-religious forms. Numerous persons also seek out

some form of faith healing to cure their ills, sometimes as a means of last resort. Religions commonly include

some accounts of miraculous curing within their scriptures. The belief in such curative powers has been in part

responsible for the increasing popularity of the television, or “electronic,” preacher in the United States, a

phenomenon that involves millions of viewers. Millions of others annually visit religious shrines, such as the one

at Lourdes in France, with the hope of being miraculously healed.

Special practices and fields of medicine

Specialties in medicine

At the beginning of World War II it was possible to recognize a number of major medical specialties, including

internal medicine, obstetrics and gynecology, pediatrics, pathology, anesthesiology, ophthalmology, surgery,

orthopedic surgery, plastic surgery, psychiatry and neurology, radiology, and urology. Hematology was also an

important field of study, and microbiology and biochemistry were important medically allied specialties. Since

World War II, however, there has been an almost explosive increase of knowledge in the medical sciences as

well as enormous advances in technology as applicable to medicine. These developments have led to more and

more specialization. The knowledge of pathology has been greatly extended, mainly by the use of the electron

microscope; similarly microbiology, which includes bacteriology, expanded with the growth of such other

subfields as virology (the study of viruses) and mycology (the study of yeasts and fungi in medicine).

Biochemistry, sometimes called clinical chemistry or chemical pathology, has contributed to the knowledge of

disease, especially in the field of genetics where genetic engineering has become a key to curing some of the

most difficult diseases. Hematology also expanded after World War II with the development of electron

microscopy. Contributions to medicine have come from such fields as psychology and sociology, especially in

such areas as mental disorders and mental handicaps. Clinical pharmacology has led to the development of

more effective drugs and to the identification of adverse reactions. More recently established medical specialties

are those of preventive medicine, physical medicine and rehabilitation, family practice, and nuclear medicine. In

the United States every medical specialist must be certified by a board composed of members of the specialty in

which certification is sought. Some type of peer certification is required in most countries.

Magnetic resonance imaging (MRI) is a powerful diagnostic technique that is used to visualize organs …

© Corbis

Expansion of knowledge both in depth and in range has encouraged the development of new forms of treatment

that require high degrees of specialization, such as organ transplantation and exchange transfusion; the field of

anesthesiology has grown increasingly complex as equipment and anesthetics have improved. New technologies

have introduced microsurgery, laser beam surgery, and lens implantation (for cataract patients), all requiring the

specialist’s skill. Precision in diagnosis has markedly improved; advances in radiology, the use of ultrasound,

computerized axial tomography (CAT scan), and nuclear magnetic resonance imaging are examples of the

extension of technology requiring expertise in the field of medicine.

To provide more efficient service it is not uncommon for a specialist surgeon and a specialist physician to form a

team working together in the field of, for example, heart disease. An advantage of this arrangement is that they

can attract a highly trained group of nurses, technologists, operating room technicians, and so on, thus greatly

improving the efficiency of the service to the patient. Such specialization is expensive, however, and has

required an increasingly large proportion of the health budget of institutions, a situation that eventually has its

financial effect on the individual citizen. The question therefore arises as to the cost-effectiveness of such specialized services.

Governments of developing countries have usually found, for instance, that it is more cost-efficient to provide

more people with basic care.

Teaching

Physicians in developed countries frequently prefer posts in hospitals with medical schools. Newly qualified

physicians want to work there because doing so will aid their future careers, though the actual experience may

be wider and better in a hospital without a medical school. Senior physicians seek careers in hospitals with

medical schools because consultant, specialist, or professorial posts there usually carry a high degree of

prestige. When the posts are salaried, the salaries are sometimes, but not always, higher than in a nonteaching

hospital. Usually a consultant who works in private practice earns more when on the staff of a medical school.

In many medical schools there are clinical professors in each of the major specialties—such as surgery, internal

medicine, obstetrics and gynecology, and psychiatry—and often of the smaller specialties as well. There are also

professors of pathology, radiology, and radiotherapy. Whether professors or not, all doctors in teaching hospitals

have the two functions of caring for the sick and educating students. They give lectures and seminars and are

accompanied by students on ward rounds.

Industrial medicine

The Industrial Revolution greatly changed, and as a rule worsened, the health hazards caused by industry, while

the numbers at risk vastly increased. In Britain the first efforts to ameliorate the lot of the

workers in factories and mines began in 1802 with the passing of the first factory act, the Health and Morals of

Apprentices Act. The factory act of 1838, however, was the first truly effective measure in the industrial field. It

forbade night work for children and restricted their work hours to 12 per day. Children under 13 were required to

attend school. A factory inspectorate was established, the inspectors being given powers of entry into factories

and power of prosecution of recalcitrant owners. Thereafter there was a succession of acts with detailed

regulations for safety and health in all industries. Industrial diseases were made notifiable, and those who

developed any prescribed industrial disease were entitled to benefits.

The situation is similar in other developed countries. Physicians are bound by legal restrictions and must report

industrial diseases. The industrial physician’s most important function, however, is to prevent industrial

diseases. Many of the measures to this end have become standard practice, but, especially in industries working

with new substances, the physician should determine whether workers are being harmed and suggest preventive

measures. The industrial physician may advise management about industrial hygiene and the need for safety

devices and protective clothing and may become involved in building design. The physician or health worker

may also inform the worker of occupational health hazards.

Modern factories usually have arrangements for giving first aid in case of accidents. Depending upon the size of

the plant, the facilities may range from a simple first-aid station to a large suite of lavishly equipped rooms and

may include a staff of qualified nurses and physiotherapists and one or perhaps more full-time physicians.

Periodic medical examination

Physicians in industry carry out medical examinations, especially on new employees and on those returning to

work after sickness or injury. In addition, those liable to health hazards may be examined regularly in the hope

of detecting evidence of incipient damage. In some organizations every employee may be offered a regular

medical examination.

The industrial and the personal physician

When a worker also has a personal physician, there may be doubt, in some cases, as to which physician bears

the main responsibility for his health. When someone has an accident or becomes acutely ill at work, the first aid

is given or directed by the industrial physician. Subsequent treatment may be given either at the clinic at work

or by the personal physician. Because of labour-management difficulties, workers sometimes tend not to trust

the diagnosis of the management-hired physician.

Industrial health services

During the epoch of the Soviet Union and the Soviet bloc, industrial health service generally developed more

fully in those countries than in the capitalist countries. At the larger industrial establishments in the Soviet

Union, polyclinics were created to provide both occupational and general care for workers and their families.

Occupational physicians were responsible for preventing occupational diseases and injuries, health screening,

immunization, and health education.

In the capitalist countries, on the other hand, no fixed pattern of industrial health service has emerged.

Legislation impinges upon health in various ways, including the provision of safety measures, the restriction of

pollution, and the enforcement of minimum standards of lighting, ventilation, and space per person. In most of

these countries a wide variety of schemes can be found, financed and run by individual firms or, equally, by

huge industries. Labour unions have also done much to enforce health codes within their respective industries.

In the developing countries there has been generally little advance in industrial medicine.

Family health care

In many societies special facilities are provided for the health care of pregnant women, mothers, and their young

children. The health care needs of these three groups are generally recognized to be so closely related as to

require a highly integrated service that includes prenatal care, the birth of the baby, the postnatal period, and

the needs of the infant. Such a continuum should be followed by a service attentive to the needs of young

children and then by a school health service. Family clinics are common in countries that have state-sponsored

health services, such as those in the United Kingdom and elsewhere in Europe. Family health care in some

developed countries, such as the United States, is provided for low-income groups by state-subsidized facilities,

but other groups defer to private physicians or privately run clinics.

Prenatal clinics provide a number of elements. There is, first, the care of the pregnant woman, especially if she is

in a vulnerable group likely to develop some complication during the last few weeks of pregnancy and

subsequent delivery. Many potential hazards, such as diabetes and high blood pressure, can be identified and

measures taken to minimize their effects. In developing countries pregnant women are especially susceptible to

many kinds of disorders, particularly infections such as malaria. Local conditions determine what special

precautions should be taken to ensure a healthy child. Most pregnant women, in their concern to have a healthy

child, are receptive to simple health education. The prenatal clinic provides an excellent opportunity to teach the

mother how to look after herself during pregnancy, what to expect at delivery, and how to care for her baby. If

the clinic is attended regularly, the woman’s record will be available to the staff that will later supervise the

delivery of the baby; this is particularly important for someone who has been determined to be at risk. The same

clinical unit should be responsible for prenatal, natal, and postnatal care as well as for the care of the newborn

infants.

Most pregnant women can be safely delivered in simple circumstances without an elaborately trained staff or

sophisticated technical facilities, provided that these can be called upon in emergencies. In developed countries

it was customary in premodern times for the delivery to take place in the woman’s home supervised by a

qualified midwife or by the family doctor. By the mid-20th century, women, especially in urban areas, usually

preferred to have their babies in a hospital, either in a general hospital or in a more specialized maternity

hospital. In many developing countries traditional birth attendants supervise the delivery. They are women, for

the most part without formal training, who have acquired skill by working with others and from their own

experience. Normally they belong to the local community where they have the confidence of the family, where

they are content to live and serve, and where their services are of great value. In many developing countries the

better training of birth attendants has a high priority. In developed Western countries there has been a trend

toward delivery by natural childbirth, including delivery in a hospital without anesthesia, and home delivery.

Postnatal care services are designed to supervise the return to normal of the mother. They are usually given by

the staff of the same unit that was responsible for the delivery. Important considerations are the matter of

breast- or artificial feeding and the care of the infant. Today the prospects for survival of babies born

prematurely or after a difficult and complicated labour, as well as for neonates (recently born babies) with some

physical abnormality, are vastly improved. This is due to technical advances, including those that can detect

defects in the prenatal stage, as well as to the growth of neonatology as a specialty. A vital part of the family

health-care service is the child welfare clinic, which undertakes the care of the newborn. The first step is the

thorough physical examination of the child on one or more occasions to determine whether or not it is normal

both physically and, if possible, mentally. Later periodic examinations serve to decide if the infant is growing

satisfactorily. Arrangements can be made for the child to be protected from major hazards by, for example,

immunization and dietary supplements. Any intercurrent condition, such as a chest infection or skin disorder,

can be detected early and treated. Throughout the whole of this period mother and child are together, and

particular attention is paid to the education of the mother for the care of the child.

A part of the health service available to children in the developed countries is that devoted to child guidance.

This provides psychiatric guidance to maladjusted children usually through the cooperative work of a child

psychiatrist, educational psychologist, and schoolteacher.

Geriatrics

Since the mid-20th century a change has occurred in the population structure in developed countries. The

proportion of elderly people has been increasing. Since 1983, however, in most European countries the

population growth of that group has leveled off, although it is expected to continue to grow more rapidly than

the rest of the population in most countries through the first third of the 21st century. In the late 20th century

Japan had the fastest growing elderly population.

Geriatrics, the health care of the elderly, is therefore a considerable burden on health services. In the United

Kingdom about one-third of all hospital beds are occupied by patients over 65; half of these are psychiatric

patients. The physician’s time is being spent more and more with the elderly, and since statistics show that

women live longer than men, geriatric practice is becoming increasingly concerned with the treatment of

women. Elderly people often have more than one disorder, many of which are chronic and incurable, and they

need more attention from health-care services. In the United States there has been some movement toward

making geriatrics a medical specialty, but it has not generally been recognized.

Support services for the elderly provided by private or state-subsidized sources include domestic help, delivery

of meals, day-care centres, elderly residential homes or nursing homes, and hospital beds either in general

medical wards or in specialized geriatric units. The degree of accessibility of these services is uneven from

country to country and within countries. In the United States, for instance, although there are some federal

programs, each state has its own elderly programs, which vary widely. However, as the elderly become an

increasingly large part of the population, their voting power provides increased leverage for obtaining more

federal and state benefits. The general practitioner or family physician, working with visiting health and social

workers and in conjunction with the patient's family, often forms a working team for elderly care.

In the developing world, countries are largely spared such geriatric problems, but not necessarily for positive

reasons. A principal cause, for instance, is that people do not live so long. Another major reason is that in the

extended family, still prevalent in developing countries, most of the caretaking needs of the elderly

are met by the family.

Public health practice

Ministry of Public Health worker administering water-purifying tablets in Kabul, 2005.

Tomas Munita/AP

The physician working in the field of public health is mainly concerned with the environmental causes of ill

health and with their prevention. Bad drainage, polluted water and atmosphere, noise and smells, infected food,

bad housing, and poverty in general are all his special concern. Perhaps the most descriptive title he can be

given is that of community physician. In Britain he has been customarily known as the medical officer of health

and, in the United States, as the health officer.

The spectacular improvement in life expectancy in the affluent countries has been due far more to public

health measures than to curative medicine. These public health measures began operation largely in the 19th

century. At the beginning of that century, drainage and water supply systems were all more or less primitive;

nearly all the cities of that time had poorer water and drainage systems than Rome had possessed 1,800 years

previously. Infected water supplies caused outbreaks of typhoid, cholera, and other waterborne infections. By

the end of the century, at least in the larger cities, water supplies were usually safe. Food-borne infections were

also drastically reduced by the enforcement of laws concerned with the preparation, storage, and distribution of

food. Insect-borne infections, such as malaria and yellow fever, which were common in tropical and semitropical

climates, were eliminated by the destruction of the responsible insects. Fundamental to this improvement in

health has been the diminution of poverty, for most public health measures are expensive. The peoples of the

developing countries fall sick and sometimes die from infections that are virtually unknown in affluent countries.

Britain

Public health services in Britain are organized locally under the National Health Service. The medical officer of

health is employed by the local council and is the adviser in health matters. The larger councils employ a

number of mostly full-time medical officers; in some rural areas, a general practitioner may be employed part-

time as medical officer of health.

The medical officer has various statutory powers conferred by acts of Parliament, regulations and orders, such

as food and drugs acts, milk and dairies regulations, and factories acts. He supervises the work of sanitary

inspectors in the control of health nuisances. The compulsorily notifiable infectious diseases are reported to him,

and he takes appropriate action. Other concerns of the medical officer include those involved with the work of

the district nurse, who carries out nursing duties in the home, and the health visitor, who gives advice on health

matters, especially to the mothers of small babies. He has other duties in connection with infant welfare clinics,

crèches, day and residential nurseries, the examination of schoolchildren, child guidance clinics, foster homes,

factories, problem families, and the care of the aged and the handicapped.

United States

Federal, state, county, and city governments all have public health functions. Under the U.S. Department of

Health and Human Services is the Public Health Service, headed by an assistant secretary for health and the

surgeon general. State health departments are headed by a commissioner of health, usually a physician, who is

often in the governor’s cabinet. He usually has a board of health that adopts health regulations and holds

hearings on their alleged violations. A state’s public health code is the foundation on which all county and city

health regulations must be based. A city health department may be independent of its surrounding county

health department, or there may be a combined city-county health department. The physicians of the local

health departments are usually called health officers, though occasionally people with this title are not

physicians. The larger departments may have a public health director, a district health director, or a regional

health director.

The minimal complement of a local health department is a health officer, a public health nurse, a sanitation

expert, and a clerk who is also a registrar of vital statistics. There may also be sanitation personnel, nutritionists,

social workers, laboratory technicians, and others.

Japan

Japan’s Ministry of Health and Welfare directs public health programs at the national level, maintaining close

coordination among the fields of preventive medicine, medical care, and welfare and health insurance. The

departments of health of the prefectures and of the largest municipalities operate health centres. The integrated

community health programs of the centres encompass maternal and child health, communicable-disease

control, health education, family planning, health statistics, food inspection, and environmental sanitation.

Private physicians, through their local medical associations, help to formulate and execute particular public

health programs needed by their localities.

Numerous laws are administered through the ministry’s bureaus and agencies, which range from public health,

environmental sanitation, and medical affairs to the children and families bureau. The various categories of

institutions run by the ministry, in addition to the national hospitals, include research centres for cancer and

leprosy, homes for the blind, rehabilitation centres for the physically handicapped, and port quarantine services.

Former Soviet Union

In the aftermath of the dissolution of the Soviet Union, responsibility for public health fell to the governments of

the successor countries.

The public health services for the U.S.S.R. as a whole were directed by the Ministry of Health. The ministry,

through the 15 union republic ministries of health, directed all medical institutions within its competence as well

as the public health authorities and services throughout the country.

The administration was centralized, with little local autonomy. Each of the 15 republics had its own ministry of

health, which was responsible for carrying out the plans and decisions established by the U.S.S.R. Ministry of

Health. Each republic was divided into oblasti, or provinces, which had departments of health directly responsible to the republic ministry of health. Each oblast, in turn, had rayony (municipalities), which had their own health departments accountable to the oblast health department. Finally, each rayon was subdivided into smaller uchastoki (districts).

In most rural rayony the responsibility for public health lay with the chief physician, who was also medical director of the central rayon hospital. This system ensured unity of public health administration and implementation of the principle of planned development. Other health personnel included nurses, feldshers, and

midwives.

For more information on the history, organization, and progress of public health, see below.

Military practice

The medical services of armies, navies, and air forces are geared to war. During campaigns the first requirement

is the prevention of sickness. In all wars before the 20th century, many more combatants died of disease than of

wounds. And even in World War II and wars thereafter, although few died of disease, vast numbers became

casualties from disease.

The main means of preventing sickness are the provision of adequate food and pure water, thus eliminating

starvation, avitaminosis, and dysentery and other bowel infections, which used to be particular scourges of

armies; the provision of proper clothing and other means of protection from the weather; the elimination from

the service of those likely to fall sick; the use of vaccination and suppressive drugs to prevent various infections,

such as typhoid and malaria; and education in hygiene and in the prevention of sexually transmitted diseases, a

particular problem in the services. In addition, the maintenance of high morale has a striking effect on casualty

rates, for, when morale is poor, soldiers are likely to suffer psychiatric breakdowns, and malingering is more

prevalent.

The medical branch may provide advice about disease prevention, but the actual execution of this advice is

through the ordinary chains of command. It is the duty of the military, not of the medical, officer to ensure that

the troops obey orders not to drink infected water and to take tablets to suppress malaria.

Army medical organization

The medical doctor of first contact to the soldier in the armies of developed countries is usually an officer in the

medical corps. In peacetime the doctor sees the sick and has functions similar to those of the general

practitioner, prescribing drugs and dressings, and there may be a sick bay where slightly sick soldiers can

remain for a few days. The doctor is usually assisted by trained nurses and corpsmen. If a further medical

opinion is required, the patient can be referred to a specialist at a military or civilian hospital.


In a war zone, medical officers have an aid post where, with the help of corpsmen, they apply first aid to the

walking wounded and to the more seriously wounded who are brought in. The casualties are evacuated as

quickly as possible by field ambulances or helicopters. At a company station, medical officers and medical

corpsmen may provide further treatment before patients are evacuated to the main dressing station at the field

ambulance headquarters, where a surgeon may perform emergency operations. Thereafter, evacuation may be

to casualty clearing stations, to advanced hospitals, or to base hospitals. Air evacuation is widely used.

In peacetime most of the intermediate medical units exist only in skeleton form; the active units are at the

battalion and hospital level. When physicians join the medical corps, they may join with specialist qualifications,

or they may obtain such qualifications while in the army. A feature of army medicine is promotion to

administrative positions. The commanding officer of a hospital and the medical officer at headquarters may have

no contacts with actual patients.

Although medical officers in peacetime have some choice of the kind of work they will do, they are in a chain of

command and are subject to military discipline. When dealing with patients, however, they are in a special

position; they cannot be ordered by a superior officer to give some treatment or take other action that they

believe is wrong. Medical officers also do not bear or use arms unless their patients are being attacked.

Naval and air force medicine

Naval medical services are run on lines similar to those of the army. Junior medical officers are attached to ships

or to shore stations and deal with most cases of sickness in their units. When at sea, medical officers have an

exceptional degree of responsibility in that they work alone, unless they are on a very large ship. In peacetime,

only the larger ships carry a medical officer; in wartime, destroyers and other small craft may also carry medical

officers. Serious cases go to either a shore-based hospital or a hospital ship.

Flying has many medical repercussions. Cold, lack of oxygen, and changes of direction at high speed all have

important effects on bodily and mental functions. Armies and air forces may share the same medical services.

A developing field is aerospace medicine. This involves medical problems that were not experienced before

spaceflight, for the main reason that humans in space are not under the influence of gravity, a condition that has

profound physiological effects.

Clinical research

The remarkable developments in medicine brought about in the 20th and 21st centuries, especially after World

War II, were based on research either in the basic sciences related to medicine or in the clinical field. Advances

in the use of radiation, nuclear energy, and space research have played an important part in this progress. Some

laypersons think of research as taking place only in sophisticated laboratories or highly specialized

institutions where work is devoted to scientific advances that may or may not be applicable to medical practice.

This notion, however, ignores the clinical research that takes place on a day-to-day basis in hospitals and

doctors’ offices.

Historical notes

Although the most spectacular changes in the medical scene during the 20th century, and the most widely

heralded, were the development of potent drugs and elaborate operations, another striking change was the

abandonment of most of the remedies of the past. In the mid-19th century, persons ill with numerous maladies

were starved (partially or completely), bled, purged, cupped (by applying a tight-fitting vessel filled with steam

to some part and then cooling the vessel), and rested, perhaps for months or even years. Much more recently


they were prescribed various restricted diets and were routinely kept in bed for weeks after abdominal

operations, for many weeks or months when their hearts were thought to be affected, and for many months or

years with tuberculosis. The abandonment of these measures may not be thought of as involving research, but

the physician who first encouraged persons who had peptic ulcers to eat normally (rather than to live on the

customary bland foods) and the physician who first got his patients out of bed a week or two after they had had

minor coronary thrombosis (rather than insisting on a minimum of six weeks of strict bed rest) were as much

doing research as is the physician who first tries out a new drug on a patient. This research, by observing what

happens when remedies are abandoned, has been of inestimable value, and the need for it has not passed.

Clinical observation

Much of the investigative clinical field work undertaken in the present day requires only relatively simple

laboratory facilities because it is observational rather than experimental in character. A feature of much

contemporary medical research is that it requires the collaboration of a number of persons, perhaps not all of

them doctors. Despite the advancing technology, there is much to be learned simply from the observation and

analysis of the natural history of disease processes as they begin to affect patients, pursue their course, and

end, either in their resolution or by the death of the patient. Such studies may be suitably undertaken by

physicians working in their offices who are in a better position than doctors working only in hospitals to observe

the whole course of an illness. Disease rarely begins in a hospital and usually does not end there. It is notable,

however, that observational research is subject to many limitations and pitfalls of interpretation, even when it is

carefully planned and meticulously carried out.

Drug research

The administration of any medicament, especially a new drug, to a patient is fundamentally an experiment: so is

a surgical operation, particularly if it involves a modification to an established technique or a completely new

procedure. Concern for the patient, careful observation, accurate recording, and a detached mind are the keys

to this kind of investigation, as indeed to all forms of clinical study. Because patients are individuals reacting to a

situation in their own different ways, the data obtained in groups of patients may well require statistical analysis

for their evaluation and validation.

One of the striking characteristics in the medical field in the 20th century, and which continued in the 21st

century, was the development of new drugs, usually by pharmaceutical companies. Until the end of the 19th

century, the discovery of new drugs was largely a matter of chance. It was in that period that Paul Ehrlich, the

German scientist, began to lay down the principles for modern pharmaceutical research that made possible the

development of a vast array of safe and effective drugs. Such benefits, however, bring with them their own

disadvantages: it is estimated that as many as 30 percent of patients in, or admitted to, hospitals suffer from the

adverse effects of drugs prescribed by a physician for their treatment (iatrogenic disease). Sometimes it is

extremely difficult to determine whether a drug has been responsible for some disorder. An example of the

difficulty is provided by the thalidomide disaster between 1959 and 1962. Only after numerous deformed babies

had been born throughout the world did it become clear that thalidomide taken by the mother as a sedative had

been responsible.

In hospitals where clinical research is carried out, ethical committees often consider each research project. If the

committee believes that the risks are not justified, the project is rejected.

After a potentially useful chemical compound has been identified in the laboratory, it is extensively tested in

animals, usually for a period of months or even years. Few drugs make it beyond this point. If the tests are

satisfactory, the decision may be made for testing the drug in humans. It is this activity that forms the basis of


much clinical research. In most countries the first step is the study of its effects in a small number of healthy

volunteers. The response, effect on metabolism, and possible toxicity are carefully monitored and have to be

completely satisfactory before the drug can be passed for further studies, namely with patients who have the

disorder for which the drug is to be used. Tests are administered at first to a limited number of these patients to

determine effectiveness, proper dosage, and possible adverse reactions. These searching studies are

scrupulously controlled under stringent conditions. Larger groups of patients are subsequently involved to gain a

wider sampling of the information. Finally, a full-scale clinical trial is set up. If the regulatory authority is satisfied

about the drug’s quality, safety, and efficacy, it receives a license to be produced. As the drug becomes more

widely used, it eventually finds its proper place in therapeutic practice, a process that may take years.

An important step forward in clinical research was taken in the mid-20th century with the development of the

controlled clinical trial. This sets out to compare two groups of patients, one of which has had some form of

treatment that the other group has not. The testing of a new drug is a case in point: one group receives the

drug, the other a product identical in appearance, but which is known to be inert—a so-called placebo. At the

end of the trial, the results of which can be assessed in various ways, it can be determined whether or not the

drug is effective and safe. By the same technique two treatments can be compared, for example a new drug

against a more familiar one. Because individuals differ physiologically and psychologically, the allocation of

patients between the two groups must be made in a random fashion; some method independent of human

choice must be used so that such differences are distributed equally between the two groups.
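The random-allocation step described above can be sketched in a few lines of code. This is a minimal illustration only, not a real trial protocol; the function name and arm labels are invented for the example, and actual trials use formally audited randomization procedures.

```python
import random

def allocate(patient_ids, seed=None):
    """Randomly split patients into two equal arms, independent of human choice."""
    rng = random.Random(seed)      # seeded generator, so the split is reproducible
    shuffled = list(patient_ids)
    rng.shuffle(shuffled)          # random order distributes differences evenly
    half = len(shuffled) // 2
    return {"drug": shuffled[:half], "placebo": shuffled[half:]}

arms = allocate(range(1, 101), seed=42)
print(len(arms["drug"]), len(arms["placebo"]))  # 50 50
```

Because the shuffle, not a clinician, decides each assignment, physiological and psychological differences among patients tend to be spread equally between the two groups.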

In order to reduce bias and make the trial as objective as possible the double-blind technique is sometimes used.

In this procedure, neither the doctor nor the patients know which of two treatments is being given. Despite such

precautions the results of such trials can be prejudiced, so that rigorous statistical analysis is required. It is

obvious that many ethical, not to say legal, considerations arise, and it is essential that all patients have given

their informed consent to be included. Difficulties arise when patients are unconscious, mentally confused, or

otherwise unable to give their informed consent. Children present a special difficulty because not all laws agree

that parents can legally commit a child to an experimental procedure. Trials, and indeed all forms of clinical

research that involve patients, must often be submitted to a committee set up locally to scrutinize each proposal.

Surgery

Surgeons operating on a patient.

© gchutka/iStock.com

In drug research the essential steps are taken by the chemists who synthesize or isolate new drugs in the

laboratory; clinicians play only a subsidiary part. In developing new surgical operations clinicians play a more

important role, though laboratory scientists and others in the background may also contribute largely. Many new


operations have been made possible by advances in anesthesia, and these in turn depend upon engineers who

have devised machines and chemists who have produced new drugs. Other operations are made possible by

new materials, such as the alloys and plastics that are used to make artificial hip and knee joints.

Whenever practicable, new operations are tried on animals before they are tried on patients. This practice is

particularly relevant to organ transplants. Surgeons themselves—not experimental physiologists—transplanted

kidneys, livers, and hearts in animals before attempting these procedures on patients. Experiments on animals

are of limited value, however, because animals do not suffer from all of the same maladies as do humans.

Many other developments in modern surgical treatment rest on a firm basis of experimentation, often first in

animals but also in humans; among them are renal dialysis (the artificial kidney), arterial bypass operations,

embryo implantation, and exchange transfusions. These treatments are but a few of the more dramatic of a

large range of therapeutic measures that have not only provided patients with new therapies but also have led

to the acquisition of new knowledge of how the body works. Among the research projects of the late 20th and

early 21st centuries was that of gene transplantation, which had the potential of providing cures for cancer and

other diseases.

Screening procedures

Developments in modern medical science have made it possible to detect morbid conditions before a person

actually feels the effects of the condition. Examples are many: they include certain forms of cancer; high blood

pressure; heart and lung disease; various familial and congenital conditions; disorders of metabolism, like

diabetes; and acquired immune deficiency syndrome (AIDS). The consideration to be made in screening is

whether or not such potential patients should be identified by periodic examinations. To do so is to imply, first, that the subjects should be made aware of their condition and, second, that there are effective measures that can be

taken to prevent their condition, if they test positive, from worsening. Such so-called specific screening

procedures are costly since they involve large numbers of people. Screening may lead to a change in the lifestyle of many persons, but not all such moves have been shown in the long run to be fully effective. Although

screening clinics may not be run by doctors, they are a factor of increasing importance in the preventive health

service.

Periodic general medical examination of various sections of the population, business executives, for example, is

another way of identifying risk factors that, if not corrected, can lead to the development of overt disease.

John Walford Todd
Harold Scarborough

Additional Reading

Webster’s Medical Desk Dictionary (1986) is a reference source for the layman. The Oxford Companion to Medicine, 2 vol., edited by John Walton, Paul B. Beeson, and Ronald Bodley Scott (1986), is a comprehensive text of 20th-century developments and persons.

George Rosen, The Structure of American Medical Practice, 1875–1941 (1983), is a historical study. Particular kinds of medical practice are explored in Wesley Fabb and John Fry (eds.), Principles of Practice Management in Primary Care (1984); Inequalities in Health: The Black Report, by Sir Douglas Black et al., edited by Peter Townsend and Nick Davidson (1982); David Sanders and Richard Carver, The Struggle for Health: Medicine and the Politics of Underdevelopment (1985); and V. Djukanovic and E.P. Mach (eds.), Alternative Approaches to Meeting Basic Health Needs in Developing Countries: A Joint UNICEF/WHO Study (1975). Also see the articles of such journals as Private Practice (monthly) and Modern Healthcare (semimonthly). For a view of alternative medicine, see Douglas Stalker and Clark Glymour (eds.), Examining Holistic Medicine (1985); and Richard Grossman, The Other Medicines (1985). The variety of roles in the health-care profession are the subject of Louise Simmers, Diversified Health Occupations (1983); C. Wesley Eisele, William R. Fifer, and Toma C. Wilson, The Medical Staff and the Modern Hospital (1985); and Eli Ginzberg (ed.), From Physician Shortage to Patient Shortage: The Uncertain Future of Medical Practice (1986).

Citation (MLA style):

"Medicine." Britannica LaunchPacks: The Treatment and Prevention of Disease, Encyclopædia Britannica, 21 Mar. 2022. packs-ngl.eb.com. Accessed 6 Apr. 2022.

While every effort has been made to follow citation style rules, there may be some discrepancies. Please refer to the appropriate style manual or other sources if you have any questions.

nutritional disease

any of the nutrient-related diseases and conditions that cause illness in humans. They may include deficiencies

or excesses in the diet, obesity and eating disorders, and chronic diseases such as cardiovascular disease,

hypertension, cancer, and diabetes mellitus. Nutritional diseases also include developmental abnormalities that

can be prevented by diet, hereditary metabolic disorders that respond to dietary treatment, the interaction of

foods and nutrients with drugs, food allergies and intolerances, and potential hazards in the food supply. All of

these categories are described in this article. For a discussion of essential nutrients, dietary recommendations,

and human nutritional needs and concerns throughout the life cycle, see nutrition, human.

Children with rickets. The disease, which most commonly strikes children, causes bone deformities…

Robin Laurance/Alamy

Nutrient deficiencies

Although the so-called diseases of civilization—for example, heart disease, stroke, cancer, and diabetes—will be

the focus of this article, the most significant nutrition-related disease is chronic undernutrition, which plagues


more than 925 million people worldwide. Undernutrition is a condition in which there is insufficient food to meet

energy needs; its main characteristics include weight loss, failure to thrive, and wasting of body fat and muscle.

Low birth weight in infants, inadequate growth and development in children, diminished mental function, and

increased susceptibility to disease are among the many consequences of chronic persistent hunger, which

affects those living in poverty in both industrialized and developing countries. The largest number of chronically

hungry people live in Asia, but the severity of hunger is greatest in sub-Saharan Africa. At the start of the 21st

century, approximately 20,000 people, the majority of them children, died each day from undernutrition and

related diseases that could have been prevented. The deaths of many of these children stem from the poor

nutritional status of their mothers as well as the lack of opportunity imposed by poverty.

Only a small percentage of hunger deaths is caused by starvation due to catastrophic food shortages. During the

1990s, for example, worldwide famine (epidemic failure of the food supply) more often resulted from complex

social and political issues and the ravages of war than from natural disasters such as droughts and floods.

Launching a cookbook for Irish cancer patients with involuntary weight loss.

University College Cork, Ireland

Malnutrition is the impaired function that results from a prolonged deficiency—or excess—of total energy or

specific nutrients such as protein, essential fatty acids, vitamins, or minerals. This condition can result from

fasting and anorexia nervosa; persistent vomiting (as in bulimia nervosa) or inability to swallow; impaired

digestion and intestinal malabsorption; or chronic illnesses that result in loss of appetite (e.g., cancer, AIDS).

Malnutrition can also result from limited food availability, unwise food choices, or overzealous use of dietary

supplements.

Select nutrient-deficiency diseases

Selected nutrient-deficiency diseases are listed in the table.

Protein-energy malnutrition

Chronic undernutrition manifests primarily as protein-energy malnutrition (PEM), which is the most common form

of malnutrition worldwide. Also known as protein-calorie malnutrition, PEM is a continuum in which people—all

too often children—consume too little protein, energy, or both. At one end of the continuum is kwashiorkor,

characterized by a severe protein deficiency, and at the other is marasmus, an absolute food deprivation with

grossly inadequate amounts of both energy and protein.


A child affected by marasmus sitting in a Nigerian relief camp during the civil war that resulted…

CDC/Dr. Lyle Conrad

An infant with marasmus is extremely underweight and has lost most or all subcutaneous fat. The body has a

“skin and bones” appearance, and the child is profoundly weak and highly susceptible to infections. The cause is

a diet very low in calories from all sources (including protein), often from early weaning to a bottled formula

prepared with unsafe water and diluted because of poverty. Poor hygiene and continued depletion lead to a

vicious cycle of gastroenteritis and deterioration of the lining of the gastrointestinal tract, which interferes with

absorption of nutrients from the little food available and further reduces resistance to infection. If untreated,

marasmus may result in death due to starvation or heart failure.

Kwashiorkor, a Ghanaian word meaning “the disease that the first child gets when the new child comes,” is

typically seen when a child is weaned from high-protein breast milk onto a carbohydrate food source with

insufficient protein. Children with this disease, which is characterized by a swollen belly due to edema (fluid

retention), are weak, grow poorly, and are more susceptible to infectious diseases, which may result in fatal

diarrhea. Other symptoms of kwashiorkor include apathy, hair discoloration, and dry, peeling skin with sores that

fail to heal. Weight loss may be disguised because of the presence of edema, enlarged fatty liver, and intestinal

parasites; moreover, there may be little wasting of muscle and body fat.

Kwashiorkor and marasmus can also occur in hospitalized patients receiving intravenous glucose for an

extended time, as when recovering from surgery, or in those with illnesses causing loss of appetite or

malabsorption of nutrients. Persons with eating disorders, cancer, AIDS, and other illnesses where appetite fails

or absorption of nutrients is hampered may lose muscle and organ tissue as well as fat stores.

Treatment of PEM has three components. (1) Life-threatening conditions—such as fluid and electrolyte

imbalances and infections—must be resolved. (2) Nutritional status should be restored as quickly and safely as


possible; rapid weight gain can occur in a starving child within one or two weeks. (3) The focus of treatment then

shifts to ensuring nutritional rehabilitation for the long term. The speed and ultimate success of recovery depend

upon the severity of malnutrition, the timeliness of treatment, and the adequacy of ongoing support. Particularly

during the first year of life, starvation may result in reduced brain growth and intellectual functioning that cannot

be fully restored.

Carbohydrates

Under most circumstances, there is no absolute dietary requirement for carbohydrates—simple sugars, complex

carbohydrates such as starches, and the indigestible plant carbohydrates known as dietary fibre. Certain cells,

such as brain cells, require the simple carbohydrate glucose as fuel. If dietary carbohydrate is insufficient,

glucose synthesis depends on the breakdown of amino acids derived from body protein and dietary protein and

the compound glycerol, which is derived from fat. Long-term carbohydrate inadequacy results in increased

production of organic compounds called ketones (a condition known as ketosis), which imparts a distinctive

sweet odour to the breath. Ketosis and other untoward effects of a very-low-carbohydrate diet can be prevented

by the daily consumption of 50 to 100 grams of carbohydrate; however, obtaining at least half of the daily

energy intake from carbohydrates is recommended and is typical of human diets, corresponding to at least 250

grams of carbohydrate (1,000 calories in a 2,000-calorie diet). A varied diet containing fruits, vegetables,

legumes, and whole-grain cereals, which are all abundant in carbohydrates, also provides a desirable intake of

dietary fibre.
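The arithmetic behind these figures can be checked directly: carbohydrate supplies roughly 4 calories per gram, so half of a 2,000-calorie diet corresponds to 250 grams. A small sketch (the function name is illustrative, not a standard formula):

```python
CALORIES_PER_GRAM_CARB = 4  # approximate energy yield of carbohydrate

def carb_grams(total_calories, fraction):
    """Grams of carbohydrate needed to supply `fraction` of total energy."""
    return total_calories * fraction / CALORIES_PER_GRAM_CARB

print(carb_grams(2000, 0.5))  # 250.0 grams: half of a 2,000-calorie diet
print(carb_grams(2000, 0.1))  # 50.0 grams: the ketosis-preventing floor is
                              # only about 10 percent of the same diet
```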

Essential fatty acids

There is also a minimum requirement for fat—not for total fat, but only for the fatty acids linoleic acid (a so-

called omega-6 fatty acid) and alpha-linolenic acid (an omega-3 fatty acid). Deficiencies of these two fatty acids

have been seen in hospitalized patients fed exclusively with intravenous fluids containing no fat for weeks,

patients with medical conditions affecting fat absorption, infants given formulas low in fat, and young children

fed nonfat milk or low-fat diets. Symptoms of deficiency include dry skin, hair loss, and impaired wound healing.

Essential fatty acid requirements—a few grams a day—can be met by consuming approximately a tablespoon of

polyunsaturated plant oils daily. Fatty fish also provides a rich source of omega-3 fatty acids. Even individuals

following a low-fat diet generally consume sufficient fat to meet requirements.

Vitamins

Although deficiency diseases have been described in laboratory animals and humans deprived of single

vitamins, in human experience multiple deficiencies are usually present simultaneously. The eight B-complex

vitamins function in coordination in numerous enzyme systems and metabolic pathways; thus, a deficiency of

one may affect the functioning of others.

Vitamin A

Vitamin A deficiency is the leading cause of preventable blindness in children and is a major problem in the

developing world, especially in Africa and Southeast Asia; in the poorest countries hundreds of thousands of

children become blind each year due to a deficiency of the vitamin. Even a mild deficiency can impair immune

function, thereby reducing resistance to disease. Night blindness is an early sign of vitamin A deficiency,

followed by abnormal dryness of the eye and ultimately scarring of the cornea, a condition known as

xerophthalmia. Other symptoms include dry skin, hardening of epithelial cells elsewhere in the body (such as

mucous membranes), and impaired growth and development. In many areas where vitamin A deficiency is


endemic, the incidence is being reduced by giving children a single large dose of vitamin A every six months. A

genetically modified form of rice containing beta-carotene, a precursor of vitamin A, has the potential to reduce

greatly the incidence of vitamin A deficiency, but the use of this so-called golden rice is controversial.

Vitamin D

Vitamin D (also known as vitamin D hormone) is synthesized in the body in a series of steps, starting in the skin

by the action of sunlight’s ultraviolet rays on a precursor compound; thus, without adequate food sources of

vitamin D, a deficiency of the vitamin can occur when exposure to sunlight is limited. Lack of vitamin D in

children causes rickets, a disease characterized by inadequate mineralization of bone, growth retardation, and

skeletal deformities such as bowed legs. The adult form of rickets, known as osteomalacia, results in weak

muscles as well as weak bones. Inadequate vitamin D may also contribute to the thinning of bones seen in

osteoporosis. Individuals with limited sun exposure (including women who completely cover their bodies for

religious reasons), elderly or homebound persons, and those with dark skin, particularly those who live in

northern latitudes, are at risk of vitamin D deficiency. Vitamin D is found in very few foods naturally; thus

fortification of milk and other foods (e.g., margarine, cereals, and breads) with the vitamin has helped protect

those populations in which sun exposure is inadequate. Supplemental vitamin D also may help protect against

bone fractures in the elderly, who make and activate vitamin D less efficiently even if exposed to sunlight.

Vitamin E

Vitamin E deficiency is rare in humans, although it may develop in premature infants and in people with impaired

fat absorption or metabolism. In the former, fragility of red blood cells (hemolysis) is seen; in the latter, where

deficiency is more prolonged, neuromuscular dysfunction involving the spinal cord and retina may result in loss

of reflexes, impaired balance and coordination, muscle weakness, and visual disturbances. No specific metabolic

function has been established for vitamin E; however, it is an important part of the antioxidant system that

inhibits lipid peroxidation—i.e., it protects cells and their membranes against the damaging effects of free

radicals (reactive oxygen and nitrogen species) that are produced metabolically or enter the body from the

environment. The requirement for vitamin E is increased with increasing consumption of polyunsaturated fatty

acids. People who smoke or are subjected to air pollution may also need more of the vitamin to protect against

oxidative damage to the lungs.

Vitamin K

Vitamin K is necessary for the formation of prothrombin and other blood-clotting factors in the liver, and it also

plays a role in bone metabolism. A form of the vitamin is produced by bacteria in the colon and can be utilized to

some degree. Vitamin K deficiency causes impaired clotting of the blood and internal bleeding, even without

injury. Due to poor transport of vitamin K across the placenta, newborn infants in developed countries are

routinely given the vitamin intramuscularly or orally within six hours of birth to protect against a condition known

as hemorrhagic disease of the newborn. Vitamin K deficiency is rare in adults, except in syndromes with poor fat

absorption, in liver disease, or during treatment with certain anticoagulant drugs, which interfere with vitamin K

metabolism. Bleeding due to vitamin K deficiency may be seen in patients whose gut bacteria have been killed

by antibiotics.

Thiamin

Prolonged deficiency of thiamin (vitamin B1) results in beriberi, a disease that has been endemic in populations

where white rice has been the staple. Thiamin deficiency is still seen in areas where white rice or flour

constitutes the bulk of the diet and thiamin lost in milling is not replaced through enrichment. Symptoms of the

form known as dry beriberi include loss of appetite, confusion and other mental symptoms, muscle weakness,


painful calf muscles, poor coordination, tingling, and paralysis. In wet beriberi there is edema and the possibility

of an enlarged heart and heart failure. Thiamin deficiency can also occur in populations eating large quantities of

raw fish harbouring intestinal microbes that contain the enzyme thiaminase. In the developed world, thiamin

deficiency is linked primarily to chronic alcoholism with poor diet, manifesting as Wernicke-Korsakoff syndrome,

a condition marked by rapid eye movements, loss of muscle coordination, mental confusion, and memory loss.

Riboflavin

Riboflavin (vitamin B2) deficiency, known as ariboflavinosis, is unlikely without the simultaneous deficiency of

other nutrients. After several months of riboflavin deprivation, symptoms include cracks in the skin at the

corners of the mouth, fissures of the lips, and an inflamed, magenta-coloured tongue. Because riboflavin is

readily destroyed by ultraviolet light, jaundiced infants who are treated with light therapy are administered the

vitamin. Milk, milk products, and cereals, major sources of riboflavin in the diet, are packaged to prevent

exposure to light.

Niacin

Symptoms of pellagra develop about two months after niacin is withdrawn from the diet. Pellagra is

characterized by the so-called three Ds—diarrhea, dermatitis, and dementia—and, if it is allowed to progress

untreated, death ensues. Pellagra was common in areas of the southern United States in the early 1900s and

still occurs in parts of India, China, and Africa, affecting people who subsist primarily on corn. The niacin in corn

and other cereal grains is largely in a bound form that is poorly absorbed. Soaking corn in lime water, as

practiced by Native American populations for centuries, frees bound niacin and thus protects against pellagra. In

addition, unlike other cereals, corn is low in the amino acid tryptophan, which can be converted in part to niacin.

Sufficient high-quality protein (containing tryptophan) in the diet can protect against niacin deficiency even if

intake of niacin itself is inadequate.

Vitamin B6

Vitamin B6 (pyridoxine and related compounds) is essential in protein metabolism, the synthesis of

neurotransmitters, and other critical functions in the body. Deficiency symptoms include dermatitis, microcytic

hypochromic anemia (small, pale red blood cells), impaired immune function, depression, confusion, and

convulsions. Although full-blown vitamin B deficiency is rare, marginal inadequacy is more widespread, 6

especially among the elderly, who may have a reduced ability to absorb the vitamin. People with alcoholism,

especially those with the liver diseases cirrhosis and hepatitis, are at risk of deficiency. A number of drugs,

including the tuberculosis drug isoniazid, interfere with vitamin B6 metabolism.

Folic acid

Vitamin B12 and folic acid (folate) are two B vitamins with many closely related functions, notably participation in

DNA synthesis. As a result, people with deficiencies of either vitamin show many of the same symptoms, such as

weakness and fatigue due to megaloblastic anemia, a condition in which red blood cells, lacking sufficient DNA

for cell division, are large and immature. Deficiency of folic acid also causes disruption of cell division along the

gastrointestinal tract, which results in persistent diarrhea, and impaired synthesis of white blood cells and

platelets. Inadequate intake of the vitamin in early pregnancy may cause neural tube defects in the fetus. Thus,

women capable of becoming pregnant are advised to take 400 micrograms (μg) of folic acid daily from

supplements, fortified foods (such as fortified cereals), or both—in addition to consuming foods rich in folic acid


such as fresh fruits and vegetables (especially leafy greens) and legumes. The cancer drug methotrexate

interferes with folic acid metabolism, causing side effects such as hair loss and diarrhea. Folic acid deficiency

may also result from heavy use of alcohol, which interferes with absorption of the vitamin.

Vitamin B12

Deficiency of vitamin B12 (cobalamin), like that of folic acid, results in megaloblastic anemia (large, immature red blood

cells), due to interference with normal DNA synthesis. Additionally, vitamin B12 maintains the myelin sheath that

protects nerve fibres; therefore, an untreated deficiency of the vitamin can result in nerve degeneration and

eventually paralysis. Large amounts of folic acid (over 1,000 μg per day) may conceal, and possibly even

exacerbate, an underlying vitamin B12 deficiency. Symptoms of vitamin B12 deficiency can include weakness,

fatigue, pain, shortness of breath, numbness or tingling sensations, mental changes, and vision problems. Only

animal foods are reliable sources of vitamin B12. Vegans, who eat no foods of animal origin, are at risk of vitamin

B12 deficiency and must obtain the vitamin through fortified food or a supplement. For people who regularly eat

animal products, deficiency of the vitamin is unlikely, unless there is a defect in absorption. In order to be

absorbed, vitamin B12 must be bound to intrinsic factor, a substance secreted by the stomach. If intrinsic factor

is absent (due to an autoimmune disorder known as pernicious anemia) or if there is insufficient production of

hydrochloric acid by the stomach, absorption of the vitamin will be limited. Pernicious anemia, which occurs

most often in the elderly, can be treated by injections or massive oral doses (1,000 μg) of vitamin B12.

Pantothenic acid

Pantothenic acid is so widespread in foods that deficiency is unlikely under normal circumstances. Deficiency has

been seen only in individuals fed semisynthetic diets deficient in the vitamin or in subjects given a pantothenic

acid antagonist. Symptoms of deficiency include fatigue, irritability, sleep disturbances, abdominal distress, and

neurological symptoms such as tingling in the hands. Deficiency of the vitamin was suspected during World War

II when prisoners of war in Asia who exhibited “burning feet” syndrome, characterized by numbness and tingling

in the toes and other neurological symptoms, responded only to the administration of pantothenic acid.

Biotin

Deficiency of biotin is rare, and this may be due in part to synthesis of the vitamin by bacteria in the colon,

although the importance of this source is unclear. Biotin deficiency has been observed in people who regularly

eat large quantities of raw egg white, which contains a glycoprotein (avidin) that binds biotin and prevents its

absorption. A rare genetic defect that renders some infants unable to absorb a form of biotin in food can be

treated with a supplement of the vitamin. Long-term use of certain anticonvulsant drugs may also impair biotin

absorption. Symptoms of deficiency include skin rash, hair loss, and eventually neurological abnormalities.

Vitamin C

Vitamin C, also known as ascorbic acid, functions as a water-soluble antioxidant and as a cofactor in various

enzyme systems, such as those involved in the synthesis of connective tissue components and

neurotransmitters. Symptoms of scurvy, a disease caused by vitamin C deficiency, include pinpoint hemorrhages

(petechiae) under the skin, bleeding gums, joint pain, and impaired wound healing. Although rare in developed

countries, scurvy is seen occasionally in people consuming restricted diets, particularly those containing few

fruits and vegetables, or in infants fed boiled cow’s milk and no source of vitamin C. Scurvy can be prevented

with relatively small quantities of vitamin C (10 milligrams [mg] per day), although recommended intakes, which


aim to provide sufficient antioxidant protection, are closer to 100 mg per day. Disease states, environmental

toxins, drugs, and other stresses can increase an individual’s vitamin C needs. Smokers, for example, may

require an additional 35 mg of the vitamin daily to maintain vitamin C levels comparable to those of nonsmokers.

Minerals

Iron

Iron deficiency is the most common of all nutritional deficiencies, with much of the world’s population being

deficient in the mineral to some degree. Young children and premenopausal women are the most vulnerable.

The main function of iron is in the formation of hemoglobin, the red pigment of the blood that carries oxygen

from the lungs to other tissues. Since each millilitre of blood contains 0.5 mg of iron (as a component of

hemoglobin), bleeding can drain the body’s iron reserves. When iron stores are depleted a condition arises

known as microcytic hypochromic anemia, characterized by small red blood cells that contain less hemoglobin

than normal. Symptoms of severe iron deficiency anemia include fatigue, weakness, apathy, pale skin, difficulty

breathing on exertion, and low resistance to cold temperatures. During childhood, iron deficiency can affect

behaviour and learning ability as well as growth and development. Severe anemia increases the risk of

pregnancy complications and maternal death. Iron deficiency anemia is most common during late infancy and

early childhood, when iron stores present from birth are exhausted and milk, which is poor in iron, is a primary

food; during the adolescent growth spurt; and in women during the childbearing years, because of blood loss

during menstruation and the additional iron needs of pregnancy. Intestinal blood loss and subsequent iron

deficiency anemia in adults may also stem from ulcers, hemorrhoids, tumours, or chronic use of certain drugs

such as aspirin. In developing countries, blood loss due to hookworm and other infections, coupled with

inadequate dietary iron intake, exacerbates iron deficiency in both children and adults.
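The figure of 0.5 mg of iron per millilitre of blood makes the impact of bleeding easy to estimate. The sketch below applies only that figure from the text; the 450 ml blood-donation volume in the usage note is an assumed illustrative value, not from this article.

```python
IRON_MG_PER_ML_BLOOD = 0.5  # figure quoted in the text


def iron_lost_mg(blood_loss_ml):
    """Estimate milligrams of iron lost through a given volume of bleeding."""
    return blood_loss_ml * IRON_MG_PER_ML_BLOOD
```

For example, a typical whole-blood donation of roughly 450 ml (an assumed volume) would remove about 225 mg of iron, illustrating why repeated blood loss can outpace dietary intake.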

Iodine

Iodine deficiency disorders are the most common cause of preventable brain damage, which affects an

estimated 50 million people worldwide. During pregnancy, severe iodine deficiency may impair fetal

development, resulting in cretinism (irreversible mental retardation with short stature and developmental

abnormalities) as well as in miscarriage and stillbirth. Other more pervasive consequences of chronic iodine

deficiency include lesser cognitive and neuromuscular deficits. The ocean is a dependable source of iodine, but

away from coastal areas iodine in food is variable and largely reflects the amount in the soil. In chronic iodine

deficiency the thyroid gland enlarges as it attempts to trap more iodide (the form in which iodine functions in the

body) from the blood for synthesis of thyroid hormones, and it eventually becomes a visible lump at the front of

the neck known as a goitre. Some foods, such as cassava, millet, sweet potato, certain beans, and members of

the cabbage family, contain substances known as goitrogens that interfere with thyroid hormone synthesis;

these substances, which are destroyed by cooking, can be a significant factor in persons with coexisting iodine

deficiency who rely on goitrogenic foods as staples. Since a strategy of universal iodization of salt was adopted

in 1993, there has been remarkable progress in improving iodine status worldwide. Nonetheless, millions of

people living in iodine-deficient areas, primarily in Central Africa, Southeast and Central Asia, and even in central

and eastern Europe, remain at risk.

Zinc

A constituent of numerous enzymes, zinc plays a structural role in proteins and regulates gene expression. Zinc

deficiency in humans was first reported in the 1960s in Egypt and Iran, where children and adolescent boys with

stunted growth and undeveloped genitalia responded to treatment with zinc. Deficiency of the mineral was

attributed to the regional diet, which was low in meat and high in legumes, unleavened breads, and whole-grain

foods that contain fibre, phytic acid, and other factors that inhibit zinc absorption. Also contributing to zinc


deficiency was the practice of clay eating, which interferes with the absorption of zinc, iron, and other minerals.

Severe zinc deficiency has also been described in patients fed intravenous solutions inadequate in zinc and in

the inherited zinc-responsive syndrome known as acrodermatitis enteropathica. Symptoms of zinc deficiency

may include skin lesions, diarrhea, increased susceptibility to infections, night blindness, reduced taste and

smell acuity, poor appetite, hair loss, slow wound healing, low sperm count, and impotence. Zinc is highest in

protein-rich foods, especially red meat and shellfish, and zinc status may be low in protein-energy malnutrition.

Even in developed countries, young children, pregnant women, the elderly, strict vegetarians, people with

alcoholism, and those with malabsorption syndromes are vulnerable to zinc deficiency.

Calcium

Almost all the calcium in the body is in the bones and teeth, the skeleton serving as a reservoir for calcium

needed in the blood and elsewhere. During childhood and adolescence, adequate calcium intake is critical for

bone growth and calcification. A low calcium intake during childhood, and especially during the adolescent

growth spurt, may predispose one to osteoporosis, a disease characterized by reduced bone mass, later in life.

As bones lose density, they become fragile and unable to withstand ordinary strains; the resulting fractures,

particularly of the hip, may cause incapacitation and even death. Osteoporosis is particularly common in

postmenopausal women in industrial societies. Not a calcium-deficiency disease per se, osteoporosis is strongly

influenced by heredity; risk of the disease can be lessened by ensuring adequate calcium intake throughout life

and engaging in regular weight-bearing exercise. Sufficient calcium intake in the immediate postmenopausal

years does appear to slow bone loss, although not to the same extent as do bone-conserving drugs.

Fluoride


Fluoride also contributes to the mineralization of bones and teeth and protects against tooth decay.

Epidemiological studies in the United States in the 1930s and 1940s revealed an inverse relationship between

the natural fluoride content of waters and the rate of dental caries. In areas where fluoride levels in the drinking

water are low, prescription fluoride supplements are recommended for children older than six months of age;

dentists also may apply fluoride rinses or gels periodically to their patients’ teeth. Fluoridated toothpastes are an

important source of fluoride for children and also for adults, who continue to benefit from fluoride intake.

Sodium

Sodium is usually provided in ample amounts by food, even without added table salt (sodium chloride).

Furthermore, the body’s sodium-conservation mechanisms are highly developed, and thus sodium deficiency is

rare, even for those on low-sodium diets. Sodium depletion may occur during prolonged heavy sweating,

vomiting, or diarrhea or in the case of kidney disease. Symptoms of hyponatremia, or low blood sodium, include

muscle cramps, nausea, dizziness, weakness, and eventually shock and coma. After prolonged high-intensity


exertion in the heat, sodium balance can be restored by drinking beverages containing sodium and glucose (so-

called sports drinks) and by eating salted food. Drinking a litre of water containing two grams (one-third

teaspoon) of table salt also should suffice.

Chloride is lost from the body under conditions that parallel those of sodium loss. Severe chloride depletion

results in a condition known as metabolic alkalosis (excess alkalinity in body fluids).

Potassium

Potassium is widely distributed in foods and is rarely deficient in the diet. However, some diuretics used in the

treatment of hypertension deplete potassium. The mineral is also lost during sustained vomiting or diarrhea or

with chronic use of laxatives. Symptoms of potassium deficiency include weakness, loss of appetite, muscle

cramps, and confusion. Severe hypokalemia (low blood potassium) may result in cardiac arrhythmias. Potassium-

rich foods, such as bananas or oranges, can help replace losses of the mineral, as can potassium chloride

supplements, which should be taken only under medical supervision.

Water deficiency (dehydration)

Water is the largest component of the body, accounting for more than half of body weight. To replace fluid

losses, adults generally need to consume 2 to 4 litres of fluid daily in cool climates, depending on degree of

activity, and from 8 to 16 litres a day in very hot climates. Dehydration may develop if water consumption fails

to satisfy thirst; if the thirst mechanism is not functioning properly, as during intense physical exercise; or if

there is excessive fluid loss, as with diarrhea or vomiting. By the time thirst is apparent, there is already some

degree of dehydration, which is defined as loss of fluid amounting to at least 1 to 2 percent of body weight.

Symptoms can progress quickly if not corrected: dry mouth, sunken eyes, poor skin turgor, cold hands and feet,

weak and rapid pulse, rapid and shallow breathing, confusion, exhaustion, and coma. Loss of fluid constituting

more than 10 percent of body weight may be fatal. The elderly (whose thirst sensation may be dulled), people

who are ill, and those flying in airplanes are especially vulnerable to dehydration. Infants and children with

chronic undernutrition who develop gastroenteritis may become severely dehydrated from diarrhea or vomiting.

Treatment is with an intravenous or oral solution of glucose and salts.
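The thresholds above, onset of dehydration at roughly 1 to 2 percent of body weight lost as fluid and possible death beyond 10 percent, can be expressed as a rough calculation. This is a minimal sketch using only the cutoffs quoted in the text, not a clinical assessment tool.

```python
def dehydration_status(usual_weight_kg, current_weight_kg):
    """Classify fluid loss as a percent of body weight.

    Cutoffs follow the text: about 1-2 percent loss marks the onset of
    dehydration, and loss above 10 percent of body weight may be fatal.
    """
    loss_pct = (usual_weight_kg - current_weight_kg) / usual_weight_kg * 100
    if loss_pct < 1:
        status = "normal hydration"
    elif loss_pct <= 10:
        status = "dehydrated"
    else:
        status = "severe, potentially fatal fluid loss"
    return round(loss_pct, 1), status
```

A 70 kg adult who drops to 68.6 kg through sweating has lost 2 percent of body weight, already within the dehydrated range even before thirst fully registers.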

Nutrient toxicities

The need for each nutrient falls within a safe or desirable range, above which there is a risk of adverse effects.

Any nutrient, even water, can be toxic if taken in very large quantities. Overdoses of certain nutrients, such as

iron, can cause poisoning (acute toxicity) and even death. For most nutrients, habitual excess intake poses a risk

of adverse health effects (chronic toxicity). Sustained overconsumption of the calorie-yielding nutrients

(carbohydrate, fat, and protein) and alcohol increases the risk of obesity and specific chronic diseases (see below), and use of isolated amino acids can lead to imbalances and toxicities. However, for most individuals, the

risk of harm due to excess intake of vitamins or minerals in food is low.

Tolerable upper intake level (UL) for selected nutrients for adults

In 1997 the U.S. Institute of Medicine

established a reference value called the Tolerable Upper Intake Level (UL) for selected nutrients, which is also

being used as a model for other countries. The UL is the highest level of daily nutrient intake likely to pose no

risk of adverse health effects for almost all individuals in the general population and is not meant to apply to

people under medical supervision. Discussed below as “safe intakes” for adults, most ULs for infants, children,

and adolescents are considerably lower.
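As an illustration of how a UL is applied, the sketch below checks hypothetical daily intakes against adult ULs quoted elsewhere in this article (vitamin C 2,000 mg, vitamin D 50 μg, vitamin E 1,000 mg, niacin 35 mg, vitamin B6 100 mg, calcium 2,500 mg, iron 45 mg, zinc 40 mg, selenium 400 μg). The actual Institute of Medicine tables cover many more nutrients and life-stage groups.

```python
# Adult ULs taken from figures quoted in this article (illustrative subset).
UL_ADULT = {
    "vitamin C (mg)": 2000,
    "vitamin D (ug)": 50,
    "vitamin E (mg)": 1000,
    "niacin (mg)": 35,
    "vitamin B6 (mg)": 100,
    "calcium (mg)": 2500,
    "iron (mg)": 45,
    "zinc (mg)": 40,
    "selenium (ug)": 400,
}


def flag_excess(intakes):
    """Return the nutrients whose daily intake exceeds the adult UL."""
    return {nutrient: amount for nutrient, amount in intakes.items()
            if nutrient in UL_ADULT and amount > UL_ADULT[nutrient]}
```

Note that an intake exactly at the UL is not flagged: the UL is defined as the highest intake likely to pose no risk, so only amounts above it warrant concern.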


Vitamins

Because they can be stored in the liver and fatty tissue, fat-soluble vitamins, particularly vitamins A and D, have

more potential for toxicity than do water-soluble vitamins, which, with the exception of vitamin B12, are readily

excreted in the urine if taken in excess. Nonetheless, water-soluble vitamins can be toxic if taken as

supplements or in fortified food.

Symptoms of acute vitamin A poisoning, which usually require a dose of at least 15,000 μg (50,000 IU) in adults,

include abdominal pain, nausea, vomiting, headache, dizziness, blurred vision, and lack of muscular

coordination. Chronic hypervitaminosis A, usually resulting from a sustained daily intake of 30,000 μg (100,000

IU) for months or years, may result in wide-ranging effects, including loss of bone density and liver damage.

Vitamin A toxicity in young infants may be seen in a swelling of the fontanelles (soft spots) due to increased

intracranial pressure. Large doses of vitamin A taken by a pregnant woman also can cause developmental

abnormalities in a fetus, especially if taken during the first trimester; the precise threshold for causing birth

defects is unknown, but less than 3,000 μg (10,000 IU) daily appears to be a safe intake. Although most vitamins

occurring naturally in food do not cause adverse effects, toxic levels of vitamin A may be found in the liver of

certain animals. For example, early Arctic explorers are reported to have been poisoned by eating polar bear

liver. High beta-carotene intake, from supplements or from carrots or other foods that are high in beta-carotene,

may after several weeks impart a yellowish cast to the skin but does not cause the same toxic effects as

preformed vitamin A.
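The paired doses in this paragraph (15,000 μg = 50,000 IU; 30,000 μg = 100,000 IU; 3,000 μg = 10,000 IU) imply the standard conversion of 0.3 μg of retinol per IU, which a short sketch can apply:

```python
# For preformed vitamin A (retinol), 1 IU corresponds to 0.3 ug -- the
# conversion implied by the paired figures in the text.
UG_PER_IU = 0.3


def iu_to_ug(iu):
    """Convert an IU dose of preformed vitamin A to micrograms of retinol."""
    return iu * UG_PER_IU


def ug_to_iu(ug):
    """Convert micrograms of retinol to the equivalent IU dose."""
    return ug / UG_PER_IU
```

This conversion applies only to preformed vitamin A; carotenoids such as beta-carotene convert to retinol at lower rates, which is one reason high beta-carotene intake does not carry the same toxicity.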

High intake of vitamin D can lead to a variety of debilitating effects, notably calcification of soft tissues and

cardiovascular and renal damage. Although not a concern for most people, young children are especially

vulnerable to vitamin D toxicity. Individuals with high intakes of fortified milk or fish or those who take many

supplements may exceed the safe intake of 50 μg (2,000 IU) per day.

Because of its function as an antioxidant, supplementation with large doses (several hundred milligrams per

day) of vitamin E in hopes of protecting against heart disease and other chronic diseases has become

widespread. Such doses—many times the amount normally found in food—appear safe for most people, but their

effectiveness in preventing disease or slowing the aging process has not been demonstrated. Daily intakes

greater than 1,000 mg are not advised because they may interfere with blood clotting, causing hemorrhagic

effects.

Large doses of niacin (nicotinic acid), given for its cholesterol-lowering effect, may produce a reddening of the

skin, along with burning, tingling, and itching. Known as a “niacin flush,” this is the first indicator of niacin

excess, and this symptom is the basis for the safe daily intake of 35 mg. Liver toxicity and other adverse effects

have also been reported with several grams of niacin a day.

Large doses of vitamin B6 have been taken in hopes of treating conditions such as carpal tunnel syndrome and

premenstrual syndrome. The most critical adverse effect seen from such supplementation has been a severe

sensory neuropathy of the extremities, including inability to walk. A daily intake of up to 100 mg is considered

safe, although only 1 to 2 mg are required for good health.

Use of vitamin C supplements has been widespread since 1970, when chemist and Nobel laureate Linus Pauling

suggested that the vitamin was protective against the common cold. Some studies have found a moderate

benefit of vitamin C in reducing the duration and severity of common-cold episodes, but numerous studies have

failed to find a significant effect on incidence. The most common side effect of high vitamin C intake is diarrhea

and other gastrointestinal symptoms, likely due to the unabsorbed vitamin traversing the intestine. The safe


intake of 2,000 mg a day is based on the avoidance of these gastrointestinal symptoms. Although other possible

adverse effects of high vitamin C intake have been investigated, none has been demonstrated in healthy people.

Minerals

A desirable dietary intake of the minerals generally falls in a fairly narrow range. Because of interactions, a high

intake of one mineral may adversely affect the absorption or utilization of another. Excessive intake from food

alone is unlikely, but consumption of fortified foods or supplements increases the chance of toxicity.

Furthermore, environmental or occupational exposure to potentially toxic levels of minerals presents additional

risks for certain populations.

Widespread calcium supplementation, primarily by children who do not drink milk and by women hoping to

prevent osteoporosis, has raised concerns about possible adverse consequences of high calcium intake. A major

concern has been kidney stones (nephrolithiasis), the majority of which are composed of a calcium oxalate

compound. For years, a low-calcium diet was recommended for people at risk of developing kidney stones,

despite disappointing effectiveness and a fair amount of research challenging the approach. However, a recent

study has provided strong evidence that a diet relatively low in sodium and animal protein with normal amounts

of calcium (1,200 mg per day) is much more effective in preventing recurrent stone formation than was the

traditional low-calcium diet. In fact, dietary calcium may be protective against kidney stones because it helps

bind oxalate in the intestine. Constipation is a common side effect of high calcium intake, but daily consumption

of up to 2,500 mg is considered safe for adults and for children at least one year old.

The use of magnesium salts in medications, such as antacids and laxatives, may result in diarrhea, nausea, and

abdominal cramps. Impaired kidney function renders an individual more susceptible to magnesium toxicity.

Excess magnesium intake is unlikely from foods alone.

High-dose iron supplements, commonly used to treat iron deficiency anemia, may cause constipation and other

gastrointestinal effects. A daily iron intake of up to 45 mg presents a low risk of gastrointestinal distress. Acute

toxicity and death from ingestion of iron supplements are major poisoning hazards for young children. In people

with the genetic disorder hereditary hemochromatosis, a disease characterized by the overabsorption of iron, or

in those who have repeated blood transfusions, iron can build up to dangerous levels, leading to severe organ

damage, particularly of the liver and heart. It is considered prudent for men and postmenopausal women to

avoid iron supplements and high iron intakes from fortified foods. Toxicity from dietary iron has been reported in

South Africa and Zimbabwe in people consuming a traditional beer with an extremely high iron content.

Excess zinc has been reported to cause gastrointestinal symptoms such as nausea and vomiting. Chronic intake

of large amounts of zinc may interfere with the body’s utilization of copper, impair immune response, and

reduce the level of high-density lipoprotein cholesterol (the so-called good cholesterol). A safe intake of 40 mg of

zinc daily is unlikely to be exceeded by food alone, although it may be exceeded by zinc lozenges or

supplements, which are widely used despite a lack of data about their safety or efficacy.

Selenium is toxic in large amounts. Selenosis (chronic selenium toxicity) results in symptoms such as

gastrointestinal and nervous system disturbances, brittleness and loss of hair and nails, a garliclike odour to the

breath, and skin rash. There also have been reports of acute toxicity and death from ingestion of gram quantities

of the mineral. Excess selenium can be harmful whether ingested as selenomethionine, the main form found in

food, or in the inorganic forms usually found in supplements. A daily intake of up to 400 μg from all sources most

likely poses no risk of selenium toxicity.

Impaired thyroid gland function, goitre, and other adverse effects may result from high intakes of iodine from

food, iodized salt, or pharmaceutical preparations intended to prevent or treat iodine deficiency or other


disorders. Although most people are unlikely to exceed safe levels, individuals with certain conditions, such as

autoimmune thyroid disease, are particularly sensitive to excess iodine intake.

While the teeth are developing and before they erupt, excess fluoride ingestion can cause mottled tooth enamel;

however, this is only a cosmetic effect. In adults, excess fluoride intake is associated with effects ranging from

increased bone mass to joint pain and stiffness and, in extreme cases, crippling skeletal fluorosis. Even in

communities where water supplies naturally provide fluoride levels several times higher than recommended,

skeletal fluorosis is extremely rare.

High intakes of phosphorus (as phosphate) may affect calcium metabolism adversely and interfere with the

absorption of trace elements such as iron, copper, and zinc. However, even with the consumption of phosphate

additives in a variety of foods and in cola beverages, exceeding safe levels is unlikely. Manganese toxicity, with

central nervous system damage and symptoms similar to Parkinson disease, is a well-known occupational

hazard of inhaling manganese dust, but again, it is not likely to come from the diet. Similarly, copper toxicity is

unlikely to result from excessive dietary intake, except in individuals with hereditary or acquired disorders of

copper metabolism.

Alcohol

The acute effects of a large intake of alcohol are well known. Mental impairment starts when the blood

concentration is about 0.05 percent. A concentration of alcohol in the blood of 0.40 percent usually causes

unconsciousness, and 0.50 percent can be fatal. Accidents and violence, which are often alcohol-related, are

major causes of death for young persons. Women who drink during pregnancy risk physical and mental damage

to their babies (fetal alcohol syndrome). Alcohol also can interact dangerously with a variety of medications,

such as tranquilizers, antidepressants, and pain relievers.
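The blood-concentration thresholds above can be summarized in a small lookup. The cutoffs are only the rough figures quoted in the text; individual responses to alcohol vary widely, so these are not clinical thresholds.

```python
def bac_effect(bac_percent):
    """Map a blood alcohol concentration (in percent) to the effect
    named in the text for that range."""
    if bac_percent >= 0.50:
        return "potentially fatal"
    if bac_percent >= 0.40:
        return "usually causes unconsciousness"
    if bac_percent >= 0.05:
        return "mental impairment begins"
    return "below the impairment threshold cited"
```

For instance, a concentration of 0.06 percent already falls in the range where mental impairment begins, which is why many jurisdictions set legal driving limits at or below that level.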

Although numerous studies have confirmed that light to moderate drinkers have less heart disease and tend to

live longer than either nondrinkers or heavy drinkers, increasing chronic alcohol consumption carries with it

significant risks as well: liver disease; pancreatitis; suicide; hemorrhagic stroke; mouth, esophageal, liver, and

colorectal cancers; and probably breast cancer. In alcoholics, nutritional impairment may result from the

displacement of nutrient-rich food as well as from complications of gastrointestinal dysfunction and widespread

metabolic alterations. Thiamin deficiency, as seen in the neurological condition known as Wernicke-Korsakoff

syndrome, is a hallmark of alcoholism and requires urgent treatment.

Diet and chronic disease

The relationship between diet and chronic disease (i.e., a disease that progresses over an extended period and

does not resolve spontaneously) is complicated, not only because many diseases take years to develop but also

because identifying a specific dietary cause is extremely difficult. Some prospective epidemiologic studies

attempt to overcome this difficulty by following subjects for a number of years. Even then, the sheer complexity

of the diet, as well as the multifactorial origins of chronic diseases, makes it difficult to prove causal links.

Furthermore, many substances in food appear to act in a synergistic fashion—in the context of the whole diet

rather than as individual agents—and single-agent studies may miss these interactive effects.

The concept of “risk factors” has been part of the public vocabulary for several decades, ever since the

landmark Framingham Heart Study, begun in 1948, first reported in the early 1960s that cigarette smoking,

elevated blood cholesterol, and high blood pressure were predictors of one’s likelihood of dying from heart

disease. Other studies confirmed and further elucidated these findings, and an extensive body of research has

since shown that particular conditions or behaviours are strongly associated with specific diseases. Not all


individuals with a risk factor eventually develop a particular disease; however, the chance of developing the

disease is greater when a known risk factor is present and increases further when several risk factors are

present. Certain risk factors—such as diet, physical activity, and use of tobacco, alcohol, and other drugs—are

modifiable, although it is often difficult to effect such change, even if one is facing possible disability or

premature death. Others, including heredity, age, and sex, are not. Some risk factors are modifiable to varying

degrees; these include exposure to sunlight and other forms of radiation, biological agents, and chemical agents

(e.g., air and water pollution) that may play a role in causing genetic mutations that have been associated with

increased risk of certain diseases, particularly cancer.

Cardiovascular disease

Cardiovascular disease, a general term that encompasses diseases of the heart and blood vessels, is the leading

cause of death in developed countries. Coronary heart disease (CHD), also known as coronary artery disease or

ischemic heart disease, is the most common—and the most deadly—form of cardiovascular disease. CHD occurs

when the arteries carrying blood to the heart, and thereby oxygen and nutrients, become narrow and

obstructed. This narrowing is usually the result of atherosclerosis, a condition in which fibrous plaques (deposits

of lipid and other material) build up on the inner walls of arteries, making them stiff and less responsive to

changes in blood pressure. If blood flow is interrupted in the coronary arteries surrounding the heart, a

myocardial infarction (heart attack) may occur. Restriction of blood flow to the brain due to a blood clot or

hemorrhage may lead to a cerebrovascular accident, or stroke, and narrowing in the abdominal aorta, its major

branches, or arteries of the legs may result in peripheral arterial disease. Most heart attacks and strokes are

caused not by total blockage of the arteries by plaque but by blood clots that form more readily where small

plaques are already partially blocking the arteries.

Although atherosclerosis typically takes decades to manifest in a heart attack or stroke, the disease may

actually begin in childhood, with the appearance of fatty streaks, precursors to plaque. The deposition of plaque

is, in essence, an inflammatory response directed at repairing injuries in the arterial wall. Smoking,

hypertension, diabetes, and high blood levels of low-density lipoprotein (LDL) cholesterol are among the many

factors associated with vessel injury. Infection by certain bacteria or viruses may also contribute to inflammation

and vessel damage. Particularly vulnerable to premature CHD are middle-aged men, especially those with a

family history of the disease, and individuals with hereditary conditions such as familial hypercholesterolemias.

Diet and weight loss are influential in modifying four major risk factors for CHD: high levels of LDL cholesterol,

low levels of high-density lipoprotein (HDL) cholesterol, hypertension, and diabetes. However, the role of diet in

influencing the established risk factors is not as clear as the role of the risk factors themselves in CHD.

Furthermore, dietary strategies are most useful when combined with other approaches, such as smoking

cessation and regular exercise. Drug therapy may include cholesterol-lowering drugs such as statins, bile acid

sequestrants, and niacin, as well as aspirin or anticoagulants to prevent formation of blood clots and

antihypertensive medication to lower blood pressure. Although endogenous estrogen (that produced by the

body) is thought to confer protection against CHD in premenopausal women, recent studies call into question the

value of hormone therapy in reducing CHD risk in women who have gone through menopause.

Blood lipoproteins

Because lipids such as cholesterol, triglycerides, and phospholipids are nonpolar and insoluble in water, they

must be bound to proteins, forming complex particles called lipoproteins, to be transported in the watery

medium of blood. Low-density lipoproteins, which are the main transporters of cholesterol in the blood, carry

cholesterol from the liver to body cells, including those in the arteries, where it can contribute to plaque. Multiple

lines of evidence point to high levels of LDL cholesterol as causal in the development of CHD, and LDL is the


main blood lipoprotein targeted by intervention efforts. Furthermore, clinical trials have demonstrated that LDL-

lowering therapy reduces heart attacks and strokes in people who already have CHD.

High-density lipoproteins, on the other hand, are thought to transport excess cholesterol to the liver for removal,

thereby helping to prevent plaque formation. HDL cholesterol is inversely correlated with CHD risk; therefore

intervention efforts aim to increase HDL cholesterol levels. Another blood lipoprotein form, the very-low-density

lipoprotein (VLDL), is also an independent CHD risk factor, but to a lesser extent than LDL and HDL. As the major

carrier of triglyceride (fat) in the blood, VLDL is particularly elevated in people who are overweight and in those

with diabetes and metabolic syndrome.

Although LDL cholesterol is popularly referred to as “bad” cholesterol and HDL cholesterol is often called “good”

cholesterol, it is actually the lipoprotein form—not the cholesterol being carried in the lipoprotein—that is related

to CHD risk. Total cholesterol levels, which are highly correlated with LDL cholesterol levels, are typically used

for initial screening purposes, although a complete lipoprotein evaluation is more revealing. A desirable blood

lipid profile is a total cholesterol level below 200 milligrams per decilitre (mg/dl), an HDL cholesterol level of at

least 40 mg/dl, a fasting triglyceride level of less than 150 mg/dl, and an LDL cholesterol level below 100, 130, or

160 mg/dl, depending on degree of heart attack risk.
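The desirable lipid profile above is a set of simple cutoffs, which can be sketched as a screening check. The function name, dictionary layout, and default LDL goal are illustrative assumptions; this is not a clinical tool.

```python
# A minimal sketch of checking a lipid panel against the desirable
# values given in the text: total cholesterol < 200 mg/dl, HDL >= 40,
# fasting triglycerides < 150, and LDL below a goal of 100, 130, or
# 160 mg/dl depending on heart attack risk. Illustration only.

def lipid_flags(total: float, hdl: float, triglycerides: float,
                ldl: float, ldl_goal: float = 130.0) -> dict:
    """Return True for each value meeting the desirable profile."""
    return {
        "total_ok": total < 200,
        "hdl_ok": hdl >= 40,
        "triglycerides_ok": triglycerides < 150,
        "ldl_ok": ldl < ldl_goal,
    }

# Example: a borderline panel judged against an LDL goal of 100 mg/dl
# (the strictest goal, used at the highest heart attack risk)
print(lipid_flags(total=210, hdl=45, triglycerides=140, ldl=120, ldl_goal=100))
```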

Dietary fat

[Video: Learn about researchers using olive oil to create healthier hot dogs. © American Chemical Society]

It is widely accepted that a low-fat diet lowers blood cholesterol and is protective against heart disease. Also, a

high-fat intake is often, although not always, linked to obesity, which in turn can increase heart disease risk. Yet,

the situation is complicated by the fact that different fatty acids have differing effects on the various lipoproteins

that carry cholesterol. Furthermore, when certain fats are lowered in the diet, they may be replaced by other

components that carry risk. High-carbohydrate diets, for example, may actually increase cardiovascular risk for

some individuals, such as those prone to metabolic syndrome or type 2 diabetes. Heredity also plays a role in an

individual’s response to particular dietary manipulations.

[Video: Overview of saturated and polyunsaturated fats. Contunico © ZDF Enterprises GmbH, Mainz]

In general, saturated fatty acids, which are found primarily in animal foods, tend to elevate LDL and total blood

cholesterol. However, the most cholesterol-raising saturated fatty acids (lauric, myristic, and palmitic acids) can

come from both plant and animal sources, while stearic acid, derived from animal fat as well as from cocoa

butter, is considered neutral, neither raising nor lowering blood cholesterol levels.

When saturated fatty acids are replaced by unsaturated fatty acids—either monounsaturated or

polyunsaturated—LDL and total blood cholesterol are usually lowered, an effect largely attributed to the

reduction in saturated fat. However, polyunsaturated fatty acids tend to lower HDL cholesterol levels, while

monounsaturated fatty acids tend to maintain them. The major monounsaturated fatty acid in animals and

plants is oleic acid; good dietary sources are olive, canola, and high-oleic safflower oils, as well as avocados,

nuts, and seeds. Historically, the low mortality from CHD in populations eating a traditional Mediterranean diet

has been linked to the high consumption of olive oil in the region, although the plentiful supply of fruits and

vegetables could also be a factor. Olive oil consumption is also associated with decreased levels of amyloid-beta

plaque formation and a reduction of inflammation in the brain; such plaques and inflammation are indicative of

Alzheimer disease. Studies in animals suggest that a diet rich in olive oil helps preserve memory and learning

ability.

The two types of polyunsaturated fatty acids found in foods are omega-3 fatty acids and omega-6 fatty acids.

Linoleic acid, the primary omega-6 fatty acid in most diets, is widespread in foods; the major source is vegetable

oils such as sunflower, safflower, and corn oils. Low cardiovascular disease rates in Eskimo populations eating

traditional diets high in omega-3 fatty acids initially provoked the speculation that these fatty acids may be

protective against CHD. The primary lipid-altering effect of omega-3 fatty acids is the reduction of blood

triglycerides. Omega-3 fatty acids may also protect the heart and blood vessels by lowering blood pressure,

reducing blood clotting, preventing irregular heart rhythms, and acting as anti-inflammatory agents. The long-

chain omega-3 fatty acids eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) are derived from alpha-

linolenic acid, a shorter-chain member of the same family. Fatty fish such as salmon, herring, sardines,

mackerel, and tuna are high in both EPA and DHA. Flaxseed is an excellent source of alpha-linolenic acid, which

the body can convert to the long-chain omega-3 fatty acids. Other sources of omega-3 fatty acids include

walnuts, hazelnuts, almonds, canola oil, soybean oil, dark green leafy vegetables such as spinach, and egg yolk.

A diet high in polyunsaturated fatty acids may increase LDL lipid oxidation and thereby accelerate

atherosclerosis; therefore, it should be accompanied by increased intakes of vitamin E, an antioxidant. Fish oil

supplements are not advised without medical supervision because of possible adverse effects, such as bleeding.

The safety of trans-unsaturated fatty acids (as opposed to naturally occurring cis-unsaturated fatty acids) has

been called into question because trans-fatty acids in the diet raise LDL cholesterol to about the same extent as

do saturated fatty acids, and they can also lower HDL cholesterol. Trans-fatty acids are found naturally in some

animal fats, such as beef, butter, and milk, but they are also produced during the hydrogenation process, in which

unsaturated oils are made harder and more stable. Certain margarines, snack foods, baked goods, and deep-fried

products are major food sources of trans-fatty acids.

Dietary cholesterol

Cholesterol in food and cholesterol in the blood are distinct entities, and they are often confused. Dietary

cholesterol is found only in foods of animal origin, and it is particularly high in egg yolk and organ meats.

Cholesterol in the diet raises LDL cholesterol but not as much as saturated fatty acids do. If dietary cholesterol is

already high, consuming even more cholesterol may not increase blood cholesterol levels further because of

feedback control mechanisms. Also, there is great individual variation in response to dietary cholesterol. For


healthy people, a cholesterol intake averaging less than 300 mg daily is recommended; however, because

cholesterol is synthesized by the body, none is required in the diet.

Other dietary factors

Ingestion of soluble fibre, a component of dietary fibre (indigestible plant material), lowers LDL and total blood

cholesterol levels and has been linked to decreased mortality from cardiovascular disease. Sources of soluble

fibre include whole oats, barley, legumes, some vegetables, and fruits, particularly apples, plums, apricots,

blueberries, strawberries, and citrus fruits; psyllium and other fibre supplements may also be recommended. The

mechanism whereby soluble fibre lowers cholesterol levels is unclear, although it is probably related to its ability

to bind with cholesterol and bile acids in the gut, thereby removing them from circulation. Other factors may

contribute as well, such as fermentation of fibre by bacteria in the colon, resulting in compounds that inhibit

cholesterol synthesis.

Light to moderate alcohol intake (up to two drinks per day for men and one drink per day for women) is

associated with reduced CHD risk, primarily because of its ability to raise HDL cholesterol levels and possibly

because it helps prevent blood clot formation. Alcohol intake may help explain the so-called French paradox:

heart disease rates in France are low despite a CHD risk profile comparable to that in the United States, where

rates are relatively high. Wine also contains antioxidant compounds, such as resveratrol from grape skins, that

may inhibit LDL oxidation, but the beneficial effect of these substances is likely far less than that of alcohol itself.

Mortality from stroke and heart disease is significantly associated with dietary sodium (salt) intake, but only in

overweight individuals, who may have an increased sensitivity to dietary sodium. Sodium intake also appears to

have a direct effect on risk of stroke beyond its effect on blood pressure, which itself influences stroke risk. On

the other hand, diets rich in potassium are linked to reduced risk of stroke.

Soy foods are associated with decreased LDL and total blood cholesterol levels, as well as other vascular effects

associated with reduced CHD risk. Tofu, tempeh, miso, soy flour, soy milk, and soy nuts are among the soy foods

that contain isoflavones, estrogen-like compounds that are thought to be responsible for these beneficial

cardiovascular effects.

Antioxidant substances found in food and taken as dietary supplements include vitamin C, vitamin E, and beta-

carotene (a plant precursor to vitamin A). Dietary antioxidants may lower CHD risk, although clinical trials have

not yet supported this notion.

Other factors

For blood pressure that is equal to or greater than the “prehypertension” level of 120/80 millimetres of mercury

(mm Hg), the more elevated the blood pressure, the greater the risk of heart disease. Hypertension (140/90 mm

Hg and above) and atherosclerosis are mutually reinforcing: hypertension injures artery walls, thereby

encouraging plaque formation; and once plaque has formed and arteries are less elastic, hypertension is

aggravated. If hypertension is treated, the incidence of CHD, stroke, and congestive heart failure decreases.

Diabetes—often accompanied by hypertension, high blood triglyceride levels, and obesity—is an important risk

factor for heart disease and also warrants aggressive intervention. Furthermore, for people with diabetes who

have a heart attack, there is an unusually high death rate, immediately or in the ensuing years. If blood glucose

levels are strictly controlled, vascular complications will be decreased.

Obesity is also an important factor in cardiovascular disease, primarily through its influence on other

simultaneously present risk factors. Obese individuals often have an abnormal glucose tolerance and diabetes,

hypertension, and blood lipoprotein abnormalities, including higher triglyceride levels and lower HDL cholesterol


levels. Fat accumulation around the waist (the so-called apple shape) puts one at greater risk for premature

heart disease than does fat accumulation around the hips (pear shape). A waist circumference greater than 102

cm (40 inches) for men or 88 cm (35 inches) for women is considered a high risk. Besides helping to control

weight, regular exercise is thought to decrease CHD risk in several ways: slowing the progression of

atherosclerosis, increasing the blood supply to the heart muscle, increasing HDL cholesterol, reducing VLDL

levels, improving glucose tolerance, and reducing blood pressure. At a minimum, 30 minutes of moderate

aerobic activity, such as brisk walking, on most days is recommended.
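The waist-circumference thresholds above (greater than 102 cm for men, 88 cm for women) can be sketched as a simple check. The function and its labels are assumptions for illustration, not a diagnostic tool.

```python
# Illustrative check of the waist-circumference cutoffs in the text:
# > 102 cm (40 inches) for men or > 88 cm (35 inches) for women is
# considered high risk for premature heart disease.

def high_risk_waist(waist_cm: float, sex: str) -> bool:
    """Return True if the waist circumference exceeds the high-risk
    threshold for the given sex ("male" or "female")."""
    threshold = 102 if sex == "male" else 88
    return waist_cm > threshold

print(high_risk_waist(95, "male"))    # False
print(high_risk_waist(95, "female"))  # True
```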

A newly described constellation of CHD risk factors called metabolic syndrome is marked by abdominal obesity,

low HDL cholesterol, elevated blood triglycerides, high blood pressure, and insulin resistance. First named

Syndrome X in 1988 by American endocrinologist Gerald Reaven, this condition is exacerbated when susceptible

people eat high-carbohydrate diets. Individuals with metabolic syndrome benefit from regular physical activity

and weight reduction, along with a diet lower in carbohydrates and saturated fat and higher in unsaturated fat.

Individuals with the genetic disease hereditary hemochromatosis excessively absorb iron, which can build up to

dangerously high levels and damage the heart, liver, and other organs. Approximately 1 in 9 people of European

descent are carriers of the disease (i.e., have one copy of the mutated gene) and have an increased risk of heart

disease. However, studies examining the possible role of dietary iron in heart disease risk for those who lack the

gene for hemochromatosis have been inconclusive.

The amino acid homocysteine, when present in elevated amounts in blood, may damage arteries and promote

atherosclerosis. Inadequate intake of vitamin B , vitamin B , or folic acid can increase blood homocysteine 6 12

levels, although folic acid deficiency is the most common cause. While elevated homocysteine is not yet an

established risk factor for CHD, it is prudent to ensure adequate intake of folic acid.

Dietary recommendations

Although plaque formation starts in childhood, infants or children under two years of age should not have any

dietary restriction placed on cholesterol and fat. After age two, dietary recommendations to reduce CHD risk

generally focus on controlling intake of total fat, saturated and trans-fatty acids, and dietary cholesterol,

combined with physical activity and weight management. Since atherosclerosis is so common, such diets are

considered useful not only for the general public but also for people with high LDL cholesterol or other CHD risk

factors. A preventive diet for adults might include 20 to 35 percent of kilocalories as dietary fat, with low intake

of saturated and trans-fatty acids (no more than 10 percent of kilocalories), and cholesterol intake below 300 mg

daily. A therapeutic diet, which should be managed by a registered dietitian or other qualified nutrition

professional, is even more restrictive. Practical suggestions include reducing intake of fatty spreads, organ

meats, fatty meats, egg yolks, full-fat dairy products, baked goods and fried foods; removing skin from poultry;

and carefully reading food labels to reduce hidden fats in processed foods. An emphasis on oats and other whole

grains, vegetables, and fruits—with the inclusion of nonfat or low-fat dairy products, fish, legumes, poultry, and

lean meats—is likely to benefit not only cardiovascular health but also overall health.
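The arithmetic behind "20 to 35 percent of kilocalories as dietary fat" follows from fat supplying about 9 kcal per gram. A quick sketch, using a hypothetical day's intake rather than a recommendation:

```python
# Sketch of the fat-energy arithmetic in the preventive diet above:
# fat provides about 9 kcal per gram, so grams of fat and total energy
# intake determine the percentage of kilocalories from fat.

FAT_KCAL_PER_GRAM = 9

def fat_percent_of_kcal(fat_grams: float, total_kcal: float) -> float:
    """Percentage of total energy intake supplied by fat."""
    return 100 * fat_grams * FAT_KCAL_PER_GRAM / total_kcal

# Hypothetical example: 70 g of fat in a 2,000-kcal day
pct = fat_percent_of_kcal(fat_grams=70, total_kcal=2000)
print(round(pct, 1))        # 31.5
print(20 <= pct <= 35)      # True: within the 20-35 percent range
```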

Hypertension

Hypertension, or high blood pressure, is one of the most common health problems in developed countries. It is

an important risk factor for other diseases, such as coronary heart disease, congestive heart failure, stroke,

aneurysm, and kidney disease. Most people with high blood pressure have essential, or primary, hypertension,

for which no specific cause can be determined. Heredity plays a role in the development of the disease, but so

do modifiable factors such as excess weight, physical inactivity, high alcohol intake, and diets high in salt. For

reasons that are not entirely clear, African Americans have among the highest rates of hypertension in the world.


Hypertension is usually defined as a blood pressure equal to or greater than 140/90 mm Hg, i.e., equivalent to

the pressure exerted by a column of mercury 140 mm high during contraction of the heart (systole) and 90 mm

high during relaxation (diastole); either systolic or diastolic blood pressure, or both, may be elevated in

hypertension. Individuals with hypertension can be asymptomatic for years and then suddenly experience a fatal

stroke or heart attack. Prevention and management of hypertension can significantly decrease the chance of

complications. Early identification of hypertension is important so that lifestyle modification can begin as early

as possible.
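The cutoffs above (hypertension at 140/90 mm Hg and above, "prehypertension" at or above 120/80) can be sketched as a small classifier; note that either the systolic or the diastolic reading alone can place a person in the higher category. The function and category names are illustrative, not a diagnostic tool.

```python
# Sketch of the blood-pressure categories used in the text:
# hypertension at 140/90 mm Hg and above, prehypertension at or above
# 120/80; either reading alone is enough for the higher category.

def bp_category(systolic: int, diastolic: int) -> str:
    if systolic >= 140 or diastolic >= 90:
        return "hypertension"
    if systolic >= 120 or diastolic >= 80:
        return "prehypertension"
    return "normal"

print(bp_category(118, 76))  # normal
print(bp_category(130, 78))  # prehypertension
print(bp_category(138, 92))  # hypertension (diastolic alone qualifies)
```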

Overweight people, especially those with excess abdominal fat, have a much greater risk of developing

hypertension than do lean people. Weight loss alone, sometimes as little as 4.5 kg (10 pounds), can be

extremely effective in reducing high blood pressure. Increasing physical activity can, of course, help with weight

control, but it also appears to lower blood pressure independently.

Large studies examining salt intake and blood pressure in communities around the world have clearly

established that blood pressure is positively related to dietary intake of sodium (salt). Primitive societies in which

sodium intake is low have very little hypertension, and the increase in blood pressure that typically occurs with

age in industrialized societies fails to occur. On the other hand, in countries with extremely high salt

consumption, hypertension is common and stroke is a leading cause of death. Furthermore, experimental

studies indicate that decreasing sodium intake can reduce blood pressure.

Some people appear to be genetically sensitive to salt. Although salt restriction may only help lower blood

pressure in those who are salt sensitive, many individuals consume more salt than is needed. Dietary

recommendations typically encourage the general population to limit sodium intake to no more than 2,400 mg

daily, which amounts to a little more than a teaspoon of salt. This level can be achieved by restricting salt used

in cooking, not adding salt at the table, and limiting highly salted foods, processed foods (many of which have

hidden sodium), and so-called fast foods. Canned vegetables, breakfast cereals, and luncheon meats are

particularly high in sodium.
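The conversion behind "2,400 mg [of sodium] ... a little more than a teaspoon of salt" follows from sodium making up roughly 39 percent of table salt (NaCl) by mass, with a teaspoon of salt weighing about 5.7 g. The constants below are approximations for illustration.

```python
# Sketch of the sodium-to-salt arithmetic in the text: sodium is about
# 39 percent of NaCl by mass, and a teaspoon of salt is about 5.7 g,
# so 2,400 mg of sodium is a little more than a teaspoon of salt.

SODIUM_FRACTION_OF_SALT = 0.393   # approximate Na share of NaCl by mass
GRAMS_SALT_PER_TEASPOON = 5.7     # approximate

def sodium_mg_to_salt_g(sodium_mg: float) -> float:
    """Grams of table salt containing the given milligrams of sodium."""
    return sodium_mg / 1000 / SODIUM_FRACTION_OF_SALT

salt_g = sodium_mg_to_salt_g(2400)
print(round(salt_g, 1))                             # 6.1 g of salt
print(round(salt_g / GRAMS_SALT_PER_TEASPOON, 2))   # 1.07 teaspoons
```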

Heavy alcohol consumption (more than two drinks a day) is associated with hypertension. Vegetarians, and

particularly vegans (who consume no foods of animal origin, including milk and eggs), tend to have lower blood

pressure than do meat eaters. The diet recommended for reducing blood pressure, which will also benefit

cardiovascular health, emphasizes fruits, vegetables, and low-fat dairy products; includes whole grains, poultry,

fish, and nuts; and contains only small amounts of red meat and sugary foods and beverages. Reducing salt

intake should further increase the effectiveness of the diet.

A variety of drugs is used to treat hypertension, some of which have nutritional repercussions. Thiazide diuretics,

for example, increase potassium loss from the body, usually necessitating the intake of additional potassium,

which is particularly plentiful in foods such as bananas, citrus fruits, vegetables, and potatoes. Use of potassium-

based salt substitutes is not advised without medical supervision.

Cancer

Second only to cardiovascular disease as a cause of death in much of the world, cancer is a major killer of adults

ages 45 and older. The various types of cancer differ not only in location in the body and affected cell type but

also in the course of the disease, treatments, and suspected causal or contributory factors.

Studies of identical twins reveal that, even for those with an identical genetic makeup, the risk for most cancers

is still largely related to environmental factors. Another line of evidence supporting the limited role of heredity in

most cancers comes from studies of migrant populations, in which cancer rates tend to shift toward those of the adopted

country with each passing generation. For example, rates of breast and colorectal cancers in individuals who


migrate from rural Asia to the United States gradually increase to match the higher cancer rates of the United

States. On the other hand, risk of stomach cancer gradually decreases after Japanese migrants move to the

United States. Nutrition is among the critical environmental and lifestyle factors investigated in migration

studies, although identifying specific dietary components that affect the changing disease rates has been more

elusive. A number of cancer organizations around the world have estimated that 30 to 40 percent of all cases of

cancer could be prevented by appropriate dietary means.

Most cancer-causing substances (carcinogens) probably enter the body through the alimentary canal in food and

beverages. Although some foodborne toxins, pesticides, and food additives may be carcinogenic if taken in

sufficient quantity, it is primarily the foodstuffs themselves that are associated with cancer. Some dietary

patterns or components may promote cancer, while others may inhibit it.

Substances in the diet, or other environmental factors, can act anywhere along the multistage process of cancer

development (carcinogenesis): initiation, in which DNA, the genetic material in a cell, is altered; promotion, in

which cells with altered DNA multiply; and progression, in which cancer cells spread to surrounding tissue and

distant sites (metastasis).

Studies attempting to relate total fat or specific types of fat to various cancers have been inconsistent. High

intake of dietary fat may promote cancer, but this could be due at least in part to the extra energy (calories) that

fat provides. Obesity is associated with several types of cancer, including colorectal, prostate, uterine,

pancreatic, and breast cancers. A possible mechanism for this effect is the higher circulating levels of estrogen,

insulin, and other hormones that accompany increased body fat. Furthermore, regular exercise has been shown

in a number of studies to reduce the risk of breast and colon cancers. In laboratory animals, restricting energy

intake is the most effective method for reducing cancer risk; chronic underfeeding inhibits the growth of many

spontaneous tumours and most experimentally induced tumours.

High alcohol consumption is another factor that has been implicated in the development of various cancers,

especially of the mouth, throat, liver, and esophagus (where it acts synergistically with tobacco) and probably of

the breast, colon, and rectum. The fact that moderate use of alcohol has a beneficial effect on cardiovascular

disease underscores how complex and potentially confusing the connection between food and health can be.

Foods also contain substances that offer some protection against cancer. For example, fresh fruits and

vegetables, and specifically vitamins C and E, eaten at the same time as nitrate-containing foods (such as ham,

bacon, sausages, frankfurters, and luncheon meats), inhibit nitrosamine production and thus help protect

against stomach cancer. Several hundred studies have found a strong association between diets high in

vegetables and fruits and lower risk for various cancers, although identifying specific protective factors in such

diets has been more difficult. Vitamin C, vitamin E, carotenoids such as beta-carotene (a plant precursor of

vitamin A), and the trace mineral selenium act in the body’s antioxidant systems to help prevent DNA damage

by reactive molecules known as free radicals. Specific vegetables, notably the cruciferous vegetables (broccoli,

cauliflower, Brussels sprouts, kale, and other members of the cabbage family), contain sulforaphane and other

compounds known as isothiocyanates, which induce enzymes that detoxify carcinogens and have been

demonstrated to protect against cancer in animal studies. Dietary fibre in plant foods may also be protective: it

dilutes potential carcinogens, binds to them, and speeds up transit time through the gut, thereby limiting

exposure. Fruits and vegetables are rich in phytochemicals (biologically active plant substances), which are

currently being investigated for potential anticarcinogenic activity. Animal studies suggest that antioxidant

compounds known as polyphenols, which are found in both black and green tea, may be protective against the

growth of cancer. Regular consumption of tea, especially in Japan and China, where green tea is the preferred

type, has been associated with a decreased risk of various cancers, especially stomach cancer, but the evidence

has been conflicting.


The dietary approach most likely to reduce cancer risk is one that is rich in foods from plant sources, such as

fruits, vegetables (especially cruciferous ones), whole grains, beans, and nuts; has a limited intake of fat,

especially animal fat; includes a balance of energy intake and physical activity to maintain a healthy body

weight; and includes alcohol in moderation, if at all. Intake of carcinogenic compounds can also be reduced by

trimming fat and removing burned portions from meat before eating.

Colorectal cancer

Consumption of meat, particularly red meat and processed meat, is associated with a modest increase in risk of

colorectal cancer. However, it is unclear whether this effect is related to a specific component of meat; to the

fact that other nutrients, such as fibre, may be in short supply in a high-meat diet; or to carcinogenic

substances, such as heterocyclic amines and polycyclic aromatic hydrocarbons, which are produced during high-

temperature grilling and broiling, particularly of fatty muscle meats. High alcohol consumption and low intakes

of calcium and folic acid have also been linked to an increased rate of colorectal cancer.

Although fibre-rich foods also appear to be protective against colorectal cancer in many studies, attempts to

demonstrate a specific protective effect of dietary fibre, distinct from the nonfibre constituents of vegetables

and fruits, have been inconclusive. Obesity is an important risk factor for colorectal cancer in men and

premenopausal women, and mild or moderate physical activity is strongly associated with a decreased risk of

colon cancer.

Prostate cancer

There is a growing body of evidence that a diet low in fat and animal products and rich in fruits and vegetables,

including the cruciferous type, is protective against prostate cancer. This protection may be partially explained

by a fibre found in fruits and vegetables called pectin, which has been shown to possess anticancer properties.

Lower prostate cancer risk has been associated with the consumption of tomatoes and tomato products, which

are rich sources of the carotenoid antioxidant lycopene. Prostate cancer rates are low in countries such as Japan

where soy foods are consumed regularly, but there is no direct evidence that soy protects against the disease.

The possible protective effect against prostate cancer of vitamin E and the mineral selenium is under

investigation.

Breast cancer

The relationship between diet and breast cancer is unclear. High-fat diets have been suspected of contributing to

breast cancer, based on international correlations between fat intake and breast cancer rates, as well as animal

studies. However, large prospective studies have not confirmed this connection, even though a diet high in fat

may be inadvisable for other reasons. Similarly, a diet high in fruits and vegetables is certainly healthful but

provides no known protection against breast cancer. Alcohol intake is associated with breast cancer, but the

increased risk appears related only to heavy drinking. Lifelong regular exercise may be protective against breast

cancer, possibly because it helps to control weight, and obesity is associated with increased risk of

postmenopausal breast cancer. Heredity and levels of estrogen over the course of a lifetime are the primary

established influences on breast cancer risk.

Enthusiasm for soy foods and soy products as protection against breast cancer has been growing in recent years

in the industrialized world. Although Japanese women, who have low breast cancer rates, have a lifelong

exposure to high dietary soy, their situation is not necessarily comparable to midlife supplementation with soy

isoflavones (estrogen-like compounds) in Western women. Isoflavones appear to compete with estrogen (e.g., in

premenopausal women), and thereby blunt its effect; when in a low-estrogen environment (e.g., in

postmenopausal women) they exert weak estrogenic effects. There is as yet no consistent evidence that soy in


the diet offers protection against breast cancer or any other cancer; and the effects of dietary soy once cancer

has been initiated are unknown (estrogen itself is a cancer promoter). Ongoing research on the benefits of soy is

promising, and consumption of soy foods such as tofu is encouraged, but consumption of isolated soy

constituents such as isoflavones, which have unknown risks, is not warranted.

Diabetes mellitus and metabolic disorders

Diabetes mellitus is a group of metabolic disorders of carbohydrate metabolism characterized by high blood

glucose levels (hyperglycemia) and usually resulting from insufficient production of the hormone insulin (type 1

diabetes) or an ineffective response of cells to insulin (type 2 diabetes). Secreted by the pancreas, insulin is

required to transport blood glucose (sugar) into cells. Diabetes is an important risk factor for cardiovascular

disease, as well as a leading cause of adult blindness. Other long-term complications include kidney failure,

nerve damage, and lower limb amputation due to impaired circulation.

Type 1 diabetes (formerly known as juvenile-onset or insulin-dependent diabetes) can occur at any age but often

begins in late childhood with the pancreas failing to secrete adequate amounts of insulin. Type 1 diabetes has a

strong genetic link, but most cases are the result of an autoimmune disorder, possibly set off by a viral infection,

foreign protein, or environmental toxin. Although elevated blood sugar is an important feature of diabetes, sugar

or carbohydrate in the diet is not the cause of the disease. Type 1 diabetes is managed by injections of insulin,

along with small, regularly spaced meals and snacks that spread glucose intake throughout the day and

minimize fluctuations in blood glucose.

Type 2 diabetes (formerly known as adult-onset or non-insulin-dependent diabetes) is the more common type of

diabetes, constituting 90 to 95 percent of cases. With this condition, insulin resistance renders cells unable to

admit glucose, which then accumulates in the blood. Although type 2 diabetes generally starts in middle age, it

is increasingly reported in childhood, especially in obese children. Genetic susceptibility to this form of diabetes

may not be expressed unless a person has excess body fat, especially abdominal obesity. Weight loss often

helps to normalize blood glucose regulation, and oral antidiabetic agents may also be used. Lifestyle

intervention (e.g., diet and exercise) is highly effective in delaying or preventing type 2 diabetes in high-risk

individuals.

Migration studies have shown that urbanization and adoption of a Western diet and habits can dramatically

increase the rate of type 2 diabetes. For example, a high prevalence of the disorder is seen in the Pima Indians

of Arizona, who are sedentary and eat a high-fat diet, whereas prevalence is low in a closely related group of

Pimas living a traditional lifestyle—physically active, with lower body weight and a diet that is lower in fat—in a

remote, mountainous region of Mexico. Type 2 diabetes is a serious health problem among Native Americans

and other ethnic minorities in the United States. Worldwide, the prevalence of type 2 diabetes has increased

sharply, along with the rise in obesity.

Specific treatment plans for diabetics are designed after individual medical assessment and consultation with a

registered dietitian or qualified nutrition professional. The therapeutic diet, which has changed considerably over

the years, focuses on complex carbohydrates, dietary fibre (particularly the soluble type), and regulated

proportions of carbohydrate, protein, and fat. Because heart disease is the leading cause of death among

diabetics, saturated fatty acids and trans-fatty acids are also restricted, and physical activity and weight control are strongly encouraged. Older dietary recommendations restricted sugar in the diabetic diet, but recent

guidelines allow a moderate intake of sugars, so long as other carbohydrates are reduced in the same meal. Diet

and exercise are also used to manage a condition known as gestational diabetes, which develops in a small

percentage of pregnant women and usually resolves itself after delivery, though such women are subsequently

at increased risk of developing type 2 diabetes.


Research in the 1990s led to the development of a new tool, the glycemic index, which reflects the finding that

different carbohydrate foods have effects on blood glucose levels that cannot be predicted on the basis of their

chemical structure. For example, the simple sugars formed from digestion of some starchy foods, such as bread

or potatoes, are absorbed more quickly and cause a faster rise in blood glucose than does table sugar (sucrose),

fruit, or milk. In practical terms, however, if a carbohydrate food is eaten as part of a mixed meal, its so-called

glycemic effect is less consequential. The glycemic index may prove to be a useful option for planning diabetic

diets, but it in no way obviates the need for other established therapeutic practices, such as limiting total

carbohydrate intake and managing body weight.

The trace element chromium is a cofactor for insulin and is important for glucose tolerance. Malnourished infants

with impaired glucose tolerance have been shown to benefit from additional chromium, but there is no evidence

that most people with diabetes are deficient in chromium or in need of chromium supplementation.

If a diabetic injects too much insulin, blood glucose may drop to dangerously low levels; the irritability,

shakiness, sweating, headache, and confusion that ensue are indicative of low blood sugar, known as

hypoglycemia. Severe hypoglycemia, if untreated, can lead to seizures, coma, and even death. Reactive

hypoglycemia of nondiabetic origin is a distinct disorder of carbohydrate metabolism in which blood glucose falls

unduly (below 50 mg/dl) after an overproduction of the body’s own insulin in response to a meal high in simple

sugars; symptoms of hypoglycemia occur simultaneously. However, this condition is uncommon.

Numerous inherited metabolic disorders, also known as inborn errors of metabolism, respond to dietary

treatment. Most of these relatively rare disorders are inherited as autosomal recessive traits (i.e., both parents

must be carriers) and result in a specific enzyme or cofactor that has reduced activity or is absent altogether.

Biochemical pathways of amino acid, carbohydrate, or fatty acid metabolism may be affected, each having a

number of possible enzyme defects. In some cases, newborn screening programs, and even prenatal diagnosis,

allow for early identification and successful intervention. Without prompt and aggressive treatment, most of

these disorders have a poor prognosis, resulting in severe intellectual disability and other forms of illness.

Phenylketonuria (PKU), a condition in which the amino acid phenylalanine is not properly metabolized to the

amino acid tyrosine, is the most recognized of these disorders. Treatment involves lifelong restriction of

phenylalanine in the diet and supplementation with tyrosine. With early detection and meticulous management,

normal growth and intellectual functioning are possible.

Obesity and weight control

The discovery of the leptin protein in mice and its connection to diabetes and obesity.

HudsonAlpha Institute for Biotechnology

The World Health Organization (WHO) has recognized obesity as a worldwide epidemic affecting more than 500

million adults and paradoxically coexisting with undernutrition in both developing and industrialized countries.

There also have been reports of an alarming increase in childhood obesity worldwide. Obesity (excess body fat


for stature) contributes to adverse health consequences such as high blood pressure, blood lipid abnormalities,

coronary heart disease, congestive heart failure, ischemic stroke, type 2 diabetes, gallbladder disease,

osteoarthritis, several common cancers (including colorectal, uterine, and postmenopausal breast cancers), and

reduced life expectancy. Genes play a significant role in the regulation of body weight. Nevertheless,

environmental factors such as calorie-rich diets and a sedentary lifestyle can be instrumental in determining how

an individual’s genetic heritage will unfold.

Dietary carbohydrates are not the problem in obesity. In some Asian cultures, for example, where carbohydrate

foods such as rice are the predominant food, people are relatively thin and heart disease and diabetes rates are

lower than they are in Western cultures. What matters in weight control is the ratio of food energy (calories)

consumed to energy expended, over time.


Height-weight tables as a reference for healthy weights have been supplanted by the parameter known as the

body mass index (BMI). The BMI estimates total body fat, although it is less sensitive than using a skinfold

caliper or other method to measure body fat indirectly. The BMI is defined as weight in kilograms divided by the

square of the height in metres: weight ÷ height² = BMI. In 1997 the WHO recommended international adoption

of the definition of a healthy BMI for adult women and men as between 18.5 and 24.9. A BMI lower than 18.5 is

considered underweight, while a BMI of 25 to 29.9 denotes overweight and 30 or higher indicates obesity.

Definitions of overweight and obesity are more difficult to quantify for children, whose BMI changes with age.
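The BMI formula and the WHO adult cutoffs described above can be sketched in Python (the function names are illustrative, not from any standard library):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by the square of height in metres."""
    return weight_kg / height_m ** 2

def who_category(bmi_value: float) -> str:
    """WHO adult categories as given in the text: <18.5 underweight,
    18.5-24.9 healthy, 25-29.9 overweight, 30+ obese."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25:
        return "healthy"
    if bmi_value < 30:
        return "overweight"
    return "obese"

# Example: an adult weighing 70 kg at 1.75 m tall
value = bmi(70, 1.75)
print(round(value, 1), who_category(value))  # 22.9 healthy
```

As the text notes, these adult cutoffs do not transfer directly to children, whose BMI changes with age.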

A healthful eating plan for gradual weight loss in adults will likely contain about 1,200 to 1,500 kilocalories (kcal)

per day, probably accompanied by a balanced vitamin and mineral supplement. A desirable weight loss is about

one pound per week from fat stores (as opposed to lean tissue), which requires an energy deficit of 3,500 kcal,

or about 500 kcal per day. Consuming less than 1,000 kcal per day is not recommended; a preferred approach

would be to increase physical activity, which has the added benefit of helping to maintain lean tissue. Individuals

who are severely obese and unable to lose weight may, after medical consultation, consider weight-loss

medications that suppress appetite or decrease nutrient absorption or even surgery to reduce the volume of the

stomach or bypass it altogether. Carbohydrate-restricted diets, very-low-fat diets, and novelty diets—those in

which one food or food group is emphasized—may result in weight loss but generally fail to establish the good

dietary and exercise practices necessary to maintain the desired weight, and weight is often regained shortly

after the diet is stopped.
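The energy arithmetic above, a 3,500-kcal deficit per pound of fat lost, spread as roughly 500 kcal per day, can be illustrated with a minimal sketch (constants from the text; the function name is illustrative):

```python
KCAL_PER_POUND_FAT = 3500  # energy deficit needed to lose one pound of fat stores

def days_to_lose(pounds: float, daily_deficit_kcal: float) -> float:
    """Days required to lose the given pounds of fat at a steady daily calorie deficit."""
    return pounds * KCAL_PER_POUND_FAT / daily_deficit_kcal

# A 500-kcal daily deficit yields about one pound of fat loss per week:
print(days_to_lose(1, 500))  # 7.0
```

This is only the bookkeeping behind the text's "one pound per week" figure; actual weight change also depends on lean-tissue loss, activity, and individual metabolism.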


A successful approach to long-term weight management requires establishing new patterns: eating healthfully,

but eating less; engaging in regular physical activity; and changing behaviour patterns that are

counterproductive, such as eating while watching television. Limiting intake of fatty foods, which are more

energy-rich, is also helpful, as is eating smaller portions and drinking water instead of calorie-containing drinks.

Low-fat foods are not always low in total calories, as the fat may be replaced by sugars, which themselves

provide calories. Individuals who use artificial or nonnutritive sweeteners do not necessarily reduce their total

calorie intake.

Research with genetically obese laboratory animals led to the discovery of the ob gene in mice and humans. Under the direction of this gene, adipose (fat) tissue cells secrete leptin, a protein hormone. When fat stores

increase, leptin sends a signal to the hypothalamus (a regulatory centre in the brain) that stimulates one to eat

less and expend more energy. Certain genetic mutations result in insufficient production of functional leptin or in

a failure to respond to the leptin signal. Treatment with leptin may prove useful for the small percentage of

obese persons who have a defect in the ob gene, although it is not yet known whether leptin therapy will induce weight loss in those who are leptin-resistant or who do not have mutations in the ob gene.

Eating disorders

Eating disorders such as anorexia nervosa and bulimia nervosa are serious health problems reflecting an undue

concern with body weight. Girls and young women are most vulnerable to the pressures of society to be thin,

although boys and men can also fall prey to these disorders, which have lifelong consequences and can even be

fatal. The incidence of eating disorders has risen during the last 50 years, particularly in the United States and

western Europe.

Anorexia nervosa is characterized by low body weight, propensity for drastic undereating, intense fear of gaining

weight or becoming fat (despite being underweight), and a distorted body image. Consequences include

impaired immunity, anemia, and diminished digestive function. Without intervention, a state of semi-starvation

similar to marasmus may occur, requiring hospitalization and even force-feeding to prevent death. Treatment

usually requires a coordinated approach, with the participation of a physician, psychiatrist, dietitian, and possibly

other health professionals.

Bulimia nervosa is thought to be more prevalent than anorexia nervosa, and both disorders may even occur in

the same person. In bulimia nervosa recurrent episodes of “binge eating” are followed by a form of purging,

such as self-induced vomiting, fasting, excessive exercise, or the use of laxatives, enemas, or diuretics.

Treatment usually involves a structured eating plan.

Young athletes often restrict energy intakes to meet weight guidelines and body-image expectations of their

sport. Females are most affected, but male athletes, such as gymnasts, wrestlers, boxers, and jockeys, are also

vulnerable. Intense training among young female athletes, coupled with food energy restriction, often results in

amenorrhea (failure to menstruate for at least three consecutive months) and bone loss similar to that at

menopause. Calcium supplementation may be required.

Tooth decay

Dental caries (tooth decay) is an oral infectious disease in which bacteria, primarily Streptococcus mutans, in the dental plaque metabolize simple sugars and other fermentable carbohydrates into acids that dissolve tooth

enamel. Dental plaque (not to be confused with the lipid-containing plaque found in arteries) is a mass of

bacteria and sticky polymers that shield the tooth from saliva and the tongue, thereby facilitating decay. All

dietary forms of sugar, including honey, molasses, brown sugar, and corn syrup, can cause tooth decay;


fermentable carbohydrates in crackers, breads, cereals, and other grain products, as well as milk, fruits, and fruit

juices, also have cariogenic (decay-causing) potential. Eating sugary or starchy foods between meals, especially

sticky foods that stay on the teeth longer, increases the time that teeth are exposed to destructive acids.

Artificial sweeteners are not cariogenic, and xylitol, a sugar alcohol used in some chewing gums, is even

cariostatic, i.e., it reduces new tooth decay by inhibiting plaque and suppressing decay-causing bacteria. Putting

an infant to sleep with a bottle, especially one containing juice or other sweetened beverages, milk, or infant

formula can lead to a condition called “baby bottle tooth decay.”

Fluoride is extremely effective at protecting tooth enamel from decay, especially while enamel is being formed

in the jaws before the permanent teeth erupt. Fluoridation of water in communities where fluoride is not

naturally high is a safe and effective public health measure. Water with approximately one part per million of

fluoride protects against dental caries without causing the mottling of teeth that can occur at higher levels. In

areas without fluoridated water, fluoride supplements are recommended for children. Brewed tea, marine fish

consumed with bones, and seaweed are significant food sources of fluoride.

Regular brushing and flossing of the teeth and gums, as well as rinsing the mouth after meals and snacks, are

important measures that protect against periodontal (gum) disease as well as dental caries. Gum health also

depends on a properly functioning immune system and good overall nutrition. Key nutrients include vitamin C,

which helps protect against gingivitis (inflamed gums), and calcium and vitamin D, which help ensure a strong

jawbone and teeth.

Heartburn and peptic ulcer

When gastric contents, containing hydrochloric acid, flow backward from the stomach, the lining of the

esophagus becomes inflamed, leading to the burning sensation known as heartburn. Occasional heartburn (also

known as acid indigestion) is a common occurrence, typically precipitated by eating certain foods. However,

some people experience heartburn regularly, a condition known as gastroesophageal reflux disease (GERD).

Individuals with GERD are advised to limit their intake of alcohol and caffeine, which relax the lower esophageal

sphincter and actually promote reflux, as well as their intake of fat, which delays gastric emptying. Chocolate,

citrus fruit and juices, tomatoes and tomato products, spearmint and peppermint oils, and certain spices may

aggravate heartburn, but these foods do not appear to cause the condition.

For overweight or obese individuals with GERD, weight loss may have a beneficial effect on symptoms. Eating

smaller meals, chewing food thoroughly, eating more slowly, avoiding tight-fitting clothes, not smoking, and not

lying down until about three hours after eating are among the factors that may improve the condition. Without

medical supervision, drugs such as antacids and acid controllers should be used only infrequently.

It is now known that a peptic ulcer (a sore on the lining of the stomach or duodenum) is not caused by stress or

eating spicy foods, as was once thought; rather, most peptic ulcers are caused by the infectious bacterial agent

Helicobacter pylori and can be treated by a simple course of antibiotics. However, stress and dietary factors—such as coffee, other caffeinated beverages, and alcohol—can aggravate an existing ulcer.

Bowel conditions and diseases

Constipation, a condition characterized by the difficult passage of relatively dry, hardened feces, may arise from

insufficient dietary fibre (roughage) or other dietary factors, such as taking calcium or iron supplements, in

addition to daily routines that preclude relaxation. Straining during defecation can also contribute to

diverticulosis, small outpouchings in the colonic wall, which may become inflamed (diverticulitis) and present

serious complications. Another possible consequence of straining is hemorrhoids, swollen veins of the rectum

and anus that typically lead to pain, itching, and bleeding. Constipation can usually be treated by eating high-


fibre foods such as whole-grain breads and cereals, drinking sufficient amounts of water, and engaging in regular

exercise. By drawing water into the large intestine (colon), fibre—especially the insoluble type—helps form a

soft, bulky stool. Eating dried fruits such as prunes, which contain a natural laxative substance (dihydroxyphenyl

isatin) as well as being high in fibre, also helps stimulate the bowels. Although laxatives or enemas may be

helpful, frequent use may upset fluid, mineral, and electrolyte (salt) balances and interfere with vitamin

absorption. Any persistent change in bowel habits should be evaluated by a physician.

In contrast to constipation, diarrhea—loose, watery stools, and possibly an increased frequency of bowel

movements—can be a cause for immediate concern. Acute diarrhea of bacterial origin is relatively common and

often self-limiting. Other common causes of acute diarrhea include viral infections, parasites, food intolerances

or allergies, medications, medical or surgical treatments, and even stress. Regardless of cause, drinking fluids is

important for treating a temporary bout of diarrhea. However, if severe and persisting, diarrhea can lead to

potentially dangerous dehydration and electrolyte imbalances and requires urgent medical attention, especially

in infants and children. Prolonged vomiting presents similar risks.

Inflammatory bowel disease (IBD), such as Crohn disease (regional ileitis) or ulcerative colitis, results in impaired

absorption of many nutrients, depending upon which portion of the gastrointestinal tract is affected. Children

with IBD may fail to grow properly. Treatment generally includes a diet low in fat and fibre, high in protein and

easily digestible carbohydrate, and free of lactose (milk sugar). Increased intakes of certain nutrients, such as

iron, calcium, and magnesium, and supplementation with fat-soluble vitamins may also be recommended, along

with additional fluid and electrolytes to replace losses due to diarrhea.

Irritable bowel syndrome (IBS) is a common gastrointestinal disorder characterized by a disturbance in intestinal

peristalsis. Symptoms include excessive gas, abdominal discomfort, and cramps, as well as alternating diarrhea

and constipation. Although it can be extremely uncomfortable, IBS does not cause intestinal damage. Dietary

treatment involves identifying and avoiding “problem” foods, notably legumes and other gas-producing

vegetables and dairy products, and possibly reducing caffeine consumption. For most people with IBS, a low-fat

diet, smaller meals, and a gradual increase in fibre intake are helpful.

Food-drug interactions

Drugs may interfere with or enhance the utilization of nutrients, sometimes leading to imbalances. A common

example is the increased loss of potassium that results from the use of certain diuretics to treat high blood

pressure. Nutrient absorption can also be affected by drugs that change the acidity of the gastrointestinal tract,

alter digestive function, or actually bind to nutrients. For example, regular use of laxatives, antacids, or mineral

oil can reduce nutrient absorption and over time may lead to deficiency. Elderly individuals who take multiple

medicines are particularly at risk of impaired nutritional status.

On the other hand, foods can alter drug absorption or interact with drugs in undesirable ways, resulting in drug

ineffectiveness or toxicity. For example, protein and vitamin B6 interfere with the effectiveness of levodopa,

used to treat Parkinson disease. Tyramine, an amino-acid derivative found in certain aged cheeses and red

wines, may cause hypertension in individuals being treated for depression with monoamine oxidase (MAO)

inhibitors. Grapefruit juice contains unique substances that can block the breakdown of some drugs, thereby

affecting their absorption and effectiveness. These drugs include certain cholesterol-lowering statins, calcium

channel blockers, anticonvulsant agents, estrogen, antihistamines, protease inhibitors, immunosuppressants,

antifungal drugs, and psychiatric medications. Eating grapefruit or drinking grapefruit juice within a few hours or

even a few days of taking these medications could result in unintended consequences.


Vitamin and mineral supplements and herbal products can also interact with medicines. For example, one or

more of the supplemental antioxidants studied—vitamin C, vitamin E, beta-carotene, and selenium—may blunt

the effectiveness of certain drugs (e.g., high-dose niacin, when used in combination with statins) in raising HDL

cholesterol levels and improving cardiovascular health. Also, the herbal supplement St. John’s wort can alter the

metabolism of drugs such as protease inhibitors, anticlotting drugs, and antidepressants, and it can reduce the

effectiveness of oral contraceptives.

Food allergies and intolerances

Research suggests that oral desensitization therapy, in which escalating doses of peanuts are…

Science in Seconds (www.scienceinseconds.com)

A true food allergy involves an abnormal immunologic response to an otherwise harmless food component,

usually a protein. In the case of antibody-mediated (immediate hypersensitivity) food allergies, within minutes or

hours of exposure to the allergen, the body produces specific immunoglobulin E antibodies and releases

chemical mediators such as histamine, resulting in gastrointestinal, skin, or respiratory symptoms ranging from

mild to life-threatening. Much less common are cell-mediated (delayed hypersensitivity) food allergies, in which

a localized inflammatory process and other symptoms may not start for up to a day. Adverse food reactions that

do not involve the immune system, aside from foodborne infection or poisoning, are called food intolerances or

sensitivities. Most common of these is lactose intolerance, which is a genetically determined deficiency of the

enzyme lactase that is needed to digest the milk sugar, lactose.

Milk allergy and lactose intolerance are distinct conditions that are often confused. Only about 1 percent of the

population has a true allergy to the protein in cow’s milk. Milk allergy is found most often in infants, whose

immune and digestive systems are immature. On the other hand, much of the world’s population, except those

of northern European descent, is to some degree lactose intolerant after early childhood. Undigested lactose

reaching the large intestine can cause abdominal discomfort, flatulence, and diarrhea. Lactose-intolerant

individuals can often handle with little or no discomfort small quantities of dairy products, especially yogurt or

other milk products containing the bacterium Lactobacillus acidophilus; alternatives are the use of lactose-hydrolyzed milk products or lactase tablets or drops, which convert lactose to simple, digestible sugars.

Celiac disease (also known as celiac sprue, nontropical sprue, or gluten-sensitive enteropathy) is a hereditary

disorder in which consumption of wheat gluten and related proteins from rye and barley is not tolerated. Recent

studies indicate that oats may be safe if not contaminated with wheat. Celiac disease, which may be a type of

cell-mediated food allergy, affects primarily individuals of European descent and rarely those of African or Asian

descent. It is characterized by inflammatory damage to the mucosal cells lining the small intestine, leading to

malabsorption of nutrients and such symptoms as diarrhea, fatigue, weight loss, bone pain, and neurological

disturbances. Multiple nutritional deficiencies may ensue and, in children, growth is impaired. The disorder is

often associated with autoimmune conditions, particularly autoimmune thyroid disease and type 1 diabetes.


Although celiac disease can be life-threatening if untreated, patients can recover if gluten is eliminated from the

diet.

Other adverse reactions to foods or beverages may be drug effects, such as those caused by caffeine or alcohol.

Certain foods, such as ripened cheese, chocolate, red wine, and even ice cream, trigger headaches in some

individuals. Food additives that can cause reactions in susceptible people include sulfite preservatives, used in

some wines, dried fruits, and dried potato products; nitrate and nitrite preservatives, used in processed meats;

certain food colorants, particularly tartrazine (also known as FD&C Yellow #5); and the flavour enhancer

monosodium glutamate (MSG). Some adverse reactions to food are purely psychological and do not occur when

the food is served in a disguised form.

Learn about parental guidelines introduced in 2017 to combat peanut allergies in children.

Encyclopædia Britannica, Inc.

Nearly any food has allergenic potential, but foods that most commonly cause antibody-mediated allergic

reactions are cow’s milk, eggs, wheat, fish, shellfish, soybeans, peanuts, and tree nuts (such as almonds,

walnuts, and cashews). Depending on processing methods, edible oils and other products derived from these

foods may still contain allergenic protein residues. Severely allergic people may react to extremely small

amounts of an offending food, even inhaled vapours.

Studies differ significantly as to the percentage of adults and children who have true food allergies. However,

most seem to agree that few adults (about 2 to 5 percent) and slightly more children (roughly 3 to 8 percent) are

affected. Most children outgrow food allergies, particularly if the offending food is avoided for a year or two.

However, food allergies can develop at any time, and some allergies, such as those to peanuts, tree nuts, and

shellfish, may be lifelong. Common symptoms of antibody-mediated food allergy include tightening of the throat,

swelling of the lips or tongue, itchy lips, wheezing, difficulty breathing, headache, nasal congestion, skin rash

(eczema), hives, nausea, vomiting, stomach cramps, diarrhea and, in severe cases, life-threatening anaphylactic

shock. People susceptible to anaphylaxis are advised to carry a syringe loaded with epinephrine at all times and

to seek emergency medical care if an allergic reaction begins.

Food allergies are often hard to document, even by physicians trained in allergy and immunology. Blood tests for

antibodies to specific allergens, skin tests, and even an elimination diet, in which suspect foods are eliminated

from the diet and then added back one at a time, may not be definitive. The most conclusive diagnostic test is a

so-called double-blind food challenge, in which neither doctor nor patient knows whether a suspect food or a

harmless placebo is being given; however, these controlled clinical tests are expensive and time-consuming.

Labels are important for identifying hidden ingredients in packaged foods, although they are often imprecise and

cannot be relied on naively. For example, even if a product is labeled as nondairy, a listing of casein, caseinate,

or whey indicates the presence of milk protein. Peanuts may be found in unlikely foods, such as chili, stew,

processed meats, oils, flours, cream substitutes, and desserts.


Research has indicated that, at least in some instances, the severity of an allergy may be reduced through

desensitization, in which a person is exposed over time to increasing amounts of the antigen to which he or she

is allergic. Methods of desensitization differ; for example, allergy shots involve the injection of antigens, whereas

sublingual immunotherapy involves small doses of antigen given as drops under the tongue. Sublingual

immunotherapy in particular is considered a safe and effective means of building tolerance to food allergens.

Toxins in foods

Edible skins of fruits and vegetables are rich in vitamins, minerals, and fibre; however, pesticide residues and

other environmental contaminants are typically more plentiful in the outer layers of these foods. Pesticides also

tend to accumulate in the fat and skin of animals. Intake of toxic substances is reduced by consuming a wide

variety of foods; washing fruits and vegetables carefully; and trimming fat from meat and poultry and removing

skin from poultry and fish. Even organic produce requires thorough washing: it may not have synthetic

chemicals, but mold, rot, fecal matter or other natural substances can contaminate it at any point from field to

market. Peeling helps reduce these unwanted chemicals and microbes, although valuable nutrients will be lost

as well.

A greenish tinge on potatoes, although merely the harmless substance chlorophyll, indicates that the natural

toxicant solanine may be present. Solanine builds up when a potato is old, is handled roughly, or is exposed to light or to extremes of temperature. Symptoms of solanine poisoning include diarrhea, cramps, and headache,

although many damaged potatoes would have to be eaten to cause serious illness. Peeling away green areas or

removing sprouts or the entire skin (despite its high nutrient content) reduces solanine intake.

Swordfish and shark, as well as tuna steaks, may contain high levels of methylmercury (which remains after

cooking) and should be avoided by pregnant women. Nonbacterial toxins in seafood include scombrotoxin

(histamine) in spoiled fish, which can result in a severe allergic reaction when eaten; dinoflagellates (microscopic

algae), associated with the so-called red tide, which can cause paralytic shellfish poisoning when consumed; and

ciguatera, found in certain warm-water reef fish. (See also fish poisoning; shellfish poisoning.)

Natural toxins in some species of mushrooms cause symptoms ranging from gastrointestinal upset to

neurological effects, even hallucinations. Most mushroom fatalities are due to consumption of amatoxins in

Amanita phalloides, the mushroom species known as the death cap, which, if not lethal, can cause lasting liver and kidney damage. As there are no antidotes for mushroom poisoning, and identification of mushroom species

by inexperienced mushroom pickers is often imprecise, consumption of wild mushrooms is not advised.

Foodborne illnesses

Contamination of foods or beverages with disease-causing organisms—bacteria, viruses, fungi, and parasites—

can result in symptoms ranging from mild stomach upset, headache, muscle aches, and fever to abdominal

cramps, vomiting, and diarrhea. Severe cases can result in dangerous dehydration, nerve damage, paralysis,

kidney failure, and death. Symptoms may develop within hours or days after eating contaminated food, and they

are not always easy to distinguish from influenza or other illnesses. Drinking clear liquids (such as chicken broth,

juices, and water) helps replace fluids and electrolytes lost during a mild infection, but immediate medical

attention is required when symptoms are severe. Most susceptible are infants and young children, pregnant

women, the elderly, and people with weakened immune systems or certain chronic diseases. Particularly risky

foods include raw or undercooked meat, poultry, eggs, seafood, and unpasteurized (raw) milk products and

juices.


Common foodborne illnesses

Some common foodborne illnesses are listed in the table.

Most cases of foodborne illness are caused by bacteria and the toxins they produce. Campylobacter jejuni, found in raw or undercooked foods of animal origin, especially poultry, is responsible for more diarrheal illness throughout the world than any other bacterium. Travelers’ diarrhea is often caused by specific types of Escherichia coli bacteria, while other E. coli types cause much of the diarrhea in infants, particularly during weaning, in developing countries. Other common foodborne infections are caused by various strains of Salmonella bacteria and the Norwalk family of viruses.

Smoking, drying, fermenting, and the adding of sugar or salt are traditional methods used to preserve food and

keep it safe. During the 20th century public health advances such as disinfection of water supplies,

pasteurization of milk, safe canning, widespread use of refrigeration, and improved food-safety practices

eliminated typhoid fever, tuberculosis, and cholera, for example, as common foodborne diseases. However,

others have taken their place. New causes of foodborne illness continue to be discovered or described. A recently characterized microscopic parasite, Cyclospora cayetanensis, was the cause of outbreaks of diarrheal illness in the United States and Canada starting in 1996. Guatemalan raspberries, contaminated with Cyclospora via the water supply, were the suspected source of infection. Another recently described parasite, Cryptosporidium parvum, contaminates water supplies and foods and is an important cause of diarrhea throughout the world, particularly in children and in persons with HIV.

In 1993, undercooked hamburgers emerged in the United States as a potential source of E. coli O157:H7, a deadly strain of a normally harmless bacterium found in the human digestive tract. Subsequently, this microbe has also been found in unpasteurized fruit juice, such as fresh-pressed apple cider, and other foods possibly contaminated with animal feces. The bacterium produces a potent toxin that may result in bloody diarrhea; hemolytic uremic syndrome, a possible complication, is a major cause of acute kidney failure in children. E. coli O157:H7 infection, which can be spread by persons who unknowingly incubate the bacterium in their intestines and transmit it through poor toilet hygiene, appears to be on the rise worldwide, particularly in North America and western Europe.

Certain other bacteria also produce a toxin, which then causes a poisoning or intoxication rather than a bacterial infection per se. For example, Clostridium botulinum, found in improperly canned foods, produces the lethal neurotoxin that causes botulism. The toxin responsible for staphylococcal food poisoning is produced by Staphylococcus aureus, typically after contamination from nasal passages or cuts on the skin.

Many molds (fungi) on food are harmless and, in fact, are flavour enhancing, such as those used to ripen certain

cheeses. However, some molds—particularly those on grains, nuts, fruits, and seeds—produce poisons known as

mycotoxins. The mycotoxins of greatest concern are aflatoxins, which can contaminate nuts, peanuts, corn, and wheat.

Prolonged low-level exposure to aflatoxins, as seen in regions of Asia and Africa, is suspected of contributing to

liver cancer. Discarding nuts that are discoloured, shriveled, or moldy helps reduce the risk.

Eating raw shellfish, sushi, or undercooked fish puts one at risk for parasites, such as tapeworms, as well as for

bacteria and viruses, all of which are killed by proper cooking. The great majority of seafood-related illness is

caused by the consumption of raw bivalve mollusks. Clams, mussels, and scallops, which are usually served

cooked, are of less public health concern than oysters, which are often eaten raw.

Bovine spongiform encephalopathy (BSE), commonly known as mad cow disease, was first seen in British cattle

in the 1980s. However, it was not linked to human disease until 1996, when 10 young adults in the United

Kingdom died of variant Creutzfeldt-Jakob disease, a fatal brain-wasting disease thought to have been

transmitted by consumption of meat containing brain or spinal tissue from BSE-infected cattle. It is suspected


that the diseased cattle had been infected by eating the ground remains of sheep with the neurodegenerative

disease scrapie. BSE appears to be caused by infectious protein particles called prions, which kill brain cells and

leave holes in the brain. Details of disease transmission are still unclear, as is the potential risk from cosmetics,

dietary supplements, gelatin, or vaccines containing bovine ingredients, or from blood transfusions. Ground beef,

sausages, and frankfurters are more likely to be contaminated with nervous-system tissue than are whole cuts of

beef. Dairy products are considered safe, even if they come from BSE-infected cows.

Good personal hygiene and food safety practices are important in protecting against foodborne illness. The main

source of contamination is fecal matter, which is reduced by frequently washing hands with soap and hot water,

especially before preparing food. Thorough washing also decontaminates towels, surfaces, cutting boards,

utensils, and other equipment that has touched uncooked meat. Other food safety guidelines include keeping

cold foods cold, keeping hot foods hot, and refrigerating leftovers quickly.

Growth of microorganisms, parasites, and insects on certain foods (such as meat, poultry, spices, fruits, and

vegetables) can be controlled by low-dose irradiation, which has been approved for specific uses in a number of

countries, such as Japan, France, Italy, Mexico, and the United States. Food irradiation technology—which does

not make foods radioactive—is considered safe by the World Health Organization and various health agencies,

but it has yet to receive wide consumer acceptance.

Botanicals and functional foods

Many herbal products show sufficient promise in preventing or treating disease that they are being tested in

rigorous scientific studies, including clinical trials. However, the “botanicals” currently on the market in many

countries are untested with regard to safety and efficacy, and consumers should approach their use in an

informed and cautious way. Just as with pharmaceuticals, herbal products can have mild to severe side effects,

and “natural” does not mean “safe.” Furthermore, the amounts of active ingredients in supplements can vary

widely and, according to laboratory analyses, the potency specified on labels is often inaccurate. Some

preparations even contain none of the active ingredients listed on the label or may have unwanted contaminants.

Potentially dangerous herbal products include comfrey and kava, which can cause liver damage, and ephedra

(ma huang), which has caused fatal reactions in some people, especially those with high blood pressure or heart

disease. Because of possible complications, patients scheduled to undergo surgery or other medical procedures

may be advised to discontinue certain supplements for days or even weeks before surgery. Safety and efficacy

concerns also need to be addressed, as “designer foods” fortified with herbs and bioactive substances continue

to proliferate.

The distinction between foods, dietary supplements, and drugs is already being blurred by the burgeoning

market in so-called functional foods (such as cholesterol-lowering margarine), which aim to provide health

benefits beyond mere nutrient value. Moreover, recent advances in molecular biology offer the possibility of

using genetic profiles to determine unique nutrient requirements, thereby providing customized dietary

recommendations to more effectively delay or prevent disease.

Jean Weininger



Citation (MLA style):

"Nutritional disease." Encyclopædia Britannica LaunchPacks: The Treatment and Prevention of Disease, Britannica, 11 Nov. 2020. packs-ngl.eb.com. Accessed 6 Apr. 2022.


therapeutics

Britannica Note:

The treatment and care of someone to combat disease, injury, or mental disorder is known as therapy, or therapeutics.

treatment and care of a patient for the purpose of both preventing and combating disease or alleviating pain or injury. The term comes from the Greek therapeutikos, which means “inclined to serve.”

[Image: Prozac pills. (Tom Varco)]

In a broad sense, therapeutics means serving and caring for the patient in a comprehensive manner, preventing

disease as well as managing specific problems. Exercise, diet, and mental factors are therefore integral to the

prevention, as well as the management, of disease processes. More specific measures that are employed to

treat specific symptoms include the use of drugs to relieve pain or treat infection, surgery to remove diseased

tissue or replace poorly functioning or nonfunctioning organs with fully operating ones, and counseling or

psychotherapy to relieve emotional distress. Confidence in the physician and in the method selected enhances

effectiveness.

Preventive medicine

The rationale for preventive medicine is to identify risk factors in each individual and reduce or eliminate those

risks in an attempt to prevent disease. Primary prevention is the preemptive behavior that seeks to avert


disease before it develops—for example, vaccinating children against diseases. Secondary prevention is the

early detection of disease or its precursors before symptoms appear, with the aim of preventing or curing it.

Examples include regular cervical Papanicolaou test (Pap smear) screening and mammography. Tertiary

prevention is an attempt to stop or limit the spread of disease that is already present. Clearly, primary

prevention is the most cost-effective method of controlling disease.

Leading causes of death worldwide include cardiovascular disease, cancer, cerebrovascular disease, accidental

injuries, and chronic lung disease. A major preventable cause of death is cigarette smoking, which is linked to

increased risk of cardiovascular disease (e.g., heart attack), cancer, stroke, and chronic lung diseases such as

emphysema and chronic bronchitis.

Multiple organizations worldwide have established recommendations and guidelines for disease prevention. For

example, in the United States, following earlier work by the Canadian Task Force on the Periodic Health

Examination, the U.S. Preventive Services Task Force was established to evaluate the effectiveness of various

screening tests, immunizations, and prophylactic regimens based on a critical review of the scientific literature.

Its report, Guide to Clinical Preventive Services, lists the recommendations for a variety of conditions evaluated by the panel.

[Image: A nurse administering a vaccine via intramuscular injection into the left shoulder muscle of a… (James Gathany/Centers for Disease Control and Prevention)]

Immunization is the best method for preventing infectious diseases. Standard immunizations of infants and children include those for diphtheria, tetanus, and pertussis, or whooping cough (DTP); polio (OPV); measles, mumps, and rubella (MMR); Haemophilus influenzae type b (HbCV); and hepatitis B (HBV). A yearly vaccine against the influenza virus should be administered to infants and young children, to adults who are over age 65, to those at risk because of chronic cardiopulmonary disease, and to those in chronic care facilities. Adults at age 65 should also be immunized against pneumococcal pneumonia with a vaccine containing 23 of the most common strains of Streptococcus pneumoniae.

Acquired immunodeficiency syndrome (AIDS), caused by human immunodeficiency virus (HIV), is a major

infectious disease problem. Although a vaccine is expected, obstacles to its development are great. Primary

preventive measures include abstaining from sexual contact, using condoms, and, among intravenous drug

users, avoiding the sharing of needles.

The risk factors for coronary artery disease that can be modified to prevent heart attack are cigarette smoking,

hypertension, an elevated serum cholesterol level, a sedentary lifestyle, obesity, stress, and excessive alcohol

consumption. In addition to an elevated total serum cholesterol level, an elevated low-density lipoprotein (LDL)

level and a decreased high-density lipoprotein (HDL) level are significant risk factors. The total cholesterol level

and elevated LDL level can be reduced by appropriate diet, whereas a low HDL can be raised by stopping


smoking and increasing physical activity. If those measures do not provide adequate control, a variety of drugs

capable of lowering the cholesterol level are available.

The major risk factor for stroke is hypertension, with cigarette smoking and type 2 diabetes mellitus significantly

increasing the risk. Transient ischemic attacks (TIAs) occur before stroke in 20 percent of patients and consist of

sudden onset of one or more of the following symptoms: temporary loss of vision in one eye, unilateral

numbness, temporary loss of speech or slurred speech, and localized weakness of an arm or leg. Attacks last

less than 24 hours and resolve without permanent damage, although they often precede a stroke.

The most important preventive behaviour in averting cancer is the avoidance of cigarette smoke. Smoking

accounts for 30 percent of all cancer deaths, and there is increasing recognition of the danger of environmental

or secondhand smoke to the nonsmoker. Primary prevention of skin cancer includes restricting exposure to

ultraviolet light by using sunscreens or protective clothing. For other cancers, secondary preventive measures

include mammography, clinical breast examinations, and breast self-examinations for breast cancer; pelvic

examinations and Pap tests for cervical cancer and ovarian cancer; and sigmoidoscopy, digital rectal

examinations, and stool tests for occult blood for colorectal cancer.

Demineralization of bone and a reduction in bone mass (osteoporosis) occur most often in men and women age

70 or older and may result in fractures, low back pain, and loss of stature. The most common form is osteoporosis in postmenopausal women, caused by estrogen deficiency. The most effective method for

preventing loss of bone mass after menopause is estrogen replacement therapy and increased calcium intake.

Primary preventive measures include increasing physical activity and avoiding cigarettes and heavy alcohol

consumption.

Alcohol abuse is the primary reason that accidents are a major cause of death in the United States. Other factors

are failure to wear seat belts or motorcycle helmets, sleep deprivation, and guns in the home. Taking reasonable

precautions and being aware of the potential dangers of alcohol and firearms can help reduce the number of

deaths due to accidents.

Treatment of symptoms

Pain

Pain is the most common of all symptoms and often requires treatment before its specific cause is known. Pain is

both an emotional and a physical experience and is difficult to compare from one person to another. One patient

may have a high pain threshold and complain only after the disease process has progressed beyond its early

stage, while another with a low pain threshold may complain about pain that would be ignored or tolerated by

most people. Pain from any cause can be increased by anxiety, fear, depression, loneliness, and frustration or

anger.

Acute pain serves a useful function as a protective mechanism that leads to the removal of the source of the

pain, whether it be localized injury or infection. Chronic pain serves a less useful function and is often more

difficult to treat. Although acute pain requires immediate attention, its cause is usually easily found, whereas

chronic pain complaints may be more vague and difficult to isolate.

The ideal method for treating pain is to eliminate the cause, such as to surgically remove an inflamed structure,

to apply hot compresses to a muscle spasm, or to set a fractured bone in a cast. Alternatives to drug therapy,

such as physical therapy, are relied on whenever possible. Analgesic drugs (pain relievers) most often used to

alleviate mild and moderate pain are acetaminophen and the nonsteroidal anti-inflammatory drugs (NSAIDs), such as aspirin, ibuprofen, and indomethacin. If these are ineffective, a weak opiate such as codeine,

hydrocodone, or oxycodone may be prescribed. Severe pain not controlled by these agents requires a strong

opiate such as morphine or meperidine. Because opiates are addictive, their use is controlled. In the United

States, for example, opiate use is regulated by the Controlled Substances Act, and individuals prescribing or

dispensing these drugs must register annually with the Drug Enforcement Administration. Each drug is assigned

to one of five groups, from schedule I, which includes drugs that have the highest potential for abuse, to

schedule V, which includes drugs with a limited dependence-causing potential.

Nausea and vomiting

Nausea and vomiting are common symptoms that may arise from diseases of the gastrointestinal tract

(including gastroenteritis or bowel obstruction), from medications such as analgesics or digoxin, or from nervous

system disturbances such as migraines or motion sickness. Vomiting is controlled by a vomiting centre located

in the medulla oblongata of the brainstem.

Identifying and treating the cause is important, especially when the condition is serious if left unaddressed yet responds well to treatment. A bowel obstruction can occur as a result of adhesions from previous abdominal

surgery. Obstruction or decreased bowel motility also can occur with constipation and fecal impaction. Such

important and treatable causes must be ruled out before resorting to antiemetic (serving to prevent or cure

vomiting) drugs. The most frequently used antiemetic agents are the phenothiazines, the most popular being

prochlorperazine (Compazine). Antihistamines may be useful in motion sickness. Newer and more powerful

drugs are needed to control the vomiting associated with cancer chemotherapy. Ondansetron is given to

patients undergoing cancer chemotherapy, surgery, or radiation therapy with agents that cause severe nausea

and vomiting. This drug is very effective in these patients.

[Video: Learn what causes morning sickness, or nausea, in pregnant women. (© American Chemical Society)]

Nausea and vomiting are experienced by more than 50 percent of pregnant women during the first trimester.

These symptoms are referred to as morning sickness, although they can occur at any time of the day. They may

be distressing, but they cause no adverse effect on the fetus. Drug therapy not only is unnecessary but should

be avoided unless proved safe for the fetus. Treatment involves rest and intake of frequent small meals and

pyridoxine (vitamin B6).

Diarrhea

Acute diarrhea can result from food poisoning, laxatives, alcohol, and some antacids. Usually it is caused by an

acute infection with bacteria such as Escherichia coli, Salmonella, and Staphylococcus aureus. These agents can enter the body on food, in water, or when contaminated objects (e.g., a teething ring) are put into the mouth. In

infants, acute diarrhea is usually self-limiting, and treatment consists primarily of preventing dehydration.

Traveler’s diarrhea affects up to half of those traveling to developing areas of the world. Preventive measures


include chewing two tablets of bismuth subsalicylate (Pepto-Bismol) four times a day, drinking only bottled water

or other bottled or canned beverages, and eating only fruits that may be peeled, canned products, and

restaurant food that is thoroughly cooked. Avoiding dairy products, raw seafood and vegetables, and food served

at room temperature also limits exposure. Severe cases require antibiotic therapy.

Cough

Coughing is a normal reflex that helps clear the respiratory tract of secretions and foreign material. It also can

result from irritation of the airway or from stimulation of receptors in the lung, diaphragm, ear (tympanic

membrane), and stomach. The most common cause of acute cough is the common cold. Chronic cough is most

often caused by irritation and excessive mucus production that results from cigarette smoking or from postnasal

drainage associated with an allergic reaction.

Treatment includes humidification of the air to loosen secretions and to counteract the drying effect of coughing

and inflammation. Moist air from a vaporizer or a hot shower helps, as do hot drinks and soups. Antihistamines

are often used to treat acute cough, but their value is questionable if an allergy is not present. They may also

cause additional drying of the respiratory mucosa. Guaifenesin is widely used in cough preparations to help

liquefy secretions and aid expectoration. Decongestants reduce secretions by causing vasoconstriction of the

nasopharyngeal mucosa. The most common decongestants found in many cough preparations are

pseudoephedrine, phenylephrine, and phenylpropanolamine. They may cause high blood pressure, restlessness,

and urinary retention and should be used with caution in anyone being treated for hypertension. Narcotics are

powerful cough suppressants, codeine being one of the most frequently used. Several safer nonnarcotic

antitussive (cough-preventing) agents are available, such as dextromethorphan, which has almost equal

effectiveness but fewer side effects. Most cough preparations containing dextromethorphan also contain a

decongestant and an expectorant. Because coughing is an important defense mechanism in clearing secretions

from blocked airways, a productive cough (one that produces secretions) should not be suppressed.

Insomnia

Insomnia is a difficulty in falling asleep or staying asleep or the feeling that sleep is not refreshing. Transient

insomnia can occur following stressful life events or schedule changes, as shift workers or those who travel

across multiple time zones experience. Disturbed sleep can also be related to the intake of stimulating drugs or

to the presence of anxiety, depression, or medical conditions associated with pain. The elderly spend less time

sleeping, and their sleep is lighter and marked by more frequent awakenings. This situation may be exacerbated

by afternoon napping.

The treatment of insomnia involves establishing good sleep hygiene: maintaining a consistent schedule of when

to retire and awaken, setting a comfortable room temperature, and minimizing disruptive stimuli such as noise

and light. Daily exercise is beneficial but should be avoided immediately before bedtime. Stimulants should be

avoided, including nicotine and caffeine. Alcohol disrupts the normal sleep pattern and should also be avoided.

Drinkers sleep more lightly and frequently awaken unknowingly, which leaves them feeling unrefreshed the next

day.

When medication is required, physicians usually prescribe one of the sleep-inducing benzodiazepines. They may

have long-, intermediate-, or ultrashort-acting effects. None should be used regularly for long periods. Various

nonbenzodiazepine hypnotics and sedatives are also available, and their usefulness varies according to

individual preference.


Designing a therapeutic regimen

Once the physician makes a diagnosis or identifies the most likely cause of the symptoms and decides on the

appropriate treatment, an entirely new set of conditions becomes operative. One of the first conditions to be

considered is the patient’s reason for seeking medical advice and the patient’s expectations. The patient’s visit

may have been precipitated by the feeling that a minor symptom may be something serious. If tests can rule out

this possibility, reassurance may serve as a therapeutic action. When possible, physicians work to cure a disease

and thereby relieve the symptoms, but many times the disease is unknown or chronic and incurable. In either

case, relief from or improvement of symptoms or restoration of normal functioning is the goal. When neither a

cure nor complete relief of symptoms is possible, an explanation of the disease and knowledge of the cause and

what to expect may provide significant relief. Patients often want to know the name of the disease, what caused

it, how long it will last, what additional symptoms may occur, and what they can do to assist the physician’s

treatment to hasten recovery. Providing information about the disease can help to alleviate anxiety and fears

that could otherwise impede the patient’s progress.

An essential ingredient of any successful therapeutic regimen is the positive attitude of the patient toward the

physician. A relationship of trust and respect for the physician based on reputation or years of supportive care is

one of the physician’s most powerful therapeutic tools.

When selecting a management plan, the physician usually has several options, and the outcomes or

consequences of each will vary. Often, the best choice is one made together with the patient, who may have

definite preferences for a trial of therapy over further testing or for oral medication rather than an injection,

even if the latter would provide more rapid relief. The possible side effects of the medicine or treatment may

well influence the therapeutic choice, as when a person would rather tolerate dizziness than nausea. Once a course of therapy

is selected, a new decision tree arises that leads to new options, depending on the response. Further testing,

increasing the dose of medication, or changing to a new drug may be required. Almost every treatment has

some degree of risk, from either unwanted side effects or unexpected complications. The physician describes

these risks in terms of probability, expecting the patient to accept or reject the treatment based on these odds

and his or her willingness to suffer the side effects or to risk the complications to achieve relief.

Another factor affecting therapeutic success is patient compliance—the degree to which patients adhere to the

regimen recommended by their physician. Therapeutic regimens that require significant changes in lifestyle,

such as recommendations to follow a special diet, begin an exercise program, or discontinue harmful habits such

as smoking cigarettes, are likely to result in poor compliance. Also, the greater the number of drugs prescribed

and the more complicated the regimen, the poorer is the compliance. A patient is much more likely to

successfully follow a regimen of taking a single dose of medication daily than one prescribed four times daily.

Patients also may not fully realize the need to continue taking the medication after their symptoms have

subsided, despite a physician’s instruction to finish the medicine. Patient compliance may be most difficult to

achieve in chronic but generally asymptomatic illnesses such as hypertension. Patients who experience no

symptoms may need to be convinced of the necessity of taking their medication daily to prevent the occurrence

of an untoward event (e.g., in hypertension, a stroke or other cardiovascular problems). Similarly, patients with

depression or anxiety may want to discontinue medication once their symptoms abate. Until a relapse occurs, they may not recognize the need to continue the medication and to taper the dosage slowly only as instructed.

In deciding which therapeutic regimen is likely to be most effective, the physician must depend on scientific

studies that compare one drug or treatment regimen with others that have been proved effective. The most

dependable study is one that is truly objective and removes the possibility of bias on the part of the patient who


wants the drug to work and the bias of the physician who may expect a certain outcome and subtly influence the

interpretation. Such a study is “double-blind”: it controls for both possible tendencies by comparing an active

drug with an inactive placebo (an inert drug). Neither the patient nor the physician knows which drug the patient

is taking, so that neither one’s bias can influence the result. Although this is the best way to demonstrate the

effectiveness of a drug, it is sometimes very difficult to control for all the variables that could influence the

outcome, such as varying degrees of stress one group or another may be under. Physicians will use the results

of a wide variety of clinical studies to decide whether a regimen or drug is likely to work in a given patient;

however, they will depend most heavily on their past experience with drugs or other techniques that have

worked under similar circumstances. It is knowledge based on experience and on understanding of the patient

that leads to the greatest therapeutic success.

Diet

Prophylactic measures of nutrition

General requirements

Adequate nutritional intake is required to maintain health and prevent disease. Certain nutrients are essential;

without them a deficiency disease will result. Required nutrients that cannot be synthesized by the body and

therefore must be taken regularly include essential amino acids, water-soluble and fat-soluble vitamins,

minerals, and essential fatty acids. The U.S. Recommended Dietary Allowances (RDAs), one of many sets of

recommendations put out by various countries and organizations, have been established for these essential

nutrients by the Food and Nutrition Board of the National Academy of Sciences. The RDAs are guidelines and not

absolute minimums. Intake of less than the RDA for a given nutrient increases the risk of inadequate intake and

a deficiency disorder. Nutritional requirements are greater during the periods of rapid growth (infancy,

childhood, and adolescence) and during pregnancy and lactation. Requirements vary with physical activity,

aging, infections, medications, metabolic disorders (e.g., hyperthyroidism), and other medical situations. RDAs

do not address all circumstances and are designed only for the average healthy person.

Protein, needed to maintain body function and structure, must supply the nine essential amino acids, which are obtained from different foods in a mixed diet. Ten to 15 percent of calories should come from protein. The

oxidation of 1 gram (0.036 ounce) of protein provides 4 kilocalories of energy. The same is true for

carbohydrate. Fat yields 9 kilocalories.
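Because these per-gram energy values are fixed, the calorie share of each macronutrient follows by simple arithmetic. A minimal sketch (the gram amounts below are hypothetical, chosen only for illustration):

```python
# Energy density per gram of each macronutrient, per the figures above.
KCAL_PER_GRAM = {"protein": 4, "carbohydrate": 4, "fat": 9}

def calorie_breakdown(grams):
    """Return total kilocalories and each macronutrient's percent share."""
    kcal = {name: g * KCAL_PER_GRAM[name] for name, g in grams.items()}
    total = sum(kcal.values())
    shares = {name: round(100 * k / total, 1) for name, k in kcal.items()}
    return total, shares

# Hypothetical daily intake: 75 g protein, 300 g carbohydrate, 70 g fat.
total, shares = calorie_breakdown({"protein": 75, "carbohydrate": 300, "fat": 70})
print(total)   # 2130
print(shares)  # {'protein': 14.1, 'carbohydrate': 56.3, 'fat': 29.6}
```

In this example the protein share (about 14 percent) and the carbohydrate share (about 56 percent) fall within the recommended ranges described in this section.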

According to the U.S. RDAs, carbohydrate should provide about 45 to 65 percent of calories in the diet, in the

form of sugars, starches (complex carbohydrates), and dietary fibre (indigestible carbohydrates). Fibre is not

digestible but increases the bulk of the stool and facilitates faster intestinal transit. A high-fibre diet is associated

with a reduced risk of colorectal cancer, owing in part to the diminished time that cancer-producing substances

in the diet remain in contact with the bowel wall; increasing bulk also decreases the concentration of these

substances. Dietary fibre can be insoluble (wheat bran) or soluble (oat bran and psyllium). Only the soluble fibres

found in oats, fruit, and legumes lower blood cholesterol and benefit individuals with diabetes by delaying the

absorption of glucose.

The most concentrated source of energy is fat, which also supplies fat-soluble vitamins and essential fatty acids. More

than one-third of calories in the American diet come from fat, though the ideal is slightly less than that amount.

Most Americans also consume excess cholesterol; an intake of no more than 200 milligrams daily is recommended.


Requirements in infancy

Nutritional needs are greatest during the first year of life. Meeting the energy demands during this period of

rapid growth requires 100 to 120 kilocalories per kilogram per day. Breast milk, the ideal food, is not only readily

available at the proper temperature, it also contains antibodies from the mother that help protect against

disease. Infant formulas closely approximate the contents of breast milk, and both contain about 50 percent of

calories from carbohydrate, 40 percent from fat, and 10 percent from protein.
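These two figures (the per-kilogram energy requirement and the approximate calorie split of breast milk or formula) lend themselves to a quick worked example. The 5-kg weight below is hypothetical:

```python
def infant_energy_range(weight_kg):
    """Daily energy need at 100-120 kcal per kilogram per day (first year)."""
    return 100 * weight_kg, 120 * weight_kg

def formula_calorie_split(total_kcal):
    """Approximate split in breast milk or formula:
    50% carbohydrate, 40% fat, 10% protein."""
    return {"carbohydrate": 0.50 * total_kcal,
            "fat": 0.40 * total_kcal,
            "protein": 0.10 * total_kcal}

low, high = infant_energy_range(5)  # hypothetical 5-kg infant
print(low, high)                    # 500 600
print(formula_calorie_split(low))   # roughly 250 carbohydrate, 200 fat, 50 protein kcal
```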

Breast milk or commercial formula is recommended for the first six months of life and may be continued through

the first year. Solid foods are introduced at four to six months of age, starting with rice cereal and then

introducing a new vegetable, fruit, or meat each week. Cow’s milk generally is not given to infants younger than

six months of age, and low-fat milk is avoided throughout infancy, since it does not contain adequate calories

and polyunsaturated fats required for development. Additional iron and vitamins may be given, especially to

infants at high risk of iron deficiency, such as those with a low birth weight.

Toddlers are usually picky eaters, but attempts should be made to include the following four basic food groups in

their diet: meat, fish, poultry, or eggs; dairy products such as milk or cheese; fruits and vegetables; and cereals,

rice, or potatoes. Mealtime presents an opportunity for social interaction and strengthening of the family unit.

This starts with the bonding between mother and child during breast-feeding and continues as a source of family

interaction throughout childhood.

Requirements in adolescence

Nutritional needs during adolescence vary according to activity levels, with some athletes requiring an

extremely high-calorie diet. Other adolescents, however, who are relatively sedentary consume calories in

excess of their energy needs and become obese. Peer pressure and the desire for social acceptance can

profoundly affect the quality of nutrition of the adolescent as food intake may shift from the home to fast-food

establishments.

Pregnancy during adolescence can present special hazards if the pregnancy occurs before the adolescent has

finished growing and if she has established poor eating habits. Pregnancy increases the already high

requirements for calcium, iron, and vitamins in these teenagers.

Eating disorders such as anorexia nervosa and bulimia arise predominantly in young individuals as a result of

biological, psychological, and social factors. An excessive concern with body image and a fear of becoming fat

are hallmarks of these conditions. The patient with anorexia nervosa has a distorted body image and an

inordinate fear of gaining weight; consequently, he or she reduces nutritional intake below the amount needed

to maintain a normal minimal weight. Severe electrolyte disturbances and death can result. Bulimia is a

behavioral disorder marked by binge eating followed by acts of purging (e.g., self-induced vomiting, ingestion of

laxatives or diuretics, or vigorous exercising) to avoid weight gain.

Requirements of the elderly

The elderly often have decreased intestinal motility and decreased gastric acid secretion that can lead to

nutritional deficiencies. The problem can be accentuated by poorly fitting dentures, poor appetite, and a

decreased sense of taste and smell. Although lower levels of activity reduce the need for calories, older persons

may feel something is wrong if they do not have the appetite of their younger years, even if caloric intake is

adequate to maintain weight. The reduction in gastric acid secretion can lead to decreased absorption of

vitamins and other nutrients. Nutritional deficiencies can reduce the level of cognitive functioning. Vitamin

supplementation, especially with cobalamin (vitamin B12), may be particularly valuable in the elderly.


The diet of the geriatric population is often deficient in calcium and iron, with the average woman ingesting only

half the amount of calcium needed daily. Decreased intake of vegetables can also contribute to various

nutritional deficiencies.

Constipation, which is common in the elderly, results from decreased intestinal motility and immobility and is

worsened by reduced fluid and fibre intake. The multiple medications that the elderly are likely to be taking may

contribute to constipation and prevent the absorption of certain nutrients. Some drugs, such as the

phenothiazines, may interfere with temperature regulation and lead to problems during hot weather, especially

if fluid intake is inadequate.

Requirements in pregnancy

The growing fetus depends on the mother for all nutrition and increases the mother’s usual demand for certain

substances such as iron, folic acid, and calcium, which should be added as supplements to a balanced diet that

contains most of the other required nutrients. The diet of adolescent girls, however, is often deficient in calcium,

iron, and vitamins. If poor nutritional habits have been established previously and are maintained during

pregnancy, the pregnant adolescent and her fetus are at increased risk.

In addition to avoiding non-nutritious foods, the pregnant woman should abstain from alcohol, smoking, and

illicit drugs, which all have a detrimental effect on the fetus. Caution should be used in taking over-the-counter

medicines during pregnancy, including vitamin and mineral supplements. Although the average recommended

weight gain during pregnancy is approximately 11.3 kilograms (25 pounds), the pregnant woman should be less

concerned with a maximum weight gain than she is with meeting the nutritional requirements of pregnancy. Low

weight gain (less than 9.1 kilograms) has been associated with intrauterine growth retardation and prematurity.

Women who are breast-feeding should continue taking vitamin supplements and maintain an increased intake of

calcium and protein to provide adequate breast milk. This regimen will not interfere with the mother’s ability to

slowly lose the weight gained during pregnancy.

Therapeutic measures of nutrition

Changes in diet can have a therapeutic effect on obesity, diabetes mellitus, hypertension, peptic ulcer, and

osteoporosis.

Obesity

A significant number of persons in developed countries meet the definition of obesity (20 percent above ideal

body weight). Obesity occurs when the number of calories consumed exceeds the number that is metabolized,

the remainder being stored as adipose (fat) tissue. Many theories address the causes of obesity, but no single

cause is apparent. Multiple factors influence weight, including genetic factors, hormone levels, activity levels,

metabolic rates, eating patterns, and stress.

The treatment of obesity requires reducing calorie intake while increasing calorie expenditure (exercise).

Because obesity is a chronic illness, it requires long-term lifestyle changes unless surgery is performed to effect

permanent changes in the digestion of food. Thus fad diets, no matter how effective they are in the short term,

remain inadequate for long-term weight control. A reduction in calorie intake of 500 kilocalories per day should

lead to a loss of 0.45 kilogram (1 pound) per week. This reduction can be increased by greater calorie reduction

or an accompanying exercise program. With exercise, the weight loss will be primarily fat, whereas without it,

muscle is lost as well. Exercise also leads to a “positive” addiction that makes it easier to sustain regular

exercising for long periods. It reduces the risk of heart disease and can improve self-esteem.
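The 500-kilocalorie rule of thumb above implies an energy equivalent of about 3,500 kcal per 0.45 kg (1 lb) of body fat, so the expected weekly loss for any sustained deficit can be sketched as follows (the constant is derived from the text's rule, not a measured value):

```python
# Derived from the rule above: 500 kcal/day for 7 days yields 0.45 kg.
KCAL_PER_KG_FAT = 500 * 7 / 0.45   # roughly 7,778 kcal per kilogram of fat

def expected_weekly_loss_kg(daily_deficit_kcal):
    """Expected weight loss per week for a sustained daily calorie deficit."""
    return daily_deficit_kcal * 7 / KCAL_PER_KG_FAT

print(round(expected_weekly_loss_kg(500), 2))   # 0.45
print(round(expected_weekly_loss_kg(1000), 2))  # 0.9
```

Doubling the deficit doubles the expected loss, which matches the text's note that greater calorie reduction (or added exercise) accelerates weight loss.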


Weight-reduction diets for the obese individual should be similar to those used by nonobese persons but with

fewer calories—namely, a low-fat diet that avoids high-calorie foods. One of the most popular and successful of

these diets is the very-low-calorie diet (VLCD) that results in rapid fat loss while minimizing the loss of lean

muscle tissue. These diets require supplementation with potassium and a vitamin-mineral complex. Fad diets

that eliminate one foodstuff, such as carbohydrate or protein, may give short-term results but fail in the long

term to maintain weight loss. Furthermore, these diets can lead to medically significant problems, such as

ketosis (a buildup of ketones in the body). Fad diets promoting high protein intake, particularly those

emphasizing high meat consumption and excluding key food groups such as grains and dairy, can increase a

person’s risk of heart disease.

Appetite-suppressing drugs have limited short-term and no long-term effectiveness. Surgery can provide long-

term benefits but may not be an option for some individuals. The most frequently performed procedures are

vertical banded gastroplasty and gastric bypass, both of which reduce the size of the stomach. Gastric bypass is

effective in teenagers as well as adults, with potentially lifelong benefits in young persons, especially when

combined with behavioral changes to improve eating habits and physical activity.

Diabetes mellitus

Diet is the cornerstone of diabetic treatment whether or not insulin is prescribed. The goal is to regulate the

patient’s blood glucose level to as close to normal as possible and for the patient to achieve and maintain an

ideal weight. Refined and simple sugars are avoided, and saturated fat is reduced by focusing the diet on poultry

and fish rather than meat as a major source of protein. Soluble fibre such as that found in beans and oatmeal is

recommended in contrast to the insoluble fibre found in wheat and bran. Artificial sweeteners may be used as

low-calorie replacements for simple sugar. In order to minimize the risk of long-term consequences, diabetic

patients generally must adhere to a balanced diet with restricted saturated fat intake while maintaining normal

weight. Meals of equal caloric content may be spaced throughout the day, especially when supplemental insulin

is needed.

Hypertension

Many patients with hypertension benefit from a low-sodium diet (reduced sodium chloride [table salt] intake)

and physicians often recommend this as part of the initial therapy for hypertension. If alterations in diet fail to

counteract the hypertension, drugs such as diuretics may be prescribed along with potassium supplements

(because most diuretics may deplete potassium). Other dietary measures are directed toward achieving an ideal

body weight because obesity contributes to hypertension and increases the risk of cardiovascular disease. An

adequate low-sodium diet can be achieved with a no-added-salt diet—that is, no salt is added to food after

preparation, and foods with a high-sodium content such as cured meats are avoided. Low-sodium diets should

be combined with increased potassium, which can be obtained by eating fruits, especially bananas, and

vegetables, or using salt substitutes.

Peptic ulcer

In the past a bland diet and frequent ingestion of milk and cream were the mainstays of ulcer treatment. Today

the only dietary regimen is the avoidance of irritating foods, such as spicy and highly seasoned foods, and

coffee. Certain drug therapies can decrease gastric acidity much more than antacids and other dietary

measures. Infection of the stomach by the bacterium Helicobacter pylori is recognized as a major factor in chronic gastritis and recurrent peptic ulcer in many patients. The bacterial infection requires a treatment

regimen consisting of antibiotics and a bismuth-containing compound, which is different from the treatment of

an ulcer that is not caused by H. pylori.

Osteoporosis

Although little can be done to treat osteoporosis once it is established, a great deal can be accomplished to

prevent it, as has been discussed above (see above Preventive medicine). Osteoporosis, which is a loss of bone density, occurs in men and women, often those over age 70, and is manifested primarily in hip and vertebral

fractures. It is most noticeable in postmenopausal women who have not taken estrogen. Estrogen replacement

therapy, which should be combined with supplemental calcium, is most effective in decreasing bone resorption

when begun during menopause, although it will provide some benefit if started later. In women who have an

intact uterus, estrogen must be taken with progesterone to reduce the risk of endometrial cancer (see hormone replacement therapy).

Biological therapy

Blood and blood cells

Blood transfusions were not clinically useful until about 1900, when the blood types A, B, and O were identified

and cross-matching of the donor’s blood against that of the recipient to prove compatibility became possible.

When blood with the A antigen (type A or AB) is given to someone with anti-A antibodies (type B or O blood),

lysis of the red blood cells occurs, which can be fatal. Persons with blood type O are universal red cell donors

because this blood type does not contain antigen A or B. However, because type O blood contains antibodies

against both A and B, patients with this blood type can receive only type O blood. Type O is the most common

blood type, occurring in 40 to 60 percent of people, depending on the selected population (e.g., about 40

percent of the white population has blood type O, while the majority of Native Americans are type O).

Conversely, persons with type AB blood are universal recipients of red blood cells. Having no antibodies against

A or B, they can receive type O, A, or B red blood cells (see ABO blood group system).
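The ABO rules above reduce to a subset check: a recipient has antibodies against whichever A or B antigens they lack, so donor red cells are compatible only if they carry no antigen the recipient lacks. A sketch of that rule (ABO only; Rh and other blood group systems are ignored here):

```python
# Antigens carried by the red cells of each ABO type.
ANTIGENS = {"O": set(), "A": {"A"}, "B": {"B"}, "AB": {"A", "B"}}

def red_cells_compatible(donor, recipient):
    """True if the donor's red-cell antigens are a subset of the recipient's,
    i.e., the recipient has no antibodies against the donor cells."""
    return ANTIGENS[donor] <= ANTIGENS[recipient]

print(red_cells_compatible("O", "AB"))  # True: type O is the universal red cell donor
print(red_cells_compatible("A", "O"))   # False: type O carries anti-A antibodies
print([d for d in ANTIGENS if red_cells_compatible(d, "AB")])  # all four types: AB is the universal recipient
```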

How Rh hemolytic disease develops.

Encyclopædia Britannica, Inc.

Most individuals are Rh-positive, which means that they have the D antigen of the complex Rh blood group

system. Approximately 15 percent of the population lacks this antigen; such individuals are described as Rh-

negative. Although anti-D antibodies are not naturally present, the antigen is so highly immunogenic (able to

provoke an immune response) that anti-D antibodies will usually develop if an Rh-negative person is transfused

with Rh-positive blood. Severe lysis of Rh-positive red blood cells will occur at any subsequent transfusion. The

condition erythroblastosis fetalis, or hemolytic disease of the newborn, occurs when Rh-positive infants are born

to Rh-negative mothers who have developed anti-D antibodies either from a previous transfusion or by maternal-

fetal exchange during a previous pregnancy. The maternal antibodies cross the placenta and cause destruction of

the red blood cells of the fetus, often leading to severe hemolytic anemia and brain damage, heart failure, or

death of the fetus. If an Rh-negative mother has not developed anti-D antibodies, she may be treated with Rho

(D) immune globulin in the 28th week of pregnancy, when the therapy is most effective. Rho (D) immune

globulin prevents the mother’s immune system from recognizing the fetal Rh-positive blood cells. However, if


the mother develops antibodies, the fetus and the mother must be closely monitored. If delivery occurs at the

normal time following a full-term pregnancy, the infant may receive a blood transfusion to replace damaged or

diseased red blood cells with healthy blood cells. Early delivery, however, is often necessary, and in severe

cases, blood transfusion in the womb is performed.

Whole blood, which contains red blood cells, plasma, platelets, and coagulation factors, is almost never used for

transfusions because most transfusions only require specific blood components. It can be used only up to 35

days after it has been drawn and is not always available, because most units of collected blood are used for

obtaining components.

Packed red blood cells are what remains of whole blood after the plasma and platelets have been removed. A

450-millilitre unit of whole blood is reduced to a 220-millilitre volume. Packed red blood cells are used most often

to raise a low hemoglobin or hematocrit level in patients with chronic anemia or mild hemorrhage.

Leukocyte-poor red blood cells are obtained by employing a filter to remove white blood cells (leukocytes) from

a unit of packed red blood cells. This type of transfusion is used to prevent febrile (fever) reactions in patients

who have had multiple febrile transfusion reactions in the past, presumably to white blood cell antigens.

Removal of leukocytes from blood components is referred to as leukocyte reduction, or leukoreduction. In

addition to lowering the risk of febrile transfusion reactions, leukoreduced blood components may have a

decreased chance of transmitting cytomegalovirus, a member of the herpesvirus family, as well as other strictly

cell-associated viruses. Transfusion using leukoreduced blood components also reduces the risk of immunization

to white cells and to platelet antigens and perhaps reduces the risk of the immunosuppressive effects of

transfusion.

A micrograph of a round aggregation of platelets (magnified 1,000×).

Dr. F. Gilbert/Centers for Disease Control and Prevention (CDC) (Image Number: 6645)

Platelet transfusions are used to prevent bleeding in patients with very low platelet counts, usually less than

20,000 cells per microlitre, and in those undergoing surgery or other invasive procedures whose counts are less

than 50,000 cells per microlitre.
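These two thresholds amount to a simple decision rule, sketched below as a simplification for illustration (not clinical guidance):

```python
def platelet_transfusion_indicated(count_per_microlitre, invasive_procedure=False):
    """Apply the thresholds above: below 20,000/uL in general, or below
    50,000/uL before surgery or another invasive procedure."""
    threshold = 50_000 if invasive_procedure else 20_000
    return count_per_microlitre < threshold

print(platelet_transfusion_indicated(15_000))                           # True
print(platelet_transfusion_indicated(30_000))                           # False
print(platelet_transfusion_indicated(30_000, invasive_procedure=True))  # True
```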

Autologous transfusion is the reinfusion of one’s own blood. The blood is obtained before surgery and its use

avoids transfusion reactions and transfusion-transmitted diseases. Donation can begin one month before surgery

and be repeated weekly, depending on the number of units likely to be needed. Intraoperative blood salvage is

another form of autologous transfusion. The intraoperative blood salvage device recovers the shed blood, which

is then anticoagulated, centrifuged to concentrate the red blood cells, and washed in a sterile centrifuge bowl.

The salvaged blood (primarily washed red cells) can be rapidly infused into the patient during surgical

procedures.

Plasma

Plasma, the liquid portion of the blood, is more than 90 percent water. It contains all the noncellular components

of whole blood including the coagulation factors, immunoglobulins and other proteins, and electrolytes. When

frozen, the coagulation factors remain stable for up to one year but are usually transfused within 24 hours after

thawing. However, some of the clotting factors, such as factor VIII (or antihemophilic factor, AHF) and factor V,

are very labile even after the plasma is frozen and require the addition of stabilizing substances (e.g., glycine) or

the use of special freezing procedures. Fresh frozen plasma is used in patients with multiple clotting factor

deficiencies, such as in those with severe liver disease or massive hemorrhage.

Cryoprecipitate is prepared from fresh frozen plasma and contains about half the original amount of coagulation

factors, although these factors are highly concentrated in a volume of 15–20 millilitres. Cryoprecipitate is used to

treat patients with deficiencies of factor VIII, von Willebrand factor, factor XIII, and fibrinogen because it is rich in

these factors.

Specific clotting factor concentrates are prepared from pooled plasma or pooled cryoprecipitate. Factor VIII

concentrate, the antihemophilic factor, is the preferred treatment for hemophilia A. A monoclonal antibody–

purified human factor VIII is also available. Factor IX complex, the prothrombin complex, is also available for

treating hemophilia B (factor IX deficiency).

Immunoglobulins

The five main classes of antibodies (immunoglobulins): IgG, IgA, IgD, IgE, and IgM.

Encyclopædia Britannica, Inc.

Immune serum globulin (ISG), obtained from the plasma of a pool of healthy donors, contains a mixture of

immunoglobulins, mainly IgG, with lesser amounts of IgM and IgA. It is used to provide passive immunity to a

variety of diseases such as measles, hepatitis A, and hypogammaglobulinemia. Intravenous immunoglobulins

(IVIGs) provide immediate antibody levels and avoid the need for painful intramuscular injections.

Hyperimmune serum globulin is prepared in the same way as the nonspecific immunoglobulin above but from

patients who are selected because of their high titres of specific antibodies. Rh-immune globulin is given to

pregnant Rh-negative women to prevent hemolytic disease of the newborn. Other hyperimmune serum globulins

are used to prevent hepatitis B, tetanus, rabies, and varicella-zoster (chickenpox or shingles) in exposed

individuals.

Bone marrow transplantation

Bone marrow transplantation does not involve the transfer of a discrete anatomic organ, as occurs in other

forms of transplantation, but it carries its own immunologic risk: the transplanted donor immune cells may react

against the recipient's tissues, a complication called graft-versus-host disease (GVHD). The main indications for

bone marrow transplantation are leukemia, aplastic anemia, and congenital immunologic defects.


Immunosuppressive drugs and irradiation are usually used to prepare the recipient. Close matching of tissue

between donor and recipient is also essential to minimize GVHD, with autologous transplantation being the best

method to avoid the disease (the patients donate their own bone marrow at times of remission to be used later).

Allogeneic (homologous) bone marrow transplants by a matched donor (preferably a sibling) are the most

common.

Bone marrow transplantation initially was not recommended for patients over age 50, because of the potential

for increased mortality and morbidity and because the incidence of GVHD increases in those over age 30. Many

transplant centres, however, have performed successful bone marrow transplantations in patients beyond age

50. People who donate bone marrow incur little risk, because they generate new marrow to replace that which has

been removed. General anesthesia is required, however, to aspirate the bone marrow from the iliac crests; the

marrow is then infused into the recipient.

Hematopoietic growth factors

The hematopoietic growth factors are potent regulators of blood cell proliferation and development in the bone

marrow. They are able to augment hematopoiesis when bone marrow dysfunction exists. Recombinant DNA

technology has made it possible to clone the genes responsible for many of these factors. Some are

commercially available and can be used to stimulate white blood cell development in patients with neutropenia

(a decrease in the number of neutrophilic leukocytes) associated with cancer chemotherapy.

The first to be developed was erythropoietin, which stimulates red blood cell production. It is used to treat the

anemia associated with chronic renal failure and that related to therapy with zidovudine (AZT) in patients

infected with HIV. It may also be useful in reversing anemia in cancer patients receiving chemotherapy.

Filgrastim (granulocyte colony-stimulating factor [G-CSF]) is used to stimulate the production of white blood

cells, which prevents infection in patients whose white blood cell count has diminished because of the effects of

anticancer drugs. G-CSF also mobilizes stem cells, prompting them to enter the peripheral blood circulation.

Stem cells can be harvested and used for bone marrow rescue. Another agent is sargramostim (granulocyte-

macrophage colony-stimulating factor [GM-CSF]), which is used to increase the white blood cell count in patients

with Hodgkin lymphoma or acute lymphoblastic leukemia who are undergoing autologous bone marrow

transplantation.

Biological response modifiers

Biological response modifiers, used to treat cancer, exert their antitumour effects by improving host defense

mechanisms against the tumour. They have a direct antiproliferative effect on tumour cells and also enhance the

ability of the host to tolerate damage by toxic chemicals that may be used to destroy the cancer.

Biological response modifiers include monoclonal antibodies, immunomodulating agents such as the bacille

Calmette-Guérin (BCG) vaccine used against tuberculosis, lymphokines and cytokines such as interleukin-2, and

the interferons.


Three vials filled with human leukocyte interferon.

National Institutes of Health

The three major classes of interferons are interferon-α, produced by white blood cells; interferon-β, produced by

fibroblasts; and interferon-γ, produced by lymphocytes. The interferons are proteins produced by these cells in

response to viral infections or other stimuli; they have antiviral, antiproliferative, and immunomodulatory

properties that make them useful in treating some viral infections and cancers. They do not act directly on the

viruses but rather indirectly, increasing the resistance of cells to viral infections. This can be particularly useful in

patients who have an impaired immune system and a diminished ability to fight viral infections, especially those

with AIDS.

Interferon-α is produced by a recombinant DNA process using genetically engineered Escherichia coli. Recombinant interferon-α appears to be most effective against hairy-cell leukemia and chronic myelogenous

leukemia, lymphoma, multiple myeloma, AIDS-associated Kaposi sarcoma, and chronic hepatitis C. It is

moderately effective in treating melanoma, renal cell carcinoma, and carcinoid. It also can enhance the

effectiveness of chemotherapy in some cancers. Unfortunately, treatment with this drug can be quite toxic.

Interferon-γ may prove useful in treating a different set of diseases—for example, chronic conditions such as

rheumatoid arthritis.

Hormones

The term hormone is derived from the Greek hormaein, meaning “to set in motion.” It refers to a chemical substance that has a regulatory effect on a certain organ or organs. There are sex hormones such as estrogen

and progesterone, thyroid hormones, insulin, adrenal cortical and pituitary hormones, and growth hormones.

Estrogens (estradiol, estrone, and estriol) promote the growth and development of the female reproductive

system—the vagina, uterus, and fallopian tubes—and the breasts. They are responsible for the development of

secondary sex characteristics—growth of pubic and axillary hair and pigmentation of the nipples and genitals—

and contribute to bone formation. The decrease in estrogen after menopause contributes to bone

demineralization and osteoporosis, and hormone replacement therapy is often recommended to counteract this

occurrence. Postmenopausal estrogen also prevents atrophic vaginitis, in which the vaginal mucosa becomes

thin and friable. Estrogens can be administered orally, transdermally (through the skin), vaginally, or

intramuscularly.

Progestins combined with estrogens constitute the oral contraceptives that inhibit ovulation by affecting the

hypothalamus and pituitary. Progestin-only pills and injections are also effective contraceptives; they work by

forming a thick cervical mucus that is relatively impenetrable to sperm. The mortality associated with all forms


of birth control is less than that associated with childbirth, except for women older than age 35 who smoke

cigarettes. Their risk of stroke, heart attacks, and other cardiovascular problems is greatly increased, so their

use of oral contraceptives is contraindicated. Levonorgestrel is a form of synthetic progestin that provides birth

control.

Androgens consist of testosterone and its derivatives, the anabolic steroids. Testosterone is produced in the

testes in males, and small amounts are produced by the ovary and adrenal cortex in females. Testosterone is

used to stimulate sexual organ development in androgen-deficient males and to initiate puberty in selected boys

with delayed growth. The anabolic steroids are testosterone derivatives that provide anabolic activity with less

stimulation of growth of the sexual organs. Anabolic steroids have been used to increase muscle strength and

endurance. However, this practice can have serious long-term consequences, such as the development of

atherosclerotic disease because of the steroids’ effects on the blood lipids, especially the lowering of high-

density lipoproteins. Their use in juvenile athletes can cause premature epiphyseal closure (early ossification of

the growth zone of bones), compromising the attainment of their full adult height, and can have an impact on

sexual development.

Human chorionic gonadotropin (HCG), a hormone produced by cells of the placenta, can be extracted from the

urine of pregnant women within days after fertilization and thus is used in the early detection of pregnancy. It is also

used to stimulate descent of the testicles in boys with prepubertal cryptorchidism and to treat infertility in men

with underdeveloped testicles.

Growth hormone, produced by the pituitary gland, stimulates linear growth and regulates metabolic functions.

Inadequate secretion of this hormone by the pituitary will impair growth in children, which is evidenced by their

poor rate of growth and delayed bone age (i.e., slowed bone development). A synthetic preparation of the

hormone is used to treat children who have a congenital deficiency of growth hormone.

Adrenal corticosteroids are any of the steroid hormones produced by the adrenal cortex except for the sex

hormones. These include the mineralocorticoids (aldosterone) and glucocorticoids (cortisol), the secretion of

which is regulated by the adrenocorticotrophic hormone (ACTH) produced in the anterior pituitary.

Overproduction of ACTH by the pituitary gland leads to excessive secretion of glucocorticoids from the adrenal

gland, resulting in Cushing syndrome. This syndrome also can result from an increased concentration of

corticosteroids secreted by benign and malignant tumours of the adrenal gland. Conversely, the production of an

insufficient amount of adrenal corticosteroids results in primary adrenocortical insufficiency (Addison disease).

The glucocorticoids are used primarily for their potent anti-inflammatory effects in rheumatic disorders, collagen

diseases, dermatologic diseases, allergic disorders, and respiratory diseases and for the palliative management

of leukemia and lymphoma. Cortisone and hydrocortisone are less potent than prednisone and triamcinolone,

whereas dexamethasone and betamethasone have the greatest anti-inflammatory potency. Disadvantages of

corticosteroid use include the masking of signs of infection, an increase in the risk of peptic ulcer, the

development of edema and muscle weakness, the loss of bone substance (osteoporosis), and glucose

intolerance resembling diabetes mellitus.

Insulin, secreted by the pancreas, is the principal hormone governing glucose metabolism. Insulin preparations

were extracted from beef or pork pancreas until recombinant DNA technology made it possible to manufacture

human insulin. Other antidiabetic agents are also available for treating type 2 diabetes. The sulfonylureas are

oral hypoglycemic agents used as adjuncts to diet and exercise in the treatment of type 2 diabetes.

Thyroid hormones include thyroxine and triiodothyronine, which regulate tissue metabolism. Natural desiccated

thyroid produced from beef and pork and the synthetic derivatives levothyroxine and liothyronine are used in

replacement therapy to treat hypothyroidism that results from any cause.


Drug therapy

General features

Principles of drug uptake and distribution

Study of the factors that influence the movement of drugs throughout the body is called pharmacokinetics,

which includes the absorption, distribution, localization in tissues, biotransformation, and excretion of drugs. The

study of the actions of the drugs and their effects is called pharmacodynamics. Before a drug can be effective, it

must be absorbed and distributed throughout the body. Drugs taken orally may be absorbed by the intestines at

different rates, some being absorbed rapidly, some more slowly. Even rapidly absorbed drugs can be prepared in

ways that slow the degree of absorption and permit them to remain effective for 12 hours or longer. Drugs

administered either intravenously or intramuscularly bypass problems of absorption, but dosage calculation is

more critical.

Individuals respond differently to the same drug. Elderly persons, because of reduced kidney and liver function,

may metabolize and excrete drugs more slowly. Because of this and other factors, the elderly usually require

lower doses of medication than do younger people.

Other factors that affect the individual’s response to drugs are the presence of disease, nutritional status,

genetics, and the presence of other drugs in the system. Furthermore, just as the pain threshold

varies among individuals, so does the response to drugs. Some people need higher-than-average doses; some,

being very sensitive to drugs, cannot tolerate even average doses, and they experience side effects when others

do not.

Infants and children may have different rates of absorption than adults because bowel motility is irregular or

gastric acidity is decreased. Drug distribution may be different in some people, such as premature infants who

have little fatty tissue and a greater proportion of body water. Metabolic rates, which affect pharmacokinetics,

are much higher during childhood. The dosages of drugs for children are usually calculated on the basis of

weight (milligrams per kilogram) or on the basis of body surface area (milligrams per square metre). If a drug

has a wide margin of safety, it may be given as a fraction of the adult dose based on age, but the great variation

in size among children of the same age complicates this computation. Children are not small adults, and drug

dosages may be quite different than they are for adults.
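
The weight-based and surface-area-based calculations described above can be sketched in a few lines. The Mosteller formula is one commonly used way to estimate body surface area from height and weight; the child's measurements and the mg/kg and mg/m² figures below are illustrative assumptions, not dosing guidance for any real drug.

```python
import math

def dose_by_weight(mg_per_kg, weight_kg):
    """Weight-based dose in milligrams: mg/kg times body weight in kg."""
    return mg_per_kg * weight_kg

def body_surface_area(height_cm, weight_kg):
    """Body surface area in square metres (Mosteller formula)."""
    return math.sqrt(height_cm * weight_kg / 3600)

def dose_by_bsa(mg_per_m2, height_cm, weight_kg):
    """Surface-area-based dose in milligrams: mg/m^2 times BSA in m^2."""
    return mg_per_m2 * body_surface_area(height_cm, weight_kg)

# A hypothetical 20-kg, 110-cm child; 15 mg/kg and 500 mg/m^2 are
# illustrative parameters only.
print(dose_by_weight(15, 20))               # 300 (mg)
print(round(dose_by_bsa(500, 110, 20), 1))  # 390.9 (mg)
```

Note how the two methods give different answers for the same child, which is one reason pediatric dosing cannot simply scale down an adult dose.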

The elderly are particularly susceptible to adverse drug effects because they often have multiple illnesses that

require their taking various medications, some of which may be incompatible with others. In addition to

decreased renal and hepatic function, gastric acid secretion decreases with age, and arteriosclerosis narrows the

arteries, decreasing blood flow to the intestines and other organs. The precautions followed in prescribing

medication for the elderly are an excellent example of the principle that should govern all drug therapy—drugs

should be used in the lowest effective dose, especially because side effects increase with concentration.

Because of illness or frailty, elderly people often have less reserve and may not be able to tolerate minor side

effects that younger adults might not even notice.

When drugs are given in repeated doses, a steady state is achieved: the amount being administered equals the

amount being excreted or metabolized. With some drugs, however, it may be difficult to determine the proper

dose because of individual variations. In these cases, determining the plasma level of the drug may be useful,

especially if the therapeutic window (i.e., the concentration above which the drug is toxic and below which it is

ineffective) is relatively small. Plasma levels of phenytoin, used to control epilepsy; digitalis, prescribed to

combat heart failure; and lithium, used to moderate bipolar disorder, are often monitored to ensure safety.
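
The logic of monitoring a plasma level against a therapeutic window can be expressed as a simple classification. The 10–20 window and sample levels below are made-up numbers for illustration, not real clinical ranges for phenytoin, digitalis, lithium, or any other monitored drug.

```python
def classify_level(level, low, high):
    """Classify a measured plasma drug level against a therapeutic window
    (the range between the ineffective and the toxic concentrations)."""
    if level < low:
        return "subtherapeutic"
    if level > high:
        return "potentially toxic"
    return "within window"

# Hypothetical window of 10-20 concentration units.
print(classify_level(15, 10, 20))  # within window
print(classify_level(25, 10, 20))  # potentially toxic
```

A narrow window means small dosing errors can push a patient from "ineffective" to "toxic," which is why plasma levels of such drugs are checked rather than assumed.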


Indications for use

The purpose of using drugs is to relieve symptoms, treat infection, reduce the risk of future disease, and destroy

selected cells such as in the chemotherapeutic treatment of cancer. The best treatment, however, may not

require a drug at all. Recognizing that no effective medication exists is just as important as knowing which one

to select. When more than one drug is useful, physicians select the one that is most effective and least

hazardous. A recently developed drug may promise better results, yet it will be less predictable and possibly

more expensive.

Every drug has multiple actions: it will affect organs and systems beyond those to which it is specifically

targeted. Some patients may also experience idiosyncratic effects (abnormal reactions peculiar to that

individual) as well as allergic reactions to certain drugs—additional reasons to select drugs carefully and avoid

their use altogether when simpler measures will work. A case in point is the belief that penicillin or other

antibiotics will cure viral infections—they will not. While new antiviral drugs are under development, using

antibiotics unnecessarily is unwise and potentially dangerous. The number of drug-resistant organisms is

growing and must be counteracted by the judicious prescribing of these chemicals.

Unnecessary drug use also increases the possibility of drug interactions that may interfere with drug

effectiveness. Interaction can occur in the stomach or intestinal tract where the presence of one drug may

interfere with the absorption of another. Antacids, for example, reduce the absorption of the antibiotic

tetracycline by forming insoluble complexes. Of greater importance is the interference of one drug with another.

Some drugs can inhibit the metabolism of another drug, which allows the amount of the drug to accumulate in

the system, leading to potential toxicity if the dose is not decreased. Cimetidine, a drug used to treat peptic

ulcers, causes few side effects by itself, but it does inhibit drug-metabolizing microsomal enzymes in the liver,

increasing concentrations of many drugs that depend on these enzymes to be metabolized. This inhibition can

be serious if the other drug is the anticoagulant warfarin. Bleeding can result if the dose is not reduced. Many

other drugs are affected, such as antihypertensives (e.g., calcium channel blockers), antiarrhythmics (e.g.,

quinidine), and anticonvulsants (e.g., phenytoin). One drug can also decrease the renal excretion of another.

Sometimes this effect is used to advantage, as, for example, when probenecid is given with penicillin to

decrease its removal and thereby increase its concentration in the blood. But this type of interaction can be

deadly: quinidine, for instance, can reduce the clearance of digoxin, a drug used to treat heart failure, potentially

increasing its concentration to dangerous levels. Two drugs can also have additive effects, leading to toxicity,

though either one alone would be therapeutic.

Problems with drug interactions can occur when a patient is being treated by different physicians and one

physician is not aware of the drug(s) that another has prescribed. Sometimes a physician may prescribe a drug

to treat a symptom that actually is a side effect of another drug. Discontinuing the first drug is preferable to

adding another that may have side effects of its own. When a new symptom occurs, a recently initiated drug

usually is suspected before other causes are investigated. Patients are advised to inform their physicians of any

new drugs they are taking, as well as consult with the pharmacist about possible interactions that a

nonprescription drug might have with a prescription drug already being taken. Having a personal physician who

monitors all the drugs, both prescription and nonprescription, that the patient is taking is a wise course to follow.

In the United States, responsibility for assuring the safety and efficacy of prescription drugs is delegated to the

Food and Drug Administration (FDA). This includes the approval of new drugs, identification of new indications,

official labeling (to prevent unwarranted claims), surveillance of adverse drug reactions, and approval of

methods of manufacture. Before an investigational new drug (IND) can be tested in humans, it must be

submitted to and approved by the FDA. If clinical trials are successful, a new drug application (NDA) must be


approved before it can be licensed and sold. This process usually takes years, but if the drug provides benefit to

patients with life-threatening illnesses when existing treatments do not, then accelerated approval is possible.

Physicians can receive permission to use an unapproved drug for a single patient. This consent, called

emergency use and sometimes referred to as single-patient compassionate use, is granted if the situation is

desperate and no other treatment is available. The FDA also sometimes grants approval to acquire drugs from

other countries that are not available in the United States if a life-threatening situation seems to warrant this

action. Another way to gain access to an investigational drug is to participate in a clinical trial. If it is a well-

controlled, randomized, double-blind trial rather than an “open trial”—in which the investigator is not “blinded”

and knows who is the subject and who is the control—the patient runs the risk of being given a placebo rather

than the active drug. The Federal Trade Commission (FTC) has responsibility for “truth in advertising” to assure

that false or misleading claims are not made about foods, over-the-counter drugs, or cosmetics.

Similar drug regulatory agencies exist in other countries and governing bodies as well. For example, China has

its own Food and Drug Administration, which regulates drugs, medical devices, and cosmetics. In Europe the

European Medicines Agency approves drugs, while also overseeing the scientific evaluation of medicines and

monitoring the safety and effectiveness of drugs marketed within countries in the European Union.

A rare disease presents a unique problem in treatment because the number of patients with the disease is so

small that it is not worthwhile for companies to go through the lengthy and expensive process required for

approval and marketing. In the United States, drugs produced for such cases are made available under the

Orphan Drug Act of 1983, which was intended to stimulate the development of drugs for rare diseases.

Controlled substances are drugs that foster dependence and have the potential for abuse. In the United States,

the Drug Enforcement Administration (DEA) regulates their manufacture, prescribing, and dispensing. Controlled

substances are divided into five classes, or schedules, based on their potential for abuse or physical and

psychological dependence. Schedule I encompasses heroin and other drugs with a high potential for abuse and

no accepted medical use in the United States. Schedule II drugs, including narcotics such as opium and cocaine

and stimulants such as amphetamines, have a high potential for abuse and dependence. Schedule III includes

those drugs such as certain stimulants, depressants, barbiturates, and preparations containing limited amounts

of codeine that cause moderate dependence. Schedule IV contains drugs that have limited potential for abuse or

dependence, and includes some sedatives, antianxiety agents, and nonnarcotic analgesics. Schedule V drugs

have an even lower potential for abuse than do schedule IV substances. Some, such as cough medicines and

antidiarrheal agents containing limited amounts of codeine, can be purchased without a prescription. Physicians

must have a DEA registration number to prescribe any controlled substance. Special triplicate prescription forms

are required in certain states for schedule II drugs, and a patient’s supply of these drugs cannot be replenished

without a new prescription. Similar drug enforcement agencies exist in other countries and international regions

as well, including the European Union, where illicit drug use and drug trafficking are policed by the European

Drug Enforcement Agency.
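
The five schedules described above can be collected into a small lookup table. The descriptions paraphrase this passage and the example drugs are the ones the text names; this is a summary aid, not the DEA's legal definitions.

```python
# Paraphrased summary of the five U.S. controlled-substance schedules.
SCHEDULES = {
    "I": "high abuse potential, no accepted U.S. medical use (e.g., heroin)",
    "II": "high potential for abuse and dependence (e.g., opium, cocaine, amphetamines)",
    "III": "moderate dependence (e.g., certain stimulants, depressants, barbiturates, limited-codeine preparations)",
    "IV": "limited potential for abuse or dependence (e.g., some sedatives, antianxiety agents)",
    "V": "lowest abuse potential; some sold without a prescription (e.g., certain cough and antidiarrheal preparations)",
}

def describe(schedule):
    """Look up a schedule by its Roman numeral, case-insensitively."""
    return SCHEDULES.get(schedule.upper(), "unknown schedule")

print(describe("ii"))
```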

Systemic drug therapy

Systemic drug therapy involves treatment that affects the body as a whole or that acts specifically on systems

that involve the entire body, such as the cardiovascular, respiratory, gastrointestinal, or nervous systems.

Mental disorders also are treated systemically.


The cardiovascular system

Cross-sectional diagrams of human blood vessels showing a normal, healthy artery and a narrowed,…

Encyclopædia Britannica, Inc.

Atherosclerosis, the most common form of arteriosclerosis (generally called hardening of the arteries), is the

thickening of large and medium-size arterial walls by cholesterol deposits that form plaques, causing the size of

the arterial lumen to diminish. This narrowing compromises the artery’s ability to supply blood to tissues and is

most serious when the coronary arteries (those feeding the heart muscle) become clogged. A heart attack, with

the death of a portion of the heart muscle, may result; if the damage is extensive, sudden death can follow. The

arteriosclerotic process can be slowed or even reversed by lowering serum cholesterol, especially the low-

density lipoprotein (LDL) component. Cholesterol-reducing drugs, a low-cholesterol diet, exercise, and weight

control can help. One form of cholesterol, high-density lipoprotein (HDL), is actually beneficial and helps to carry

the harmful cholesterol out of the arterial wall. While some drugs will raise blood levels of HDL cholesterol, the

most effective means of increasing it is to avoid smoking and to increase exercise.

Narrowing of the coronary arteries can reduce the flow of blood to the heart and cause chest pain (angina

pectoris). This condition can be treated with drugs such as nitroglycerin that primarily dilate the coronary

arteries or with drugs such as the beta-blockers and calcium channel blockers that primarily reduce myocardial

oxygen requirements.

Drugs that increase the strength of the heart muscle have long been used to treat congestive heart failure.

Digitalis, derived from the foxglove plant, was the first drug found to have a positive inotropic effect (affects the

force of muscular contraction) on the heart. Digoxin, the most commonly used form of this substance, can be

given orally or intravenously. Digitalis has a relatively narrow therapeutic range: too much is toxic and can cause

cardiac arrhythmias. Because toxicity is increased if the patient’s serum potassium is low, close attention is paid

to maintaining adequate potassium levels.

Drugs that dilate arterial smooth muscle and lower peripheral resistance (vasodilators) are also effective in

treating heart failure by reducing the workload of the heart. The angiotensin converting enzyme (ACE) inhibitors

are vasodilators used to treat heart failure. They also lower blood pressure in patients who are hypertensive.

The majority of cases of hypertension are due to unknown causes and are called essential, or primary,

hypertension. Approximately five percent of all hypertensive patients have secondary hypertension, which is

high blood pressure that results from a known cause (e.g., kidney disease). While the first treatment of

hypertension typically is to have the patient achieve normal weight, exercise, and reduce sodium in the diet, a

wide variety of drugs are available to lower blood pressure, whether it be the systolic or diastolic measurement


that is too high. A stepped-care approach has traditionally been used, starting with a single, well-tolerated drug,

such as a diuretic. If it proves inadequate, a second drug is added and the combination manipulated until the

most effective regimen with the fewest side effects is found. Occasionally, a third drug may be necessary.
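
The stepped-care approach just described is essentially an escalation loop. The sketch below models only its shape: the drug names are placeholders, and `controlled_after_n_drugs` stands in for the patient's actual response; it is not a treatment recommendation.

```python
def stepped_care(controlled_after_n_drugs,
                 steps=("diuretic", "second agent", "third agent")):
    """Sketch of stepped care: start with one well-tolerated drug and add
    agents one at a time until blood pressure is controlled or the list
    of steps is exhausted."""
    regimen = []
    for drug in steps:
        regimen.append(drug)
        if len(regimen) >= controlled_after_n_drugs:
            break  # pressure controlled; stop escalating
    return regimen

print(stepped_care(2))  # ['diuretic', 'second agent']
```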

The respiratory system

The drugs most frequently used for respiratory treatment are those that relieve cough in acute bronchitis.

Antibiotics are effective only if the cause is bacterial. Most often, however, a virus is responsible, and the

symptoms rather than the cause of the disease are treated, primarily with drugs that loosen or liquefy thick

mucus (expectorants) and humidification (steam) that soothes the irritated mucous lining. While these

treatments are widely prescribed, they have not been proven effective clinically. Likewise, although cough

suppressants are used to reduce unnecessary coughing, they subvert the cough’s natural protective mechanism

of ridding the airway of secretions and foreign substances. A commonly used non-opioid cough suppressant is

dextromethorphan, which is nearly as effective as codeine and is available in over-the-counter preparations. If

nasal congestion and postnasal drainage are present, an antihistamine and decongestant may be useful.

Asthma is a narrowing of the airways characterized by episodic wheezing. Bronchodilators are effective in a mild

to moderate attack. Frequent attacks require long-term treatment with anti-inflammatory drugs such as

cromolyn sodium, nedocromil sodium, or a corticosteroid.

Chronic obstructive pulmonary disease (COPD) manifests late in life with chronic cough and shortness of breath.

Although most of the damage has already occurred, some benefit can still be obtained by stopping smoking,

using bronchodilators, and administering antibiotics early when superimposed infection occurs. Supplemental

oxygen therapy is used in severe cases.

The gastrointestinal system

Drugs are frequently used to reduce lower bowel activity when diarrhea occurs or to increase activity if

constipation is the problem. Laxatives in the form of stimulants (e.g., cascara sagrada), bulk-forming agents (e.

g., psyllium seed), osmotics (e.g., milk of magnesia), or lubricants (e.g., mineral oil) are commonly used.

Diarrhea must be treated with appropriate antibiotics if the cause is bacterial, as in traveler’s diarrhea, or with

an antiparasitic agent if a parasite is to blame. Antidiarrheal agents include narcotics (codeine, paregoric),

nonnarcotic analogs (loperamide hydrochloride), and bismuth subsalicylate (Pepto-Bismol).

Chronic gastritis and recurrent peptic ulcer often result from infection with Helicobacter pylori and are treated with antibiotics and bismuth. Ulcers not caused by H. pylori are treated with drugs that reduce the secretion of gastric acid, such as the H2-receptor antagonists (e.g., cimetidine), or agents that form a barrier protecting the

stomach against the acid (e.g., sucralfate). Antacids are used for additional symptomatic relief.

Nausea and vomiting are protective reflexes that should not be totally suppressed without the underlying cause

being known. They may be psychogenic or caused by gastrointestinal or central nervous system disorders,

medications, or systemic conditions (pregnancy or diabetic acidosis). Among the most widely used antiemetics

are the phenothiazines (e.g., Compazine), but new drugs continue to be developed that help control the

vomiting related to cancer chemotherapy.

The nervous system

Alzheimer disease is the most prevalent form of dementia (loss of intellectual function), and treatment had been

primarily supportive until drugs that showed modest promise for improving cognition (e.g., tacrine) were


developed. Evidence that the continual use of cognitive faculties slows memory loss in the elderly has been

supported by research showing that older persons who are stimulated regularly with memory exercises retain

information better than those who are not.

Parkinsonism, named after English surgeon James Parkinson, who described the condition in 1817 as “the

shaking palsy,” is a chronic neurological disorder involving progressive loss of motor function. Although no

treatment is known to halt the advance of the disease, levodopa and certain other drugs can significantly relieve

the symptoms of tremor, muscular rigidity, and postural instability.

Migraine, a condition thought to arise in part from abnormal neurophysiological responses, can be alleviated by

one of the many forms of ergotamine and nonsteroidal anti-inflammatory drugs (NSAIDs). Sumatriptan is a drug

that has significantly improved the treatment of severe migraine attacks, causing fewer side effects than

ergotamine or dihydroergotamine.

Mental disorders

Some of the greatest recent advances in pharmacotherapy have been in the treatment of anxiety disorders and

depression. The benzodiazepines were the mainstay of treatment for anxiety disorders beginning in the 1960s,

although their prolonged use incurs the risk of mild dependence. The azapirones (e.g., buspirone) have little

potential for producing dependency and are not affected by alcohol intake. Newer and safer medications are also

available for treating panic disorder and obsessive-compulsive disorder.

Depression is among the most common life-threatening diseases, and considerable advances have been made in

managing this disorder. The selective serotonin reuptake inhibitors (SSRIs), such as Prozac, match previous

antidepressants in effectiveness and have fewer unpleasant side effects. They also are safer if an overdose is

taken, which is a significant threat in the case of severely depressed patients.

Local drug therapy

Local anesthetics produce loss of sensation and make it possible for many surgical procedures to be performed

without a general anesthetic. Barring any complications, the need for the patient to remain overnight in the

hospital is obviated. Local anesthetics are also used to anesthetize specific peripheral nerves or larger nerve

trunks. These nerve blocks can provide relief in painful conditions like rib fractures, but they are most frequently

used to anesthetize an extremity during hand or foot surgery. Spinal anesthesia and epidural anesthesia, in

which a local anesthetic is injected into the subarachnoid or epidural space of the lumbar (lower back) area of

the spinal canal, provide pain relief during childbirth or surgery that involves the pelvic area yet lack the

problems associated with a general anesthetic. Topical anesthetics, a type of local anesthetic, are also used on

the skin, in the eye’s conjunctiva and cornea, and in the mucous membranes of the nose, mouth, larynx, vagina,

or urethra.

Medications prescribed for dermatologic disorders account for a large amount of local drug therapy, whether it

be a substance to stimulate hair growth or to soothe a burning and itching rash. Many different corticosteroid

preparations are available to treat eczema, allergic reactions to substances like poison ivy, or seborrheic

dermatitis. Sunblocks are used to protect the skin against ultraviolet rays and prevent skin cancer that can result

from exposure to such radiation. Acne is controlled with skin cleansers, keratolytics to promote peeling, and

topical antibiotics to prevent or treat infection. Physicians use various wet dressings, lotions, gels, creams, and

ointments to treat acutely inflamed weeping and crusting sores and to moisturize and protect dry, cracked, and

scaling skin. Burns heal more rapidly and with less scarring when treated appropriately with topical preparations

like silver sulfadiazine. Candida infections of the mucous lining of the mouth (i.e., thrush) or the vagina respond to nystatin or one of the imidazole drugs. The traditional treatment of genital warts has been the topical


application of podophyllin, a crude resin. The emergence of new technologies in the latter part of the 20th

century, however, made possible the development of interferon-α, which is effective in the majority of patients

when injected into the lesion itself or subcutaneously below it.

Most ophthalmic drugs are local—eye drops to treat glaucoma, steroid-antibacterial mixtures to treat infection,

artificial tears for dry-eye syndromes, or mydriatics (drugs causing dilation of the pupil), like atropine, that

facilitate refraction and internal examination of the eye.

Chemotherapy

Chemotherapy is the treatment of disease using chemical agents that are intended to eliminate the causative

organism without harming the patient. In the strict sense, this applies to the use of antibiotics to treat such

invading organisms as bacteria, viruses, fungi, or parasites. The term is commonly used, however, to describe

the use of drugs to treat cancer, in which case the target is not a causative organism but abnormally multiplying

cells. The purpose of the therapy is to selectively kill tumour cells and to leave normal cells unharmed—a very

difficult task because most drugs have a narrow therapeutic zone beyond which they harm normal cells as well

as cancer cells.

Watch the subtle disruptive effect that the widely used anticancer drug Taxol has inside a cell.

Displayed by permission of The Regents of the University of California. All rights reserved.

Anticancer drugs are only relatively selective for cancer cells, and the toughest task for the physician is to select

a drug that will destroy the most cancer cells, leave normal cells unharmed, and cause the fewest unpleasant

and undesirable side effects. The therapeutic goal is to favourably balance the risk-benefit ratio in which the

morbidity of the treatment is weighed against its potential benefits. If a treatment causes patients to be

miserable and has only a slight chance of prolonging life, many patients will forego further treatment. However,

if the potential for significantly prolonging survival by aggressive therapy exists, the patient may decide to

continue with the therapy.

The effectiveness of chemotherapy depends on the highest possible concentration of the drug being at the

tumour site sufficiently long to kill the tumour cells. The maximal opportunity for a cure exists in the early stage

of the disease, when the tumour is small and localized. The larger and more disseminated the tumour, the more

difficult it is to eradicate. The stage the tumour is in will also determine the route of administration, which can be

oral, intravenous, intra-abdominal, intrathecal (into the subarachnoid space of the spinal cord), or intra-arterial—

specifically, into the artery feeding the tumour.

Suppression of bone marrow activity, which results in a decrease in blood cell production, represents the most

limiting factor in chemotherapy. Because chemotherapy is most effective when used at the highest nontoxic

dose, the interval between treatments may need to be prolonged to prevent complete bone marrow


suppression. Supportive measures undertaken when bone marrow suppression occurs include repeated platelet

transfusions (to combat bleeding caused by diminished platelet production) and white blood cell transfusions (to

control infection).

Adjuvant chemotherapy is the use of drugs to eradicate or suppress residual disease after surgery or irradiation

has been used to treat the tumour. This is necessary because distant micrometastases often occur beyond the

primary tumour site. Adjuvant chemotherapy reduces the rate of recurrence of some cancers, especially ovarian

cancer, osteogenic sarcoma, colorectal cancer, and nephroblastoma. The antiestrogen drug tamoxifen has been

effective in selected patients with breast cancer.

Surgical therapy

Major categories of surgery

Wound treatment

Wounds, whether caused by accidental injury or a surgical scalpel, heal in three ways: (1) primary intention

(wound edges are brought together, as in a clean surgical wound), (2) secondary intention (the wound is left

open and heals by epithelization), or (3) third intention, or delayed closure (the wound is identified as potentially

infected, is left open until contamination is minimized, and is then closed).

Choosing which method is best depends on whether excessive bacterial contamination is present, whether all

necrotic material and foreign bodies can be identified and removed, and whether bleeding can be adequately

controlled. Normal healing can occur only if the wound edges are clean and can be closely apposed without

undue stress on the tissue. An adequate blood supply to the wound is essential. If the tissue is tight and the

edges cannot be closed without tension, the blood supply will be compromised. Cutting under the skin to free it

from the underlying subcutaneous tissue may allow the edges to be brought together without tension. If direct

approximation is still not possible, then skin grafts or flaps are used for closure.

Wound closure begins with a thorough cleansing of the wound and the instillation of a local anesthetic, usually

lidocaine, which takes effect quickly and lasts for one to two hours. If the wound is contaminated, further

cleansing is performed after instilling the local anesthetic, especially if foreign material is present. If the injury

resulted from a fall on gravel or asphalt as in some motorcycle accidents, then aggressive scrubbing is needed

to remove the many small pieces imbedded beneath the skin. High-pressure irrigation with saline solution will

remove most foreign material and reduce the risk of subsequent infection. Contaminated wounds must be

considered to be prone to infection with Clostridium tetani, which causes tetanus, and appropriate immunization is given to prevent the condition.

Sutures are the most commonly used means of wound closure, although staples and adhesive tissue tape may

be more appropriate in certain circumstances. Silk sutures were originally used to close skin wounds, but nylon

is stronger and causes less tissue reaction. Ideally, sutures are of the smallest possible diameter that will still

maintain approximation of the wound edges. Absorbable sutures made of catgut (made not from cat but from

sheep intestines) or a synthetic material such as polyglycolic acid are used to approximate the deeper layers of

tissue beneath the skin so that tissue reaction will be lessened. The objective is to eliminate any unfilled space

that could delay healing or allow fluid to accumulate. Drains connected to closed suction are used to prevent the

collection of fluid when it is likely to accumulate, but drains serve as a source of contamination and are used

infrequently. Staples permit faster closure of the skin but are less precise than sutures. When the edges can be


brought together easily and without tension, tape is very useful. Although it is comfortable, easy to apply, and

avoids the marks left by sutures, tape may come loose or be removed by the patient and is less successful if

much wound edema occurs.

Sutures typically are removed after 3 to 14 days, depending on the area involved, the cosmetic result desired,

the blood supply to the area, and the amount of reaction that occurs around the sutures. Sutures on the face

usually are removed in three to five days to avoid suture marks. Tape is often used to provide support for the

remainder of the time the wound needs to heal. Sutures on the trunk or leg will be removed after 7 to 10 days or

longer if there is much tension on the wound. Tension and scarring are minimized in surgical procedures by

making an incision parallel to normal skin lines, as in the horizontal neck incision for thyroidectomy.
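As an illustration only, the removal windows described above can be written as a simple lookup table. The site keys and the fallback range below are paraphrased from the text and are not clinical guidance:

```python
# Illustrative sketch of the suture-removal timing described above.
# Site names and day ranges are paraphrased from the text, not taken
# from a clinical reference.

SUTURE_REMOVAL_DAYS = {
    "face": (3, 5),    # removed early to avoid suture marks
    "trunk": (7, 10),  # longer if there is much tension on the wound
    "leg": (7, 10),
}

def removal_window(site: str) -> tuple:
    """Return the (earliest, latest) suture-removal day for a site,
    falling back to the overall 3-to-14-day range given in the text."""
    return SUTURE_REMOVAL_DAYS.get(site, (3, 14))

print(removal_window("face"))  # (3, 5)
```

The dictionary lookup with a default mirrors how the text frames the timing: a few well-known sites with specific windows, and a broad 3-to-14-day range for everything else.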

Dressings protect the wound from external contamination and facilitate absorption of drainage. Because a

surgical wound is most susceptible to surface contamination during the first 24 hours, an occlusive dressing is

applied, consisting of gauze held in place by tape. Materials such as transparent semipermeable membranes

permit the wound to be observed without removal of the dressing and exposure of the wound to contamination.

Dressings support the wound and, by adding compression, aid healing, as skin grafts do.

Flat keloids.

Courtesy of Michael H. Tirgan, M.D.

The healing of a wound results in scar formation; a strong yet minimally apparent scar is desirable. In some

individuals a keloid, or thick overgrowth of scar, occurs no matter how carefully the wound was closed. The four

phases of wound healing are inflammatory, migratory, proliferative, and late. The first, or inflammatory, phase

occurs in the first 24 hours when platelets form a plug by adhering to the collagen exposed by damage to blood

vessels. Fibrin joins the platelets to form a clot, and white blood cells invade the area to remove contamination

by foreign material. Local blood vessels dilate to increase blood supply to the area, which hastens healing. In the

second, or migratory, phase, fibroblasts and macrophages infiltrate the wound to initiate reconstruction.

Capillaries grow in from the periphery, and epithelial cells advance across the clot to form a scab. In the

proliferative phase, the fibroblasts produce collagen that increases wound strength, new epithelial cells cover


the wound area, and capillaries join to form new blood vessels. In the late phase, the production of new and

stronger collagen remodels the scar, blood vessels enlarge, and the epithelium at the surface heals.

Many factors, including diabetes mellitus or medications, can affect wound healing. In a patient whose diabetes

is well controlled, wound healing is essentially normal, but, if the blood glucose level is elevated, it can impair

healing and predispose the wound to infection. Kidney failure or liver failure and malnutrition also will delay

wound healing, as will poor circulation caused by arteriosclerosis. Having steroids or anticancer or other drugs in

the system can adversely affect the normal healing process.

Surgical extirpation

Medical team performing surgery.

© Brasiliao/Shutterstock.com

Extirpation is the complete removal or eradication of an organ or tissue and is a term usually used in cancer

treatment or in the treatment of otherwise diseased or infected organs. The aim is to completely remove all

cancerous tissue, which usually involves removing the visible tumour plus adjacent tissue that may contain

microscopic extensions of the tumour. Excising a rim of adjacent, seemingly normal tissue ensures a complete

cure unless there has been extension through the lymphatic system, which is the primary route for cancer to

spread. For this reason, local lymph nodes are often removed with the tumour. Pathological examination of the

nodes will show whether the cancer has metastasized (spread). This indicates the likelihood of cure and whether

additional treatment such as radiation therapy or chemotherapy is needed. If complete removal of a tumour is

not possible, palliative surgery, which provides relief but is not a cure, may be useful to relieve pain or pressure

on adjacent structures. Radical surgery may not always be best, as in the early stages of breast cancer. Removal

of the entire breast and surrounding structures, including the axillary lymph nodes, has been shown to provide

no greater benefit than a lumpectomy (removal of the tumour only) followed by radiation to the area in early

stages of breast cancer, while it often causes the patient increased psychological distress. However, because of

improvements in breast reconstruction techniques, the trauma of a radical mastectomy is becoming less severe.


Reconstructive surgery

Reconstructive surgery is employed when a significant amount of tissue is missing as a result of trauma or

surgical removal. A skin graft may be required if the wound cannot be closed directly. If a large surface area is

involved, a thin split-thickness skin graft, consisting of epidermis only, is used. Unfortunately, although these

grafts survive transplantation more successfully and heal more rapidly than other types of grafts, they are

aesthetically displeasing because their appearance differs markedly from that of normal skin. In a small defect,

especially one involving the face or hand, a full-thickness skin graft, consisting of epidermis and dermis, is used,

and skin is generally donated from the ear, neck, or groin. Exposure of bone, nerve, or tendon requires a skin

flap. This can be a local flap, in which tissue is freed and rotated from an adjacent area to cover the defect, or a

free flap, in which tissue from another area of the body is used. An example of a local flap is the rotation of

adjacent tissue (skin and subcutaneous tissue) to cover the defect left from removing a skin cancer. A free flap is

used when the amount of tissue needed is not available locally, as in an injury to the lower leg from an

automobile bumper. The amount and type of tissue needed and the blood supply available determine the type of

flap to be used. The blood supply must be adequate to supply the separated flap and wound edge with

nourishment.

Tissue expanders are another way of creating extra tissue that can be used to cover a defect. Inflatable plastic

reservoirs are implanted under the normal skin of an adjacent area. For several weeks the reservoir is expanded

with saline to stretch the overlying skin, which is then used to cover the defect.

Reconstructive surgery is performed for a variety of surgical conditions. It may require the fashioning of a new

“organ,” as in an artificial bladder, or may involve insertion of prosthetic devices such as artificial heart valves,

pacemakers, joints, blood vessels, or bones.

U.S. soldier with two prostheses playing table football.

Walter Reed/U.S. Army

Prosthetic devices can be used to replace diseased tissue. They usually perform better than donated tissue

because they are made of material that does not stimulate rejection. The first prosthetic limbs were developed

in the early 16th century. In cardiovascular medicine, one of the first prosthetic devices to be used was the

Dacron aortic graft, which was introduced by American surgeon Michael DeBakey and colleagues in 1954 to

replace aortic aneurysms (dilated vessels that risk rupture and death) or vessels obstructed by arteriosclerotic

plaques. Grafts made of similar materials were later employed to replace diseased arteries throughout the body.

Other prosthetic devices include heart valves and metal joints (e.g., hip, knee, or shoulder).


Transplantation surgery

The success of organ transplantation was greatly improved following the discovery in the early 1970s of the

immunosuppressive substance cyclosporine. Other immunosuppressive drugs have since been developed,

including certain glucocorticoids (e.g., prednisone), macrolide lactones (e.g., sirolimus), and antibodies (e.g.,

muromonab-CD3 and basiliximab).

Surgeon Andrew Ready flushing a donated kidney during a transplant at Queen Elizabeth Hospital,…

Christopher Furlong/Getty Images

Kidney transplants are among the most commonly performed types of transplant surgery. Kidneys often are

donated from living relatives to ensure the greatest prospects of long-term survival. The best survival rates are

between identical twins. Cadaver transplants are also used. The one-year graft survival rate is 90 percent or

higher. Approximately 50 percent of grafts cease to function after 8 to 11 years; others, however, last 20 years

or more. Kidneys removed from living donors can be preserved for up to 72 hours before they must be

implanted; most are implanted within 24 hours because the likelihood of successful transplantation decreases with time.

Heart and heart-lung organs can be preserved for four to six hours, and the success rate with this procedure

continues to improve. Extensive matching of blood groups and tissue types is performed to minimize the risk of

rejection. The size of the donor and donated organ should match the size of the recipient and the recipient’s

organ, and the time between pronouncement of death and procurement of the organ should be kept as short as

possible.

In selected patients, liver transplantation has become an accepted treatment for end-stage liver disease.

Mortality following surgery is about 10 to 20 percent, depending on recipient age and health. Survivors require

long-term immunosuppressive therapy.

Surgical techniques

Laser surgery

A laser is a device that produces an extremely intense monochromatic, nondivergent beam of light capable of

generating intense heat when focused at close range. Its applications in the medical field include the surgical

welding of a detached retina and the stanching of bleeding (called laser photocoagulation) in the gastrointestinal

tract that can result from a peptic ulcer. Because a laser beam is absorbed by pigmented lesions, it can be used

to treat pigmented tumours, remove tattoos, or coagulate a hemangioma (a benign but disfiguring tumour of

blood vessels). Laser surgery has also been found to be effective in treating superficial bladder cancer and can


be combined with ultrasonography for transurethral ultrasound-guided laser-induced prostatectomy (TULIP).

More recent uses include the treatment of glaucoma and lesions of the cervix and vulva, including carcinoma in

situ and genital warts.

Cryosurgery

Cryosurgery is the destruction of tissue using extreme cold. Warts, precancerous skin lesions (actinic keratoses),

and small cancerous skin lesions can be treated using liquid nitrogen. Other applications include removing

cataracts, extirpating central nervous system lesions (including hard-to-reach brain tumours), and treating some

heart conduction disorders.

Stereotactic surgery

Stereotactic surgery is a valuable neurosurgical technique that enables lesions deep in the brain that cannot be

reached otherwise to be located and treated using cold (as in cryosurgery), heat, or chemicals. In this procedure,

the head is held motionless in a head ring (halo frame), and the lesion or area to be treated is located using

three-dimensional coordinates based on information from X rays and electrodes.

Stereotactic techniques are also used to focus high-intensity radiation on localized areas of the brain to treat

tumours or to obliterate arteriovenous malformations. This technique is also employed to guide fine-needle

aspiration biopsies of brain lesions; it requires that only one burr hole be made in the skull with the patient

under local anesthesia. Stereotactic fine-needle biopsy also is used to evaluate breast lesions that are not

palpable but are detected by mammography.

Minimally invasive surgery

Endoscopic image of the duodenum.

Samir

Traditional open surgical techniques are being replaced by new technology in which a small incision is made and

a rigid or flexible endoscope is inserted, enabling internal video imaging. Endoscopic procedures (endoscopy)

are commonly performed on nasal sinuses, intervertebral disks, fallopian tubes, shoulders, and knee joints, as

well as on the gallbladder, appendix, and uterus. Although it has many advantages over traditional surgery,

endosurgery may be more expensive and have higher complication rates than traditional approaches.

Trauma surgery


A monitor showing information about heart function for multiple patients in an intensive care unit.

© iStockphoto/Thinkstock

Trauma is one of the leading causes of loss of potential years of life. The explosion in the development of

medical instrumentation and technology has made it possible for surgeons to save more lives than ever before

thought possible. The intensive care unit contains a complex assortment of monitors and life-support equipment

that can sustain life in situations that previously proved fatal, such as adult respiratory distress syndrome,

multiorgan failure, kidney failure, and sepsis.

Radiation and other nonsurgical therapies

Radiation therapy

The depth range of different forms of ionizing radiation.

Encyclopædia Britannica, Inc.

Ionizing radiation is the transmission of energy by electromagnetic waves (e.g., X-rays) or by particles such as

electrons, neutrons, or protons. Interaction with tissue produces free radicals and oxidants that damage or break

cellular DNA, leading to cell death. When used properly, radiation may cause less damage than surgery and can

often preserve organ structure and function. The type of radiation used depends on the radiosensitivity of the

tumour and which healthy organs are within the radiation field. High-energy sources, such as linear accelerators,

deposit their energy at a greater depth, sparing the skin but treating the deep-seated tumour. The radiation

beam can also come from multiple directions, each beam being focused on the deep tumour, delivering a


smaller dose to surrounding organs and tissues. Electron-beam radiation has low penetration and is useful in

treating some skin cancers.

The basic unit of absorbed radiation is the gray (Gy): one gray equals 100 rads. Healthy organs have varying

tolerance thresholds to radiation, bone marrow being the most sensitive and skin the least. The nervous system

can tolerate much more radiation than the lungs or kidneys. Total body irradiation with approximately 10 Gy

causes complete cessation of development of the bone marrow, and physicians use it to destroy defective tissue

before performing a bone marrow transplant.
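Since the article states that one gray equals 100 rads, the unit conversion can be sketched as follows (an illustrative snippet, not part of the article):

```python
# Absorbed-dose unit conversion based on the relation quoted above:
# 1 gray (Gy) = 100 rads.

def grays_to_rads(grays: float) -> float:
    """Convert an absorbed dose from grays to rads."""
    return grays * 100.0

def rads_to_grays(rads: float) -> float:
    """Convert an absorbed dose from rads to grays."""
    return rads / 100.0

# The roughly 10 Gy total-body dose mentioned above is 1,000 rads.
print(grays_to_rads(10))  # 1000.0
```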

Radiation therapy can also be palliative if a cure is not possible; the size of the tumour can be reduced, thereby

relieving pain or pressure on adjacent vital structures. It also can shrink a tumour to allow better drainage of an

area, such as the lung, which can help to prevent infection and decrease the chance of bleeding.

Radioactive implants in the form of metal needles or “seeds” are used to treat some cancers, such as those of

the prostate and uterine cervix. They can deliver high doses of radiation directly into the tumour with less effect

on distant tissues.

An organ can also be irradiated by the ingestion of a radioactive substance. Hyperthyroidism can be treated with

iodine-131, which collects in the thyroid gland and destroys a percentage of glandular tissue, thereby reducing

function to normal. The drawback to this procedure is the difficulty in calculating the correct dose.

Irradiation is less effective in treating tissues that are poorly oxygenated (hypoxic) because of inadequate blood

supply than it is in treating those that are well oxygenated. Some drugs enhance the toxic effect of radiation on

tumour cells, especially those that are hypoxic.

Other noninvasive therapies

Hyperthermia

Some tumours are more sensitive than the surrounding healthy tissue to temperatures around 43 °C (109.4 °F).

Sensitivity to heat is increased in the centre of tumours, where the blood supply is poor and radiation is less

effective. A tumour may be heated using microwaves or ultrasound. Hyperthermia may enhance the effect of

both radiation and chemotherapy; it is one form of nonionizing radiation therapy.
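The 43 °C (109.4 °F) figure above follows from the standard Celsius-to-Fahrenheit formula, F = C × 9/5 + 32; a minimal check (illustrative only):

```python
# Verify the temperature equivalence quoted above using the standard
# conversion F = C * 9/5 + 32.

def celsius_to_fahrenheit(celsius: float) -> float:
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return celsius * 9.0 / 5.0 + 32.0

print(round(celsius_to_fahrenheit(43), 1))  # 109.4
```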


Photodynamic therapy

In photodynamic therapy, a photosensitive drug absorbed by cancer cells is activated by a laser…

John Crawford—National Cancer Institute

Another form of nonionizing radiation therapy is photodynamic therapy (PDT). This technique involves

administering a light-absorbing substance that is selectively retained by the tumour cells. The cells are killed by

exposure to intense light, usually laser beams of appropriate wavelengths. Lesions amenable to PDT include

tumours of the bronchus, bladder, skin, and peritoneal cavity.

Extracorporeal shock wave lithotripsy

Kidney stone.

Robert R. Wal

The use of focused shock waves to pulverize stones in the urinary tract, usually the kidney (i.e., kidney stones)

or upper ureter, is called extracorporeal shock wave lithotripsy (ESWL). The resultant stone fragments or dust

particles are passed through the ureter into the bladder and out the urethra.


A woman during an extracorporeal shock wave lithotripsy (ESWL) procedure. Ultrasound waves are…

© John Harvey Perez

The patient is given a general, regional, or sometimes even local anesthetic and is immersed in water, and the

shock wave is applied to the flank over the kidney. If the stone is small, submersion in a water bath is not

necessary; shock waves are transmitted through the skin via a water-filled rubber bulb positioned over the stone

site. Stones that are too large to be treated in this manner are removed by passing an endoscope into the ureter.

Psychotherapy

Drug therapy

The use of drugs to treat mental disorders has expanded dramatically with the development of new and more

effective medications for a variety of disorders that formerly were not treatable. Drugs that affect the mind are

called psychotropic and can be divided into three categories: antipsychotic drugs, antianxiety agents, and

antidepressant drugs.

Antipsychotic agents

The advent of antipsychotic, or neuroleptic, drugs such as chlorpromazine enabled many patients to leave

mental hospitals and function in society. The primary indication for the use of antipsychotics is schizophrenia, a

severe mental disorder characterized by delusions, hallucinations, and sometimes unusual behaviour. One form,

paranoid schizophrenia, is marked by delusions that are centred around a single theme, often accompanied by

hallucinations. The most effective drug to use may depend on an individual patient’s metabolism of the drug or

the severity and nature of the side effects.

Antianxiety agents

Drugs that combat anxiety, also known as anxiolytics or minor tranquilizers, enable otherwise dysfunctional

patients to cope more effectively with the environmental or personal factors that trigger symptoms. This class of

drugs includes the barbiturates, benzodiazepines, nonbenzodiazepine-nonbarbiturates, and hypnotics. The

barbiturates phenobarbital, amobarbital, pentobarbital, and secobarbital have been around the longest and are

used primarily as sedatives or for seizure disorders (phenobarbital).


The benzodiazepines have become the drugs of choice for acute anxiety. The first to be discovered was

chlordiazepoxide (Librium), in 1959. The development of chlordiazepoxide was followed by a large variety of

benzodiazepines, each with slightly different properties. Some are used primarily as sleeping pills (hypnotics) to

treat insomnia. Before the development of the benzodiazepines, the only available antianxiety drugs were the

barbiturates and meprobamate. The benzodiazepines have fewer unfavourable side effects and less abuse

potential and have replaced barbiturates and meprobamate in the treatment of anxiety. They also are useful in

treating alcohol withdrawal, calming muscle spasm, and preparing a patient for anesthesia. Drug dependency is

a potential problem, however, especially in persons with a history of dependence on alcohol or other

psychoactive drugs. Meprobamate is an example of a nonbenzodiazepine-nonbarbiturate drug. It is rarely used

today, however, since benzodiazepines are more effective and safer.

Other agents include the azaspirodecanediones (e.g., buspirone), which have a relatively low potential for abuse,

rendering them safer than other anxiolytics in the long-term treatment of chronic problems such as generalized

anxiety disorder. They also have no sedative effects and thus are safe for patients to use when driving or

operating machinery.

Hypnotic agents (nonbenzodiazepines) include chloral hydrate, some sedating antidepressants, and sedating

antihistamines, such as diphenhydramine (Benadryl) and hydroxyzine (Atarax). These tend to be used less

frequently than the benzodiazepine hypnotics because of an increased morning hangover effect and other side

effects. The distinction between antianxiety drugs and hypnotics is not clear, because many can serve both

functions. Small doses of hypnotic benzodiazepines are effective antianxiety agents, and in many persons,

especially the elderly, antianxiety benzodiazepines can induce sleep.

Antidepressant drugs

Depression, a common mental disorder, is classified as an affective disorder (also called a mood disorder). Many

drugs are available to treat depression effectively. One is selected over another on the basis of its side-effect and safety profile.

The main classes of antidepressants are the tricyclics, selective serotonin reuptake inhibitors (SSRIs),

monoamine oxidase inhibitors (MAOIs), and others that are often called heterocyclics (e.g., trazodone,

bupropion). The SSRIs, such as fluoxetine (Prozac), sertraline, and paroxetine, have no sedating effect,

anticholinergic activity, associated weight gain, or cardiac toxicity, but they can cause nervousness.

The oldest and best-studied class is the tricyclic antidepressants, which are divided into tertiary amines and

secondary amines. Most tricyclics have a sedating effect, cardiac toxicity, and varying degrees of anticholinergic

side effects, which some individuals, especially the elderly, have difficulty tolerating. Anticholinergic effects,

which result from the blockage of parasympathetic nerve impulses, include dry mouth, constipation, difficulty

urinating, and confusion. MAOIs have the potential to produce dangerous drug interactions. This is especially

true of interactions with tyramine, a substance found in many foods that can cause hypertension and severe

headache in patients taking an MAOI; such patients must therefore adhere to a tyramine-restricted diet.

Bipolar disorder is characterized by severe mood swings, from excessive elation and talkativeness to severe

depression. The predominantly favoured mood-stabilizing drug is lithium, which requires regular monitoring of

blood concentrations to achieve optimum effect. If the patient experiences episodes of mania or depression

while taking lithium, additional drugs, such as anticonvulsants (e.g., carbamazepine and lamotrigine), may be

necessary.


Behavioral therapy

Behavioral therapy, also called behavioral modification, uses psychological counseling to change activity that is

undesirable or potentially harmful. Treatment most often is directed toward changing harmful habits, such as

discontinuing cigarette smoking, dieting to lose weight, controlling alcohol abuse, or managing stress more

effectively.

Several types of behavioral therapy are used. Rational emotive therapy aims at altering inaccurate or irrational

thoughts that lead to negative emotions or maladaptive behaviour. Other behavioral approaches attempt to

modify physical responses. Neurofeedback, for example, uses sensitive electronic devices and the principles of

reinforcement to provide continuous visual or auditory “feedback,” which helps patients learn to control subtle

physical processes. Relaxation training, including deep muscle relaxation exercises, is a stress-reducing

technique that can be used conveniently any time of the day. These cognitive behavioral techniques have been

used to treat insomnia, hypertension, headaches, chronic pain, and phobias.

Behavioral therapies have been developed for common problems of both childhood (e.g., academic

performance, enuresis, and stuttering) and adulthood (e.g., marital discord).

Group therapy

Psychotherapy in which an experienced therapist—a psychiatrist, psychologist, social worker, or member of the

clergy—works with a group of patients or relatives is called group therapy. The process uses the interaction of

group members to benefit each member of the group. Behavioral modification through group interaction and

support can be used to change eating behaviours, which can lead to a reduction in weight. Support groups are

available to assist patients and families in dealing with cancer, alcoholism, abuse, bereavement, and other crises.

Family and systemic therapy

General systems theories emerged in the biological and social sciences following World War II. This led to the

conceptualization of the individual as an interdependent part of larger social systems. In psychotherapy,

systemic therapy does not focus on how problems start, but rather on how the dynamics of relationships

influence the problem. The therapist’s goal is to alter the dynamics of the relationships rather than to focus only

on the behaviour or internal dynamics of individuals. For example, if a child is having temper tantrums, attention

would be given to the stage of family development, the quality of communication between its members, and the

clarity and flexibility of family roles.

Family counseling brings the entire family together to discuss specific problems of one or more family members,

including adolescent discipline problems, marital discord, drug abuse, or relationship problems in families that

can arise from remarriage.

Psychodynamic therapies

Austrian neurologist Sigmund Freud held that all behaviour is influenced by unconscious motivations and

conflicts. Personality characteristics are thought to be shaped from the earliest childhood experiences.

Psychological defenses are seen mainly as unconscious coping responses, the purpose of which is to resolve the

conflicts that arise between basic desires and the constraints of external reality. Emotional problems are seen as

maladaptive responses to these unconscious conflicts.

Psychodynamic therapies emphasize that insight is essential to lasting change. Insight means understanding

how a problem emerged and what defensive purpose it serves. A classic form of psychodynamic therapy is


psychoanalysis, in which the patient engages in free association of ideas and feelings and the psychoanalyst

offers interpretations as to the meaning of the associations. Another form is brief, dynamic psychotherapy, in

which the clinician makes recommendations based on an understanding of the situation and the reasons for

resisting change.

Psychotherapy, the use of mental rather than physical means to achieve behavioral or attitudinal change,

employs suggestion, persuasion, education, reassurance, insight, and hypnosis. Supportive psychotherapy is

used to reinforce a patient’s defenses, but avoids the intensive probing of emotional conflicts employed in

psychoanalysis and intensive psychotherapy.

Experienced clinicians usually draw on various counseling theories and techniques to design interventions that

fit a patient’s problem. The format of therapy (e.g., individual, couple, family, or group) will vary with each

patient. Many patients respond best to a combination approach. Depression, for example, is frequently

alleviated by medication and cognitive-behaviour therapy. There is growing interest in primary prevention to

increase the coping abilities and resilience of children, families, and adults who are at risk for mental health

problems.

Robert Edwin Rakel

Additional Reading

An introduction to therapeutic interventions and strategies is Chad Starkey, Therapeutic Modalities, 4th ed.

(2013). Robert E. Rakel and David P. Rakel (eds.), Textbook of Family Practice, 9th ed. (2016), is a standard

textbook for family physicians covering the breadth of the discipline and emphasizing clinical diagnosis and

treatment. Glen O. Gabbard (ed.), Textbook of Psychotherapeutic Treatments (2009), discusses therapeutic

approaches in the area of psychotherapy, including strategies in cognitive therapy and systems therapy. Lee

Goldman et al., Cecil Textbook of Medicine, 22nd ed. (2004), is a textbook of medicine compiled by leading

authorities and containing thorough information on common and rare diseases. In-depth information on cancer

therapeutics, including chemotherapy, radiation therapy, and surgery, is presented in Pat Price and Karol Sikora

(eds.), Treatment of Cancer, 6th ed. (2015).

Citation (MLA style):

"Therapeutics." Britannica LaunchPacks: The Treatment and Prevention of Disease, Encyclopædia Britannica, 23 Jul. 2019. packs-ngl.eb.com. Accessed 6 Apr. 2022.

While every effort has been made to follow citation style rules, there may be some discrepancies. Please refer to the appropriate style manual or other sources if you have any questions.

influenza pandemic of 1918–19: temporary hospital

Britannica Note:

The influenza pandemic of 1918–19 was the most severe influenza outbreak of the 20th century. It caused an estimated 25 million deaths, though some researchers have projected a much higher death toll.


A temporary hospital in Camp Funston, Kansas, during the 1918–19 influenza pandemic.

Courtesy of the National Museum of Health and Medicine, Armed Forces Institute of Pathology, Washington, D.C.

Citation (MLA style):

Influenza pandemic of 1918–19: temporary hospital. Image. Britannica LaunchPacks: The Treatment and Prevention of Disease, Encyclopædia Britannica, 4 Mar. 2022. packs-ngl.eb.com. Accessed 6 Apr. 2022.


Watch archival footage of children with polio and Jonas Salk administering immunization injections

Britannica Note:

At its peak incidence in the United States, in 1952, approximately 21,000 cases of paralytic polio (a rate of 13.6 cases per 100,000 population) were recorded.
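The figures in the note are internally consistent: a case count and a rate per 100,000 together imply the size of the underlying population. A quick back-of-the-envelope check (a sketch; the population figure is back-calculated here, not stated in the source):

```python
# Back-calculate the population implied by the 1952 U.S. polio figures:
# cases = (rate_per_100k / 100,000) * population, so
# population = cases / rate_per_100k * 100,000
cases = 21_000        # paralytic polio cases, 1952 (from the note)
rate_per_100k = 13.6  # cases per 100,000 population (from the note)

population = cases / rate_per_100k * 100_000
print(f"Implied population: {population / 1e6:.0f} million")  # prints "Implied population: 154 million"
```

That result is broadly consistent with the U.S. population in the early 1950s (roughly 157 million in 1952).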


Video Transcript

NARRATOR: The polio virus attacks certain kinds of nerve cells, and for hundreds of years it killed and crippled,

especially children. By 1952 Dr. Jonas Salk and his research team at the University of Pittsburgh had developed a

killed-virus vaccine against polio. The effort to eradicate the disease started small, with Dr. Salk and his team

immunizing a few thousand students in the Pittsburgh area. But soon it became a nationwide mobilization to

protect every child against the polio threat. Salk's vaccine and the live polio vaccine developed later by Albert

Sabin, were effective in controlling the viral disease.

Archival footage showing children with polio, Jonas Edward Salk giving injections, an immunization centre, and

vials of vaccine being produced.

Encyclopædia Britannica, Inc.

Citation (MLA style):

Watch archival footage of children with polio and Jonas Salk administering immunization injections. Video.

Britannica LaunchPacks: The Treatment and Prevention of Disease, Encyclopædia Britannica, 4 Mar. 2022. packs-ngl.eb.com. Accessed 6 Apr. 2022.



Learn how vaccines enhance the human immune system to fight against harmful pathogens

Video Transcript

Vaccines save more than 9 million lives each year, even though they contain ingredients that can be pretty

dangerous. But that's precisely the point-- those nasty ingredients are the only reason that vaccines work at all.

On their own, our bodies are already pretty good at fighting off illness. When an unfamiliar pathogen invades our

body, patrolling immune cells engulf a few of these foes to gather intel. The cells then churn out a chemical

alarm, transmitting the enemy's information along the chains of command and triggering the production of

specially targeted antibodies and assassin cells to fend off the rest of the invaders. After the battle, some of

these specialized troops stick around so that if the pathogen returns, they can help mount a quicker response.

However, some pathogens are so powerful that they can overwhelm the immune system before it's able to

mount a defense. Vaccination lets us stage a practice version of this battle in advance of the real one so that the

immune system can develop and stockpile weapons specifically targeted at the enemy. That does require

introducing a bit of the enemy into our bodies, but it's an enemy that we've already disarmed, either by killing it,

breeding a super weak strain, or dismembering it to get at recognizable, but not dangerous, parts. Enter

formaldehyde. It's a critical ingredient in some common vaccines, like those for tetanus and influenza, because it

alters the structure of those pathogens just enough to render them harmless. This reactivity also makes

formaldehyde dangerous to us in large doses, but vaccines contain only a tiny fraction of the formaldehyde we

consume in food and naturally produce in our bodies on a daily basis. In fact, the biggest risk with formaldehyde

is that it can leave pathogens so crippled that they won't trigger the proper immune response. So some vaccines

contain substances called adjuvants, which put the immune system on higher alert. Aluminum, for example,

causes a minor irritation at the injection site, summoning immune cells to the scene so they'll encounter the

weakened enemy. And although too much aluminum can be toxic, our bodies are super efficient at eliminating it,

keeping levels low even after lots of shots. So while boatloads of aluminum, buckets of formaldehyde, and an up-

close meet and greet with a full-strength pathogen are all definitely bad ideas, a controlled concoction from a

doctor is designed to be just nasty enough to keep us safe. There's always a chance of an adverse reaction to

a vaccine's ingredients, but the much scarier scenario would be to let polio, measles, and other deadly diseases

call the shots.


The basic strategies behind the use of vaccines to prepare the human immune system to deal with harmful

pathogens. Adjuvants, such as aluminum, are incorporated into vaccines to hasten the body's immune response.

© MinuteEarth

Citation (MLA style):

Learn how vaccines enhance the human immune system to fight against harmful pathogens. Video. Britannica LaunchPacks: The Treatment and Prevention of Disease, Encyclopædia Britannica, 4 Mar. 2022. packs-ngl.eb.com. Accessed 6 Apr. 2022.


Discover how gene therapy can treat diseases caused by genetic mutations such as cystic fibrosis

Video Transcript

NARRATOR: Genes carry the hereditary information responsible for the distinct characteristics of every

individual. They are composed of DNA, and it is the sequence of bases along a strand of DNA that determines

the genetic code. Disruptions of the number or order of bases in a gene are known as mutations. If severe

enough, these mutations can cause death or diseases such as cancer and cystic fibrosis. Gene therapy seeks to

treat or prevent genetic disease by introducing a normal, working gene into an individual's genome. This new

gene may serve to replace the mutated gene. The transformed cells then reproduce and generate normal gene

product, thereby reducing the symptoms of disease or eliminating the disease altogether. If, however, the new

gene replaces a working gene instead of the defective one, a new mutation may arise. Another approach to

gene therapy involves the introduction of a new gene that does not replace the mutation but rather aids the

body in fighting the disease. This new gene targets the damaged cells while leaving the healthy ones unharmed.

Gene therapy seeks to repair genetic mutations through the introduction of healthy, working genes.


Encyclopædia Britannica, Inc.
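The transcript describes mutations as disruptions of the number or order of bases in a gene. As a toy illustration of the "order" case (the sequences below are hypothetical, not real gene data), a point mutation can be located by comparing a sample sequence against a reference, base by base:

```python
# Hypothetical reference and sample DNA fragments (illustrative only)
reference = "ATGGCCTTA"
sample    = "ATGGCATTA"  # same length, one base substituted

# A substitution changes the order of bases without changing their number
mutations = [(i, ref, alt)
             for i, (ref, alt) in enumerate(zip(reference, sample))
             if ref != alt]
print(mutations)  # [(5, 'C', 'A')]: a single point mutation at position 5
```

Real variant-calling tools work on the same principle at vastly larger scale, aligning sequenced DNA against a reference genome before comparing bases.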

Citation (MLA style):

Discover how gene therapy can treat diseases caused by genetic mutations such as cystic fibrosis. Video.

Britannica LaunchPacks: The Treatment and Prevention of Disease, Encyclopædia Britannica, 4 Mar. 2022. packs-ngl.eb.com. Accessed 6 Apr. 2022.


Discover how constantly mutating viruses are a challenge to a universal flu vaccine

Video Transcript

SOPHIA CAI: It seems like every year doctors and worried mothers want us to get a flu shot so we can avoid the

fever, aches, and pains of the dreaded influenza virus. And it's true, we should. But why do we need one every

year? And hey, why don't they always work? Stick around for answers to find out about an even better option,

the ever-elusive universal flu shot we've all been waiting for. Yeah, I have been waiting for it. Haven't you? Hey,

everybody. Sophia here. There are dozens of types of influenza viruses that cause flu symptoms. Remember the

swine flu? That's the H1N1 virus. The avian flu? That's H5N1. These letters and numbers identify variations in

two different proteins on the surface of an influenza virus-- H for hemagglutinin, which has 18 subtypes, and N

for neuraminidase, which comes in 11 varieties. Every year, experts from the CDC and flu research centers

worldwide pick the strains they think will be most prevalent. Private companies then create flu shots containing

antigens from those specific strains. Antigens are essentially little biochemical flags that wave on the surface of

viruses to signal which strain they are. These antigen flags also signal to your body that it's time to create

antibodies. Antibodies circulate in your bloodstream, looking for antigens. And once they find them, they tell

your body to launch a full-blown immune response and protect itself against these invaders. So a flu shot is like

an inverse guest list for the party that is your body. A vaccine tells your immune system who to bounce when

they show up. The trouble is flu viruses are constantly mutating, which causes their antigens to change all the

time-- from year to year or even over one flu season. Scientists call this annoying behavior antigenic drift. Flu


experts have to pick out which strains of viruses to include in a vaccine months before flu season kicks off so

companies can produce and distribute the vaccine in time. If experts guess wrong, your antibodies won't

recognize the invading flu viruses. And they won't stop you from getting sick, even though you went through all

the trouble of going to the pharmacy and getting your flu shot to make your mom happy. I got mine, mom. So

wouldn't it be great if scientists could create a vaccine that would work on any flu strain, no matter what antigen

flag it's flying? A universal flu vaccine, if you will. Well, research from this year suggests that scientists are

getting closer to coming up with one. Scientists have focused their efforts on understanding hemagglutinin

stems, called HA stems. These HA stems are basically standardized flagpoles that various viruses use to fly their

antigen flags. Two different research groups have created HA stems for a single vaccine to protect against

multiple flu strains. Even better, that might mean you wouldn't have to get a flu shot every year, maybe just

every 5 or 10, depending on how good your body is at remembering the antigens. So far they've tested these

new vaccines in mice, ferrets, and non-human primates, many of which have been protected from the flu. But

humans have a unique immune system, so clinical trials in people will really be the definitive next step. So

here's to hoping that researchers can take the guesswork out of flu shots and that they guessed right this year.

Searching for a universal flu vaccine.

© American Chemical Society
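The HxNy naming scheme described in the transcript is combinatorial: with 18 hemagglutinin subtypes and 11 neuraminidase subtypes, there are 18 × 11 possible labels. A quick enumeration (a sketch of the naming scheme only; only a fraction of these combinations actually circulate in nature):

```python
# Enumerate every possible HxNy label from the known surface-protein subtypes
H_SUBTYPES = 18  # hemagglutinin subtypes (from the transcript)
N_SUBTYPES = 11  # neuraminidase subtypes (from the transcript)

labels = [f"H{h}N{n}"
          for h in range(1, H_SUBTYPES + 1)
          for n in range(1, N_SUBTYPES + 1)]

# The swine-flu and avian-flu strains named in the transcript are both in the list
assert "H1N1" in labels and "H5N1" in labels
print(len(labels))  # 198 possible combinations
```

This is why a strain-specific vaccine is a moving target, and why a universal vaccine that ignores the HxNy label (by targeting the shared HA stem instead) would be such an advance.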

Citation (MLA style):

Discover how constantly mutating viruses are a challenge to a universal flu vaccine. Video. Britannica LaunchPacks: The Treatment and Prevention of Disease, Encyclopædia Britannica, 4 Mar. 2022. packs-ngl.eb.com. Accessed 6 Apr. 2022.


Know why tuberculosis still poses a threat to humans and why the fight against this disease is far from over

Video Transcript


NARRATOR: It's a deadly disease that only affects humans, and it's been around for a long time. CLIFTON BARRY:

This bug has been with humans since the paleolithic. NARRATOR: The fight against tuberculosis has been going

on for decades, even centuries. So why is this particular disease so hard to fight, and how close are we to

knocking it out? Though largely under control in the West, there are still about 9 million tuberculosis cases

reported worldwide each year, and more than a million deaths. Dr. Cliff Barry heads up the tuberculosis unit at

the National Institutes of Health. He recently published new findings on TB treatments in ACS Infectious

Diseases. Barry and his team are waging a fight against a resilient, constantly evolving bacterium. BARRY: Drugs

and vaccines have been a huge failure recently. NARRATOR: There are treatments, but they are costly and

lengthy. BARRY: You take four drugs for two months, and then you continue on two of those for another four

months. So it's six months of treatment with a huge pill burden. NARRATOR: Those treatments have also been

around for decades, giving the disease time to adapt and leading to tougher drug resistant forms of the bacteria.

BARRY: It's a nightmare to deal with, because there is not really good options therapeutically. NARRATOR: Part

of the difficulty is that researchers still know very little about the bacteria itself. What is known is that the

bacteria's structure makes it even harder to fight. BARRY: They have an incredibly thick outer cell envelope

that's unlike anything else in the bacterial world. NARRATOR: It's hard to penetrate. And it has a unique way of

making itself at home in the lungs of a patient. TB uses the body's immune system to build a sort of castle inside

a lung called a granuloma. The immune system walls off the TB inside this castle of dead cells, but it doesn't kill

it off. Researchers have recently discovered that cholesterol in the bloodstream helps keep TB alive inside that

wall of cells. When the immune system can no longer keep TB walled off, the disease strikes, killing at least half

the people infected with the active bacteria. Despite gains, Barry says the fight against tuberculosis is far from

over. BARRY: We have so much to learn. We have so far to go with this. NARRATOR: But he's buoyed by the

recent flood of research. BARRY: You got some good people focused on a hard problem now. And I think that's

probably why you see a lot more publishing in this area and a lot more-- and we will see a lot more publishing in

this area.

Learn why tuberculosis is still a threat to the human population.

© American Chemical Society

Citation (MLA style):

Know why tuberculosis still poses a threat to humans and why the fight against this disease is far from over. Video. Britannica LaunchPacks: The Treatment and Prevention of Disease, Encyclopædia Britannica, 4 Mar. 2022. packs-ngl.eb.com. Accessed 6 Apr. 2022.

While every effort has been made to follow citation style rules, there may be some discrepancies. Please refer to the appropriate style manual or other sources if you have any questions.