CHANCE News 5.02
(9 Jan. 1996 to 3 Feb. 1996)
Prepared by J. Laurie Snell, with help from William Peterson, Fuxing Hou, Ma. Katrina
Munoz Dy, and Joan Snell, as part of the Chance Project supported by the National
Science Foundation.
Please send comments and suggestions for articles to email@example.com.
Back issues of Chance News and other materials for teaching a
CHANCE course are available from the Chance web site:
When you are listening to corn pop, are you hearing
the Central Limit Theorem?
William A. Massey
Note: Chance is alive and well in Spain thanks to the presentation that Negambal
Shah gave at the U.S./Spain Joint Conference on Education in December about the Chance project.
Reminder: Don't forget that Nancy Reid has her own Canadian Chance News on the home
page for her course.
See especially her January 23 class. Here she mentions a recent article "Scientists
find dishcloths dangerous". The author of the study recommends using paper towels.
Professor Reid plans to discuss this article after her students have read Cynthia
Crossen's book "Tainted Truth".
In the last chance news we suggested that Marilyn was confused in her answer of "yes
and no" to the following two questions:
Say 10 tickets are numbered 1 through 10 in a
drawing. Half the numbers are even and half are
odd. The first ticket is drawn, and it's No. 3
which is odd. That leaves five even numbers and four
odd ones. Doesn't this mean that the next ticket
to be drawn is more likely to be even? If I buy a
ticket at this point, wouldn't I have a better chance
of winning the next draw by choosing an even number?
Sandy McRae writes that she thinks Marilyn's answer was correct. We agree, showing
again the dangers in being too quick to suggest Marilyn vos Savant is wrong.
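Marilyn's "yes and no" can be checked by direct enumeration. Here is a quick sketch in Python (ours, not from the column itself): the next ticket is indeed more likely to be even, yet any one particular remaining ticket, even or odd, is equally likely to be drawn.

```python
from fractions import Fraction

# Ten tickets numbered 1-10; ticket No. 3 has already been drawn.
remaining = [n for n in range(1, 11) if n != 3]

# "Yes": the next ticket is more likely to be even (5 of the 9 left).
p_even = Fraction(sum(1 for n in remaining if n % 2 == 0), len(remaining))
print(p_even)  # 5/9

# "No": each individual remaining ticket is equally likely, so buying
# a particular even number is no better than buying a particular odd one.
p_each = Fraction(1, len(remaining))
print(p_each)  # 1/9
```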
We had a suggestion to provide for "chance dummies" answers to our questions. As the
above example shows we are all "chance dummies" and it would be dangerous to put
our answers in print. However, we would be happy to hear from anyone who wants us
to give our answer to a question raised. We will comment in a later chance news if we think
our answer would be of general interest.
Here's a discussion question suggested by Tom Hettmansperger.
The NYT radio (WQXR) news reports:
There were no homicides reported for the day of
the big blizzard. The average number for NYC is 3 a day.
Does this mean that the homicide rate is down on blizzard days?
Note: This is an interesting example to show how probabilities change as you get additional
information. A newspaper report described this as:
No homicides were reported in New York between
early Sunday evening and late Wednesday night. The
city averaged about three murders a day in 1995.
How would this change your answer to Tom's question?
A later newspaper report added the following information:
As the blizzard died, the killing resumed. Four men
in New York, including a livery cab driver, were
shot to death between 11:30 p.m. Wednesday and 9:20
a.m. Thursday, police said.
Now what is the answer to Tom's question?
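One way to frame Tom's question is to model daily homicide counts as Poisson with mean 3; that model is our assumption, not the article's. The sketch below shows that a single homicide-free day is rare but not astonishing, while a homicide-free span of roughly three days would be.

```python
import math

mean_per_day = 3.0  # NYC averaged about three murders a day in 1995

# Chance of a single homicide-free day under a Poisson model:
p_zero_day = math.exp(-mean_per_day)       # roughly one day in twenty

# "Early Sunday evening to late Wednesday night" is roughly three days:
p_zero_span = math.exp(-3 * mean_per_day)  # roughly one in ten thousand
print(round(p_zero_day, 3), round(p_zero_span, 6))
```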
Norton Starr suggested the following article:
Technology Review, Feb-March, pp 65-66
This is a review by Arnold Barnett of John Allen Paulos's new book "A Mathematician Reads the Newspaper". While
Barnett has nice things to say about Paulos' writing and his mathematics, he writes
that he wishes Paulos had "chosen a limited number of themes, thought them through,
and used his formidable skills to develop them to the utmost". This is an issue that
those of us who teach a Chance course worry about. Is it better to get students
started thinking about uses and misuses of statistics with lots of examples from
current news, or concentrate on three or four typical examples and develop them in depth?
Probably both are useful.
Barnett supports his argument for the in-depth approach by discussing several examples
from Paulos' book that he feels should have been developed further to be convincing.
These examples would provide good discussion questions for a Chance course (we will give one of them). The review itself suggests that we could have an interesting
debate between Paulos and Barnett if it could be arranged.
In his book, Paulos comments on the 1993 Dinkins-Giuliani mayoral election, in
which the white Republican Giuliani defeated the black Democrat Dinkins. Claims
were made that more blacks voted along racial lines than whites because 75% of whites
voted for Giuliani and 95% of the blacks voted for Dinkins.
Paulos argues that this fails to take into account the preference of most blacks
for any Democratic candidate. He remarks that, assuming 80% of the blacks usually
vote for Democrats and only 50% of whites usually vote for Republicans, only 15%
of blacks voted for Dinkins based on race, but 25% of the whites voted for Giuliani based on race.
Barnett suggests that you could also support the original claim by arguing that if
80% of the blacks are Democrats then only 20% were in a position to favor race above
party and, since 15% of all blacks did, 75% of the blacks in a position to switch did
so. Among whites, 50% do not normally vote Republican, and since 25% of all whites switched, only 50%
of the whites in a position to switch did so.
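The two readings of the same percentages can be laid side by side. This little Python check (ours, using Paulos' assumed baselines) reproduces both sets of numbers exactly.

```python
from fractions import Fraction as F

# Shares of each group that crossed party lines, per Paulos' baselines:
black_race = F(95, 100) - F(80, 100)  # 15% of all black voters
white_race = F(75, 100) - F(50, 100)  # 25% of all white voters

# Barnett: restrict to voters "in a position to switch" parties.
black_switch_rate = black_race / (1 - F(80, 100))  # 15/20 = 3/4
white_switch_rate = white_race / (1 - F(50, 100))  # 25/50 = 1/2
print(black_switch_rate, white_switch_rate)  # 3/4 1/2
```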
Whose argument do you think is more convincing?
Nancy Reid sent us the following article complete with abstract and discussion question!
Most Quebeckers expect separation within 10 years.
Globe and Mail, Jan. 27, 1996. A1,A8.
The article summarized results from a poll conducted by Leger and Leger, for the Journal
de Montreal and the Globe and Mail.
The first paragraph says "three in four Quebeckers believe the province will become
a sovereign country some day and about 60 per cent expect that the change will occur
within 10 years". The poll was reported carefully, and later in the article we find
"When asked 'Do you believe that Quebec will become a sovereign country within...',
2.9% replied within 1 year, 20.2% said within 2 or 3 years, 24.2% answered within
4 or 5 years, 14.5% predicted it would be within 6 to 10 years and 12% thought it
would take more than 10 years. Only 21.9% thought Quebec would never become sovereign."
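The article's lead can be checked against the detailed breakdown: adding the categories (a quick sketch, ours) recovers both the "about 60 per cent within 10 years" and the "three in four some day" figures.

```python
# Poll breakdown from the article, in percent:
within_10 = 2.9 + 20.2 + 24.2 + 14.5  # within 1, 2-3, 4-5, and 6-10 years
some_day = within_10 + 12.0           # plus "more than 10 years"
print(round(within_10, 1))  # 61.8 -- "about 60 per cent"
print(round(some_day, 1))   # 73.8 -- "three in four"
```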
A series of questions were asked on whether Quebeckers feel they get a good financial
deal from the rest of the country. Here's an example: "Does Quebec receive or not
receive less than its share of federal government spending in the provinces?". Result:
50% said less; 37% said not less.
Statistics Canada data indicates that in fact in Quebec Ottawa raises $4,107 per capita
and spends $4,286.
What do you think about the phrasing of the questions?
Hoben Thomas wrote us about the following interesting article in the Wall Street Journal
we missed last time.
Studies try to measure the healing power of prayer.
Wall Street Journal, 20 Dec. 1995, B1
We have discussed this topic before and commented that there have not been many controlled
experiments in this area. Well, now a number of such studies are being carried out
and this article describes them. As Thomas remarks "the article provides some numerical data, certainly enough for discussions of controls, power and the like."
For example, the article describes a San Francisco AIDS experiment as follows: "20
praying people who are believers in healing prayer were recruited to "send intentions"
from cities like New York and Washington, D.C. Half of the 20 patients got intentions,
the other half didn't."
Researchers have strong feelings on this topic and the article has a number of interesting
quotes from experts. Here are two examples: "If spirituality were a drug, we wouldn't
be able to make it fast enough". "If my doctor prayed for my recovery, I'd consider a malpractice lawsuit".
There are also a number of interesting letters to the editor about this article (Wall
Street Journal, 30 Jan. 1996, p. A19)
(1) What problems do you see with the San Francisco AIDS study?
(2) Why say "send intention" rather than "pray"?
(3) In a letter to the editor Stuart Creque writes:
"Since God is omniscient, it is impossible to
'blindfold' Him so that the identity of the test
subjects and the control group are unknown to Him."
Creque mentions a couple of strategies that God might choose which could confuse the
issue. If he is right what do you think God might choose to do?
The anniversary of the Challenger disaster results in a number of articles on risks.
Here are two such articles.
The New Yorker, Jan. 26, 1996, pp. 32-36
Gladwell begins by describing "the ritual to disaster" as an extensive investigation
to find the cause after a catastrophe such as Three Mile Island, the Challenger,
or a major plane crash. He reports that a group of scholars have recently suggested
that these may simply be rituals that, in fact, don't help avoid future accidents. They
argue that high-technology accidents may not have clear causes but rather are inherent
in the complexity of the technological systems we have developed. They suggest that
a disaster occurs when a series of small perfectly normal perturbations take place
in a way that, when combined, cause a major change resulting in a disaster. This
idea is presented for the case of the Challenger in a recent book "The Challenger
Launch Decision" by sociologist Diane Vaughan. She argues that the accident was the result
of "a series of seemingly harmless decisions made that incrementally moved the space
agency towards a catastrophic outcome."
Gladwell also discusses the theory of "risk homeostasis". This theory claims that
design changes to make a system safer, in fact, often have the opposite effect. The
explanation is that there is a tendency to compensate for lower risks in one area
by taking greater risks in another.
An example of risk homeostasis is provided by a study carried out in Munich several
years ago. A fleet of taxicabs was divided into two matching groups. One group was
equipped with anti-lock brake systems making braking safer especially in wet weather.
The other group was left alone. The two groups were observed secretly for three years.
The study concluded that the group with the anti-lock brakes did not have fewer
accidents because having these brakes led them to be more risky drivers in other
respects -- driving faster, tailgating, etc.
As additional evidence of risk homeostasis, Gladwell remarks that we don't protest
the removal of speed limits, even though we know that speed limits save lives,
because we really want to drive faster, and that airlines compensate for safety improvements
in their planes and air-traffic control by increasing the number of flights, leading to new dangers.
(1) The recent series of accidents of commuter airlines has caused the government
to require pilots of commuter airlines to satisfy the same rules that apply for pilots
of larger planes, for example mandatory retirement at age 60. Do you think this will
decrease the number of commuter airline accidents?
(2) Give your own example of risk homeostasis.
(3) Is it reasonable to call risk homeostasis a "theory"? Where do you think they
got the name homeostasis?
The New York Times, Jan 28, 1996
William J. Broad
This article discusses, in a more conventional way, the possibility of another disaster
like that of the Challenger. The author remarks that, before this accident, NASA
estimated the chance of disaster at one in 100,000 flights. When shuttle flights
resumed after the Challenger disaster, the risk of an accident was estimated to be 1 in
50. Last year a detailed NASA study, taking into account improvements in the shuttle,
put the chance of catastrophic failure between 1 in 76 and 1 in 230 missions, with
1 in 145 the most probable number. The report stated that, while reliability is increasing,
the risk of at least one more catastrophic failure is substantial before the current
shuttles are retired sometime in the 21st century.
Shuttles no longer carry major commercial or military payloads, which were banned
after the accident and are now put on unmanned rockets. It is claimed that pressure to
defray costs with these payloads contributed to the disaster, but this risk factor
has been replaced by renewed demands by the Government to cut costs to the bone.
(1) How do you think they estimate the risk of a shuttle catastrophe?
(2) What does the remark "with 1 in 145 the most probable number" mean?
(3) The article states that, even with a risk of catastrophe of roughly 1 in 145 missions
for existing shuttles, NASA officials worry that another disaster is almost inevitable.
About how many flights do you think they have in mind for these shuttles?
(4) How do the ideas in the New Yorker article apply to what is reported in this article?
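The claim that the risk of "at least one more catastrophic failure" is substantial can be made concrete. Assuming independent missions at the study's most probable risk of 1 in 145 (our sketch, not NASA's calculation), the chance of at least one catastrophe grows quickly with the number of remaining flights.

```python
p = 1 / 145  # the study's "most probable" per-mission risk

# Chance of at least one catastrophic failure in n independent missions:
risk = {n: 1 - (1 - p) ** n for n in (50, 100, 200)}
for n in sorted(risk):
    print(n, round(risk[n], 2))  # roughly 0.29, 0.50, and 0.75
```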
The recent snow storms have led to a number of articles explaining how such storms
support the hypothesis of global warming. Here is one of them.
Blame global warming for the blizzard.
The New York Times, 14 Jan. 1996, sec 4, p4
William K. Stevens
It might seem paradoxical that record snowfalls in the Northeast should appear to
be confirmation of the global warming hypothesis.
The explanation given is that a warming atmosphere causes more evaporation of water
from the ocean, which means more rain, snow, or sleet. The conversion of more water
from vapor to precipitation releases more energy into the atmosphere which makes
storms more powerful. Similar reasoning shows that global warming is expected to produce
hotter heat waves and more severe droughts.
Comparing the increases in these extreme situations in the past century with the increases
computer models predict as the result of global warming has led scientists to conclude
that there is a 90 to 95 percent chance that the increase in extremes was caused by the increase in greenhouse gases.
The article reminds us of the difficulties of long range weather predictions even
with sophisticated computer models. An exception has been predictions based on the
quasi-periodic appearance of El Nino, a large pool of warm water in the equatorial
Pacific which leads to a prediction of a warm winter in the Northeast. The appearance of
El Nino last year allowed the forecasters to correctly predict a warm winter for
last year. This year El Nino disappeared but the Weather Service again predicted
a warm winter based on previous weather patterns. So far they seem to have been wrong with this prediction.
(1) How do you think they arrived at the 90 to 95 percent probability that the climate
extremes were caused by greenhouse gases?
(2) Scientists are concerned that arguing that we have global warming based on what
happens in a single year can be dangerous because it allows critics of the theory
of global warming to argue, during a normal year, that global warming is nonsense.
Is this a valid concern? What should we be looking at to assess the global warming hypothesis?
High-dose beta carotene pills have no impact on health, study says.
The Boston Globe, 19 January 1996, p11
by Peter J. Howe
In recent years, beta carotene (from which the body makes vitamin A) has been linked
to reduced risk of cancer, strokes, heart disease, cataracts, artery-hardening and
arthritis. However, a new study led by a Brigham and Women's Hospital physician
found that taking beta carotene supplements, in doses equivalent to about eight times the
government's recommended daily amount of vitamin A, had no significant positive or
negative effects on cancer or cardiovascular disease. Since the previous studies
provided beta carotene through produce-rich diets, it may be that beta carotene provides benefits
only through interaction with other vitamins, or even that other nutrients by themselves
were in fact responsible.
A second study, this one by the National Cancer Institute, reported a controlled experiment
with 18,000 smokers or people exposed to asbestos. Those assigned to a treatment
group taking a beta carotene supplement had a 28 percent higher lung cancer risk
and a 17 percent higher mortality than those given a placebo. While no causal mechanisms
are proposed, these results are consistent with findings of a Finnish study involving
29,000 male smokers, discussed in previous editions of Chance News (3.06, 3.07, 3.12).
Why do you think they picked smokers and people exposed to asbestos to test the effect
of beta carotene?
Mapping out health care: authors say atlas provides evidence
that 'rational' system does not exist.
The Boston Globe, 30 January 1996, p 3.
Richard A. Knox
The Dartmouth Atlas of Health Care, to be published by the American Hospital Association,
compares health care in 306 US markets. It reveals substantial geographical variations
both in availability of resources and in rates of use of various medical procedures. For example, people living in mid-New Hampshire were found to be twice as
likely to undergo coronary angioplasty tests for heart disease as those living near
the Vermont border. In eastern Massachusetts, one in three breast cancer patients
are treated by lumpectomy instead of radical surgery; in the Interstate-91 corridor of New
Hampshire and Vermont, the rate is less than one in five.
Dr. John Wennberg of Dartmouth Medical Center, who headed the project, says: "The
atlas provides compelling evidence that a rational health care system does not exist."
In fact, in the debate over health care costs, policy analysts disagree over which
rates are "right." Some procedures, such as mammograms, seem underutilized even in high-rate
areas. But for other procedures, where lower utilization rates are apparently not
associated with adverse health effects, the high-rate areas are likely to be pressured to control costs.
A chart accompanying the article gives data on variations in the New England area.
More sample results including impressive graphics from the atlas are available on
(1) The chart gives "rates" for coronary artery surgery, angioplasty and angiography,
all expressed "per 1000 population." What other information might you want?
(2) The number of "adjusted [hospital] beds" is also expressed per 1000 population
(3.1 in Boston area). But the number of physicians is expressed per 100,000 population
(256.0 in Boston). Does it matter that a different denominator was used for this
one category? Why do you think this was done?
(3) In the Boston area, with 256.0 doctors per 100,000 people, the payments per Medicare
recipient are $4269. In the Worcester area with 210.7 doctors per 100,000 people,
the per recipient payments are $3943. Does this mean having more doctors drives
up Medicare costs?
(4) A lot of the differences are due to the fact that in many cases a patient has
a number of alternatives and there is no "correct" choice. Wennberg has suggested
that in these cases patients do not make their choices "at random" but rather their
choices are biased according to particular locations and hospitals. What could cause such a bias?
Plans may balance, but budget may not.
The New York Times, 14 January 1996, p A13.
Experts warn that the longer the wrangling over the Federal budget goes on, the less
likely it becomes that the budget will really balance as promised seven years out.
This is because each side continues to add assumptions and stretch calculations
in the search for compromise. The last published Republican proposal, published in mid-December,
forecasts a $3.9 billion surplus for 2002. The Clinton administration published
a plan last week, predicting a $1.9 trillion budget in 2002 with a surplus of $1 billion.
Stanley Collender, director of Federal budget policy for the accounting firm Price
Waterhouse, tells his clients: "...I don't care what plan they come up with, the
best you'll see by 2002 is a deficit of about $100 billion a year." And Rudolph Penner,
a Republican economist who ran the Congressional Budget Office during the 1980s, maintains
that any seven-year plan could err by $300 billion or more in 2002.
The experts are careful to add that they believe the plans offered are indeed serious.
But they question stories which report how many billions separate the two sides
on, say, Medicare, when overall spending over the next seven years will total many
trillions of dollars. They also caution that in both Republican and Democratic plans,
many of the hard budget cuts come towards the end of the forecast period.
(1) Under the current proposals, will the national debt increase or decrease over
the next seven years?
(2) Regarding the administration's projections, is it meaningful to predict a $1 billion
surplus in a $1.9 trillion budget 7 years out?
(3) The Congressional Budget Office estimates that a balanced budget will create a
$282 billion revenue windfall for the government, from higher tax revenues and lower
expenses. Both sides were quick to include this number in their proposals. Do you
think this is realistic?
Politics: In the lead; Dole asserts press avoids close scrutiny of opponent.
The New York Times, 30 January 1996, B7.
Katherine Q. Seelye
New results from the Pew Research Center, a Washington polling organization, show
Steve Forbes leading Robert Dole 29% to 24% among 543 Republicans and Independents
who said they would probably vote in the Republican primary in New Hampshire on February
20. However, the article points out that another poll, by a New Hampshire Organization
called the American Research Group, found Dole leading Forbes 33% to 16% among likely
Republican primary voters, with 15% favoring Pat Buchanan. (On the Iowa Political
Futures market on the web, shares for Forbes increased significantly in the last couple of weeks
to about 25 with Dole remaining at about 50.)
(1) Do the phrases "Republicans and Independents who said they would probably vote
in the Republican primary" and "likely Republican primary voters" necessarily describe
the same target populations?
(2) Is there any difference between "likely" and "probably"?
(3) Should we buy Forbes shares?
Though imperfect, polls offer best tool to measure public opinion.
Houston Chronicle, 13 Jan. 1996
Robert S. Boyd
This is a long article discussing how polls work and some of the reasons two different
polls dealing with the same topic can give significantly different results. Examples
are given of the difference that the order and wording of the questions can make in the outcome. For example, last fall a Gallup survey reported that Americans approved
sending troops to Bosnia by 46 to 40 percent. The poll did not mention that 20,000
U.S. troops were committed to go. A CBS News poll mentioned the 20,000 figure and
got the opposite outcome -- a 58 to 33 percent disapproval rate.
Boyd makes a serious attempt to explain in simple terms what margin of error, confidence
intervals etc. mean. He even includes a sampling example with 10 coins. He does
a pretty good job with these technical matters.
A piece of probability theory that is sometimes hard to
grasp is that the size of the population being surveyed
doesn't matter. A sample of 1,000 is just as sound for
a nation as for a city or state.
Do you agree with this?
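Boyd's point can be seen directly from the usual margin-of-error formula, which involves the sample size but not the population size; the finite-population correction (shown for comparison) barely moves the answer once the population dwarfs the sample. A sketch using the standard formulas, not anything from the article:

```python
import math

n = 1000  # sample size
p = 0.5   # worst-case proportion
margin = 1.96 * math.sqrt(p * (1 - p) / n)  # about 0.031, i.e. +/- 3 points

# Finite-population correction for population size N:
for N in (100_000, 10_000_000, 260_000_000):  # city, state, nation
    fpc = math.sqrt((N - n) / (N - 1))
    print(N, round(margin * fpc, 3))  # essentially 0.031 every time
```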
Here are two articles with very different views of a recent report on how women are
doing in professional work.
Study says equality eludes most women in law firms.
The New York Times, 8 Jan. 1996, I-9
A recent report by the American Bar Association states that despite an increasing number
of female lawyers, bias against women continues and results in inequities in pay,
promotion and job opportunity.
Even though women now sit on the Supreme Court, head the Justice Department and preside
over the bar association itself, the report finds that barriers continue to face
the rank and file female lawyers.
The commission members noted that the findings seem to contradict the perception among
male lawyers that women are being given preference over men in the job market.
The commission cited studies showing that women have been disproportionately hurt
by the recent shrinking of law firms after the rapid expansion of the 80's.
Among Colorado lawyers with only one to three years experience the average annual
income was $30,806 for women but $37,500 for men. For those with 10 to 20 years
experience, women averaged 76% as much as men, or $68,466 to the men's $90,574.
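The two salary comparisons imply ratios that can be checked directly; this quick computation (ours) shows the gap widening with experience, which is the report's point.

```python
junior = 30806 / 37500  # women's pay as a share of men's, 1-3 years
senior = 68466 / 90574  # the same share at 10-20 years
print(round(junior, 2))  # 0.82
print(round(senior, 2))  # 0.76 -- the report's "76% as much"
```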
The bright spot in the report is in government appointments. A record 31.5% of President
Clinton's appointments to the Federal bench were women. The Department of Justice
is cited as a leader in creating a hospitable workplace for women, offering part-time work, job-sharing, flexible schedules and on-site child care.
The article remarks that today's young female lawyers say they are less willing to
make extreme personal sacrifices to adapt to a work culture defined by and for men.
You can find out how to get a copy of this report.
Would a different work culture between men and women be a confounding factor in
a study comparing salaries?
How to spin the media in one easy lesson.
Rocky Mountain News, 28 Jan. 1996, p. 56A
Clifford D. May
The author argues that the previous "New York Times" article represents a biased view
on the issue of women in professional work. He claims that the Times gave extensive
coverage to the ABA study but not to the recent study of the Pacific Research Institute for Public Policy on the same issues. May states that the Pacific Research study
looked at a number of professions, law among them, and concluded that the wage gap
between men and women is narrower than commonly believed, and that most of the differences that do exist "reflect not systemic discrimination, but the effects of individual choices."
May complains that the ABA refused to give an advance copy of the report to those
who might have been skeptical of some of the results and might have provided balance
to the Times article.
To obtain a copy of the Pacific Research study write:
It would be an interesting student project to get both reports and compare their methodologies.
Katherine Post, principal investigator of the Pacific Research study, suggested
that the ABA comment that in Colorado "though male and female lawyers now start out
with equal pay, an earnings gap opens in the first year of practice and widens rapidly"
may not indicate bias against women but rather that women choose different kinds of
law -- for example, perhaps women go into public interest law, while many men go
into corporate law. Does it seem likely to you that more men would go into corporate
law than women? If so, why?
There was a lot of news about AIDS during this period, some of which seemed pretty encouraging.
Three-Drug Therapy Shows Promise Against AIDS
The New York Times, C5, 30 January 1996
Lawrence K. Altman
A combination of an experimental anti-AIDS drug and two licensed ones appears to be
the most potent form of AIDS therapy ever tested on infected patients. Indinavir,
a protease inhibitor, and two marketed drugs, AZT and 3TC, reduced the amount of
H.I.V. by 99% to levels that could not be detected by laboratory tests in 24 out of 26 patients.
The combination's effect was measured both in terms of the amount of H.I.V. in the
blood and the standard CD-4 count. The study did not consider the effects of the
combination on illnesses from H.I.V. infection and AIDS.
The study was led by Dr. Roy Gulick of New York University, and conducted at NYU and
at the University of Pittsburgh, the University of California at San Diego, and the
University of North Carolina at Chapel Hill.
Indinavir reduced the amount of H.I.V. to undetectable amounts in 44% of 9 patients
after 6 months. The lowest detectable amount of H.I.V. is measured as copies of
virus per milliliter of blood and ranges from 200 to 500 copies. All patients entered
the study with at least 50,000 copies. In 8 patients who received only the combination
of AZT and 3TC for the same time period, the virus continued to be detected in high amounts.
Researchers caution, however, that it is too early to determine how long the favorable
effect would last, how many patients would benefit, and what long-term complications might arise.
Could these results be interpreted to mean that researchers have found a cure for AIDS?
Survival of AIDS patients linked to doctors' knowledge of AIDS.
The New York Times, 1 Feb. 1996, A12
Lawrence K. Altman
A new study conducted by researchers from the University of Washington has concluded
that how long a patient survives from AIDS is directly linked to a doctor's experience
in treating the disease. The study involved more than 400 patients treated by 125
primary-care doctors from 1984-1994 at the Group Health Cooperative of Puget Sound,
a Seattle-based HMO.
After AIDS was diagnosed, the median survival among patients of doctors with the most
experience with AIDS was 26 months, compared to 14 months for those treated by the
least experienced doctors.
The study was conducted because many doctors had no formal training in AIDS treatment
and the standard of care for AIDS patients is continually evolving. In addition,
many AIDS patients are cared for by family practitioners and primary-care doctors,
and not by specialists in infectious diseases.
Lawrence Altman reports that the Seattle researchers considered the following factors:
the AIDS experience the doctors in the cooperative had in their training programs
and medical practice; the severity of illness of the patients they treated; and changes in AIDS care over the ten years the study covered.
Dr. Mari Kitahata, the head of the research team, cited three areas in which experienced
doctors monitored AIDS patients more closely or treated them more aggressively than
their less experienced colleagues:
(1) Monitoring the number of CD-4 cells, the
specialized white cells in the blood that
play a crucial role in the immune system.
A declining CD-4 count is used as a guide for treatment decisions.
(2) Prescribing drugs to prevent development of
pneumocystis carinii pneumonia, a common
complication of AIDS that is a major cause
of death among AIDS sufferers. Patients
treated by the most experienced doctors had
fewer diagnoses of this form of pneumonia
as their first form of AIDS-related illness.
(3) Providing more aggressive anti-H.I.V. therapy.
(1) Why do you think the authors used the median rather than the mean to compare
the survival times of the patients?
(2) The article says that the researchers took into account the severity of the illness
of the patients doctors treated. What do you think this means?
(3) What confounding factors might be lurking in this study?
New Test Predicts Progress of AIDS Virus.
The New York Times, 31 Jan. 1996, A14
Lawrence K. Altman
Researchers from the University of Pittsburgh announced that a new test, called branched
DNA, measures the amount of AIDS virus in blood and predicts the progression of infection
to disease much sooner and more accurately than the standard test. In addition, the new test gives a better indication of a patient's chance of survival for
five years and can be used to establish a system of stages of infection with H.I.V.
The test currently used is called the CD-4 count. It measures the number of CD-4
white cells in the blood, indicates disease progression, and measures a patient's
response to anti-H.I.V. therapy. Branched DNA, on the other hand, measures the viral
load, the amount of H.I.V. in the blood, and can be used to determine which patients need
anti-H.I.V. treatment and when.
The researchers compared the new and standard tests on samples collected over 10 years
from 181 H.I.V.-infected patients who enrolled in a study financed by the National
Institutes of Health. Of the 181 participants, 116 died of AIDS and 65 are alive.
Of the 181, 41% received an anti-H.I.V. treatment. The remainder did not.
Other findings include the following:
(1) The risk of progression to AIDS after 7 years was
less than 10% among those whose viral load was less
than 5,000 per milliliter, in the lowest quarter of
the study. The risk was 60% for individuals whose
viral load was more than 34,500/ml, the highest quarter.
(2) The test found that, of the 43 patients whose viral
load was in the lowest quartile, none died within 5
years of the measurement. By contrast, 65% of those
in the highest quartile died within the same time period.
Many doubt that branched DNA will replace the CD-4 count because the two tests measure
different factors. The CD-4 count monitors immune system function, while branched
DNA gauges the amount of virus in the blood.
A New AIDS Drug Yielding Optimism As Well As Caution.
The New York Times, 2 Feb. 1996, A1
Lawrence K. Altman
An international study of ritonavir, a new AIDS drug, has found that the drug halved
both the death rate from advanced AIDS and the number of serious complications from
the disease.
Ritonavir is a protease inhibitor. Earlier studies have shown that protease inhibitors
lead to a sustained rise in the number of CD-4 cells, specialized white blood cells
that are needed for immune system function and are destroyed by H.I.V. The ritonavir
study provides the first evidence that protease inhibitors are effective in reducing
deaths and disease progression.
Of the 1,100 patients involved, 13% of those receiving ritonavir died or suffered
further progression of severe AIDS, as compared with 27% of patients receiving a
placebo. The study defined disease
progression as the onset of a new AIDS-related illness such as Kaposi's sarcoma and
pneumocystis carinii pneumonia.
The death rate was 4.8% among 543 patients who received ritonavir and 8.4% among 547
patients who received a placebo. The CD-4 count rose and stayed elevated for 16 weeks,
while the number of CD-8 cells (also an immune white cell) also rose. About 15%
of the patients taking ritonavir dropped out due to side effects (this was double the 7%
rate of the placebo group).
All patients were allowed to continue any anti-AIDS drugs they had been taking before
the study. Participants also had to meet several criteria. They had to have been
taking one or two marketed anti-H.I.V. drugs for at least 9 months at some time in
the past. They had to be free of any active infection with one of the many microbes that
can complicate AIDS. Lastly, they had to have a CD-4 count of less than 100 cells/microliter.
The amount of H.I.V. rose steadily after dropping precipitously within the first two
weeks of therapy. Some researchers wonder if this unfavorable trend was due to the
appearance of ritonavir-resistant strains of H.I.V., but resistance studies have not
yet been completed.
Despite the positive effects, researchers caution that the duration of ritonavir's
beneficial effects remains unknown.
(1) Why were participants required to meet the criteria given?
(2) The article reports that there were 543 patients in the ritonavir and 547 in
the placebo group. Using the percentages given in the article, find the number of
patients in each group who died or suffered further progression of AIDS. How many
patients in each group died?
(3) If ritonavir makes no difference, how many of the 72 patients who died would
you expect to find in each group? If you toss a coin 72 times do you think
there is a reasonable chance that you would get as many as 46 heads? What does
this have to do with the study?
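The arithmetic behind questions (2) and (3) can be sketched in a few lines of Python. This is a rough check, not part of the study itself, and it assumes the group sizes (543 ritonavir, 547 placebo) and the death rates (4.8% and 8.4%) quoted in the article:

```python
from math import comb

# Assumed group sizes and death rates, taken from the article.
ritonavir, placebo = 543, 547

# Question (2): number of deaths implied by the reported rates.
deaths_rit = round(0.048 * ritonavir)   # deaths in the ritonavir group
deaths_pla = round(0.084 * placebo)     # deaths in the placebo group
total_deaths = deaths_rit + deaths_pla

# Question (3): if ritonavir made no difference, each of the 72 deaths
# would fall in either group with probability about 1/2, like a coin
# toss. Probability of 46 or more heads in 72 fair tosses:
p = sum(comb(72, k) for k in range(46, 73)) / 2**72

print(deaths_rit, deaths_pla, total_deaths)
print(f"P(46 or more heads in 72 tosses) = {p:.4f}")
```

The computed probability is on the order of 1 in 100, which suggests that a split as lopsided as 26 versus 46 deaths is unlikely to arise by chance alone.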
The model was too rough: why economic forecasting became a sideshow.
New York Times, 1 Feb. 1996, D1
Recently several large corporations, such as I.B.M. and General Motors, have reduced
or eliminated their economic forecasting staffs and signed up with independent agencies
to estimate future interest rates, capital spending, and inflation.
The cutbacks have to do with disillusionment about the reliability of computer-model
economic forecasting. Economic models were first built in the 1930's as a series
of statistically estimated equations that describe the determinants of consumption,
investment, and other factors. In the 1960's, model builders constructed even more elaborate
equations to simulate the economy and fed these models into computers.
Unfortunately such models failed to predict the stagflation of the 1970's (underestimating
inflation by 3 percentage points) and the severity of the 1980-1982 recession (triggered
by the Federal Reserve Board's tightening of monetary policy). They are deficient at timing issues (in what quarter a recession will occur) and do not accurately
gauge the impact of big shocks, such as wars.
Other problems include a failure to account for the role of expectations in determining
how businesses and households respond. For instance, the computer models link personal
consumption to income, while in reality the proportion of income consumed may hinge on a household's expectations about tax and price increases and job security.
Despite its many shortcomings, forecasting can improve the odds of accurately gauging
trends and save both companies and taxpayers money.
Why do you think computer models have trouble predicting the future of economic variables?