Professor Michael Hogan (Hogan, J. M.) is a graduate of The University of
Wisconsin-Madison, where he also received his Ph.D. in rhetoric in 1983. His
research interests center on the character and quality of public deliberation
about various social problems. In particular, he works on topics related to
political campaigns, the development of social movements, foreign-policy
debates, public opinion, and polling. His major publications are The Panama
Canal in American Politics (1986) and The Nuclear Freeze Campaign (1994); he
also edited Rhetoric and Community: Studies in Unity and Fragmentation (1998),
which brings together articles by leading scholars of rhetoric and discourse,
and he is at work on the volume Imperialism and Reform in the Progressive Era.
At The Pennsylvania State University, Professor M. Hogan teaches the courses
Political Campaigns in the Age of Television, Contemporary American Political
Rhetoric, and Political Communication and the Media, and leads the seminar
Historical Criticism: The Rhetoric of Social Movements, among others.
The article below is one of the most frequently cited contemporary works on
the methodology of public opinion research, political rhetoric, and the
scholarly legacy of George Gallup. Professor M. Hogan has kindly granted us
the right to publish it online.
Boris Doktorov
Throughout his career,
George Gallup, the "father" of modern polling, crusaded tirelessly to
establish polling's scientific and cultural legitimacy. In public speeches,
several books, and more than a hundred articles in journals and popular
magazines, Gallup mythologized polling's history of "progress," deflected
doubts about the polls' accuracy and technical procedures with a rhetoric
of scientific mystification, and celebrated the collective wisdom of "the
people." Gallup's "rhetoric of scientific democracy" sustained polling's
cultural legitimacy, yet it also diverted attention from its most perplexing
sources of error and stifled debate over its deleterious effects on the
democratic process.
"Everybody believes in
public opinion polls-everyone from the man on the street all the way up to
President Thomas E. Dewey."
Goodman Ace (as cited in
Field, 1990, p. 37)
Polling has come a long
way since 1936. In that fateful year, George Horace Gallup, the father of
"scientific" polling, predicted the spectacular failure of the Literary
Digest's presidential "straw poll." Over the next half century, Gallup's
name became virtually synonymous with polling, and today his legacy is an
industry of more than 2000 organizations with annual revenues in excess of
$4 billion (Walden, 1990, p. xiii). Promoting polling as "the pulse of
democracy," George Gallup sold America on a new scientific "instrument" that
he promised might "bridge the gap between the people and those who are
responsible for making decisions in their name" (Gallup & Rae, 1940, pp.
14-15).
Throughout his life,
Gallup preached about polling with "evangelical devotion" (Wheeler, 1976, p.
87). In public speeches, several books, and more than a hundred articles in
trade publications, academic journals, and popular magazines,[i]
Gallup served as polling's unofficial historian as he mythologized the
critical incidents in polling's story of "progress." In addition, he
campaigned hard to establish polling's claim to science by statistically
documenting its record of accuracy and developing lay explanations of its
technical procedures. Above all, Gallup promised that polling could make
"democracy work more effectively" (Gallup & Rae, 1940, p. 11). Celebrating
the wisdom of the "common man," Gallup dreamed of a final stage in the
evolution of the American democratic experiment-an age of scientific
democracy, in which "the will of the people" would provide continuous
direction to policy makers "on all the major issues of the day" (Gallup & Rae,
1940, p. 125). For Gallup, polling represented more than an imperfect social
science or a profit-making venture. To criticize polling was to criticize
"progress," "science," and democracy itself.
Gallup died in 1984, but
his American Institute for Public Opinion Research remains an industry
leader. Today, the Gallup Poll is syndicated to hundreds of newspapers and
conducts "exclusive" polls for CNN, USA Today, Newsweek, and other news
media. Meanwhile, Gallup's rhetoric of scientific democracy continues to
sustain polling's cultural legitimacy by deflecting attention from its most
perplexing sources of error and stifling debate over its deleterious effects
on the democratic process. With leaders refusing to lead out of deference to
the polls, and with references to the polls often supplanting deliberation
over the merits of policies, the polls increasingly shape both the agenda
and the outcomes of public debate. George Gallup alone did not create our
cultural obsession with polls, nor would he have approved of how politicians
and journalists often invoke the polls as a substitute for policy analysis.
Yet echoes of Gallup still can be heard in the contemporary defense of
polling, and he pioneered many of the polling practices that, ironically,
now threaten his own dream of a more inclusive and efficient democracy.
Properly conducted and
interpreted, polls might well serve democracy as Gallup envisioned by
providing at least one (albeit imperfect) measure of public opinion. In
recent years, however, the polls have become, if anything, even less
reliable, and the mission of polling has been changed radically by
journalistic imperatives. Instead of guiding policy makers, polls have
become "news events" in themselves that not only substitute for substantive
information, but also fuel so-called "horse race" journalism. In one sense,
Gallup's youthful dream has become a reality. Polls are ubiquitous in
American public life. Unfortunately, they rarely serve democracy as Gallup
envisioned, and what Michael Wheeler (1967) said about the polls nearly 30
years ago is even truer today: "for every good poll there are dozens,
perhaps hundreds, of bad ones" (p. 301).
The Historical Folklore
As a young entrepreneur
in 1935, George Horace Gallup made a remarkable sales pitch to the
newspapers of America: a money-back guarantee that he could predict the
outcome of the 1936 presidential election more accurately than the famed
Literary Digest poll. As David Moore (1992) has written, this was "no small
gamble." If he failed, Gallup faced financial ruin, and the Literary
Digest's record had been impressive. In 1928, it accurately predicted
Herbert Hoover's landslide, and four years later it came within a percentage
point in predicting FDR's victory over Hoover. Despite confidence in his new
"scientific" methods, Gallup remained anxious. He had faith in his figures,
but as he confided to friends, there was "always a reasonable doubt" (pp.
31-33).
The story had a happy
ending, of course. Gallup proved right, the Digest proved wrong, and polling
thereafter emulated Gallup's use of "scientific" methods. In large measure,
however, the story of 1936 was the product of Gallup's own public relations
and ignores one important fact: Gallup also missed the mark badly in 1936.
Gallup correctly predicted the winner, but he too significantly
underestimated FDR's vote-by nearly seven percentage points.[ii]
Telling the story in his
coauthored 1940 book, The Pulse of Democracy, Gallup actually defended the
Digest against critics who suspected a Republican conspiracy. The Digest
poll had not been "rigged" to favor the Republicans, Gallup insisted; the
"sincerity and honesty" of the poll's sponsors were "beyond question." So
why, then, did the Digest "go wrong?" The answer lay "not in a lack of
honesty. . ., but rather in the fact that in this business of polling public
opinion, sincerity and honesty . . . are not in themselves guarantees
against errors and inaccuracies." The "disaster" of the Literary Digest poll
resulted from its "sampling methods," not from "the morals of its
organizers," according to Gallup. Thus emerged the case for his new, more
"scientific" methods (Gallup & Rae, 1940, p. 44).
Gallup's "Lessons of
1936" were cleverly misleading. In discussing his own performance (Gallup &
Rae, 1940, pp. 44-55), he began by emphasizing more accurate figures that he
had released earlier in the 1936 campaign. When he finally admitted that his
final prediction missed the mark by seven percent, he claimed to have "erred
on the side of caution" and, of course, he still did better than the Digest
poll. Gallup also discussed his own poll in the context of other
"scientific" polls that did better: the Crossley poll (less than 6 percent
error) and the Fortune poll (which came "amazingly close" in predicting the
outcome within a percentage point). In mentioning these other "new polls,"
whose methods also were "based on scientific principles," Gallup took at
least some of the credit for those who did better:
The surprising accuracy
of the results obtained by these new polls in 1936, bearing in mind that it
was the first real test of scientific sampling in a national election, bears
witness to the cogency of the criticisms of the Digest poll, and to the
underlying soundness of the alternative methods which were put to the test.
(Gallup & Rae, 1940, p. 49)
Gallup thus designated
1936 as the starting point of the modern era in polling and distinguished it
from a dark age of straw polling dating back to 1824. Even though some of
the new polls "weren't quite as accurate as their sponsors hoped," all
proved to be far more accurate than the Literary Digest poll, despite their
reliance on "only a small fraction of the gigantic sendout employed by The
Digest" (Gallup, 1940b, p. 23). Gallup marveled at how "in our own day"
polling had "developed from a glorified kind of fortunetelling into a
practical way of learning what the nation thinks." Describing this "first
national-election test" as "by no means a final statement of the accuracy of
the results which could be obtained by the sampling method," Gallup promised
still better things to come (Gallup & Rae, 1940, pp. 5, 55).
Gallup appeared to
deliver on that promise over the next decade as he wrote a new chapter in
polling's story of "progress" with each new election. Correctly predicting
FDR's victories in both 1940 and 1944, Gallup became America's best-known
and most trusted pollster, a man celebrated in a Time magazine cover story
as the "Babe Ruth of the polling profession," a "big, friendly, teddy bear of
a man with a passion for facts and figures" (The black and white beans,
1948, p. 21). A few critics questioned his methods, and some even alleged
political biases (see, e.g., Ginzburg, 1944, pp. 737-739), but Gallup (1944)
simply dismissed such allegations as "unintelligent, even fantastic."
Confident that such an inquiry would put an end to the "misrepresentations
and downright falsehoods" and help Americans "become familiar with the
technique of polls and their contribution to this government" (p. 795), he
even endorsed calls for a congressional investigation of polling.
Then came the
"scientific" pollsters' own great disaster: the prediction that Thomas Dewey
would defeat Harry Truman by anywhere from five to fifteen percentage points
in 1948. When Truman actually won by more than four percentage points, the
entire industry came under attack, with Gallup personally taking much of the
heat. Initially, an obviously distraught Gallup suggested that bribes and
rigged ballot boxes must have altered the outcome and argued, in effect,
that the election, not the polls, had turned out wrong (Wheeler, 1967).
Soon, however, Gallup joined with a number of social scientists,
journalists, and politicians in investigating what went wrong and, in the
process, rhetorically transformed the disaster of 1948 into a "blessing in
disguise."
The Social Science
Research Council (SSRC) sponsored the first of two major investigations (Mosteller
et al., 1949). Concerned that the pollsters' miscall might have "extensive
and unjustified repercussions," not only on polling but "social science
research generally," the SSRC appointed a committee to study "the technical
procedures and methods of interpretation" that had led the pollsters astray
(p. vii). Completing its investigation in just five weeks, the SSRC
committee considered possible errors at every step in the polling process,
including sample and questionnaire design, interviewing, problems in dealing
with "likely" and "undecided" voters, data processing, and the
interpretation and presentation of results. In its final report, however,
the committee identified two major "causes of error" in 1948: (a) errors of
sampling and interviewing and (b) errors involving the pollsters' "failure
to assess the future behavior of undecided votes and to detect shifts near
the end of the campaign" (pp. 299-300).
The first problem, the
SSRC argued, stemmed from the "quota method" then used by all the major
pollsters in their national forecasts. In that method, quotas were
established for respondents with certain attributes, but interviewers
selected specific individuals to be questioned. Aside from the problem of
determining relevant "attributes," as the SSRC observed, this method caused
two major difficulties: "the composition of the population may not be
accurately known for the determination of quotas, and the interviewers may
operate . . . as 'biased' selecting devices" (p. 84). The SSRC concluded
that "probability" sampling, which the pollsters had considered too costly,
would have eliminated at least the problem of interviewer bias in selecting
respondents.
The second source of
error, according to the SSRC, involved a "central problem" in all social
science research: how to predict human behavior. With no good way to assess
the future behavior of undecided voters or to detect late trends, the
pollsters simply ignored "undecided" voters or allocated them arbitrarily to
candidates, which created an error that was significant, but "not precisely
measurable." The committee predicted that "the gap between an expression of
intent and actual behavior" would continue to be a major source of error in
election forecasts and that this had to be "recognized as a baffling and
unsolved problem imposing serious limitations on opinion poll predictions"
(p. 302).
While cooperating fully
with the SSRC committee, Gallup challenged its basic conclusions at a second
postmortem held a few weeks later at his alma mater, the University of Iowa
(Meier & Saunders, 1949). Suggesting that the SSRC committee "felt that it
had to point an accusing finger at the pollsters," Gallup declared:
"Hindsight is always twenty-twenty!" Having since read the committee's
report, Gallup still had not seen "any evidence to support the contention
that our sampling and interviewing methods accounted for any large part of
our error." Gallup had a very different view of what had gone wrong: the
pollsters, in effect, had been victimized by their own remarkable record, by
the public's belief that polling had "reached a stage of absolute
perfection" (pp. 178, 203-204).
Gallup himself preferred
to view 1948 "against the background of poll performance recorded in the
years since 1935," since the "average error in these forecasts-including
those made in the presidential election of 1948"-remained "approximately
four percentage points." That, he argued at the Iowa Conference, fell within
the expected range of error for polling research "at this stage of its
development." Even counting 1948, Gallup insisted that the pollsters had
established a remarkable record: "I, for one, marvel that elections in this
country can be forecast with an average error of only four percentage
points" (pp. 177-178, 182).
To the extent that they
erred in 1948, Gallup insisted that pollsters had made "mistakes in judgment
and not in basic procedures." At the Iowa conference, he identified two such
"mistakes": the failure to conduct "a last-minute poll" and the assumption
that "undecided" voters "could be ignored and eliminated from the sample."
As it turned out, "many voters" apparently had been "lured back into the
Democratic fold" in the final days of the campaign, and "many" of the
"undecided" voters had, in fact, voted for Truman (pp. 177, 180-181).
Gallup thus turned 1948
into a useful learning experience and, in effect, an argument for more
extensive polling. Promising "improvements in every single department" of
his operation, Gallup concluded his remarks at the Iowa Conference by again
predicting better things to come: "You may think that I am merely a chronic
optimist, but I can't help feeling that this bitter experience of November,
1948, has made a lot of people examine the whole business of polling more
critically, and I think that is a real gain" (pp. 204-205). In later years,
Gallup routinely referred to 1948 as a "blessing in disguise" (see, e.g.,
Gallup, 1955-56, p. 237; Gallup, 1972, p. 200). Because it inspired an
"agonizing reappraisal," 1948 offered valuable lessons about how to deal
with undecided voters, how to screen out nonvoters, how to take account of
"intensity and prestige factors," and how to poll "within a couple of days
of the election" (Gallup, 1972, pp. 200-01). Gallup never admitted that his
sampling methods had failed in 1948, but over the next several elections he
quietly expanded his use of "pin-point" or "precinct" probability sampling.
By the 1956 election, he had abandoned "quota" sampling altogether and had
developed new "machinery" for measuring "trends to the very end of a
campaign" (see Gallup, 1957-58, pp. 25-26). By the early 1960s, Gallup could
boast that he had bested even his most optimistic predictions of the
pre-1948 era by achieving an "average error" of only about "one to
one-and-a-half percentage points." "There are still problems," Gallup told
an interviewer in 1962, "but . . . I doubt if we can improve our accuracy
very much . . . obviously we can't do much better than that" (as cited in
McDonald, 1962, p. 22).
Today, the Gallup Poll
Monthly features a "Gallup Poll Accuracy Record" that begins with 1936 and
calculates the "average error" of the Gallup polls separately for the pre-
and post-1948 periods: 3.6 percentage points between 1936 and 1950, compared
with only 1.6 percent between 1952 and 1992 (see Gallup Poll Monthly, 1992).
The elections of 1936 and 1948 thus serve as defining moments in the history
of polling by providing benchmarks for statistical demonstrations of
improved accuracy and anecdotal evidence of key developments in polling's
story of "progress."
Unfortunately, polling's
historical folklore has diverted attention from more fundamental questions
about polling's claim to "science." While pollsters undeniably have made
technical advances in sampling and statistical procedures, they have barely
scratched the surface of more basic methodological problems-problems related
to what the pollsters euphemistically call "nonsampling error." For more
than forty years, George Gallup downplayed these more mysterious and
nonquantifiable sources of error by boasting of the industry's record in
predicting national elections. In the final analysis, however, Gallup not
only misled the public but encouraged his own industry to ignore its most
perplexing methodological problems.
A Rhetoric of Science
While pollsters today
generally disclaim the predictive value of polls (see Crespi, 1988), Gallup
portrayed election forecasts as important scientific "tests." In The Pulse
of Democracy (1940), he and Rae wrote: "Pre-election tests are important
because they enable surveyors to put this new method of measuring public
opinion through its most exacting test" (p. 81). Several years before the 1948
miscall, Gallup told the House Campaign Expenditures Committee (U.S. House,
1945) that although the polls had not achieved "absolute accuracy," their
record did "represent a degree of accuracy found in few fields outside the
exact or physical sciences" (p. 1254). Even after the 1948 debacle, Gallup
called election predictions the "acid test for polling techniques" and
boasted of "a record of which we can all be proud." In "no other field," he
insisted at the Iowa Conference, "has human behavior been predicted with
such a high degree of accuracy" (Meier & Saunders, 1949, pp. 177, 286).
Gallup conceded that
election predictions have "no great social value" (Meier & Saunders, 1949,
p. 286). They were, Gallup (1972) wrote, "the least socially useful function
that the polls perform" (p. 123). So why predict elections? Why risk
standing "naked before the whole world" the day after an election? For
Gallup, the answer lay not so much in the realm of science as public
relations: "It was the demonstrated accuracy of election polls, based upon
scientific sampling procedures, that convinced the public and office holders
that public opinion on social, political, and economic issues of the day
could be gauged" (p. 124).
For half a century,
Gallup campaigned tirelessly to establish that polling had indeed passed the
"test," a conclusion that in turn rested on two distinct claims: polling's
record in predicting elections had, in fact, been good, and this record
established the reliability of all other polls. Neither of these claims
holds up under scrutiny. Both rested on a misleading-indeed, a deceptive and
disingenuous rhetoric of science.
Gallup invariably
employed the same statistical measure to document the accuracy of his
election forecasts: the "average deviation" of those forecasts from actual
election results, in "percentage points," on an "all-party basis." Yet not
only did this measure ignore the impact of the electoral college,[iii]
it also put the best face on a record that, by most other statistical
measures, was far from impressive. For example, Gallup boasted at the Iowa
Conference of the "surprising" fact that, on an "all-party basis," even his
1948 miscall fell within the expected "margin of error" of four percentage
points (Meier & Saunders, 1949, p. 178). Yet calculating the "average
deviation" on an "all-party basis" in 1948 meant summing the error in his
predictions for each of the two major and the two minor-party candidates,
then dividing by four. This procedure produced a respectable "average
deviation" of only 2.9 "percentage points." Gallup's error in predicting the
vote for just the two major candidates was far less impressive: 4.7
percentage points.[iv]
Looking at 1948 through
yet another statistical lens, an even more dismal performance is revealed.
In terms of raw percentages, Gallup overestimated Dewey's vote by about 10
percent and underestimated Truman's vote by approximately the same amount.
Similarly, in predicting that third-party candidate Henry Wallace would get
4 percent of the vote, Gallup missed Wallace's actual vote of 2.4 percent by
only 1.6 "percentage points." Yet in percentage terms, Gallup missed the
mark badly; he overestimated Wallace's vote by 40 percent (Rogers, 1949, pp.
117-118, 124).
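The three competing measures discussed above are easy to contrast numerically. The sketch below uses illustrative 1948 figures consistent with those cited in the text (the Dewey, Truman, and Wallace shares follow the historical record; Thurmond's predicted share is an assumption), not official Gallup data:

```python
# Contrast three ways of scoring Gallup's 1948 presidential forecast.
# Predicted vs. actual vote shares, in percent; Thurmond's predicted
# share is an assumed reconstruction, not taken from Gallup's records.
forecast = {"Dewey": 49.5, "Truman": 44.5, "Wallace": 4.0, "Thurmond": 2.0}
actual = {"Dewey": 45.1, "Truman": 49.6, "Wallace": 2.4, "Thurmond": 2.4}

errors = {c: abs(forecast[c] - actual[c]) for c in forecast}

# Gallup's preferred measure: "average deviation" across ALL four
# candidates, in percentage points -- roughly 2.9 points here.
all_party = sum(errors.values()) / len(errors)

# The same deviation computed over just the two major-party
# candidates -- roughly 4.7 points, far less flattering.
two_party = (errors["Dewey"] + errors["Truman"]) / 2

# Relative (percentage) error for Wallace: the 1.6-point miss amounts
# to 40 percent of the predicted share.
wallace_rel = errors["Wallace"] / forecast["Wallace"] * 100
```

The headline number thus depends entirely on which candidates are averaged over and on whether the error is expressed in points or relative to the prediction, which is precisely the rhetorical latitude the passage above describes.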
Even by Gallup's
preferred statistical measure, the pollsters' overall record in predicting
presidential elections has been, to say the least, spotty. Between 1952 and
1972, as Wheeler (1976) pointed out, the pollsters predicted only the
obvious "landslides," and one could have done just as well simply by picking
incumbents to win every election (pp. 39-40). Since Wheeler's study, the
pollsters have done even worse. In 1980, they completely missed the Reagan
landslide, with most major polls calling the election a toss-up and some
even predicting that Carter would win (Shelley & Hwang, 1991, p. 61). In
1984, the pollsters disagreed widely over the magnitude of Reagan's
reelection, with figures ranging from Roper's underestimate of 55%-45% to
Gordon Black's overestimate of 60%-35% (5% undecided) for USA Today (Crespi,
1988, p. 3). Despite further attempts to tighten their procedures in 1988,
only one of the five best-known polls predicted the correct vote for both
Bush and Dukakis within three percentage points (Shelley & Hwang, 1991, pp.
67-68). And in 1992, the pollsters had their worst year since 1936, as the
Gallup Poll overestimated Clinton's vote by 5.8 percentage points. Despite
this substantial error, the Gallup Poll Monthly's "Accuracy Record" continues
to boast of an "average deviation for 28 national elections" of 2.2
"percentage points." Yet, curiously, the "Record" no longer appears in every
issue, and it now bears the indignity of an asterisk referring readers to an
apologia for the 1992 miscall.[v]
The pollsters' election
forecasts have been off so frequently in recent years that even some
pollsters now question their own industry's claim to "science." Reacting to
the polls' poor performance in 1984, for example, Burns Roper lamented: "I'm
very concerned. This raises real questions of whether this business is
anywhere near a science." Pollster Irwin Harrison went even further:
"There's no science in the questions. There's no science in the interviews.
And there's more art than science in the results" (as cited in Crespi, 1988,
p. 2).
All this would be reason
enough to question whether election forecasting has established the
reliability of polling. Yet there is another, more important reason for
questioning Gallup's "test": presidential election polls bear very little
resemblance to the vast majority of polls. Recognizing that "their
reputations, and as a consequence their profits, depend on how close they
can come to the actual outcome," the pollsters historically have employed
much larger samples in their final election polls,[vi]
and they have tightened other procedures as well to avoid being "proven
wrong" by the voters (Wheeler, 1976, pp. 38-39). As a result, final election
forecasts are not so much a "test" of their general procedures, but instead
unique, special events. Indeed, election polls are not only methodologically
different from all other polls; they are, as Wheeler (1976) has pointed out,
"the best things that the pollsters do" (p. 43)!
Far more suspect are the
hundreds, even thousands of issue polls conducted each year-the polls Gallup
considered most socially useful. Throughout his career, Gallup (1940b)
insisted that, technically, issue polls were "relatively simple" compared to
election forecasts, since the latter could "go awry" because of any number
of factors affecting turnout (p. 25). Yet the notion that one need not
discriminate among respondents in issue polls implies that variations in
knowledge and salience have no substantive significance. It suggests no need
to distinguish uninformed and perhaps unstable opinions from carefully
considered positions, nor the passionately committed from the largely
indifferent.
More importantly, issue
polls have far more potential than election polls for what the pollsters
call "nonsampling error." These sources of error, which range from
interviewing problems to flawed interpretive theories, have received far
less attention than sampling and statistical techniques in the literature of
polling. Yet not only are the effects of such errors often "five or ten
times greater" than those of sampling error, as Brady and Orren (1992) have
pointed out, but they also are "harder to understand and measure" (p. 68).
A comprehensive review
of the various sources of "nonsampling error" is beyond the scope of this
essay. Suffice it to say that a growing body of research has documented
enormous "response effects" from such "nonsampling" factors as the race,
gender, or class of interviewers, the context and timing of surveys, and
variations in the order and phrasing of both questions and response options.
In addition, researchers recently have called attention to the need for
pollsters to develop explicit and more sophisticated interpretive theories.
Yet none of these concerns has undermined discernibly the apparent cultural
legitimacy of public opinion polling. As may be illustrated by just one
example, the problem of question-wording, George Gallup helped shroud such
problems in a fog of scientific mystification that only now is beginning to
lift.
The "Science" of Question-Wording
Concerned with complex
subjective phenomena-beliefs, attitudes, and opinions - rather than a simple
choice among candidates, issue polls demand far more attention to
question-wording than election polls. Not only must the questions in such
polls tap appropriate, relevant opinions and attitudes, they also must mean
the same thing to different respondents and inject no bias into the
measurement process. In contemporary polling, batteries of questions are
often employed to assess not only opinions, but also the awareness and
salience of issues, the level and character of existing knowledge, the
strength of opinions, and even how opinions might be changed by exposure to
information or arguments about an issue. Unfortunately, this only introduces
more potential for error into the polling process. Not only does error
result from the wording of individual questions, but also from the order in
which questions are asked and other factors that affect the context within
which respondents interpret particular questions.
Throughout his career,
Gallup (1972) paid lip service to problems of question-wording by
occasionally conceding that question-wording posed "difficult" and
"important" problems for polling (p. 77). At the same time, however, he
insisted from the outset that variations in question-wording made little
difference and that "science" already had solved virtually all problems of
question-wording. Devoting barely a dozen pages to question-wording in the
290-page Pulse of Democracy (Gallup & Rae, 1940), Gallup declared that his
Institute for Public Opinion Research was hard at work perfecting a "neutral
vocabulary-a public-opinion glossary-within the comprehension of the mass of
people" (p. 106). More importantly, he insisted from very early in his
career that scientific experiments had shown that variations in question-wording
were of "relatively minor importance." Touting his own "split-ballot" technique
for testing question-wording effects, Gallup noted that his firm often
employed two different versions of a question to assess what, "if any,"
difference question-wording might make. In more than 200 such experiments,
Gallup (1940b) reported that "greater-than-expected" differences had been
discovered in "only a small fraction" of cases. Where there was "no material
alteration in the thought expressed," Gallup concluded, "there will be no
material difference in the result, no matter what wording is used" (p. 26).
Despite his
protestations, Gallup's own "split-ballot" data often revealed
question-wording effects that seemed quite material indeed. In one of his own
experiments in 1939, for example, half of the sample was asked, "Do you
think the United States will go into the war before it is over?" The other
half was asked, "Do you think the United States will succeed in staying out
of the war?" Of those asked the first question, 41 percent said "yes" and 33
percent said "no." Of those asked the second, 44 percent said "yes" and 30
percent said "no." In short, a plurality of respondents answered "yes" to
both questions. This represented an eleven-point difference in the
percentage of people who felt that the U.S. would go to war. In all
likelihood, the explanation lies in what modern researchers call "response
acquiescence," or the inexplicable tendency of people to say "yes" when
asked about unfamiliar issues. Gallup, however, ingeniously attributed the
finding to the instability of opinion on the issue (Moore, 1992, pp.
325-326).
Other researchers
employing Gallup's split-ballot technique likewise discovered significant
but mysterious effects from seemingly nonsubstantive variations in
question-wording. For example, Rugg (1941) analyzed split-ballot data
gathered by Elmo Roper. In a survey on attitudes toward free speech, Roper
had asked half his sample whether "the United States should forbid speeches
against democracy" and the other half whether "the United States should
allow" such speeches. The difference proved startling: 46 percent opposed
"forbidding" such speeches, but only 25 percent favored "allowing" them.
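A gap of this size is far too large to attribute to sampling error. A sketch of a standard two-proportion z-test makes the point; the half-sample sizes of 1,500 are an assumption here (the text does not report Roper's actual sample sizes), but the conclusion is insensitive to any plausible choice:

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """Two-proportion z-statistic for the difference p1 - p2."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)              # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    return (p1 - p2) / se

# Roper's split ballot (Rugg, 1941): 46% opposed "forbidding" such
# speeches, but only 25% favored "allowing" them.
# Half-sample sizes of 1,500 are a hypothetical, typical-for-the-era value.
z = two_prop_z(0.46, 1500, 0.25, 1500)
print(z > 1.96)  # far beyond the 1.96 threshold for significance at p < .05
```

Even with half-samples a tenth that size, the 21-point difference would remain statistically significant; the mystery Rugg identified is not whether the effect is real but why it occurs.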
In the late 1940s,
Gallup (1947) announced a major breakthrough in the "science" of
question-wording-a new "system of question-design" that he insisted answered
all of the most "frequently voiced criticisms of public opinion
measurement." Impressively jargonized as the "quintamensional approach," this
new "system" allegedly allowed the pollster to "exclude from poll results
the answers of any and all persons who are uninformed on any given issue"
and to probe in greater depth opinions on "almost any type of issue and at
all stages of opinion formation." Employing five categories of questions,
including "filter" questions to assess knowledge and follow-up questions to
probe the motivations and reasoning behind opinions, the "quintamensional
approach," as Gallup described it, allowed a number of different dimensions
of public opinion to be "intercorrelated, with a consequent wealth of data
by which public opinion on any issue of the day can be described" (pp.
385-393).
Throughout his career,
Gallup (1972) touted the "quintamensional approach" as among the most
"important developments" in the history of polling, a development that made
possible the assessment of public opinion even on the most technical or
complex of issues (pp. 90-91). In reality, however, it only created new
problems in question-wording. The use of "filter questions," for example,
created unique and artificial interpretive contexts for questions that came
later in the interview process. The whole approach also encouraged the most
empirically and philosophically indefensible practice in modern polling: the
practice of "informing" respondents about an issue before asking their
opinion. When Gallup (1947) first unveiled the "system," he envisioned this
very practice as among its most exciting possibilities, as he suggested that
pollsters might use "simple and neutral language" to "explain" issues to
those identified by the "filter questions" as "uninformed." This would allow
pollsters to compare informed and uninformed opinion and, more generally,
"to obtain the views of the maximum number of voters" (p. 389). Only a
handful of critics have questioned the possibility of providing genuinely
"neutral information" (see, e.g., Robinson & Meadow, 1982, pp. 118-120). So
too have researchers only recently begun to ask whether pollsters ought to
be in the business of predicting what the public might think if exposed to
selected "information" (see, e.g., Hogan & Smith, 1991, p. 547).
At best, the "quintamensional
approach" was a diversion and, at worst, a scientific hoax. A classic case
of scientific mystification, it did nothing to solve the real problems in
question-wording. It provided virtually no guidance for how to word
questions; it specified only the types of questions to be asked and their
order. It did nothing to answer the most fundamental question about the
wording of questions: how does one account for the often enormous impact of
seemingly nonsubstantive variations in wording?
Meanwhile, additional
studies revealed significant question-wording effects, not only from
variations in wording, but also from the number, character, and even order
of response options. In 1951, Stanley Payne summarized a number of these
studies in The Art of Asking Questions, a book whose very title challenged
Gallup's efforts to scientize question-wording. Payne documented a wide
variety of question-wording effects that ranged from the false portraits
created by obviously "loaded" questions (pp. 177-202) to a more subtle and
mysterious "recency" effect-that is, a tendency among respondents to choose
the last response option in some but not all questions (pp. 129-134). Payne
concluded that "improvements in question wording" could "contribute far more
to accuracy than further improvements in sampling methods"-"tens" rather
than "tenths" of percents (p. 4).
Over the next twenty
years, however, research on question-wording all but disappeared from the
polling literature-as if Payne's book, with its "concise list of 100
considerations" in asking questions, "represented the final, and
comprehensive, word on the subject" (Moore, 1992, p. 328). Gallup played a
major role in this neglect of question-wording by continuing to insist well
into the 1970s that "thorough pre-testing" assured the reliability of all
questions and that, in any case, "split-ballot" experiments had proven that
"even a wide variation in question wordings" did not produce "substantially
different results if the basic meaning or substance of the question remained
the same" (Gallup, 1972, pp. 78, 83).
Question-wording did not
become a serious and sustained focus of research until a new "intellectual
movement" emerged among academic survey researchers in the 1980s-a movement
that broke the conspiracy of silence about question-wording and finally made
research on "response effects" a "mainstream pursuit" (Moore, 1992, p. 343).
This "movement," led by University of Michigan researcher Howard Schuman and
others, began with efforts to summarize and replicate earlier research. The
effort to theorize about question-wording was frustrated, however, as
explanations for a number of question-wording effects proved elusive.
Intrigued by Rugg's 1941 study of attitudes toward free speech, for example,
Schuman and Presser (1981) repeated the experiment and found the same effect.
Despite the passage of forty years, respondents still were far more likely
to agree that the U.S. should "not allow" anti-democratic speeches than to
agree that the U.S. should "forbid" such speeches. When they performed the
same experiment on the question of "forbidding" or "not allowing"
pornographic movies, however, the effect disappeared. In this case, there
was no statistically significant difference between those who would "forbid"
and those who would "not allow" such movies (pp. 276-283).
In addition to
variations of ten, twenty, or more percentage points from seemingly
nonsubstantive variations in question wording,[vii]
more recent studies also have shown significant response effects even when
questions are not reworded at all, but merely reordered. In studying polls
on abortion, for example, Schuman and Presser (1981) attempted to account
for a 15 percentage point difference in two surveys employing exactly the
same question: "Do you think it should be possible for a pregnant woman to
obtain a legal abortion if she is married and does not want any more
children?" The researchers noted that the survey revealing less support for
abortion rights placed this question after another question asking whether
it should be possible to get a legal abortion "if there is a strong chance
of a serious defect in the baby." Testing the possibility that this question
created a different interpretive context for the second question, the
researchers conducted a split-ballot experiment in which half the sample was
asked both questions, with the "defective fetus" question first, and the
other half both questions, but with the order reversed. The difference was
substantial, with the "defective fetus" question, when asked first, reducing
support for abortion rights in the other question by about 13 percent-the
difference between majority support and majority opposition to abortion
rights (pp. 36-39).
Expanding into the area
of response options, Schuman and Presser (1981) not only confirmed the "recency"
effect observed by Payne (1951), but also demonstrated a very substantial
effect from offering or not offering "don't know" as a response option. In
Schuman and Presser's experiments, explicitly offering the "don't know"
option increased the number of people in that category by an average of 22
percent (pp. 113-146). Studies by other researchers revealed that, in the
absence of the "don't know" option, many respondents would express opinions
even on entirely fictitious policies or programs (see, e.g., Hawkins &
Coney, 1981, p. 372).
These and similar
findings would seem to confirm what polling pioneer Hadley Cantril observed
many years ago in the foreword to Payne's (1951) book: the variables
involved in question-wording are "legion"; "they are also subtle and they
defy quantification" (p. viii). More recently, Brady and Orren (1992) went
further in concluding that "the more one explores the many sources of . . .
error [in question-wording], the more hopeless the prospect seems of ever
preparing a poll questionnaire that is not fatally flawed" (p. 77).
Reflecting on the limitations of both open- and closed-ended questions,
Schuman and Scott (1987) have suggested, quite simply, that no survey
question, nor even any set of survey questions, should be taken as a
meaningful indicator of either absolute levels or relative orderings of
public preferences. Not surprisingly, this sort of conclusion has caused
considerable consternation among pollsters and survey researchers.
Responding to Schuman and Scott, Eleanor Singer (1988), then President of
the American Association for Public Opinion Research, complained that such a
view threatened to undermine the "intellectual defense" of survey
research-that it was only "a short way" from Schuman and Scott's conclusion
to the rationale for survey research and polling "being blown out of the
water altogether" (p. 578).
The pollsters
nevertheless continue to say little publicly about problems of
question-wording. They now insist more strongly than ever that a poll's
"margin of error" be reported, but this insistence only sustains the
illusion of science and wrongly implies "that sampling is the primary, if
not the only, source of possible error in a poll" (Cantril, 1991, p. 119).
Even pollster Burns Roper has conceded that reporting the "margin of error"
is little more than a diversionary tactic. As Roper (1980) observed,
pollsters typically present their findings in the media as "within three
percentage points" of what the public actually thinks on some issue, "when,
in fact, a differently worded question might-and often does-produce a result
that is 25 or 30 points from the reported result" (p. 16).
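Roper's objection can be made concrete with the very sampling-error formula the pollsters report. For a sample of about 1,000 respondents (an assumed but typical national sample size), the 95-percent margin of error at a 50-50 split is roughly three points, an order of magnitude smaller than the 25-to-30-point swings a reworded question can produce:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(1000)   # sample size of 1,000 is an assumed value
print(round(moe * 100, 1))    # roughly 3 percentage points

# A wording effect of 25 to 30 points dwarfs this sampling error --
# which is precisely Roper's complaint about "margin of error" reporting.
```

The reported "margin of error" thus bounds only one, comparatively small, source of error while remaining silent about the largest.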
The Illusion of
Science
Gallup's great pet peeve
was the tendency of some writers to place quotation marks around the word
"scientific" as applied to polling. "If our work is not scientific," Gallup
(1957-58) complained to the sympathetic readers of Public Opinion Quarterly,
"then no one in the field of social science, and few of those in the natural
sciences, have a right to use the word." According to Gallup, polling fully
qualified as "scientific" even under "the most rigid interpretation" of that
word (p. 26). Those who thought otherwise simply did not understand "the
nature of the new science of public opinion measurement" (Gallup, 1940b, p.
23). When Professor Lindsay Rogers of Columbia University published a
scathing critique of the industry in 1949, for example, Gallup (1949b)
characterized him as "the last of the arm-chair philosophers in this field"
and declared: "One cannot blame the Professor for his lack of knowledge
about polling techniques. He says, quite honestly, that he knows nothing
about research procedures-a fact which is abundantly clear in his chapters
dealing with these matters" (pp. 179-180). When Congress considered
regulating polls in the wake of the 1948 debacle, Gallup (1948c) similarly
dismissed critics who raised questions about question-wording as ignorant of
"the new procedures that are followed and of the care that is taken to
dissect opinion not only with one question but, in the case of complex
issues, with a whole battery of questions" (p. 735).
Gallup's claim to
"science" also buffered the pollsters against criticism of their
interpretive practices. In responding to such criticism, Gallup did not
merely claim to interpret his data objectively; indeed, he claimed that, as
a scientist, he did not interpret at all! Characterizing polling firms as
"fact-finding institutions" whose "responsibility ends with the objective
reporting of survey results," Gallup (1960) attributed all complaints about
the polls to partisanship or sour grapes. "Obviously," he argued, the
pollsters could not be "charged for the manner in which others may interpret
or use poll findings," nor could they be "properly attacked for results
which happen to be unpalatable to some people." Such attacks, Gallup
concluded, were akin to blaming the meteorologist for bad weather: "Just as
barometers do not make the weather, but merely record it, so do modern polls
reflect, but not make public opinion" (pp. ix-x).
The pollsters continue
to maintain a scientific facade by placing heavy emphasis on sampling, data
collection, and data processing in their public discussions of methodology.
In its monthly journal,[viii]
for example, the Gallup Poll historically has said little about
question-wording and nothing about interpreting data. In the 1980s and
1990s, the Gallup Poll Monthly added a statement about question-wording and
other sorts of "nonsampling error" to its regular "Note to Our Readers": "In
addition to sampling error, readers should bear in mind that question
wording and practical difficulties encountered in conducting surveys can
introduce error or bias into the findings of opinion polls." At the same
time, however, the Monthly added two new pages on weighting procedures,
sampling tolerances, and the "random digit stratified probability design"
used in telephone surveys. The Monthly even added tables for calculating the
error for subsamples of various sizes (see Gallup Poll Monthly, 1992).
Gallup's rhetoric of
science thus lives on, not only within the firm he founded, but in the
entire polling industry's claims of "scientific progress." Pointing to
continued improvements in sampling techniques and a variety of other "new
technologies available through computers," pollsters began boasting in the
1970s that their "science" had "truly come of age" (Roll & Cantril, 1980, p.
13). In this respect, pollsters may differ little from economists,
psychologists, sociologists, or anthropologists, whose claims to "science"
have been demystified in recent years by students of the "rhetoric of
inquiry" (see Nelson et al., 1987). Yet pollsters take such claims one step
further by boasting that their science can "greatly strengthen the
democratic process by providing `the people' yet another way of making their
views known particularly between elections" (Roll & Cantril, 1980, p. 14).
Preaching the
Democratic Faith
In classical democratic
theory, public opinion did not equate with some numerical majority, however
measured. Only ideas tested in public discourse were considered worthy of
the label "public opinion" (MacKuen, 1984, pp. 236-238). Gallup (1948a)
subscribed to a different view. He embraced James Bryce's definition of
"public opinion" as the "aggregate of the views men hold regarding matters
that affect or interest the community" (p. 84). Substituting
undifferentiated mass opinion for "tested" ideas and thoughts expressed
privately and confidentially for genuinely "public" opinions (see Herbst,
1993, pp. 64-65), Gallup, in effect, redefined public opinion as that which
the polls measured.
This redefinition, in
turn, raised anew the most fundamental philosophical and empirical questions
of democratic theory. Should public opinion, however defined or measured,
dictate public policy? Is the public, especially as more broadly defined by
polls, sufficiently interested and informed to govern itself? Not
surprisingly, George Gallup had answers to these questions. Less
surprisingly still, those answers rested largely on the results of his own
polling. Enthusiastically preaching the democratic faith, Gallup did not
merely celebrate America's democratic heritage and affirm the people's right
to govern themselves; he flattered the American people by invoking his own
polls as proof of their seemingly mystical "collective wisdom."
Gallup did not dwell on
the philosophical question of whether public opinion should dictate public
policy. His answer seemed obvious from how he phrased the question itself:
"shall the common people be free to express their basic needs and purposes,
or shall they be dominated by a small ruling clique?" (Gallup & Rae, 1940,
p. 6). In Gallup's polarized world, there were only "two points of view" on
this question, and between them there could be "no compromise" (Gallup &
Rae, 1940, p. 8). One either sided with those "who would place more power in
the hands of the people" or those "who are fearful of the people and would
limit that power sharply" (Gallup, 1940b, p. 57). In no uncertain terms,
Gallup stood with the democrats: "In a democratic society the views of the
majority must be regarded as the ultimate tribunal for social and political
issues" (Gallup & Rae, 1940, p. 15).
The empirical question
remained: were the "common people" capable of governing themselves? Only
with the advent of polling, according to Gallup (1940b), had it become
possible to answer this question-to ascertain whether the people were
"responsible or irresponsible; blindly radical, blindly conservative or
somewhere in between; and whether their judgments may or may not make sense
when viewed in perspective" (p. 57). Happily, according to Gallup (1948b),
"the polls" had shown again and again "a quality of good common sense" in
the American people that was "heartening to anyone who believes in the
democratic process" (p. 177). In a refrain that echoed throughout his
career, Gallup insisted that the people not only proved to be "right" about
most issues but typically were "far ahead" of their leaders.
Early in his career,
Gallup pointed to attitudes concerning preparedness and World War II as
proof of the public's wisdom and foresight. Not only did the public
understand far better than their leaders the threat posed by Hitler, but
they were willing, even eager, to sacrifice personally to counter that
threat. "Instead of balking at increased taxes," Gallup (1940a) wrote, "the
American people have been ready to make direct, personal sacrifices in order
to build up an army, a navy and an air force-and all this long before
Congressmen and other political leaders were ready to recognize the fact"
(p. 21). In a radio address, he even described the public as "cheerfully"
accepting "the burden of heavy wartime taxation" (Gallup, 1942a, p. 688).
Elsewhere, Gallup (1942c) detailed how the American people would be willing
to sacrifice 10 percent of each paycheck for defense bonds, to accept
"complete government control" over prices and wages, and even to allow the
government to dictate "what kind of work" they should do and for "how many
hours" (p.16). Thus, the polls not only confirmed "the sound judgment and
foresight of the common people" on "virtually all . . . important war-time
issues," but also their willingness to sacrifice personally "to help the
country win the war." Indeed, the people were so far "ahead of their
political leaders," Gallup (1942b) concluded, "as to raise the real question
as to whether the leaders are leading the people or whether the people are
leading the leaders" (pp. 440-442).
In the 1950s, Gallup
changed his tune somewhat as he lamented the public's failure to appreciate
fully the threat posed by communism. "Today for the first time," he wrote in
the New York Times Magazine in 1951, "I must confess that I am concerned
lest lack of information lead the American people to decisions which they
will regret" (p. 12). Significantly, however, Gallup blamed a "lack of
information," not the people themselves, and he took it upon himself to
"inform" both the public and Congress of the need for more vigorous American
propaganda to counter the Soviet threat (see U.S. Senate, 1953, pp.
772-793). Gallup thus simultaneously worried about public apathy and sounded
his usual refrain that the public was "far ahead" of its leaders. In a
speech at Georgetown University, for example, Gallup (1952b) worried that
the American people might "delude themselves" into complacency, yet again
the public was "years ahead of Congress" in its desire "to go all-out to
sell our point of view to the world," even if it meant "the expenditure of a
billion dollars and more to do the job" (p. 502).
A decade later, Gallup
(1968b) focused on electoral reform and declared that never before in
history had "the man in the street been so conscious of all the shortcomings
and faults in the whole electoral process from beginning to end" (p. 41).
Noting that presidential campaigns had "degenerated" into a "phony
business," a "matter of showmanship" and "a test of the candidate's stamina"
rather than of "his intelligence" (Gallup, 1968a, p. 133), he again claimed
that the polls showed the public ahead of its leaders. For "many, many
years," Gallup (1968c) told an interviewer, the people had "favored doing
away with the Electoral College" and establishing "a nationwide primary."
They also favored "shorter campaigns" and "more serious and shorter
conventions." They would "put an absolute top on campaign spending by any
candidate or any party," and they would "give each candidate five or six
half-hour broadcast periods to present his case-at which time everything
else would be ruled off the air." In short, the public supported "a whole
series of reforms that would change virtually every step and stage of the
business of getting elected." Again, the public not only was "right" about
electoral reform but "far, far ahead of the parties and the politicians" (p.
34).
In the twilight of his
career, Gallup continued his crusade for electoral reform and also became
deeply concerned about public education. Inaugurating an annual Gallup Poll
on public attitudes toward education in 1968 (see Elam, 1984), Gallup became
a major voice for educational reform. Meanwhile, he continued to chart
public opinion on a wide variety of foreign and domestic issues and to
celebrate the public's wisdom and foresight on even the most complex and
divisive issues of the late 1960s and 1970s. On Vietnam, for example, Gallup
(1972) again declared the public to be "far ahead of its leaders" for
supporting Vietnamization three years before Nixon even proposed it (p. 17).
Even in the post-Watergate climate of political alienation and
disenchantment, Gallup's faith in the wisdom of "the people" remained
undiminished. At the age of 77, Gallup (1978) sounded the refrain that he
first introduced in the 1930s in summarizing what he had learned from
"conducting thousands of surveys on almost every conceivable issue for
nearly half a century": "One is that the judgment of the American people is
extraordinarily sound. Another is that the public is almost always ahead of
its leaders" (p. 59).
In preaching the
democratic faith, Gallup obviously went beyond merely "reporting" his data.
Recalling "very few instances . . . when the public was clearly in the wrong
on an issue" (as cited in McDonald, 1962, p. 28), Gallup judged the wisdom
both of policies and of the public's opinions. Gallup also routinely
reflected on the historical significance of findings, discerned causes and
trends, and even predicted the future. Most significantly, however, Gallup
invoked the moral authority of vox populi in a number of his own pet causes.
Notwithstanding his professions of scientific objectivity and political
independence, it seems more than coincidence that Gallup-the conservative
businessman, the close personal friend of Thomas Dewey, the staunch Cold
Warrior-almost invariably discovered a public with the "right" political
views. In short, George Gallup was no mere "weatherman" of public opinion.
To the contrary, he sought to alter the ideological climate of his era with
the results of his own polls.
The Faint "Pulse of
Democracy"
Academic survey
researchers now generally agree that if polls are to be anything more than
"entertainment . . . or propaganda for political interests," they must be
integrated with "additional sources of information into a fuller, textured,
portrait of public opinion" (MacKuen, 1984, p. 245). In other words,
individual polls, if they are to be meaningful at all, must be interpreted
both in terms of larger historical or social trends, and within the context
of public debate and discussion. At a minimum, polls must be compared to
related polls if one hopes to speak of trends, shed light upon the impact of
question-wording, or explore the salience and logic of the public's
opinions. Indeed, recent studies suggest that meaningful conclusions about
public opinion can emerge only through comprehensive, comparative analyses
of all available opinion data on particular topics or issues (see, e.g.,
Hogan & Smith, 1991).
George Gallup's legacy
discourages such comprehensive and integrative analysis. Establishing
polling as a commercial enterprise and aligning it with journalism, Gallup
pioneered the institutional arrangements that today not only discourage
comparative and in-depth analyses of polling data but have corrupted the
mission of polling. Some polls, of course, are commissioned by politicians
or special interest groups, which creates an obvious incentive for bias.
Less obvious, yet even more troubling, however, is how journalistic
imperatives have corrupted even the syndicated media polls.
Early in his career,
Gallup offered media syndication as "a pretty realistic guarantee" of
polling's honesty and impartiality. As Gallup (1938) summarized that
guarantee, the Gallup Poll derived "all its income" from some sixty
newspapers "of all shades of opinion"-some "left of center," some "right of
center," and some in "the middle of the road." With that arrangement, Gallup
asked rhetorically, "how long do you suppose we would last if we were
anything but honest?" (p. 138).
"Honesty" is not the
issue when considering how journalism has affected the quality and character
of public opinion polling. Rather, as Everett Carll Ladd (1980) has argued,
there is a "clash of institutional imperatives" between journalism and
polling, with contemporary notions of "news" encouraging technically
deficient polling. So serious is this "clash," and so different are the
requirements of a good news story and sound polling, that the "linkage of
polling and the press raises serious questions as to whether opinion
research does, or even can, enhance the democratic process" (p. 575).
Indeed, there is no longer much question about it: as essentially wholly
owned subsidiaries of a handful of media conglomerates, polling firms do
not, in fact, aspire to "scientific" assessment of public opinion. Instead,
they strive to make news.
The highest priority
among journalists using polls is "timeliness," yet this creates a built-in
incentive for methodological shortcuts: smaller samples, fewer efforts to
track down respondents, truncated and unverified interviews, and less data
processing. Once the data are collected, moreover, time and space
limitations, along with the dictates of a good news story, encourage
interpretations that oversimplify and overdramatize the results. As Ladd
(1980) pointed out, good news reporting "has focus and arrives at relatively
clear and unambiguous conclusions," whereas "good opinion research typically
reveals . . . tentativeness, ambivalence, uncertainty, and lack of
information or awareness" (p. 577). As Moore (1992) further reminds us, news
organizations pay dearly for their "exclusive" polls and thus have a "vested
interest" in making the results "sound important" (p. 250). As a result,
both journalists and the pollsters themselves look for something more than
disinterest or ambivalence in polling data: something new or surprising,
something dramatic, a "lesson" to be learned, or some portent of conflict.
Journalistic imperatives
have, in a sense, relieved pollsters of the problem of framing good
questions. Failing to discern news in the public's relatively stable
opinions on persistent political issues, journalists now demand more quirky
polls from the pollsters-polls that pursue fresh, often non-political, and
occasionally even silly angles on "old news." During the Persian Gulf War,
for example, the pollsters typically asked the public not to assess the Bush
administration's war policy, but to speculate about the unknowable: "How
long do you think the war will last?" And "how many Americans do you think
will be killed before the war is over?" The Gallup poll asked Americans not
only to speculate about whether America would "win" the war, but even
whether they thought "prayers" might be "effective" in resolving "a
situation like this one in the Persian Gulf" (Gallup Poll Monthly, 1991, pp.
7, 25). As Mueller (1994) has observed, the Persian Gulf War was the "most
extensively polled" issue in American history-the "mother of all polling
events" (p. xiv). Yet despite asking some 5,000 questions (Ladd & Benson,
1992, p. 29), the "picture of public opinion" that emerged from polling on
the Gulf War remained "fuzzy," as Moore (1992) has written, and "the crucial
question of whether the public supported or opposed a war with Iraq could
not be summarized with a percentage point estimate" (p. 353).
Polls have become "news
events" in and of themselves. As a result, they substitute for substantive
information about political issues and stifle debate. Indeed, as Herbst
(1993) has observed, polls often make political debate seem "superfluous,"
since "they give the illusion that the public has already spoken in a
definitive manner" (p. 166). When debates do occur, they often focus on the
polls themselves, with partisans debating not the merits of a policy but the
"message" of "the polls" (see, e.g., Hogan, 1994, pp. 119-137). Such an
emphasis, in turn, discourages bold leadership, since it pressures elected
representatives to defer to the polls. It may even discourage the public
itself from other modes of political expression (such as letter-writing or
demonstrations), since it suggests not only that the public already has
spoken but that the public "speaks" only through the polls.
During election
campaigns, polls likewise substitute for substantive information about the
records or proposals of candidates. During the 1988 presidential primaries,
for example, nearly a third of all election stories on the network news
noted poll results, and during the final months of the campaign, a new poll
was reported, on average, every other day (Jamieson, 1992, p. 175). More
importantly, the polls have redefined campaign "news" itself and have fueled
an astonishing increase since the 1960s in so-called "horserace
journalism"-an emphasis on the strategic "game" of politics (see Patterson, 1994, pp.
53-93). Not only does such reporting deprive voters of the information they
need to assess candidates, but it may discourage political participation
altogether by casting voters as mere "spectators" (see Jamieson, 1992, pp.
186-188).
Despite all the
historical "lessons" and "scientific" breakthroughs, we thus remain a long
way from George Gallup's vision of a "scientific democracy." Indeed, the
public opinion poll not only has failed to become "the most useful
instrument of democracy ever devised," as Gallup predicted (Meier &
Saunders, 1949, p. 218), but it has had a number of deleterious effects on
public discussion and political leadership in America. Properly designed and
interpreted, polls might well serve George Gallup's dream of a more
inclusive and efficient democracy. In a sad irony of history, however,
Gallup's own rhetorical legacy subverted that dream, and the "pulse of
democracy" only beats fainter in the age of saturation polling.
Footnotes:
[i] A survey
of Gallup's articles listed in Reader's Guide to Periodical Literature,
the International Index, the Social Science Index, and the Humanities
Index reveals 94 published speeches and articles by Gallup and 10 others
co-authored by Gallup. That number includes six speeches published in
Vital Speeches of the Day, along with articles in everything from the
most popular, general circulation magazines to obscure scholarly
journals. Gallup's articles appeared in Reader's Digest, as well as in
news magazines (Newsweek and U.S. News and World Report), women's
magazines (Good Housekeeping and Ladies Home Journal), business
publications (Business Week and Nation's Business), and journals in
education (Phi Delta Kappan), philosophy (American Philosophical Society
Proceedings), government (National Municipal Review), and health
sciences (The Journal of Social Hygiene). He also published regularly in
his own industry's flagship journal, Public Opinion Quarterly, and he
even published one article, ironically, in the Literary Digest.
[ii] FDR
received 60.7 percent of the total vote and 62.5 percent of the major
party vote, while Gallup's final poll predicted 53.8 percent of the
total and 55.7 percent of the major party vote. See Gallup and Rae
(1940, pp. 50, 53).
[iii] Rogers
(1949) illustrated the significance of this fact in analyzing Gallup's
performance in 1948. By wrongly predicting the winner in seven states,
Gallup underestimated FDR's electoral vote by more than 33 percent (pp.
118-119).
[iv] The
figures are as follows: Gallup missed Truman's vote by 5.3 percentage
points, Dewey's by 4.1 percentage points, Wallace's by 1.6 percentage
points, and Thurmond's by 0.4 percentage points (see Gallup, 1949a, p.
55).
[v] Ross
Perot's candidacy, the note explains, "created an additional source of
error in estimating the 1992 presidential vote" because it had "no
historical precedent." According to the note, the Gallup Poll erred in
allocating none of the undecided vote to Perot, which neglected to take
into account Perot's "equal status" in the presidential debates and his
"record advertising budget" (Gallup Poll Monthly, 1992).
[vi] The
Gallup Poll, in particular, has employed extraordinarily large samples
to make election forecasts. In 1944, for example, Gallup employed a
national sample of more than 6,000, as compared to only two- to
three-thousand in most issue polls. After the 1948 debacle, Gallup told
U.S. News that the Gallup Poll would interview "in the neighborhood" of
40,000 to 50,000 people in the two months preceding the 1952 election,
including some 10,000 interviews in the week just prior to the election
(U.S. House, 1945, p. 1239; Gallup, 1952a, p. 61).
[vii] To
cite just a couple of additional examples, split-sample experiments have
shown that while only 19% agreed that too little money was being
spent on "welfare," 63% agreed that too little was being spent on
"assistance to the poor"; and while only 20% agreed that too little was
being spent on "assistance to big cities," 49% agreed too little was
being spent on "solving the problems of big cities" (see Moore, 1992,
pp. 343-346).
[viii] The
Gallup Poll's monthly journal has been published since 1965 under three
different titles: the Gallup Opinion Index, the Gallup Report, and the
Gallup Poll Monthly.
References
Brady,
H.E., & Orren, G.R. (1992). Polling pitfalls: Sources of error in
public opinion surveys. In T.E. Mann & G.R. Orren (Eds.), Media polls
in American politics (pp. 55-94). Washington, DC: Brookings
Institution.
Cantril,
A.H. (1991). The opinion connection: Polling, politics, and the press.
Washington, DC: Congressional Quarterly Press.
Crespi,
I. (1988). Pre-election polling: Sources of accuracy and error. New
York: Russell Sage Foundation.
Elam, S.M.
(Ed.). (1984). Gallup polls of attitudes toward education, 1969-1984:
A topical summary. Bloomington, IN: Phi Delta Kappa.
Field, M.D.
(1990). Opinion polling in the United States of America. In M.L. Young
(Ed.), The classics of polling (pp. 34-45). Metuchen, NJ: Scarecrow
Press.
Gallup,
G. (1938). Government and the sampling referendum. Journal of the
American Statistical Association, 33, 131-142.
Gallup,
G. (1940a, October). Can we trust the common people? Good
Housekeeping, 111, 21.
Gallup,
G. (1940b, February). Polling public opinion. Current History,
51, 23-26, 57.
Gallup,
G. (1942a, September 1). Democracy-and the common man. Vital Speeches
of the Day, 8, 687-688.
Gallup,
G. (1942b). How important is public opinion in time of war?
Proceedings of the American Philosophical Association, 85, 440-444.
Gallup,
G. (1942c, March 29). The people are ahead of Congress. New York Times
Magazine, 91, 16, 35.
Gallup,
G. (1944, December 30). I don't take sides. The Nation, 159, 795-796.
Gallup,
G. (1947). The quintamensional plan of question design. Public Opinion
Quarterly, 385-393.
Gallup,
G. (1948a). A guide to public opinion polls. Princeton, NJ: Princeton
University Press.
Gallup,
G. (1948b, January 1). Main street rates the issues. Vital Speeches of
the Day, 14, 177-179.
Gallup,
G. (1948c). On the regulation of polling. Public Opinion Quarterly,
12, 733-735.
Gallup,
G. (1949a, February 27). The case for the public opinion polls. New
York Times Magazine, 98, 11, 55, 57.
Gallup,
G. (1949b). A reply to "The Pollsters." Public Opinion Quarterly, 13,
179-180.
Gallup,
G. (1951, November 4). What we don't know can hurt us. New York Times
Magazine, 100, 12, 50-51.
Gallup,
G. (1952a, May 23). Interview with George Gallup: How 55 million vote.
U.S. News and World Report, 32, 56-64.
Gallup,
G. (1952b, June 1). Why we are doing so badly in the ideological war.
Vital Speeches of the Day, 18, 501-504.
Gallup,
G. (1955-56). The absorption rate of ideas. Public Opinion Quarterly,
19, 234-242.
Gallup,
G. (1957-58). The changing climate for public opinion research. Public
Opinion Quarterly, 21, 23-27.
Gallup,
G. (1960). Foreword. In J.M. Fenton, In your opinion: The managing
editor of the Gallup Poll looks at polls, politics, and the people
from 1945 to 1960 (pp. vii-xi). Boston: Little, Brown and Co.
Gallup,
G. (1968a, November 18). Gallup: Humphrey "probably" would have won, if
the election had come a few days later: Interview with Dr. George
Gallup. U.S. News and World Report, 65, 132-133.
Gallup,
G. (1968b, November 4). How Dr. Gallup sees the campaign homestretch:
Interview with a veteran pollster. U.S. News and World Report,
65, 40-41.
Gallup,
G. (1968c, July 29). Interview with Dr. George Gallup: '68 election
size-up. U.S. News and World Report, 65, 30-34.
Gallup,
G. (1972). The sophisticated poll watcher's guide. Princeton, NJ:
Princeton Opinion Press.
Gallup,
G. (1978, August). Six political reforms most Americans want. Reader's
Digest, 113, 59-62.
Gallup,
G., & Rae, S.F. (1940). The pulse of democracy: The public opinion
poll and how it works. New York: Simon and Schuster.
Gallup
Poll Monthly, No. 304 (1991, January).
Gallup
Poll Monthly, No. 326 (1992, November).
Ginzburg,
B. (1944, December 16). Dr. Gallup on the mat. The Nation, 159,
737-739.
Hawkins,
D.I., & Coney, K.A. (1981). Uninformed response error in survey
research. Journal of Marketing Research, 18, 372.
Herbst,
S. (1993). Numbered voices: How opinion polling has shaped American
politics. Chicago: University of Chicago Press.
Hogan, J.M. (1994). The nuclear freeze
campaign: Rhetoric and foreign policy in the telepolitical age. East
Lansing, MI: Michigan State University Press.
Hogan, J.M., & Smith, T.J., III. (1991).
Polling on the issues: Public opinion and the nuclear freeze. Public
Opinion Quarterly, 55, 534-569.
Jamieson,
K.H. (1992). Dirty politics: Deception, distraction, and democracy.
New York: Oxford University Press.
Ladd, E.C.
(1980). Polling and the press: The clash of institutional imperatives.
Public Opinion Quarterly, 44, 574-584.
Ladd, E.C.,
& Benson, J. (1992). The growth of news polls in American politics. In
T.E. Mann & G.R. Orren (Eds.), Media polls in American politics (pp.
19-31). Washington, DC: Brookings Institution.
MacKuen,
M.B. (1984). The concept of public opinion. In C.F. Turner & E. Martin
(Eds.). Surveying subjective phenomena (Vol. 1, pp. 236-45). New York:
Russell Sage Foundation.
McDonald,
D. (1962). Opinion polls: Interviews by Donald McDonald with Elmo
Roper and George Gallup. Santa Barbara, CA: The Fund for the Republic.
Meier,
N.C., & Saunders, H.W. (Eds.). (1949). The polls and public opinion.
New York: Henry Holt and Co.
Moore,
D.W. (1992). The superpollsters: How they measure and manipulate
public opinion in America. New York: Four Walls Eight Windows.
Mosteller,
F., Hyman, H., McCarthy, P., Marks, E.S., & Truman, D.B. (1949). The
pre-election polls of 1948: Report to the committee on analysis of
pre-election polls and forecasts. New York: Social Science Research
Council.
Mueller, J.
(1994). Policy and opinion in the Gulf War. Chicago: University of
Chicago Press.
Nelson,
J.S., Megill, A., & McCloskey, D.N. (Eds.). (1987). The rhetoric of
the human sciences: Language and argument in scholarship and public
affairs. Madison, WI: University of Wisconsin Press.
Patterson, T.F. (1994). Out of order. New York: Vintage Books.
Payne,
S.L. (1951). The art of asking questions. Princeton, NJ: Princeton
University Press.
Robinson, J.P., & Meadow, R. (1982). Polls apart. Cabin John, MD: Seven
Locks Press.
Rogers,
L. (1949). The pollsters: Public opinion, politics, and democratic
leadership. New York: Alfred A. Knopf.
Roll, C.W.,
& Cantril, A.H. (1980). Polls: Their use and misuse in politics. Cabin
John, MD: Seven Locks Press.
Roper,
B.W. (1980). The impact of journalism on polling. In A.H. Cantril
(Ed.), Polling on the issues: Twenty-one perspectives on the role of
opinion polls in the making of public policy (pp. 15-19). Cabin John,
MD: Seven Locks Press.
Rugg, W.D.
(1941). Experiments in wording questions: II. Public Opinion
Quarterly, 5, 91-92.
Schuman,
H., & Presser, S. (1981). Questions and answers in attitude surveys:
Experiments on question form, wording, and context. New York: Academic
Press.
Schuman,
H., & Scott, J. (1987, May 22). Problems in the use of survey questions
to measure public opinion. Science, 236, 957-959.
Shelley,
M.C., & Hwang, H. (1991). The mass media and public opinion polls in
the 1988 presidential election: Trends, accuracy, consistency, and
events. American Politics Quarterly, 19, 59-79.
Singer,
E. (1988). Controversies. Public Opinion Quarterly, 52, 576-559.
The black
and white beans. (1948, May 3). Time, 51, 21-23.
U.S.
House. (1945). Campaign expenditures: Hearings before the committee to
investigate campaign expenditures . . . seventy-eighth Congress,
second session, on H. Res. 551. (Part 12, December 12, 1944).
Washington, DC: Government Printing Office.
U.S.
Senate (1953). Overseas information programs of the United States:
Hearings before a subcommittee of the committee on foreign relations .
. . eighty-third Congress, first session, on overseas information
programs of the United States. (Part 2, April 1, 1953). Washington,
DC: Government Printing Office.
Walden,
G.R. (1990). Public opinion polls and survey research: A selective
annotated bibliography of U.S. guides and studies from the 1980s. New
York: Garland Publishing.
Wheeler,
M. (1976). Lies, damn lies, and statistics: The manipulation of public
opinion in America. New York: Dell.
GEORGE GALLUP AND THE RHETORIC OF SCIENTIFIC DEMOCRACY
Communication Monographs, June 1997, vol. 64 No. 2; p. 161-179