During the first and last presidential debates, CNN added
a new dimension to campaign coverage by monitoring the political
pulse of 480 registered voters via instant interactive polling.
Using a push-button phone, the 480 randomly selected survey respondents
were instructed to express their immediate like or dislike
for what each of the presidential candidates said throughout
the first debate, held on Oct. 11, and the last debate,
held on Oct. 19.
Respondents registered their opinions by calling a toll-free
800 number and punching any number from one through nine
on their telephone keypads, creating a scale that ranged
from a highly negative to a highly positive reaction. Responses
were collected in Omaha, Neb., by Call Interactive and fed
to Decision Labs, a company based in Chapel Hill, N.C.,
that specializes in real-time response polling and was
set up for the debates in CNN's Atlanta newsroom.
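The one-through-nine keypad scale can be read as a signed reaction score. A minimal sketch of that conversion, assuming the midpoint of 5 is neutral (the article says only that the scale ran from highly negative to highly positive, so the exact mapping is an assumption):

```python
def keypad_to_score(digit: int) -> int:
    """Map a keypad press (1-9) to a signed reaction score.

    Assumed mapping: 1 -> -4 (highly negative), 5 -> 0 (neutral),
    9 -> +4 (highly positive).
    """
    if not 1 <= digit <= 9:
        raise ValueError("keypad responses must be 1-9")
    return digit - 5

# A highly negative press and a highly positive press:
assert keypad_to_score(1) == -4
assert keypad_to_score(9) == 4
```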
The voters were classified as Clinton, Bush or Perot supporters
or undecided. Their responses were plotted on a graph similar
in appearance to a biofeedback chart on which the needle
continuously moved as people registered their reactions
to the candidates' words.
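The moving colored lines amount to a per-group running average of responses over time. A hypothetical sketch of that aggregation (the one-second bucketing and the tuple format are assumptions for illustration, not details from the article):

```python
from collections import defaultdict
from statistics import mean

def group_lines(responses):
    """Aggregate (second, group, score) samples into one line per group.

    responses: iterable of (timestamp_sec, group, score) tuples, where
    group is e.g. "Clinton", "Bush", "Perot" or "undecided" and score
    is a signed reaction value. Returns {group: [(sec, mean_score), ...]}.
    """
    buckets = defaultdict(list)
    for sec, group, score in responses:
        buckets[(group, sec)].append(score)
    lines = defaultdict(list)
    # Walk the buckets in time order so each group's line is chronological.
    for (group, sec), scores in sorted(buckets.items(), key=lambda kv: kv[0][1]):
        lines[group].append((sec, mean(scores)))
    return dict(lines)

samples = [
    (0, "Clinton", 3), (0, "Clinton", 1), (0, "Bush", -2),
    (1, "Clinton", 2), (1, "Bush", -1), (1, "undecided", 0),
]
lines = group_lines(samples)
assert lines["Clinton"] == [(0, 2), (1, 2)]
```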
"This polling method provides the ability to understand
much more clearly the way specific pieces of a speech affect
people," said Jack Ludwig, vice president and chief methodologist
of the Gallup Organization, which cosponsored the project
with CNN through a Markle Foundation grant.
The rapid-response survey found that voters liked Clinton's
middle-class tax cut but were not so enthusiastic about
his handling of the Arkansas budget. They supported Bush's
proposal for allocating 10 percent of income tax revenues
toward reducing the budget deficit, but gave mixed reviews
for his attacks on Clinton's character. Nearly all voters
reacted positively to Perot's performances (though the number
of Perot supporters in the sample was so small that the
results were ultimately not reported).
The pollsters instantly converted the polled data into a
graph, with each candidate's group of supporters and the
group of undecided respondents represented by a different
colored line. They then superimposed the graph onto a videotape
of the candidates made during the debate so that, after
each debate, viewers could see the instant reactions of
respondents as the debate unfolded.
During the CNN roundup a few minutes after each debate, public
opinion analyst William Schneider interpreted several graphs
measured at different points during the program. As Schneider
analyzed the survey, the network broadcast a tape of the
candidate speaking on one side of the screen while the graph
of responses to what was being said played on the other
side of the screen.
The debate reaction poll was the first time a presidential debate
had been tracked using this instant-response, interactive
method. While television networks, advertisers and political
candidates have used these interactive survey methods in
controlled laboratory settings to determine audience response
to their programming, those scientific methods had not
previously been applied on a second-by-second basis to viewers
across the country.
In previous real-time television viewer surveys, respondents
have participated voluntarily, by calling either a toll-free
800 number or a toll 900 number. Such call-in surveys are unscientific,
and therefore unreliable, because they do not use randomly
selected samples. But pollsters for CNN and Gallup did a
good job in selecting a random sample of registered voters
for their survey, said Ludwig, because it reflected the
demographics of other polls conducted at the time.
Ludwig cautioned against "projecting too far" with the polling
results. Due to the nature of the poll, the sample was prone
to a number of flaws. Its small sample size, for instance,
which was reduced even further by dividing respondents into
various groups, increased the margin of error well above
an acceptable rate for a normal survey. Also a problem was
the requirement that respondents have a push-button telephone
located near a television, which may have excluded certain voters.
There are also questions about what the poll actually measured.
Survey instructions asked participants to react "positively
or negatively to what you are hearing and seeing during
the debate." But were voters deciding on what they saw or
what they heard? Were they reacting to platform positions,
rhetorical skill or the candidates' taste in ties? The method,
Ludwig concluded, "doesn't make it clear what they are reacting
to, and it's impossible to tease that out."
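Ludwig's sample-size caution can be quantified with the standard margin-of-error formula for a proportion, roughly z * sqrt(p(1-p)/n) at 95 percent confidence. A sketch of the arithmetic (the subgroup size of 120 is an illustrative assumption, not a figure from the article):

```python
from math import sqrt

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion p with sample size n."""
    return z * sqrt(p * (1 - p) / n)

# Full sample of 480 respondents vs. a hypothetical subgroup of 120:
full = margin_of_error(480)  # about +/- 0.045 (4.5 points)
sub = margin_of_error(120)   # about +/- 0.089 (roughly 9 points)
assert sub > full
```

Splitting the 480 respondents into candidate-support groups roughly doubles the margin of error for each group, which is the sense in which the subgroups were "well above an acceptable rate."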
Despite the survey's shortcomings, voter preferences did seem to
correspond with those in larger, more conventional surveys,
Ludwig said. "The movement of the lines does appear to make
sense." According to Ludwig, the positive feedback to the survey indicates
that interactive polling could become a permanent fixture
in future political debates. "It's an aid that we don't
have otherwise for looking at political messages," he said.