Why NZ voters should beware of reading too much into political polls

A word of caution: don’t treat opinion polls as gospel, and try not to let them become self-fulfilling prophecies. Image: Getty Images/The Conversation

ANALYSIS: By Grant Duncan, Massey University

With a new prime minister sworn in and a cabinet reshuffle imminent, it is no exaggeration to say the election year in Aotearoa New Zealand has begun with a bang. Already the punditry and speculation are ramping up, with anticipation building for the first opinion polls.

There will be more polls to come, of course, but a word of caution is in order: don’t treat them as gospel, and try not to let them become self-fulfilling prophecies.

At this point, we cannot predict who will form New Zealand’s next government, and it could yet be a tight race.

Furthermore, political polling has not had a stellar record in recent times. Former prime minister Jim Bolger’s famous remark from 1993, after he didn’t get the election majority he expected, still resonates: “Bugger the polls.”

It’s not just a local phenomenon, either. The results of the Brexit referendum and the Trump–Clinton presidential contest in 2016, and the 2019 Australian election, were all out of line with preceding opinion polls.

In 2020, the US presidential polls were off by about four percentage points. And the 2022 US midterm elections didn’t produce the landslide (or “red tsunami”) many Republicans had predicted.

Election night 2020
Election night 2020 . . . polls consistently underestimated the Labour Party’s eventual majority. Image: Getty Images/The Conversation

The 2020 election miss
It is a similar story in Aotearoa New Zealand. In 2020, the polls immediately prior to the election overestimated the National vote and underestimated Labour’s.

Taking the averages of the results of all six polls published during the month before election day, National emerged on 30.9 percent and Labour on 47.2 percent. In the final three polls during the two weeks when advance voting was open, the averages were National 31.4 percent and Labour 46.3 percent.

The gap was closing and Labour would land on about 46 percent, or so it seemed. As Labour’s trend in the polls since mid-2020 was already downward, 45 percent looked plausible. But predictions based on the opinion polls were significantly wrong.

Labour’s election result was 50 percent, National’s only 25.6 percent.

The polls in the final fortnight overestimated National by an average of 5.8 percentage points and underestimated Labour by 3.7 points. The Green and Māori parties were also underestimated (by 1.1 and 0.7 points, respectively).
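Those over- and underestimates follow directly from the figures already quoted: the final-fortnight poll averages compared with the official results.

```python
# Deriving the polling errors from the article's own numbers:
# final-fortnight poll averages vs the 2020 election results (percent).
poll_avg = {"National": 31.4, "Labour": 46.3}
result   = {"National": 25.6, "Labour": 50.0}

# Positive = the polls overestimated the party; negative = underestimated.
errors = {party: round(poll_avg[party] - result[party], 1) for party in poll_avg}
print(errors)  # National +5.8, Labour -3.7
```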

There were even bigger failures in polls showing Green candidate Chlöe Swarbrick running third in Auckland Central with about 25 percent of the vote. Instead, she got 35 percent and won the seat.

Green MP Chlöe Swarbrick
Green MP Chlöe Swarbrick on election night 2020 . . . polls had placed her third but she won the Auckland Central seat. Image: Getty Images/The Conversation

Statistics 101
The opinion polls and the election — the only poll that counts, as the saying goes — use different methods with different samples. They are intended for different purposes, and hence their results will differ, too.

An opinion poll is a snapshot of a sample of potential voters. By the time it’s published, it’s already in the past. Surveys normally ask which party you would vote for if the election were held tomorrow.

But you may change your mind by the time you actually vote, if you vote at all.

Furthermore, surveys are prone to random error. So, no matter how scientifically rigorous, they only estimate — and cannot replicate — the relevant population. It is in the interests of the polling companies to be accurate, of course, especially when close to an election.

But we need to read their results critically.

Samples are normally about 1000 people, and pollsters try to ensure they closely resemble the demographic makeup (ideally by age, gender, ethnicity, education and location) of the eligible population, giving voters of all kinds an equal voice.

Post-survey weighting boosts results from social groups with low response rates. The proportion of the population holding a given preference is then estimated, and all such estimates are subject to variance.
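The weighting step can be sketched as follows. This is a simplified illustration with hypothetical numbers, not any polling company’s actual method: suppose under-35s make up 30 percent of the eligible population but only 15 percent of respondents, so each of their responses is up-weighted.

```python
# A minimal sketch of post-survey weighting (hypothetical numbers).
population_share = {"under_35": 0.30, "35_plus": 0.70}
sample_share     = {"under_35": 0.15, "35_plus": 0.85}

# Weight = how much each group's responses must be scaled so the
# sample's composition matches the population's.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical support for one party within each group of respondents.
support = {"under_35": 0.60, "35_plus": 0.40}

# The unweighted estimate simply follows the skewed sample...
unweighted = sum(sample_share[g] * support[g] for g in support)
# ...while the weighted estimate rebalances toward the population.
weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)

print(f"unweighted: {unweighted:.2f}, weighted: {weighted:.2f}")
```

Here the raw sample would report 43 percent support, but the weighted estimate is 46 percent, because the under-represented group leans more heavily toward the party.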

This is expressed as a margin of error, which is normally plus or minus three percentage points.

The margin of error is the range within which the pollster expects the “true” figure to fall, with the true figure landing outside that range only 5 percent of the time. In other words, pollsters are 95 percent confident the actual result will fall within that range. It is only a statistical estimate.

But the quoted margin of error doesn’t apply evenly. It holds in full only for a party polling at 50 percent; the further a party’s support is from 50 percent, the narrower its margin of error.
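For a simple random sample, the effect can be seen in the textbook formula 1.96 × √(p(1 − p)/n). This is a simplification — real polls use weighting and design effects that widen the true uncertainty — but it shows why the margin shrinks away from 50 percent:

```python
import math

def margin_of_error(p: float, n: int = 1000) -> float:
    """95% margin of error for a share p, simple random sample of size n."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# At 50 percent support the margin is about +/-3.1 points...
print(round(100 * margin_of_error(0.50), 1))  # 3.1
# ...but for a party polling near the 5 percent threshold it is only ~1.4.
print(round(100 * margin_of_error(0.05), 1))  # 1.4
```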

New NZ Prime Minister Chris Hipkins
How new Prime Minister Chris Hipkins fares in the first opinion polls of 2023 will be closely watched. Image: Getty Images/The Conversation

Beyond the margin of error
Another concern is whether respondents will give honest answers. Some may be unwilling to reveal their voting intentions, or may wilfully mislead the pollster.

And often a large proportion of a sample doesn’t know yet whether they’ll actually vote, or for whom they’ll vote. Responsible pollsters will report the percentage of “don’t know” responses.

But the conservative bias in the 2020 pre-election opinion polls fell systematically outside the margins of error, and hence was not due to random variation alone.

Apparently, pollsters did not obtain samples that resembled the population that actually voted. It looks like younger leftwing voters were especially hard to reach or unwilling to participate.

Or their election turnout may have been underestimated.

Polling companies are now using online panels to help correct such biases. We’ll have to wait for the next election’s results to judge how it’s working.

Reading the tea leaves
A series of opinion polls can reveal trends and thus serve a purpose as public information. But they’re not suited for forecasting. One result taken out of context may be misleading, so it is disappointing when major news organisations over-hype polls.

When party-vote percentages get converted into numbers of seats, journalists are reading tea leaves and not reporting news. Meanwhile, the market research firms are getting massive publicity.

Accurate or not, opinion poll results can have self-fulfilling or “bandwagon” effects on people’s voting behaviour. People might want to back a winner, or not waste their vote on a party that’s polling below 5 percent. Or some might vote for a party other than their favourite, with an eye to post-electoral negotiations.

Perhaps the best advice for voters is this: when deciding which party to vote for, try not to think about the polls. And poll-watchers should prepare for surprises on election night.

Dr Grant Duncan, associate professor, School of People, Environment and Planning, Massey University. This article is republished from The Conversation under a Creative Commons licence. Read the original article.
