Measuring Democracy: TV and polling in India

“So why should we be paying so much if we can’t get the exact number of seats right?” the news media baron asked imperiously. The Delhi election results had just been declared and the Aam Aadmi Party had won a remarkable 67 of the 70 seats. It was, in the truest sense of the term, a landslide victory. Our post-poll survey had shown AAP winning between 42 and 48 seats, a clear win but nowhere close to the final number. “A poll survey is not a maths sum, it can’t predict an exact number. We can capture a trend, but seat projections are hazardous,” I sputtered. We had spotted a distinct trend of the AAP being comfortably ahead of its rivals, but we had failed to predict the scale of the triumph. My argument didn’t cut much ice. “If we can’t get it spot on, we shouldn’t be doing polls,” was the testy response.

The above conversation only highlights the precarious relationship between polling agencies and the media, television media in particular. India has more than 390 registered news and current affairs channels, each competing with a manic frenzy for shifting eyeballs and a finite advertising pie. At the core of the competition is the concept of Television Rating Points, or TRPs, a weekly barometer of viewership figures. That in a country of 160 million-plus cable and satellite homes, fewer than 20,000 television audience meters are measuring viewership patterns is immaterial. What matters, it seems, is simply which channel is number one in a particular week in the TRP battle. Opinion polls too are part of this maddening race to be number one, this incessant desire to engage in chest-thumping in the ratings game.

Elections, after all, are a big news story; general elections every five years are possibly the biggest news story in democratic India. Naturally, every news channel sees election coverage as core to its identity. But the duration of election campaigns has got compressed in recent times, and campaigns have perhaps even become less colourful under the strict gaze of the Election Commission. Every channel covers the campaign with mostly similar images and soundbites. After all, how different can the telecast of a Narendra Modi rally be across channels? In a sense, there is a sameness to the campaign coverage across the news media. News channels need differentiators to set themselves apart in this competitive space: if the campaign can’t provide the cutting edge, then poll surveys are expected to become a USP.

It is also true to some extent that the average viewer is no longer interested in the minutiae of politics. Who the candidates in Darbhanga are, and what the issues in the Mithilanchal region of Bihar are, may be endlessly debated by journalists at an adda in the Press Club, but the viewer’s interest and attention span seem more limited. The viewer has only one real objective: to know which party is winning and who the next chief minister of the state will be. It is this need which the opinion poll/exit poll/post-poll is expected to fulfil.

Just take a look at the over-hyped promotional activity that accompanies any poll commissioned by a channel, both before and after the results. “For the most accurate, informed, comprehensive poll, turn to channel xxx,” screams one TV commercial; “For the channel you can trust to give you the fastest, most accurate exit poll, turn to channel yyy,” shouts a competing ad. When a channel does get a poll right, there is another hysterical response: “The only channel in the country that got the xx elections right!” It is almost as if the success or failure of a channel’s election news coverage is determined by a single number. Forget the hard journalism which reporters might have done by criss-crossing the country; in the end, the only thing that is projected to be of ‘real’ value to the viewer is the accuracy of the poll. It is almost as if the psephologist matters more than the reporter, editor or news anchor at election time.

In 2014, I was editor-in-chief of the IBN 18 network. We really pushed the boundaries in that general election with our news coverage, commissioning a range of stories and programmes that stretched over an extended period of almost six months. When it was all over, there was immense satisfaction at a job well done. Till I met a marketing executive a few days after the election results were out. “You guys did a good job of the elections, but you know the only guy who predicted that Mr Modi would win more than 300 seats was this Today’s Chanakya on News 24. He is the real media star of this election!” said the executive.

I didn’t quite know how to respond. Our journalists had done award-winning programmes, but the only thing that seemed to matter to the outside world was which pollster had predicted the right result (incidentally, our survey team, the redoubtable CSDS-Lokniti, had predicted almost 300 seats and a clear majority for the NDA, but even that wasn’t considered good enough by this gent). “You know there is more to an election than just a number, sir,” I helpfully pointed out.

And yet, the fact is, the 2014 general elections catapulted an unknown Today’s Chanakya into some kind of political rock star, a pundit who was on top of the game. I knew the individual by his original, rather more prosaic name, VK Bajaj. He had come to me as a young man ahead of the 2003 state elections, while I was at NDTV, with a packet of Bikaneri bhujia and scraps of paper with numbers scribbled on them. He apparently had a family business to manage, but his real passion was as an amateur psephologist. “I can predict an election, just give me one chance,” he said confidently. I asked him to show me his raw data and methodology. “Sir, I can’t reveal my data, but you must trust me, I have people in every constituency giving me feedback.”

NDTV was headed by Dr Prannoy Roy, the original pollster turned face of Indian news TV. “You really want me to trust someone to do a poll who won’t share his method or data!” he said incredulously. We packed away Mr Bajaj with the promise that we would stay in touch. Over the next few years, Mr Bajaj, or Today’s Chanakya, did more polls for other channels and was even much sought after by political parties: he got some polls right, the occasional one wrong. He was clever enough to remind you of his predictions when he got it right, and rarely contacted you when he was off the mark. Till 2014 happened and he was suddenly the toast of the town.

“Shouldn’t we get Today’s Chanakya to do our poll?” a channel proprietor asked me ahead of the 2015 Bihar elections. “Well, only if he is willing to share his raw data and methodology,” I said. As it turned out, Today’s Chanakya had already signed up with another channel. As it also transpired, the man who had got the 2014 general elections spot on ended up getting the Bihar elections badly wrong, predicting an NDA win. He did apologise gracefully but then suggested that a “computer error” was responsible. “We punched in some wrong figures,” was the tame explanation. A more rational explanation would have been an admission that pollsters are not bestowed with divine powers; they can get elections wrong!

In a sense, the rise of a Today’s Chanakya reflects how much the media’s understanding of polling has changed over the years. The first major opinion poll for the national media was done in 1980, when Dr Roy, Dorab Sopariwala and their team were commissioned by India Today magazine to predict the general elections. The poll was done with great rigour, with a serious attempt being made to provide a state-wise analysis of voter behaviour. Statistical measures like the Index of Opposition Unity (IOU) were developed to estimate the likely poll results. Since it was being done for print, articles were written to justify the numbers. It was, by any reckoning, an exhaustive exercise: it was also, from what one gathers, a less expensive one, since the cost per sample was lower at the time.
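For readers unfamiliar with the idea, the IOU simply measures how consolidated the opposition vote is: the largest opposition party’s share of the combined opposition vote. The sketch below only illustrates that arithmetic with hypothetical vote shares; it is not the actual India Today model, which applied the index within the state-wise analysis described above.

```python
def index_of_opposition_unity(opposition_vote_shares):
    """Index of Opposition Unity (IOU): the largest opposition party's share
    of the combined opposition vote, expressed as a percentage.
    A rough illustration only, using made-up numbers."""
    total = sum(opposition_vote_shares)
    if total == 0:
        return 0.0
    return 100.0 * max(opposition_vote_shares) / total

# Hypothetical vote shares (%) of three opposition parties in a state
print(index_of_opposition_unity([30.0, 10.0, 5.0]))  # ~66.7: fragmented opposition
print(index_of_opposition_unity([40.0, 3.0, 2.0]))   # ~88.9: consolidated opposition
```

The intuition is straightforward: the more consolidated the opposition vote (a higher IOU), the more efficiently opposition votes convert into seats against the incumbent.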

The Roy-Sopariwala model worked well for a decade and more. It was almost a monopoly situation: there were no other pollsters of any great pedigree, and there were limited subscribers too, since television was dominated by Doordarshan and most print publications had limited budgets. It is only post-1995, as the political system began to splinter and the Congress dominance over national politics came to an end, that the IOU model came under strain. At the same time, the proliferation of news channels after 2004 meant that there was now a greater demand for polls.

When I started CNN IBN in 2005, one of the first persons I contracted was Prof Yogendra Yadav of CSDS-Lokniti. Over the years, I had developed a respect for Prof Yadav’s ability to combine political theory with a passion for elections and number-crunching. He had a rare gift of being able to provide a cogent explanation of election trends without being trapped in academic jargon. For almost a decade, he and I became an election jugalbandi: as a journalist, it was enjoyable because it was a genuine attempt at making elections intelligible and accessible to television audiences. Prof Yadav eventually drifted into active politics: maybe he saw participation in politics as his true calling, but then maybe he just grew fatigued with the changing nature of poll programming on television.

I recall one instance which I believe was a turning point in our partnership. It was counting day in the 2012 UP elections: the initial trends in the first half hour, based primarily on postal ballots, showed the BJP leading. Our post-poll had predicted a Samajwadi Party win. Buoyed by the early trends, some BJP leaders were already claiming victory. “You will have to apologise to the nation for getting your poll wrong,” shrieked a BJP leader while reacting to the numbers. I could see the nervousness on Prof Yadav’s face. The final results saw the Samajwadi Party score a thumping win: no apology had to be given as the BJP guest did a disappearing act! But I sensed that Prof Yadav probably realised that day that being a politician was perhaps less nerve-wracking than being a pollster on counting day!

The Roys and the Yadavs saw polling as primarily an exercise of the mind: their work reflected a commitment to the science of polling and measuring voter behaviour. Today, in the age of T20 television, where news channels work at a frenetic pace, most polls are less detailed, with the focus almost entirely on the final number and not on the methodology. Which is also perhaps why many channels are reluctant to invest large amounts of money in rigorous nationwide polls that have bigger sample sizes and require greater manpower. Forget the track record or credentials of a pollster; it almost seems as if even a fly-by-night polling operation is preferable so long as it can be done on the cheap. Almost every conversation one has had with news managers in different channels over the years has been about how we can cut costs while doing polls. “Can’t we keep a smaller sample size?” is the question most frequently asked. “Can we get the results as quickly as possible?” is another oft-repeated question.
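What a smaller sample actually costs in accuracy can be put in rough numbers. The sketch below uses the textbook margin-of-error formula for a simple random sample; real election surveys use stratified, clustered designs, so the effective error is usually larger than these figures suggest.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error (in percentage points) for a vote-share
    estimate from a simple random sample of size n, assuming p around 50%."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

for n in (1000, 5000, 20000):
    print(f"n = {n:>6}: +/- {margin_of_error(n):.1f} points")
# n =   1000: +/- 3.1 points
# n =   5000: +/- 1.4 points
# n =  20000: +/- 0.7 points
```

A three-point margin may sound tolerable, but in a state where two alliances are separated by a couple of percentage points of vote share, it is the difference between calling the election and guessing.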

Deadline pressures mean that exit polls in particular are hostage to television’s relentless desire to be first with a story rather than the most accurate. This pressure means that a pollster is being asked to be a sprinter rather than a marathon runner: a studious exercise is quickly transformed into a market-driven 100-metre dash for TRPs. The results are potentially disastrous. Voting officially ends in most states at 5 pm. The Election Commission says that television channels can start announcing results half an hour after voting has ended. Theoretically, most channels can start broadcasting the results from 5.30 pm; in reality, people are often lining up at polling booths and continuing to vote till well after 6 pm. Which means that an exit poll could well put out its figures while voting is still going on, without taking into account the last-minute rush to vote.

This happened in the 2015 Delhi elections, a single-day poll, where voting continued till after 7 pm because of the long queues at polling booths. This did not stop news channels from predicting the results at 6 pm itself, simply because competitive pressures meant that each channel wanted to be the first with its exit poll. Are we then surprised that many pollsters failed to get their numbers right?

What makes it even more hazardous is the fact that translating raw data into accurate seat projections has always been a high-risk exercise, especially in states where the gap in voting percentage is very small. Many exit polls got their 2016 Tamil Nadu assembly election projections wrong because they simply could not measure the ground reality that Jayalalithaa’s AIADMK was way ahead of the DMK amongst women voters. Tamil Nadu, with its razor-thin vote percentage differences between the two main Dravida parties, has always been a pollster’s nightmare: the pressure of then delivering an accurate number makes the task harder still.

As the electoral map of the country gets more fragmented, as elections become more “localised”, with increasing sub-regional variations within a state, it isn’t easy to find a robust statistical model that will convert vote share into seats with any degree of certainty. The likes of the now Chennai-based Dr Rajeeva Karandikar have made serious attempts to find such a model, but there is no guarantee that they will always be right. And yet, the pollster is expected to deliver numbers with accuracy and speed for post-election bragging rights.
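To see why the conversion is so fragile, consider the simplest possible approach: a uniform-swing projection. The sketch below is a minimal illustration with hypothetical numbers, not Dr Karandikar’s actual model; it shows how a swing of a couple of percentage points, applied evenly, can flip constituencies either way.

```python
def project_seats(baseline, swing):
    """Uniform-swing seat projection: apply the same statewide swing to every
    constituency's previous vote shares and count which party leads in each.
    A minimal sketch; serious models correct for the regional and sub-regional
    variations that uniform swing ignores."""
    seats = {}
    for constituency in baseline:
        adjusted = {party: share + swing.get(party, 0.0)
                    for party, share in constituency.items()}
        winner = max(adjusted, key=adjusted.get)
        seats[winner] = seats.get(winner, 0) + 1
    return seats

# Hypothetical previous vote shares (%) in three constituencies
baseline = [
    {"A": 42.0, "B": 40.0, "C": 12.0},
    {"A": 38.0, "B": 45.0, "C": 10.0},
    {"A": 35.0, "B": 36.0, "C": 22.0},
]
# Suppose a survey suggests party A gains 3 points statewide and B loses 2
print(project_seats(baseline, {"A": +3.0, "B": -2.0}))  # {'A': 2, 'B': 1}
```

Even in this toy example, the third seat changes hands on a margin of two points, well within any survey’s margin of error, which is exactly the Tamil Nadu problem described above.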

Sadly, there has been little attempt to move away from this crazy “let’s be first with the numbers” philosophy, even though there is scant evidence to show that a viewer values a channel being first more than being accurate. I had once suggested to a news channel proprietor that, even for a single-day poll, we should go in for a meticulously done post-poll rather than a hastily organised exit poll. “My personal experience is that a post-poll is much more accurate. We can always put out the numbers the next day or even 48 hours later, so that we have more time to assess the trends,” was my suggestion. “But who will watch us then, once every other channel has already given their numbers?” was the counter. Needless to say, accuracy was compromised yet again at the altar of speed.

It is this race to the bottom that should worry all media practitioners. Have we reduced even polling on television to a tamasha where everyone shouts and screams, even as we provide little real benefit to a viewer who is looking for an informed debate and understanding of the results? Is commerce the driving force behind made-for-TV instant polls? Indeed, in our obsession with numbers, there is very little focus on the underlying voter trends, the community, caste or regional patterns, the explanatory nuggets of information that can help make sense of the emerging big picture. Should we, for example, not be trying to explain just why Mr Modi scores so heavily over a much younger Rahul Gandhi in the 18-23 age group of first-time voters? Or just why Dalit and tribal voters have drifted away from the Congress in recent elections? These micro trends can only be captured by a detailed poll that goes beyond empty number-crunching.

And yet, the overarching belief is that the micro detailing is meant only for the political aficionado, while the average viewer is only clamouring for information about the eventual winner. In other words, a deeper knowledge of voter behaviour is immaterial; what matters is a quick-fix satisfaction of the need for numbers that tell us who is winning. I call it part of the “McDonaldisation” of news, where even polls are expected to be like a fast food burger, to be instantly consumed. It’s almost as if the pollster is a stock market broker, asked to satiate his client’s need for immediate profits on Dalal Street by telling him which stock is up or down, but without giving any additional information that might explain the big picture.

And yet, I do believe that there are any number of viewers and readers who do want the back story of an election victory or defeat, who want to know not just who is winning, but also why and how. The proof of this lies in the poll trackers we did at CNN IBN for an entire year ahead of the 2014 general elections. In partnership with CSDS-Lokniti, the polls gave a quarterly glimpse of how the nationwide picture was changing. With a team of experts that included journalists, political scientists, sociologists and historians, we were able to map just how the mood was changing across India and, importantly, why it was being transformed. Every quarterly polling cycle was a five-day exercise on prime time: we got decent ratings but, more crucially, acknowledgment from peer groups and beyond for being the gold standard in poll analysis. Surely, in a competitive market, perception matters as much as ratings? Even sponsors, who normally shy away from polls for fear of being identified too closely with the numbers, were keen to be part of this landmark exercise.

Maybe it was a one-off, because rather than raising the bar since the 2014 elections, there is evidence that media groups are just not interested in investing in high-quality polls. Recently, while debating the issue, a news proprietor showed me the TRP figures for exit poll day in the 2015 Bihar elections. “Look, the channel which got the highest ratings didn’t even do its own poll but only took the numbers from other channels and did a poll of polls. Why then do we need to spend large amounts of money and do our own poll? Let’s just take the numbers from other channels and do the poll,” was the rather businesslike view.
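For what it is worth, the arithmetic behind such a “poll of polls” is nothing more than averaging the other channels’ projections, as the sketch below shows with entirely hypothetical seat numbers.

```python
def poll_of_polls(projections):
    """Average several channels' seat projections, party by party.
    The figures used here are hypothetical, purely to illustrate the aggregation."""
    parties = {party for proj in projections for party in proj}
    return {party: round(sum(proj.get(party, 0) for proj in projections) / len(projections))
            for party in sorted(parties)}

channel_projections = [
    {"NDA": 120, "Mahagathbandhan": 115},
    {"NDA": 130, "Mahagathbandhan": 105},
    {"NDA": 110, "Mahagathbandhan": 125},
]
print(poll_of_polls(channel_projections))  # {'Mahagathbandhan': 115, 'NDA': 120}
```

No fieldwork, no sample, no methodology: just a mean of other people’s numbers, which is precisely why it is so much cheaper than commissioning a survey.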

But perhaps the biggest shocker of the Bihar elections was when a news network chose to abandon a commissioned survey because it reportedly didn’t agree with the numbers that were showing up. The survey had given the Nitish-Lalu combine a whopping 180 seats in the 243-member assembly. The official response was that the channel wasn’t convinced about the survey figures. The unofficial version was that the channel was worried that projecting such a big win for the non-BJP combine, at a time when there was a BJP government at the centre, might work against the network’s business interests, especially if the poll turned out to be wrong. After all, hadn’t Prime Minister Modi described polls ahead of the Delhi assembly elections that showed the BJP losing as evidence of a “bazaroo” (sold) media? Hadn’t the Congress party boycotted opinion poll programmes ahead of the 2014 elections because the figures were not to its liking? Why then take a major risk in a political environment where a poll finding is also seen in partisan terms? As it transpired, the Bihar poll that never saw the light of day turned out to be the most accurate!

Which leaves one to ask: if a combination of political pressures, marketing compulsions and competitive channel wars is going to decide the scope and nature of polling in India, then is it all really worth it? Should serious polling revert to what it was originally meant to be: a genuine attempt by researchers to understand the complexities of voter behaviour in India? And should one then just see television polls as yet another eyeball-grabbing “event”, part of the 24x7 news wheel where today’s news is tomorrow’s history? “You can never be wrong for long in news TV,” a senior editor once told me, “there is always another breaking news story around the corner!” Is it any wonder then that serious polling on television, as part of this breaking news ecosystem, is in danger of a breakdown?
