A perennial topic on this blog is research. In my interview last week with former NBC President Warren Littlefield, he discussed it and the flaws in the system as he saw them. I received an interesting Friday Question right after the election, supporting the notion of research and seeking my response. (I'm also trying to answer a few more of these questions and catch up a little.) Here's his Q and my A.
It’s from Hecky:
Given the success of sabermetrics and deft statistical analysis in both sports and now politics (Nate Silver correctly predicted everything but the Senate race in North Dakota), how can you be so opposed to research testing in principle when it comes to entertainment? Certainly it's true that a lot of research is done poorly (e.g. bad methodology, unwarranted conclusions/inferences, sloppy handling of the data, etc.). The companies doing it for profit don't make their methods publicly available, so who knows if what they're doing is any good. But I don't think that justifies a wholesale rejection of the entire enterprise. Maybe we just haven't seen a Nate Silver of Nielsen yet.
All this stuff about "going with your gut" and just finding "great" material and having "vision" -- unquantifiable rules of thumb -- strikes me as complete hooey. It's the exact same sort of dogma that got so deliciously panned in "Moneyball" and in all the election post-mortems about FOX News predictions over the last two days. When done right, statistical research methods work, and it doesn't really matter what's being analyzed. It could be baseball, TV, the stock market, or politics. TV is about making money by generating ratings. And I don't see why we shouldn't expect proper research to aid in achieving that goal. It's just a matter of figuring out the right parameters by which to measure the performance of one's algorithm.
Thank you for your question, Hecky. Let me first say this: in 1974 I worked in the NBC research department. My educational background emphasized math. I appreciate the value of statistics and have seen the process of audience testing first hand from both sides -- as the network and as a producer. Okay -- that's my disclaimer. Here's my answer:
How do you measure art, Hecky? How do you assign a numeric value to creative endeavors? Yes, you can predict who will win an election. It's simple. People tell you they'll vote for candidate A or candidate B and you put a check in the appropriate column. If you've asked the right people, if you've asked a large enough sample of people, and they're truthful, then you can make a prediction with relative assurance (always taking into account a margin for error).
When you're analyzing baseball players there are intangibles, but their ultimate value can be determined by performance. How many hits in how many at bats? Strikes vs. balls? How many stolen bases and how many times caught stealing? They're all numbers -- numbers that don't lie. MONEYBALL found statistics that were overlooked. They discovered undervalued players. And in MONEYBALL, these statistics were used merely as one form of input. Scouting and intangibles were still taken into account, just not to the same extent. And the advantage the Oakland A's had was that no one else knew these statistics, which gave them a competitive edge. Today every team knows those same formulas. So you better have someone with an eye for talent to go along with the computer readouts.
But turning to entertainment --
When a joke doesn't get a laugh, is it because the writer isn't good, the actor didn't deliver the line well, the audience doesn't like that actor, the audience doesn't like the situation, the audience doesn't understand the joke, the audience is tired because it's late at night, the air conditioning isn't working, they've heard a similar joke, they didn't hear the joke correctly, they're biased against jokes of that topic, they were distracted by something else going on on the set, a camera blocked their view, they were preoccupied by problems at home, or any combination of the above? Plus, the audience you're testing has little dials and is asked to twist them to the right or left depending on how much they liked said joke -- what's the standard? Two people may find the joke equally funny but one person gives it a +4 and the other gives it a +7. Is one guy overly generous or is the other overly tough?
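That +4-versus-+7 calibration problem is one of the few pieces of this that statisticians actually know how to handle: you can normalize each rater's dial readings against that rater's own habits, so a stingy scorer and a generous scorer become comparable. A minimal sketch with invented dial readings (it fixes the scale problem, but notice it still can't tell you WHY a joke failed):

```python
import statistics

def zscore(ratings):
    """Normalize one rater's dial readings to mean 0, stdev 1."""
    mu = statistics.mean(ratings)
    sigma = statistics.stdev(ratings)
    return [(r - mu) / sigma for r in ratings]

# Rater A is stingy, rater B is generous -- the raw numbers disagree
# on every joke (A's second joke gets a 4, B's gets a 7)...
rater_a = [2, 4, 1, 3]
rater_b = [5, 7, 4, 6]

# ...but after per-rater normalization, both rank the jokes identically.
print(zscore(rater_a))
print(zscore(rater_b))
```

Both lists print the same normalized scores, since each rater's ratings have the same shape around their own average. The ratings themselves are made up for illustration.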
So when a test audience is watching your show and that joke comes on the screen and a line on a graph determines how funny it supposedly is – how accurate do you think that is? And how helpful is that number in determining why the joke didn’t rate higher?
Okay, let’s say you ask each audience member why he didn’t laugh at the joke. Here’s the answer you’re going to get most of the time: it wasn’t funny. Yeah, we know it wasn’t funny. Why? You think they can tell you? I’ve watched focus groups where people didn’t like characters because of their shoes.
On the other hand, poll a bunch of people on who they plan to vote for and they can tell you. And if you ask why, they can generally give you an answer. They like his tax plan. They think the other guy isn't a friend of Israel's. They always vote along party lines. Their reasons aren't subconscious. When you laugh at a joke, when you hear a new band, when you see a certain painting, how often can you accurately define and articulate what you like about it and to what extent? And then digitize it.
That’s what program research attempts to do. It takes your show and breaks it down into which characters the audience thought they liked, which jokes they thought they liked, and based on that – how popular the show might be.
There is one statistic I would love to see. It’s also the one statistic these audience research firms won’t show you. HOW MANY TIMES HAVE YOU BEEN WRONG?
Since the failure rate on television shows is over 90%, and these were the shows that all tested well, my guess is that the number they're keeping from us is also well over 90 percent. So Hecky, I disagree with your theory that testing works. It doesn't always.
Now do the math. If something doesn’t work 90+% of the time why keep doing it?
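There's also a base-rate wrinkle hiding in that math: with a 90% failure rate, even a test that carried real signal would still bless mostly flops, because the flops so outnumber the hits. A back-of-the-envelope Bayes calculation -- the accuracy figures here are invented purely for illustration:

```python
# Assumed numbers: 90% of new shows fail (from the post), and suppose
# testing were even 70% accurate in both directions -- it passes 70%
# of eventual hits and flunks 70% of eventual flops.
base_fail = 0.90           # base failure rate of new shows
p_pass_given_hit = 0.70    # assumed sensitivity of the test
p_pass_given_flop = 0.30   # assumed false-positive rate

p_hit = 1 - base_fail
p_pass = p_pass_given_hit * p_hit + p_pass_given_flop * base_fail

# Bayes' rule: of the shows that tested well, how many are actual hits?
p_hit_given_pass = p_pass_given_hit * p_hit / p_pass
print(round(p_hit_given_pass, 3))  # ~0.206
```

Under those generous assumptions, roughly 80% of the shows that "tested well" still fail -- which is consistent with the track record described above without the test being pure noise.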
Nate Silver's numbers worked. His information was accurate. Karl Rove's was not. And neither was the research that said SEINFELD was a bomb and THE PLAYBOY CLUB was going to be a breakout hit.
So the answer here is not to put too much stock in audience research. It's too flawed. As Mr. Littlefield said, any show with new ideas or a hard-to-categorize premise or execution tests poorly. But show Mother Teresa assisting orphans and it will test through the roof. What would you rather watch -- that or BREAKING BAD? Guess which of those two shows the research company would recommend.
And yet the networks make programming decisions based almost SOLELY on this flawed information. And that's my big beef. So when a network president "goes with his gut" and discards research for what he believes is a good show, I say that's just as valid or more valid an indicator of whether a show will succeed. And a whole lot cheaper.
I could see political strategists going to Romney and saying you need to appeal more to women and minorities. I can’t see advisors telling Picasso he needs more blue, or telling Shakespeare that 64.6% of playgoers don’t like Hamlet because he’s indecisive.
39 comments :
That, Kenneth, is the perfect answer.
I've recently been writing a paper that's included a bunch of stuff on this, and one of the questions you don't touch on, Ken, is whether data/statistics/algorithm-driven methods can spot something entirely new. There's some very interesting stuff about using such systems to spot hits in the making in Christopher Steiner's book _Automate This_: there are companies that claim to have isolated clusters of factors that determine hits (primarily in music, but the same stuff is moving into TV and movies). So the idea is that you can use these systems to avoid wasting large amounts of money on things that will never fly.
But these are all based on large amounts of data about what has been successful in the *past*. Cue Don Draper, in MAD MEN, about the limitations of focus groups (S4e04, "The Rejected", talking to Dr Fay): "A new idea is something they don't know yet, so of course it's not going to come up as an option."
wg
I want to see numbers that compare an audience research score and the show's ratings. Do some research to see if audience research is actually worth a damn or if the studios are wasting their money doing it.
Sorry Ken, but all I can see here is you saying "people do it badly, therefore it is impossible to do well".
Wrong. People have been doing things shitty for ages. They kept failing, tweaked, got it better, tweaked, and so on. That's why math today works so well: people have been refining it for centuries and know what works and what doesn't.
That clearly hasn't been done in audience research. But I don't see why a properly implemented testing process (with peer reviewed methodologies and all that stuff) couldn't work or shouldn't be done. Because it's hard? Unacceptable.
All I know is I'd trust somebody who trusted their gut over some statistical bean counter any day of the week.
Agreed.
I was a research analyst at Frank N. Magid & Associates for a couple of years on both radio and TV projects.
My conclusion: Qualitative research is a tool misused by executives who usually do not have the power to make a decision unilaterally.
Audience research (size, demos, programs watched, etc.) is valid and reliable; but qualitative, not so much.
Qualitative depends on the interpretation...so, we get right back to "the gut."
I'm all for using metrics in a variety of different areas, but attempting to quantify art is...very disturbing.
All that could possibly yield is an even more stagnant creative environment than we have now. Wendy said it best in her post when talking about focus groups - new ideas will have even more trouble pushing through because there's no algorithm for it.
One of the beautiful (and frustrating) things about our culture is how rapidly it evolves. Awfully hard to quantify that.
Very interesting. The original UK Office tested very badly, but luckily the BBC stuck with it. I've often found the shows I end up loving took a couple of eps to "click." Seinfeld seemed a bit too broad at first to me - Jerry saying everything at the top of his voice, and Kramer was so OTT. Also Modern Family seemed a bit cheesy with its "hugging and learning" - I love them both now, but if I'd been in a test audience I wouldn't have given them good scores.
Not everybody is going to laugh at the same joke. My radio career is proof of that.
I will always fall on the side of bullshit when it comes to audience research. I trust my own gut instincts and opinions before that of some random focus group.
And don't get me started on data - which can be easily manipulated to tell whatever story you want.
There are some things that can't be quantified. And this is the problem with a creative business run solely by number crunchers.
The best executives will have equal parts of creativity and business acumen.
Good post! It reminds me of watching the DVD extras of The Godfather, when the studio spent a fortune trying to replace Pacino and adamantly swore that Brando would never be in that movie. Yet these are the people in charge of giving the green light to such upcoming movies as Monopoly and Hungry Hungry Hippos because Battleship made a couple of bucks more than its budget. You have to wonder if they ever get tired of getting things incredibly wrong.
David Lee here.
Ken is, of course, exactly right. The only two decisions I can remember making based on audience research were losing the "Wings" opening sequence with the plane flying to the accompaniment of Schubert and putting Eddie in "Frasier". While doing a "dial" test on "Wings" to see if there was audience tune out during the titles (there was, and the ratings ticked up after we changed it), the testing guy mentioned that animals always tested well. I remember Peter Casey and I decided rather cynically that we would add a dog to the next show we did. And we did. The other choice was an infant. No thanks.
I completely agree with Ken on this. There's a serious problem with collecting statistical data on something as subjective as art: Most people are not used to explaining, or even understanding, WHY they like or dislike something.
As someone who studied media, spent some years reviewing films, and who is pretty analytical by nature, I find myself getting frustrated when people write off a film or TV show for completely illogical and nonsensical reasons.
Their taste may be completely valid, but their reasoning is nearly always faulty.
For example, a friend of mine recently attempted to watch TWIN PEAKS for the first time. Now I'm somewhat biased towards that show because I know people connected to it, but I can still see all its strengths and flaws.
My friend's sole argument, and the reason she refused to watch it any further: "The acting is terrible."
Well actually, the acting isn't terrible. You may not like it, that's fair enough, but David Lynch was very deliberately going for a style. And he knows what he's doing.
"No, I'm sorry. The acting is terrible. I just couldn't watch it."
David Lynch is renowned for being able to get a good performance out of anybody. If you didn't like his decisions, that's cool. But he's an artist, and the way the actors behave is precisely what he wanted. It was a very deliberate choice.
"No. Nobody could act. I couldn't watch it. I've never seen such poor acting. They shouldn't be working."
At which point I gave up.
Somebody doing audience research with my friend could only conclude that several of the key roles needed to be recast, when actually her problem with the show was far more fundamental than that.
Now do I think that ALL audience research is pointless? No, I don't. For example, watching something with an audience you get a great feel for what's working and what isn't.
But as soon as you start trying to drill down into what worked and what didn't, you generally get nonsense back.
Alan Moore, a personal hero of mine, sums up the problem succinctly: "It's not the job of the artist to give the audience what the audience wants. If the audience knew what they needed, then they wouldn't be the audience. They would be the artists. It is the job of artists to give the audience what they need."
@ David Lee
Actually, I always really liked the opening to Wings, with the classical Schubert piece. It set it apart from all the generic 90's themes that all sounded like Deep Blue Something.
Personally, I didn't watch Twin Peaks past its first season, but I can admit it had a very specific creative voice and ambition. Sometimes, I consider giving it a second try so I can finish it.
Most good shows take some time finding their voice. My personal favorite example is probably Homeland. I started watching it because it had Clare Danes and Damian Lewis, and it was produced by the 24 staff, right on the heels of that show's ending. It wasn't until the fourth episode that I really started to get into it, and realize how special these characters are.
Johnny, something similar happened to me with a friend shortly after "Ugly Betty" first went on the air. He couldn't understand the show's style--the wacky color scheme, the crazy wipes, the melodramatic acting--and thought it was all nonsense (exactly the word he used). I patiently explained to him that it was adapted from a Mexican telenovela and they were replicating that style, and he should give it another chance bearing this in mind. He said he would, which usually translates to "I won't." But to my utter surprise, a week or so later he sent me an e-mail that said, "Okay, I get it now." So you see, opinions can be changed once people have the proper information.
While reading this post, the two "Fatal Attraction" endings come to mind. Personally, I prefer the original ending, but most audiences preferred the 'shock' ending. Then again, the re-shot ending was certainly a factor in the movie becoming a major hit. (Word-of-mouth.)
When it comes to the creative process, I prefer deciding from the gut to deciding from test screening results. But as you and several others have mentioned, test screenings don't always add up to success.
What do you think people were saying 20 years ago about baseball? "I'll trust the gut of the scout over those nerds any day".
What do you think people were saying about politics 10 years ago? "No way you can quantify the voting of the populace, there's too many variables. People are too hard to predict. I'll trust the political pundit over those nerds any day".
Sound familiar to what Ken is saying today? And what Barbie said a few years ago: "math is hard."
Iterate, improve and revise. You may not be able to use the audience research of today to predict a "hit", but you will be able to someday.
For the people who are saying Ken is kind of wrong, I think you are missing the point.
Math works when there are numbers, because numbers are numbers. You can't muck around with them too much.
It's hard to judge WHY something that can be quantified as 'art' works.
Look at the Monkees. There is no real reason that show should have worked, but it did, because it just had the exact right chemistry, and came on at the exact right time in television history. When they tried to replicate that, it failed.
Math is objective. Art is subjective. That's the problem with trying to use math to judge television shows.
Fine. When they perfect audience testing to where it's accurate I'll be the first to sign up.
But until then...
It's important to understand the difference between qualitative and quantitative research. Nate Silver is doing meta-analysis on quantitative research. Audience research (focus groups and such) is qualitative research. Basically, they're two totally different fields.
Audience research is really good at measuring yes/no responses. Ask enough people if they will vote for Obama on Nov. 6, and you'll have a sense of where the country is trending.
But watching TV isn't like voting. You have to be excited enough about the show to tune in, or at least set your DVR, and then keep watching it. It's not one somewhat annoying line you need to stand in once every four years -- it's 30-60 minutes out of your life, every week.
And what will it take to get you to make that commitment? It's not as simple as the handful of things that motivate voters -- someone once said to me, based on Newsradio's ratings, that "America doesn't want to invite Dave Foley into their living room." Who even knows if that's right -- was it Dave Foley? Or is the word "Newsradio" so boring that no one would willingly sit down to watch something by that name?
Also, an election is just a snapshot. On that day, in this country, this is how people felt. You can't govern the country on the basis of the electoral college. By which I mean, the election isn't going to help Obama select the top three priorities for the State Dept. staff in Algeria over the next year, or land on the specific reductions to Medicare needed to balance the budget.
In the same way, audience research can't tell you if Norm's secret ability is going to be baking french pastry or amazing color sense. Only the writing staff can make that call, based on what makes them laugh and where it takes the story.
Lastly, I think you have a participant-honesty issue that is gonna be a tough nut to crack. I don't know how Nielsen families react to the scrutiny, but I've had approximately 50 conversations with people who, knowing only three things about Breaking Bad (cancer, cooking meth, violent), have insisted it's not their kind of show. And approximately 100 conversations with people who stumbled upon an episode in their living room one night and couldn't tear their eyes away.
One of these situations makes people feel exposed and in need of defending their taste, one of them makes them feel safe and open to new things. If you can't create that, on a large scale, in your testing scenario, the data is going to be of limited use.
If you are basing your analysis on what worked in the past (TV, Movies, Music, any art form) you will eventually be wrong anyway. Tastes change over time; what was popular in 1999 might not fly today.
Also, because a certain show is popular doesn't mean you can just copy part of it and have a new hit. Mad Men's success wasn't solely due to the fact it took place in the late 50's/early 60's, but that's all the networks saw. The Playboy Club and Pan Am did not have the writing (or in some cases, acting) of Mad Men, but they sure looked period (ish)!
Everyone is looking at this the wrong way. It's not about trying to figure out why something is a "hit" and replicating it. It's about making something new and trying to determine if it is going to impact enough people that they will not only watch it once but continue to watch it.
It's not about analyzing art, it's about human behavior, industry trends, and getting better data. The trend of using analytics in baseball to determine player worth didn't happen overnight, and it really didn't take off until SABR started gathering better data than average, RBIs, HRs, ERA, walks, and strikeouts.
For example, take Last Resort. Interesting premise, well acted, a few big TV names; it will probably draw decent numbers for the pilot. Will it have legs? What would it need to have legs? What questions can you ask a focus group after they've seen the pilot that would give you some data on where to focus future story lines? How long does the average person care about a single plot line? What's Scott Speedman's Q score?
If you've ever participated in a political poll, you know they ask a lot more than "are you voting for A or B?" and Nate Silver does a lot more in his analysis than aggregate that info across 50 states. Polling is a constant feedback loop, and one of the reasons audience research is so bad today at making predictions is that it's one-shot data. Other than the Nielsens and the other pure number generators, it's not continuous data.
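The "continuous feedback loop" point can be sketched concretely. A Silver-style aggregator never trusts any single poll; it combines many measurements, weighting larger and more recent samples more heavily. A one-shot focus group has nothing to aggregate. The function, the decay rate, and the data below are all invented for illustration:

```python
def aggregate(polls):
    """Weighted average of poll results.

    Each poll is (days_old, sample_size, share), and its weight is
    sample_size discounted by an assumed 10%-per-day recency decay.
    """
    total_weight = 0.0
    weighted_sum = 0.0
    for days_old, sample_size, share in polls:
        w = sample_size * (0.9 ** days_old)  # assumed decay factor
        total_weight += w
        weighted_sum += w * share
    return weighted_sum / total_weight

# (days_old, sample_size, share saying they'd watch) -- made-up data
polls = [(0, 800, 0.52), (3, 1200, 0.49), (10, 600, 0.55)]
print(round(aggregate(polls), 3))  # ~0.509
```

The estimate lands between the individual readings, pulled toward the fresher and larger samples. With one-shot testing there is only one reading, so there is no way to average out any single poll's noise.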
Yep, a lot of the data is crap today. Does that mean it can't get better and be more useful?
Only the unenlightened choose to bury their heads in the sand, claim "it can't be done" and walk away.
In baseball and elections (and a myriad of other areas), you have a clearly defined output ('did I win the baseball game?', 'did I win this particular state?', etc) and a relatively small set of inputs (the various baseball player's stats, the polls, etc)
For TV shows, you can pretty easily define your goal ('how do I maximise ratings?' or 'how do I maximise Emmy wins?' or whatever), but the number of inputs is vast - actors, writers, every word of the script, how every word is performed, and no doubt an enormous number of other factors. Furthermore, most of these factors aren't easily quantified. ('Shelley Long's comedic timing on that joke was an 8.')
Now, in theory, if you could quantify all the inputs and get a sufficiently large sample size, then you could number-crunch and optimise for your end goal.
In practice, those are two pretty enormous ifs. I won't say they can never be overcome. After all, somebody quantified Jeopardy questions well enough for a computer to become the best player in the world. But I think this is at least an order of magnitude harder still.
"If I'd asked customers what they wanted, they would have said "a faster horse."
- Henry Ford
Ken,
A good post and plenty of reasonable comments.
To wldr, who wrote:
"Sorry Ken, but all I can see here is you saying "people do it badly, therefore it is impossible to do well"."
I think you've misunderstood.
Ken Levine's contention, echoed by the many other commenters pointing out the limits of any kind of qualitative research, is that it's impossible to do well; therefore the people who do it, do it badly.
Neither popularity nor quality has ever been a guarantor of the other.
If data analytics were the all-important factor in a creative endeavor, studios and production houses could ask potential audiences what sort of shows they'd like to see, and save on writers by using the crowd-sourced data to produce hit after hit.
It doesn't happen (despite some execs popping a boner at the thought) because crowds aren't that clever, and analysis of trends isn't equivalent to creation.
Sabermetrics don't tell you how to train better ballplayers (despite that sequence in Moneyball), 538 wouldn't have helped Romney learn how to win the election (even IF he accepted the math as valid) -- and audience testing doesn't tell you how to make a better show; it just tells you how much people like the one you have now.
You could argue that trend-analysis on an informal, individual level is kind of a prerequisite for breaking into the industry; somewhere, someone, sometime has to believe you have a feel for how to satisfy an audience.
But where creative people get frustrated is in the act of confusing test results -- which are a response to an existing product -- for a recipe on how to improve the product. Translating test results into a prescription for a better direction takes as much personal good taste, guesswork -- and gutwork -- as coming up with ideas in the first place.
Let's leave the number-crunching to sports, stocks, and politics. Because if one day, a company produced a movie or TV show tailored to my demographic profile, crowd-sourced, researched within an inch of its life to appeal to ME personally -- I wouldn't want to see it on principle.
Then again, I did watch Star Wars Episode One. But look how well that turned out.
Question: How is it decided who gets credit for creating a series? I was reading a piece about the old sitcom Bewitched awhile back, and while a gentleman named Sol Saks is credited in every episode as creator of the series, I got the impression from the article that nearly half of the guys associated with the series in its first season claimed at some point to have been the one who REALLY dreamed up the whole thing. While I'm sure at least some of that is latter-day glory-seeking, how (and who) decides who gets credit for creating a television series?
Yay!
Audience research is not bullshit per se, and it can be very useful; the problems relate to how research projects are defined and how methods are chosen and implemented. Group discussions or focus groups are a key qualitative tool to assess and support decision-making about TV programs and many other things. However, there is an inherent moderator-based risk, and usually a high cost to reduce findings risk, which not everyone is willing to pay. Our firm Audience Dialogue has developed a simpler method that less skilled people can apply to support decision-making on media and other matters. It's called the consensus group method, and unlike focus groups, it works to find what consensus exists amongst a group -- sometimes more valuable than probing any particular aspect.
The way I look at statistics is that the "number" you get really represents a cloud of probability. The more valid samples of whatever you're testing you collect, the more that cloud collapses toward a single point. That is basically what Nate Silver did in the last two elections: he took huge streams of data and figured out where the likeliest point is. Given that everyone in Ohio was probably polled three times in the last week before the election, it takes a lot of work, but it's actually not that hard if you know what you're doing.
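That shrinking "cloud" has a standard formula behind it: the standard error of a sample proportion falls like 1/sqrt(n), which is why pooling dozens of state polls tightens the estimate so dramatically compared to one focus group of a few dozen people. A quick sketch:

```python
import math

def standard_error(p, n):
    """Standard error of a sample proportion p from n respondents."""
    return math.sqrt(p * (1 - p) / n)

# A tight race (p = 0.5) measured at three scales:
# one modest poll, one big poll, and many polls pooled together.
for n in (400, 1600, 40000):
    print(n, round(standard_error(0.5, n), 4))
# 400   -> 0.025   (~ +/-5% at 95% confidence)
# 1600  -> 0.0125
# 40000 -> 0.0025  (the "cloud" is now a near-point)
```

Quadrupling the sample only halves the error, so there's no substitute for the sheer volume of data an election season generates -- and nothing comparable exists for a single pilot screening.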
Baseball statistics work well for some things and not so well for others, in part because there are thousands of valuable inputs for some and not so many for others (or the way they're classified, like for fielding, is still in its infancy).
Taking a single test screening, or dozen, of a show, is unlikely to give you a lot of valuable input. How will it play in Peoria if it plays well in LA?
For the record, I'm not saying it CAN'T be done, I'm saying it will be hard to do in a way that produces useful information.
Nate Silver had the advantage of 50 states worth of local & national polls, taken over a series of months, and conducted by random phone sampling. Each individual poll was funded by a media organization seeking to attract readers with their data -- probably close to 50 different outlets, all in, if not more.
To mimic that level of data, the four networks would have to adopt a polling budget previously spread over 12+ media outlets, and they would be seeking data on not just two candidates, but a roster of 5-10 shows. The most expensive network pilot I know of cost $12 million -- to gain reliable data on audience response might conceivably cost that much or more, AND at the end, you wouldn't have a show to broadcast.
Y'know, it's funny when we read stuff from Ken and comment on things like audience research bullshit, sitcoms killed before their time, and invariably The Classics getting referred to. Yet we forget (or weren't around yet) that even back in the 1960s -- when you'd think everything was golden -- the networks were full of totally-awful crap shows, too.
I bring this up because given that Xmas is approaching, I was trying to remember sitcoms and dramas from when I was a kid that I might like on first-season DVD. I mean even obscure shows that lasted maybe a season or two.
That's when I encountered this: http://www.youtube.com/watch?v=QebFI-3zMO0
Good freakin' Lord. A whole *minute and a half theme song* to establish the premise every single week? And the laugh track doesn't even laugh at anything remotely funny. Ah well, what did I know when I was 5 or 6 years old?
There's nothing on YouTube that might tell me whether "When Things Were Rotten" would be funny to me today (the theme song doesn't bode well, tho), so I'm just going with 'F Troop' instead. That I *know* is still pretty funny.
I just remember back when I was a first grader, me and everyone I knew would run around the playground singing the beginning of the "It's About Time" theme song, except we'd sing "It's about time, it's about space, it's about time I slapped your face".
When that got old and everybody had their faces slapped a few times, we switched to asking everyone if they got the letter we sent ... and if they didn't, we'd remember to STAMP IT. Right on their foot.
Part of the issue that remains, whether with statistical analysis or gut-trusting, is another kind of calculus: how long before the person in charge of the gut or the stat-research, starts to imagine it is THEM who is the voice, and then it starts to take over to the point, the stats or gut or whatever become worthless because their potential value is ignored for pure ego. Yeah, I'm looking at Rove on Election night, but there are others in industry too, as well as the broadcast industry. It's human.
All of the above are part of this puzzle: focus groups provide qualitative judgments and also small sample sizes (one challenge of early sabermetrics was to determine when sample sizes became statistically significant). If something were to be done in the TV industry on the level of what Silver did, it would require extensive research into what makes a successful show. You might have to figure out a quantitative factor for what makes the successful shows a success: a point system for stars, type of comedy, type of jokes, characters, and storyline. By the time you had enough samples, the trends would have changed. I don't think it is impossible, but it would take intense research into why shows succeed, and then quantifying and synthesizing those factors into a formula. Then you could judge new shows to determine success or failure. I think some people have already looked into this and probably are continuing to do so.
This isn't about creating art but about creating a ratings success. My guess is that such a show would be average to above average but probably never great. Statistical analysis is used for both predictions and evaluations, but it is a tool and not some magic formula to make everything work. Nate Silver got the election right because some states were so lopsided they were easy to predict, and those that weren't were polled so much that the available sample sizes were very reliable. He had a solid formula and a large amount of data.
The other difference between tv and elections is that voters are inundated with information about the candidates for literally years before the election. Based on this fact, Google searches, twitter trends, etc. can be used to supplement polling data.
Maybe if we had new pilots campaign for a few months ahead of the fall season... Hey, I think I just stumbled on a reality show idea.
PS Ken, as a data analyst looking to transition to tv writing, I'm curious about your story. Have you posted anything about it already?