The 30-second television commercial, once a cultural touchpoint, has lost its relevance in today’s world. It’s doomed to be relegated to the dustbin of 20th-century artefacts, right up there with cassette players and dial telephones.
TV commercials had their heyday in the 1960s when people had a surplus of time, particularly in the evenings after work. There were no emails, text messages or social networks to keep up with. Work and life had distinct boundaries, and TV was limited to a handful of stations that only broadcast during certain times of day.
Commercials were a part of the TV experience, a window onto the new world of packaged goods, automobiles and airline travel. They were an efficient way to learn about these products without having to get up from the couch.
All of this makes TV commercials feel like relics of a simpler era, one where the entire family gathered around the TV set each night to watch a limited set of programming without the ability to pause, fast-forward, rewind, check Facebook, check Twitter, look something up on the internet or send a text message. They certainly don’t feel like 2015, and statistics indicate that viewers are assiduously avoiding them.
A study last year by Arris showed that 84% of respondents wanted to fast-forward through the ads they watch, while 60% download or record shows so they can skip commercials. Even Super Bowl ads have lost their effectiveness: a 2014 study showed that 80% of them do not increase sales for the companies running them.
The increased use of smartphones and tablets also detracts from TV commercials’ relevance. Last month, researchers found that viewers who focused just on the TV screen were able to recall 2.43 out of every three brands mentioned, while smartphone and tablet users only managed to recall 1.62 on average.
Thanks to the internet, advertising is even losing its role as an information source: a study by Mindshare earlier this year showed that the percentage of Americans who said advertising helped them learn about products and services dropped from 52% in 2005 to 41% in 2014.
So then, what sort of brand message is appropriate today? What can a brand do to get its message across to consumers in this new media environment without giving up the massive reach that TV commercials can bring?
The answer isn’t yet clear, but one place to start is branded content or native advertising. By sponsoring programming the viewer might want to watch purely for its entertainment value, the brand message does not feel quite so onerous. The emphasis here is on entertainment and on adopting the voice of the site or network the programme is showing on. The brand message is not buried, but it’s not front and centre either – a compromise many marketers still struggle with. Branded content and native advertising have proven effective online, on sites such as BuzzFeed, and there’s no reason to think they won’t translate just as effectively to TV, where viewers will actively engage with them, versus the passive experience they have with old-school TV commercials.
Another option is to come to terms with the fact that many (if not most) viewers are watching TV on their own schedules and to start treating shows like in-cinema movies. That means limiting advertising to pre-roll ads only, where it’s less likely to be perceived as annoying or interruptive since it comes on before the viewer has had a chance to become involved with the show. It can even be followed up by a page on the show’s website listing “sponsors” (which sounds better than advertisers), with links that let the viewer engage more deeply with the brand: anything from downloading coupons to watching a demo to straight-up e-commerce.
TV commercials won’t disappear overnight – they’re still far too effective – and they won’t disappear for all the usual “TV is dead, the internet is king” reasons. They’ll disappear because the modern consumer no longer has the patience to sit through a four-minute pod of eight 30-second sales pitches. As a result, their effectiveness will slowly wither away, leaving them as artefacts for historians as they study the latter half of the 20th century.
A hot topic this past week has been the growth of new ad blocker blockers (that’s not a typo: we’re talking about software that blocks ad blockers and stops them from working). Given the impact of ad-blocking software, the subject is sure to dominate conversation along the beaches of Cannes this week, particularly among ad tech types.
Ad blockers are big news because the number of people using them is both huge and rapidly growing. As with most things tech-related, there are conflicting stats, but a recent study by Reuters showed that 40% of UK internet users use some form of ad-blocking software, while an Adobe/PageFair study conducted in the US put that number at 28%, with a 69% year-over-year increase. Even more troubling, 41% of the more tech-savvy 18- to 29-year-old cohort were using ad blockers. So figure the number is somewhere between the two, and factor in mobile, where Apple now allows ad blockers to run on iOS.
The answer, the industry hopes, lies in anti-ad blockers: software, sounding as if it stepped right out of Dr. Seuss’s Sneetches tale, that prevents ad blockers from working. Buzz last week was around Sourcepoint, an ad blocker blocker from ad tech vet Ben Barokas, which just raised $10 million in Series A funding.
From where we’re sitting, few things could be as counterintuitive as ad blocker blockers. Forcing users to sit through ads they thought they were avoiding doesn’t exactly build goodwill. And how long until a virtual arms race breaks out, with ad blockers coming up with ways to bypass the blocker blockers, who then roll out an update that stymies the blockers, and so on and so on?
The way to stop users from turning to ad blockers, to paraphrase my colleague Jesse Redniss’s recent answer to a similar question, is to make ads people don’t want to avoid. And while that sounds like common sense, you’d be surprised at how difficult that is for most brands.
Or maybe you wouldn’t be.
Most of the online advertising we want to avoid is interruptive advertising: we’re online to do something in particular and we just don’t have time to listen to yet another brand’s sales pitch. The more we can avoid that sort of advertising, particularly if it pops up, rolls over or takes over the page, the better life is. Native advertising and branded content, particularly the #CreatedWith variety, are another story. Users will engage with those because (if you remember the rules laid out here by Andy Marks last week) they are first and foremost entertainment. Not sales pitches.
So even if the user doesn’t want to engage with these units right away, they don’t want to block them either. This is content they might want to watch, certainly something they are not rejecting out of hand, and if it interests them, they will engage.
And that is how you eliminate the need for ad blockers. For as our friends in the medical field have learned, it’s far more effective to treat the cause than to treat the symptoms. Something the ad tech community ignores at its own risk.
Vox had a great story last Friday about how Netflix doesn’t own any of its own shows—they’re all the property of the studios that made them. This seemed to confuse a lot of people, while at the same time engendering a conversation among TV insiders about how little understood the notion of studio ownership is, particularly among the sort of tech blogger who lives to proclaim the death of television. So, a little enlightenment.

Most networks do not make their own shows. (There are exceptions.) They buy their shows from studios like Sony and Lionsgate, who put up all the money for the production costs and for making the pilot. (Most of the time. There are, on occasion, exceptions, particularly when a big-name star is involved.)

Now I bet you’re wondering why the networks do this, because making TV shows seems like the fun part of the business. The answer is a four-letter word: risk. Most TV shows fail. Most pilots don’t get made into TV shows. Most scripts don’t get made into pilots. You see the pattern here? While it’s fun to work on a successful TV show, the vast majority of TV shows aren’t successful, and they cost millions of dollars to make (even a pilot will cost millions), so if you’re a network executive, you may not want to take on that amount of risk, particularly since production is not your forte.

Take Netflix. They get to pick and choose among the best shows the various studios have come up with, without putting any initial skin in the game themselves. They’re often bidding for these shows against competitors like HBO and Amazon, and that may cause the price to be higher than it would have been if they’d made the series themselves. But in their minds, that’s okay, because it’s still less than what they would have lost if they’d made a pilot and it flopped.

So then why do the studios do it? If it’s so risky and there are so many flops, why bother? They do it because there is a huge potential upside.
A hit series can be worth hundreds of millions of dollars for everyone involved: actors, writers, producers, studio executives. And while hit series are the exception rather than the rule, there are enough of them to keep the model working. For the studio, there is money to be made from selling rights to streaming services like Netflix, from DVD sales, from syndication and from overseas rights. That money—which can run into the hundreds of millions of dollars—does not traditionally go to the network that airs the show (there are, as always, occasional exceptions), and successful studios and production companies are able to stay afloat (and then some) by successfully intuiting which shows are going to be hits and running with them. As you can see, it’s a very complicated business.
Why bother with the networks at all?
That’s an excellent question and one we’ve been hearing a lot the past year or so. The answer is marketing and promotion. The networks spend millions to market and promote the shows they air because the more people who watch the show, the more advertising revenue the network brings in. If a lot of people are watching a show, it gets renewed again and again and everyone gets rich. The networks are good at promoting shows, even in this age of audience fragmentation, and they’re a safe bet.
That said, there’s been talk (and to date, it’s just that—talk) about putting together a show with top-notch talent and a name director, all of whom have large social media followings, and taking it directly to consumers via a website and/or Roku channel. It’s an idea that might work, but no one’s willing to take a chance on it yet. (“Yet” being the operative word here—as OTT continues to explode, the odds of success will look more promising.) Everything else is changing; why not distribution?
Dick Costolo started his career at Twitter with one of the classic tweets of all time. He ended with an equally ironic blast, but what happened in between was often not very pretty.
Costolo had something close to an impossible job: take the only social media platform that huge swaths of humanity seemed to actively dislike and turn it into something profitable and long-lasting.
Twitter’s biggest problem has always been the size of its audience: its inability to expand beyond the odd amalgam of media types, celebrities, teenage girls, urban youth, techies and porn stars who make up the bulk of its user base, and to attract the sort of mass audience advertisers and data collectors can find on Facebook.
In his defense, Costolo was a very smart guy who made some very clever moves. Chief among them was the decision to launch third party tweet ads.
This was a brilliant move because all those people who “hate Twitter” are generally intrigued by what’s trending on the platform, and so the ability to surface tweets from actors, journalists and the like on third party sites (with embeddable, clickable video, no less) promises to be a huge win.
Where Costolo kept getting tripped up, however, was in the Attracting New Users department. According to eMarketer, Twitter’s monthly user base will grow 14.1 percent this year, down from 30 percent growth two years ago, with a projected growth rate of just 6 percent by 2019.
We all know why that is, and yet so few of the Silicon Valley crew are willing to say it: You need a certain type of personality to feel comfortable tweeting. It does not come naturally to most people. And more importantly, it never will. Posting on Twitter is the internet equivalent of standing up in front of a crowd and giving a speech and that’s just not something most people would do without a compelling reason, like having a gun pointed at their head.
In a heartfelt 8,300-word essay released earlier this week, investor Chris Sacca outlined some ways he felt Twitter could save itself. Some made a lot of sense, like his suggestion that Twitter introduce a live public feed where TV shows could surface tweets from “the actors themselves, the show’s official account, some parody accounts, hobbyist commentators, and celebrities who are known to be big fans of the show.”
Where Sacca trips up, however, is in his suggestions for getting people over their fear of public tweeting. His solution is to ask leading questions like “Who influenced you the most growing up?”, to ask users to comment on controversial articles, or, worse still, to ask users to post pictures of their “goofiest ever Halloween costume.”
And this is why Twitter is in trouble. Because anyone who understands human nature or user behavior (your pick) knows, beyond a shadow of a doubt, that no one is going to respond to any of those suggestions, certainly not when their answers will appear in public. It’s hard enough to get people to respond when the only people seeing their responses are their friends (Facebook). People who regard the whole notion of publicly sharing their thoughts as frightening/ridiculous/egomaniacal are not going to start tweeting pictures of their Halloween costumes. Least of all their “goofiest ever” ones.
What Twitter needs to do is get over the notion that it’s a social network and embrace the notion that it’s a broadcast network. They need to start thinking about viewers, not users. Let the core users provide the content. And figure out a way for other people to see that content, in all its live and immediate glory, without having to sign up for a user account (a viewer account is another story) and/or feel like a loser when they realize they have nothing they want to tweet about.
Twitter is a great platform and we are big fans, but if it keeps trying to be Facebook Lite, it’s going to fail miserably. Double down on the broadcast, on the live, on the immediacy, lay off trying to get people to become “users” rather than “viewers”, and things (Twitter stock among them) may finally start to look up.
In the age of selfies and “citizen journalists,” it seems only natural that live streaming would become a trend. Over the years a number of companies have tried to make personal broadcast streaming a reality, but two newcomers appear to have succeeded: Meerkat and Periscope, the former a startup backed by the likes of actor Jared Leto, the latter a startup purchased by Twitter only a few months ago.
They both burst onto the stage in April during the Floyd Mayweather/Manny Pacquiao fight (known on the interwebs as #MayPac). The reason? Dozens of people were live streaming the $100 pay-per-view broadcast of the fight. Unsurprisingly, HBO and other rights holders were a bit unhappy about these unauthorized broadcasts, and even more unhappy about what they felt was Twitter’s lack of a serious response to their takedown requests.
Many in the industry rolled their eyes at HBO, noting that the shaky, hand-held streams were hardly a replacement for an HD broadcast and that the network was getting all bent out of shape over nothing.
So were HBO and other rights holders justified in coming down hard on Periscope and Meerkat? Or was it, yet again, much ado about nothing? Change is inevitable, right?
So just how long is a video view? For YouTube, it’s 30 seconds. For Facebook, it’s three seconds, which may well be why the social network was able to grow its daily video view count from one billion views per day in September 2014 to three billion in January 2015, and four billion in April 2015.
This new ‘minimalism’ also extends to digital advertising. To qualify as a view, the Interactive Advertising Bureau (IAB) requires only that “at least 50 percent of the ad’s pixels are visible on a screen for at least two consecutive seconds.”
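The IAB rule is concrete enough to sketch in code. The following is an illustrative sketch of my own (not IAB reference code), assuming a measurement script samples the fraction of the ad’s pixels visible at a series of timestamps:

```python
# Illustrative sketch of the IAB display viewability rule: an impression
# counts as viewable if at least 50% of the ad's pixels stay on screen
# for at least two consecutive seconds. The sampling format here
# (timestamp, visible_fraction) is a hypothetical one for this example.

def is_viewable(samples, min_fraction=0.5, min_seconds=2.0):
    """Return True if >= min_fraction of the ad's pixels were visible
    for at least min_seconds consecutive seconds."""
    run_start = None  # timestamp when the current qualifying run began
    for t, fraction in samples:
        if fraction >= min_fraction:
            if run_start is None:
                run_start = t
            if t - run_start >= min_seconds:
                return True
        else:
            run_start = None  # visibility dipped; the clock restarts
    return False

# Half the ad visible continuously from t=0 to t=2.5 -> a viewable impression
print(is_viewable([(0.0, 0.5), (1.0, 0.6), (2.5, 0.5)]))   # True
# Visibility drops below 50% mid-run -> never two consecutive seconds
print(is_viewable([(0.0, 0.9), (1.0, 0.3), (2.0, 0.9), (3.0, 0.9)]))  # False
```

Note how low the bar is: an ad half-hidden below the fold for two seconds still counts.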
Why am I bringing this up? Good question.
First, it helps make sense of Snapchat’s recent claim that its 100 million users are watching two billion mobile videos a day (about 20 video views a day per user). Second, it provides a cautionary tale of what happens when there are no standards and every platform creates its own metrics, making it impossible to compare apples to apples. (Or even apples to oranges. Although, given the wide variety of methods used by various social platforms to measure video, it is actually more a case of comparing apples to elephants.)
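To see how much the definition alone drives the numbers, here is a toy comparison (the session lengths are made up for this example; only the thresholds come from the text above) applying each platform’s minimum-duration rule to the same set of watch sessions:

```python
# Toy illustration: identical watch sessions counted as "views" under
# different platforms' minimum-duration rules. Session lengths (seconds)
# are hypothetical; the 3s and 30s thresholds are from the text above.

watch_seconds = [1, 2, 4, 5, 8, 15, 31, 45, 90, 120]

def count_views(sessions, threshold_seconds):
    """Count sessions that meet a platform's minimum view duration."""
    return sum(1 for s in sessions if s >= threshold_seconds)

facebook_style = count_views(watch_seconds, 3)    # Facebook's 3-second rule
youtube_style = count_views(watch_seconds, 30)    # YouTube's 30-second rule
print(facebook_style, youtube_style)  # 8 vs 4: same viewers, double the "views"

# And the Snapchat arithmetic from above:
print(2_000_000_000 / 100_000_000)  # 20 video views per user per day
```

Same audience, same behaviour, double the headline number, which is exactly why apples-to-apples comparisons are impossible without a shared standard.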