CAT | Rambles
Is print dead? The question has been asked, re-asked and over-analysed in recent years as new media have expanded. However, focusing on the traditional newspaper’s death sentence precludes us from examining the real and very exciting changes that have taken place – and are still taking place – in the news industry as a whole; that is, how newspapers and news sites are interacting with social media to create news that is centred on your photographs, your videos and, perhaps most importantly, your opinions.
Newspapers and news sites have responded to these demands with a new age of digital, personalised and unedited news. Whilst different sites explore different angles, the overall agenda is the same: giving us an enhanced experience of the news in which we all take part. The BBC News website’s ‘Have your say’ and ‘Your pictures and stories’ sections are just two examples amongst countless others. CNN uses ‘iReport’ – a user-generated site billed as ‘the way people like you report the news’ – which in turn influences the way CNN itself reports the news. And perhaps most groundbreaking is the ‘Guardian Zeitgeist’, a news feed application that literally captures the spirit of the times, pulling in stories from the main site according to ‘social signals’ (i.e. reader trends and mentions on Twitter). The day’s ‘Zeitgeistiness’ is calculated at midnight each day and frozen in the archives for posterity. We create each day’s Zeitgeist; the news has been democratised.
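The ‘social signals’ ranking described above can be pictured as a simple weighted score over reader activity. The weights, field names and cut-off in the sketch below are all assumptions for illustration – this is not the Guardian’s actual algorithm:

```python
def zeitgeist_score(mentions, shares, weight_mentions=1.0, weight_shares=2.0):
    """Toy 'Zeitgeistiness' score: a weighted sum of social signals.
    Weights are illustrative guesses, not the real formula."""
    return weight_mentions * mentions + weight_shares * shares

def daily_zeitgeist(stories, top_n=5):
    """Rank the day's stories by score. In the real feature, the day's
    ranking is computed at midnight and frozen in the archive."""
    ranked = sorted(
        stories,
        key=lambda s: zeitgeist_score(s["mentions"], s["shares"]),
        reverse=True,
    )
    return ranked[:top_n]

# Hypothetical day's stories, with made-up signal counts
stories = [
    {"title": "Riots latest", "mentions": 1200, "shares": 300},
    {"title": "Budget analysis", "mentions": 90, "shares": 40},
    {"title": "Royal wedding", "mentions": 800, "shares": 700},
]

for story in daily_zeitgeist(stories, top_n=3):
    print(story["title"], zeitgeist_score(story["mentions"], story["shares"]))
```

The point of the sketch is the shift it encodes: the ordering of the day’s news is derived entirely from reader behaviour, with no editor in the loop.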
Since news is now presented as something to respond to, actively contribute to and shape, the traditional reader–editor relationship in the media has been overhauled. We now expect to have a voice in the news – to play a part in the debate – whereas in the ‘Letters’ section of traditional newspapers, the editor decided which of our opinions were worth publishing. The power has shifted from the editor to us; our opinions have become part of the news and the way it is told.
The relationship between news sites and social media is therefore ever-changing and increasingly significant. Recent turmoil has proved this: during the London riots the BBC got much of its information from Twitter, enabling journalists to collate news from many different places simultaneously; and Twitter has been particularly useful in covering conflicts in the Middle East, where countries such as Syria have banned foreign journalists. Twitter has become a new Reuters. Does relying on information from tweets make the journalist redundant? News sites certainly no longer appear to be the front line for news. However, we perhaps need journalists more than ever to sift through the copious amounts of information – not only creating a story, but actually providing analysis.
If sources from ‘non-professionals’ have become the norm, can we trust the news? What are people’s Twitter agendas? There is no regulating body – or even necessarily an incentive – to maintain reputable journalistic standards on Twitter. In which case, perhaps we should be increasingly sceptical of the news the more democratised it becomes. Whilst we assume news sites check their sources, those sources are becoming increasingly difficult to track down given the anonymity of the internet. Or should we instead regard tweets as having less of an agenda than journalists’ articles, allowing Twitter and its counterparts to provide an oasis of democratisation in the agenda-driven world of journalism?
If recent years have indeed seen the democratisation of the news, can we say that this is for the best? Inevitably, new media can be used for good and bad, but where can the media go from here? Whilst having space to voice our opinions is undeniably significant, is there perhaps too much equality, and have we lost a sense of what is important news and what is self-important rambling?
The standout moment at last month’s eG8 summit in Paris saw Nicolas Sarkozy deliver a foreboding warning that the internet must not become a ‘parallel universe without rules’ – only days before, David Cameron had been at pains to distance himself from the idea of state regulation of the internet. But why is it that the same morality and rules of law that we defend culturally are seemingly so inapplicable to human interaction over the net? The question is one which is rapidly forcing internet moguls like Mark Zuckerberg, who also addressed the eG8 summit, straight into the ring with political leaders.
It’s clearly an issue for governments and the internet industry to consider. Responsibility for regulating the web has for too long seemed a question impossibly gargantuan, perhaps too hopelessly multifaceted to be properly addressed by heads of state. A more accessible dialogue on what law is needed in cyberspace might have prevented the abuse of its liberal merits by tabloid newspapers in privacy scandals such as the failure of Ryan Giggs’ gagging order, in which papers staked a claim to representing our rights as net-users better than the law courts do. As with the Space Race and the contested rights to deep-sea oil reserves in the Antarctic before it, the internet seems to lack the clear geographical or institutional boundaries which would validate an open discussion on its regulation in national or global fora.
Interestingly, Rupert Murdoch was amongst the crowd who received Sarkozy’s assertion that governments must not allow the internet to remain unchecked. Looking at British politics (almost unavoidably through the window of a Murdoch-owned medium), it is hard to argue against any regulation of the internet, when parliament and the English courts are sometimes made to look irrelevant by the power of Murdoch’s media and the twitterati masses. Yet Mark Zuckerberg presented the case for an entirely unregulated global space.
Zuckerberg said: “I’m happy to play any role they [the people] ask me to play… the internet is really a powerful force for giving people a voice.” In fact Zuckerberg openly undermined Sarkozy’s opinion throughout the eG8, adding: “People tell me: ‘It’s great you played such a big role in the Arab spring, but it’s also kind of scary because you enable all this sharing and collect information on people.’ But it’s hard to have one without the other. You can’t isolate some things you like about the internet, and control other things you don’t.”
Glancing through the myriad predictions which are spat out every January (“The 37.5 Biggest Things in Digital in 2011!”), an interesting trend emerges – namely, counter-trends. 2011, say some commentators, will be the year that people turn off. The arduous quest – for information, for connectivity and for communication – has reached a kind of saturation point. People have had enough.
That is not to say that there isn’t a wealth of new, exciting tech waiting to invade our collective consciousness in 2011. Near Field Communication (NFC) looks set to make a big impression this year. The wireless data exchange technology inside Oyster cards is rumoured to be a feature of the iPhone 5, with Google Android and RIM also announcing that they will be releasing NFC-enabled phones in 2011. By this time next year, tapping your phone will probably be the standard way of negotiating such troublesome physical obstacles as train barriers, hotel doors and venue bouncers. Likewise 3D printing is deemed so important by one tech blog that it is afforded its very own 2011 preview list.
But despite (and partly because of) all these exciting advances, along with the falling price and rising availability of 2010’s technologies, another pattern is appearing. Influential agency JWT identify ‘digital downtime’ as one of their 100 Things to Watch in 2011. Their prediction that such breaks from the technology will be commonplace in an attempt to ‘foster creativity’ seems to unfairly pre-suppose that ‘digital uptime’ (a real feature of 2010) somehow stifles creativity. But you can see the point – there are already signs of a growing nostalgia for traditional practices (personal service, knitting) and things (vinyl, physical books).
Likewise, there has been a decline in the fervour for transparency which characterised the politics of those most unlikely political bedfellows, Barack Obama and Boris Johnson. Remove from your mind the image of those two fellows sharing a bed for a moment, and you will realise that developments in 2010 have changed people’s attitudes to the distribution of information. As the WikiLeaks saga developed at the end of 2010, perceptions of Julian Assange’s role became more ambiguous. Leaving aside security concerns around the disclosure of some pieces of communication, international diplomacy operates on the basis that some things should remain private. Similarly, Vince Cable received criticism when his comments about Rupert Murdoch were revealed, but there was also a feeling that private conversations between politicians and their constituents should remain just that: private.
And this, of course, leads us to Facebook. The controversy over its privacy settings has raised serious concerns about the ways in which personal data is collected, stored and shared on the internet. David Fincher’s excellent The Social Network interestingly highlighted Mark Zuckerberg’s ideological belief in the sharing of information – this belief is not, it turns out, shared by everyone. Facebook’s complicated web of privacy settings is seen by many as pernicious and exploitative. It has led some users to look to alternatives. Elsewhere, as previously discussed in this blog, concerns have been raised about location services.
All of this presents a challenge to all those involved in shaping the way that people use the internet. For me the answer can be found not in switching off completely, but in two other new developments. Firstly, the establishment of clearer and firmer rules on how users are ‘used’ by the big players – and secondly the acquisition of more power and independence by those users.
Social media has already destroyed the traditional one-way relationship between brands and consumers, between broadcasters and audiences. Now, with the coalition government championing a big society (whatever your views on that), 2011 will see the growth of businesses, outlets and schemes set up by the people, for the people. WhipCar, the p2p car sharing network, is a good example. Freecycle, which allows users to donate unwanted items rather than discard them, is another. Meanwhile Made.com and Naked Wines have enjoyed great initial success by taking consumers direct to manufacturers – cutting out the middlemen and bringing down prices. With developments like these, 2011 promises to be an exciting year for digital.
At the risk of sounding like an Edwardian school boy, I think Twitter is magic. I mean this in the supernatural sense (fitting, as it was Halloween last weekend). The mysteries of the internet have always struck me as evidence of occult intervention somewhere - some particularly intuitive websites send my eyes scouring the page for evidence of pentagrams – but Twitter really takes the biscuit.
Twitter gives words the sort of power that has traditionally been associated with witchcraft. When a tweeter writes a particular formula, their words create a genuine effect. However, ancient runes have been replaced by a very modern symbol: the hashtag. This tool bridges the divide between words that communicate and words that perform an action. The hashtag may have originated as a way for participants to organize material on Twitter, but it has developed real power.
When Livestrong wanted to raise awareness of cancer, they tweeted the words #beatcancer. Each time the hashtag was subsequently repeated, PayPal and SWAGG donated $0.05 to cancer charities. Suddenly, words didn’t just say something; they did it. Formerly, only magic users were attributed the power to use words to such tangible effect.
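As a toy illustration of how such a pledge might be tallied: the $0.05-per-mention rate comes from the campaign as described above, but the counting logic, tag-matching details and sample tweets below are my own assumptions:

```python
import re

RATE_PER_MENTION = 0.05  # dollars pledged per hashtag occurrence, per the campaign

def count_hashtag(tweets, tag="#beatcancer"):
    """Count the tweets containing the hashtag, case-insensitively."""
    pattern = re.compile(re.escape(tag) + r"\b", re.IGNORECASE)
    return sum(1 for t in tweets if pattern.search(t))

def pledged_total(tweets, tag="#beatcancer", rate=RATE_PER_MENTION):
    """Words that do something: each matching tweet adds money to the pot."""
    return count_hashtag(tweets, tag) * rate

# Hypothetical sample tweets
tweets = [
    "Spread the word: #BeatCancer",
    "Thinking of my aunt today #beatcancer",
    "Nothing to see here",
]

print(count_hashtag(tweets), "mentions, pledging $", pledged_total(tweets))
```

The interesting part is how little machinery the ‘spell’ needs: a pattern match over a public stream is enough to turn a word into a payment.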
It could be that this unprecedented power is a symptom of larger scale decline. For a time the internet was fertile ground for writers. Text-only blogs abounded as technological restrictions limited communication to text. Now however, many brands use text merely as a gateway into a multimedia experience. Thus we come to another use of the hashtag. Recently, an Orange campaign offered to record certain hashtagged tweets as songs. In doing so, the campaign reduced text to the status of prototype; not quite the real thing.
Of course, there are benefits to an increasingly visual online experience. Key examples are increased usability and ease of access. Firstly, in contrast to a dense paragraph of text, video narratives require less initial commitment from the user. Thus, in using visual media, designers and developers are reacting to the requirements of casual web users. Another instance of these benefits is something we at Open CC have developed with the Whitechapel Gallery. We have enriched an exhibition with additional text, images and film which are accessible via smartphones. This is enabled by QR Codes – 2D barcodes which, when scanned, circumvent the need for a textual URL. The aim is to provide the user with engaging content immediately – without the requirement to type an address on a fiddly keypad interface. The reward, without the effort.
All this goes some way to explaining why, when I read media blogs, I am often struck by the apparent consensus that text will soon be obsolete. The future, we are told, lies in digital rich media – brimming with images, videos and interaction. We have seen that whilst text is used increasingly as a tool for linking one source to another (rather than as a reliable documenter itself), it may soon become redundant even for this purpose. Perhaps, then, the magical power of the hashtag is not only a triumph, but a swansong.
It is a truism to say that the internet has sped things up. One two-word phrase, coined just three years ago, has now pervaded popular culture to the extent that your gran’s probably heard of it.
No, not Britney Spears. The two words are ‘Social’ and ‘Media’. In the UK, traffic to social networking sites surpassed that of search engines for the first time last May, and in the US this happened long before. Comment on the internet, once the preserve of hardcore forum users with their cliques and in-jokes, is now open to all: in 2010 pretty much everyone, from Barack Obama to us (and yes, Britney Spears) can, and does, have their say.
The benefits of the new connectivity seem huge: the popularity of social media around the world demonstrates the huge capabilities the web has to enable communication between completely disparate populations. Social media is providing freedom of expression in countries like China, as well as allowing the sharing of contacts, information and a considerable amount of entertainment.
And it hasn’t taken big business long to appreciate the potential of social networks. Some firms have started moving their main online presence onto existing networks. Given the everyday use of sites like Facebook, it can really come as no surprise – brands want to operate in the same space as their customers. The networks are hardly going to stop them. “What’s the issue?” you might ask. We’re constantly bombarded with adverts on all media platforms, all trying to convince us to do this, drink that, and definitely BUY MORE. This is a part of life, and when social media marketing is done effectively it can be fantastic – encouraging interaction from consumers and providing organizations with an opportunity to engage with their audience in ways hitherto unheard of.
The problem is when it is done lazily, without thought for the individuality of the end user. Social networks provide advertisers in particular with the opportunity to interact with consumers on an individual basis. It is important that they take this opportunity – rather than using the networks as another tool to broadcast a single, bland message to an outdated conception of the uniform audience member.
For all companies in the current climate, existing social networks may seem the cheap and easy route to consumers, but they can only be part of the answer. Consumers want something interesting, interactive and secure when they use the web, and clicking a ‘like’ button shouldn’t be the end of the communication. In the interests of the sellers and the buyers, the experience has to be more fulfilling.
Creativity’s a funny thing. Not only is it often thought of as an intangible quality that is bestowed on a rare fortunate few, but we are somewhat used to thinking that those rare few work alone, or that they, at the very least, call the shots. Creative agencies have people called ‘creatives’, whose job it is to be creative and direct other people who aren’t creative.
Now of course we have partnerships like Lennon and McCartney, Simon and Garfunkel, Morecambe and Wise, Adam and Joe, examples of people who were on the same wavelength to such an extent that they can produce things which are wonderfully more than the sum of their parts.
But lately I’ve got thinking that creativity itself is starting to take a different turn. Permit me to take you on a tangential dive into one of my pet loves.
Those who know me will know that I go on about gaming a lot. Too much, perhaps. And not in a l33t speak, last-weekend-I-played-CoDMW2-til-my-eyes-bled kind of way, but in a way which acknowledges that gaming’s move into mainstream is an event of real cultural significance, and that entertainment and art may never be the same again.
I have also been, for some time, fairly convinced of the analogy between a game having a designer and a novel having a writer – great novels can be crafted into works of art because often they are written by people with singular visions, who have control over every line, word and punctuation point (to a degree – I realise this is a somewhat naive conception of the contemporary publishing world, at least).
As gaming and the means to create games have become popularised over the last, say, 20 years, it has become more and more possible for the creators of computer games to exhibit an analogous level of control over their creations. Picture lone programmer/designers, hunched over their machines in the late hours, just as the penniless artist might sit at their desk, furiously scribbling / painting / typing in the throes of an idea on a dark night, until everything is Just. Right. I believed that if the trend continued, you would eventually get games which were just as honed, just as artful, as great novels.
However, having worked at a digital agency for some time now, it hit me the other day that that vision is unlikely to be the future for computer games. I’m not discounting the possibility that single individuals can produce captivating gaming experiences; people like Jason Rohrer and Daniel Benmergui. But the thing about games is that they can be so complex and so full of variables, and require so many different skills, that the creativity you need to produce a great game is of a very different kind. Some games, like Aquaria, are created by designer–programmer collaborations, so you get a kind of Lennon–McCartney partnership; more still are created by small teams, like a band jamming to thrash out a song; and others are created by vast studios, like an entire orchestra getting together and saying ‘hey guys, shall we write a concerto? Dave, you take violin.’
To give an example: Bioshock contains innumerable imperceptible touches contributing to the feel of the game as a whole – the way that desks are left open when they’re searched; the way that Houdini splicers teleport in a plume of blood red mist; the way that lone enemies talk to themselves in wrecked corridors as a manifestation of their insanity.
Now, although it’s entirely possible that the same person came up with all of these little ideas, is it really likely? Is it likely that all of these were dictated by the same person who came up with the Ayn-Rand inspired dystopia that is Bioshock’s setting? Is it even likely that whoever decided to set the game in a decrepit, dripping art deco labyrinthine city under the sea, is an individual, rather than a group of writers?
Or is it more plausible that all of these things fell out when a group of people threw everything they had into a Magimix and pressed ‘On’? For the record, I don’t know who came up with those ideas. Perhaps not even the people who came up with them know. Or maybe it was in fact all one person with a savant-like ability to describe the minutiae of a nightmare they had after finishing Atlas Shrugged in a single sitting.
To bring it back here, the point I’m making is that digital experiences are now so complex, so involved, that to rely on one person to call all of the creative shots would be a nightmare. I’ve produced websites with little touches which I couldn’t have foreseen and told a developer to implement – these decisions come out of discussions and collaboration, and that’s where creativity lies now. We’ve all heard about megalomaniacal directors or musicians dictating absolutely everything on the projects in which they’re involved – but that’s a very difficult thing to do with a digital experience, more so than anything else, I would venture.
And as digital experiences become increasingly common, and increasingly admired, perhaps that will change our conception of creativity. I’m not for a moment suggesting that there’s no room for an individual’s vision, or for the leadership of a creative team, but perhaps there will be less of an emphasis on “genius” as applied to an individual – perhaps what will be most important will be people’s capacity to interact with one another. If games (and digital experiences in general) will become significant contributions to culture, and many of those games are produced by teams, perhaps some of the most valuable contributions to culture in times to come will be put forth by groups, rather than lonely artists. Your thoughts, ladies and gents?
2009 has been a truly dark year for the public image of piracy.
And I’m not talking about Somali pirates, but about the issue of digital rights, specifically in entertainment. It’s estimated that piracy and illegal filesharing cost the television, music and film industries £500m a year in lost revenues.
But whereas that story and the more recent imprisonment of Pirate Bay’s founders were knee-jerk events that had us all wildly jabbering / twittering, I feel that we’re now in the midst of a more subtle undercurrent of significant change in the distribution of online music and television, sustained by almost daily reports of possible mergers and deals, new technologies and services, alleged crackdowns and constant shifts of responsibility for monitoring and controlling internet usage.
In 2000 the issue of digital rights was the almost exclusive concern of emancipated geeks interacting and sharing in a space seemingly designed both by and for them; apoplectic heavy metal fans; and a not insignificant number of terribly confused people sat awkwardly in between.
Jump to 2009, and the internet has become for many the first port of call when looking for entertainment. Mobile devices such as the iPhone plug directly into online music stores, and almost everyone has an iPod or other portable media device. Similarly, network improvements and the penetration of broadband have helped the BBC’s iPlayer and its competitors to become almost as popular as “traditional” TV.
In other words, the developments in online media distribution have become mainstream concerns.
However, there remains a fundamental conflict between monetising these distribution services and a historic perception of the internet as user controlled, open-source, a community network without restriction. People don’t like paying for stuff online. Although digital platforms account for about 20% of recorded music sales, an estimated 95% of all file downloads are illegal. If we want to hear a song once, we might YouTube it or call it up on Last.fm or Spotify – if we want it on our iPods, the stats say we are most likely to download it illegally.
Similar issues exist for television and film, where downloads and particularly streaming have been giving producers headaches. The most recent example would be the furore over online leaked scenes from X-Men Origins: Wolverine, which appeared several weeks before release.
(Why anyone would want to see this, for free or otherwise, is beyond me, but apparently it was an issue…) The problem is worsened by advertising – films and shows are heavily advertised on the web (i.e. globally) but release dates are staggered around the world and vary hugely. Inevitably fans are going to get impatient, and at present it’s just too easy to access content illegally.
However, things are changing. Responses to infringements are getting ever more serious and, as we have seen this year, it’s no longer empty rhetoric. The French are being typically Gallic about filesharing, just two weeks ago approving the “three strikes” bill.
The controversial bill proposes the creation of a new government agency, which translates rather grandly as “the High Authority of Diffusion of the Art Works and Protection of Rights on the Internet”, which could have the power to disconnect copyright offenders without legal recourse.
In the UK, creative industry groups such as the BPI, the Publishers Association and Equity, and broadcasters Channel 4, BSkyB and Virgin Media, are all lobbying the government to force Internet Service Providers (ISPs) to police their users. Of course, this is the last thing the ISPs want to hear, and so they in turn are saying it’s the job of the content providers, leading to what John Woodward of the UK Film Council has reportedly described as a damaging “Mexican stand-off”.
To some extent this apparent impasse has already been breached by evolving the distribution channels and therefore providing more choice. Last year a raft of music subscription services, social networking partnerships like MySpace Music and new licensing channels emerged. Each day sees new reports of mergers, integration and innovation – so watch this space.
When it comes to TV, we’re also getting used to on demand programming. At the moment, broadcasters are giving us content for free, over the internet, and it’s brilliant. The BBC are the front-runners, though 4OD also provides a free catch up service on most of its programmes, (and recently, thankfully, opened its doors for Mac users). And if you’re the kind of person that enjoys pouring absinthe in your eyes, there’s the unforgivably awful ITV Player. But here, too, there are revenue-generating changes afoot. Both 4OD and ITV Player have “forced” adverts, and if the troubled broadband platform Project Kangaroo ever gets a buyer (Orange dropped out of talks just yesterday), on demand TV will almost certainly be delivered on a subscription basis.
The BBC are now behind Project Canvas, which plans to allow viewers to watch on demand services and other internet content via traditional TV – i.e. bring on demand away from the PC in the bedroom and back into the living room (although there must be more to it than that, as cable services such as Virgin Media are already offering on demand services including the iPlayer?)
These suspiciously named “projects” are controversial, partly because of their ambition: the aim seems to be to partner with other broadcasters, channels and media companies to develop an apparently essential media platform, which is an inevitably fiddly business. BSkyB have already thrown an anti-competitiveness strop over Kangaroo (which has all but killed it), and last week they accused the BBC Trust of “deficient” consultation over its more recent plans for Canvas.
There’s a more obvious complication for the BBC to grapple with: just where the licence fee fits into the various projects (iPlayer / Kangaroo / Canvas) is, frankly, anyone’s guess.
At this point, it wouldn’t be right to ignore what Bob Geldof thinks about digital TV, so here is what Bob Geldof thinks about digital TV:
“In the age of the internet, the notion of television itself is as archaic as the word wireless – even if that has been reinvented for the digital age.” (Bob Geldof)
To conclude a somewhat wayward post: it seems to me inevitable that our perception of the internet as a distribution channel is set to change over the next couple of years. There will always be infringers pushing their luck, and there will also always be a lot of good stuff available for free.
But we will, I think, also have to get used to the idea of paying money, or suffering adverts, to enjoy premium content on the internet.