“Given Tablets but No Teachers, Ethiopian Children Teach Themselves”
Classrooms are changing. Technological advances are transforming the way that children learn, or at least are taught. This is happening fast – there are dramatic differences between my school experience and that of someone only 5 years younger. My French teacher used chalk and a blackboard to teach us our verbs, something which now seems positively prehistoric, although some teachers were more high-tech and favoured the overhead projector.
It is widely acknowledged that technology can aid learning: Houghton Mifflin Harcourt demonstrated that children who learn from an iPad version of a textbook can score up to 20% higher on standardised tests than those using the standard paper version. By engaging children and capturing their attention with colours, videos and games, technology can improve learning with the same content, just in a different format. But this applies in a school setting with teachers, so what if there are no schools and no teachers? Can technology help children to teach themselves? The organisation ‘One Laptop Per Child’ (OLPC) has teamed up with MIT to give children in Ethiopia Motorola Xoom tablet PCs. In villages with no schools and literacy rates close to 0%, they distributed solar-powered tablets in unlabelled boxes with no instructions and monitored the results.
“Within four minutes, one kid not only opened the box, found the on-off switch … powered it up. Within five days, they were using 47 apps per child, per day. Within two weeks, they were singing ABC songs in the village, and within five months, they had hacked Android.” – Nicholas Negroponte
So did it succeed? Can children teach themselves? They taught themselves how to use the tablets and even how to hack Android, but it is as yet unclear whether they will teach themselves to read and write. The fact that the tablets are in English rather than their own language probably won’t help. But even if the children do learn to read and write, to say that they have ‘taught themselves’ is not strictly true. They may not have been taught by a ruler-toting, glasses-wearing, librarian-esque old woman, but they are being taught by app designers and content devisers – the people who wrote and selected the “preloaded alphabet-training games, e-books, movies, cartoons, paintings, and other programs”. Tablets were chosen over laptops because of their intuitive usability, which captures and works with the natural curiosity of children. Features that seem intuitive to the user are in fact heavily engineered; they feel easy and natural as a result of brilliant design. The same is true of programming and writing – e-learning programs have to seem intuitive, mimicking the natural learning process to guide you through it.
Even in games with no educational focus, “good game designers are more like good teachers” because they need to teach you how to play the game, anticipating your possible next moves and steering you through the process without you even realising it. The style of teaching involved here is one of subtle signalling, encouragement and gentle nudges in the right direction – in line with the vision of OLPC founder Nicholas Negroponte: “I believe that we get into trouble when knowing becomes a surrogate for learning”. It is true that, in contrast to the traditional slate tablets which Victorian children used to rote-learn facts, modern tablets – some even named after slates – facilitate a more exploratory and creative development, but it is not true that the user is unaided on this path to discovery.
This Christmas will see the 2012 Furby revival. The mechanical, fur-covered children’s must-have of the late nineties has been revamped and is back for a new generation of children to enjoy. The 2012 re-Furb-ishments include LCD-screen eyes which are even more disturbing than their slowly blinking predecessors, a more complicated mechanical body for an impressively large array of dance moves, and more sensors, so it will be even harder to turn off. Furbys remain without an off switch. But the most exciting addition is that the 2012 Furby comes with its own smartphone and tablet app.
You will be able to feed your Furby by virtually flinging food at it via an app – a vast improvement on just putting your finger in its mouth. And at last you can get an app that will translate Furbish. So you can finally understand that “yoo?” means “Why will you not play with me today?” along with the subtext “This usually means the Furby is upset”. This is, of course, only useful if you are too lazy to teach your Furby English.
The return of Furbys may not seem significant, and indeed the popularity of the 2012 Furby may prove as short-lived as that of its forebears. But the kind of technology they offer, and the uses to which it is put, are unlikely to be a fad.
Smartphone and tablet apps for children are very popular – 75% of parents share their smartphones with their children, according to a recent study in the UK. There are thousands of apps specifically designed for children, ranging from educational games to apps for their favourite Disney characters. The combination of an app with a physical – more traditional – toy is the next step in the evolution of children’s entertainment. The simplest way to integrate an app and a toy is to create an app that functions as a remote control: for example, you can use your phone or tablet as a steering wheel to control toy cars or helicopters (a minimal sketch of the idea follows below). More impressive apps go beyond this, such as the app gun which uses a device’s camera to turn the screen into a viewfinder, transforming your surroundings into a battlefield.
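As an illustration of the remote-control pattern, here is a minimal Python sketch of how such an app might translate a phone’s tilt into steering commands for a toy. The network address, JSON wire format and tilt mapping are all hypothetical assumptions for the sake of the example – real toys define their own Bluetooth or Wi-Fi protocols.

```python
import json
import socket

# Hypothetical toy-car endpoint; a real toy would publish its own
# address and command format over Bluetooth or Wi-Fi.
TOY_ADDRESS = ("192.168.0.42", 9000)

def tilt_to_steering(roll_degrees, max_tilt=45.0):
    """Map a device roll angle (as a phone's accelerometer might
    report it) to a steering value in [-1.0, 1.0]."""
    clamped = max(-max_tilt, min(max_tilt, roll_degrees))
    return clamped / max_tilt

def send_command(sock, steering, throttle):
    # Encode the command as JSON; the real wire format is toy-specific.
    packet = json.dumps({"steer": steering, "throttle": throttle}).encode()
    sock.sendto(packet, TOY_ADDRESS)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Pretend the phone is tilted 20 degrees to the right.
    send_command(sock, tilt_to_steering(20.0), throttle=0.5)
```

The appeal of the pattern is its simplicity: the phone already has the sensors and the screen, so the toy only needs a radio and a motor controller.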
The app enhances the toy, and the act of playing with it, beyond the physicality of the toy itself, and in doing so it creates an augmented reality. Playing and experimenting is how children learn, so there will inevitably be worries about the detrimental effects of augmented reality – namely, that children will somehow become unable to function in ordinary reality.
Will it confuse children? Will it spoil them? Will it make them lazy? Whether augmented reality and gaming are beneficial to learning is a topic that we discuss regularly in this blog. Augmented reality creates new experiences and new ways to interact with topics and as a result facilitates learning.
Many commentators on news reports favour the ‘in my day we had nothing but imagination’ approach to attacking advances in augmented reality. The danger, they argue, is that children could be presented with toys so brilliant that they don’t have to use their own imagination to have fun. These commentators forget that augmented reality works with imagination, igniting it rather than replacing it. Augmented reality involves the suspension of disbelief, which itself requires imagination.
There are augmented reality apps that harness children’s imagination for their own benefit, for example the app that claims to make plasters fun. It aims to take away the fear associated with plasters, for the child’s (admittedly minimal) health benefit, demonstrating the possible constructive applications of this technology.
In 1998, aged 8, I had a Furby for Christmas. A year later my sister had a Baby Furby. My main memories of the late-nineties Furby craze are of children telling horror stories. Terrifying tales of Furbys awakening mysteriously in the middle of the night were swapped around the classroom. Furbys that mysteriously moved across the bedroom during the night. Furbys that kept talking even after their batteries had been removed.
Just typing the phrase ‘furbys are’ into Google produces autocomplete suggestions indicating that my recollections may be part of a wider phenomenon. It is very hard to prevent a child’s imagination from enhancing any toy, and I doubt that the toys of the future, including this year’s Furby, will escape such imaginative improvements.
The mouse and keyboard have been around for 50 and 140 years respectively – offering fast typing and precision cursor clicks at your fingertips, they have become the essential office tools. But outside the office, cutting-edge computer interfaces are changing our gaming and social lives. The Xbox Kinect became the fastest-selling consumer electronics device when it went on sale two years ago – its motion-sensing technology makes for a more intuitive, easy-access interface than the games controller, which explains its broad appeal. New interfaces are also revolutionising the mobile phone industry. Touch keys are out, touch screens are in; and with the iPhone 4S, the focus is on voice commands thanks to Siri. What’s next… thought control?
In fact, thought-controlled technology already exists in the form of thought-controlled wheelchairs and monkeys controlling robotic limbs, and there is huge potential to expand thought-controlled applications for disability assistance. The technology is known as the ‘brain-computer interface’, and it exploits the brain’s physiology. Nerve cells communicate via electrical impulses: when a nerve cell fires, most of the impulse passes on to a neighbouring nerve cell, but electrical leakage means that some of the signal escapes, making detection possible. Brain-computer interfaces detect and interpret these electrical signals, and because the different thoughts and emotions that we experience are associated with different arrays of electrical impulses, a computer that can learn what the different signals mean could potentially read our minds. In practice, signals are detected using electroencephalography (EEG): electrodes placed at different points on the scalp pick up the electrical signals from different regions of the brain.
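To make the ‘detect and interpret’ step concrete, here is a minimal Python sketch of the kind of pipeline a brain-computer interface might use: estimate the power in well-known EEG frequency bands and map the result to a command. The band boundaries and the push/pull rule are simplified assumptions for illustration, not any real headset’s algorithm.

```python
import numpy as np

SAMPLE_RATE = 256  # Hz; a typical consumer-EEG sampling rate

def band_power(signal, low_hz, high_hz):
    """Estimate the power of one frequency band in an EEG trace
    using a simple FFT-based spectrum."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)
    mask = (freqs >= low_hz) & (freqs < high_hz)
    return spectrum[mask].mean()

def classify(signal):
    """Toy rule: relatively strong alpha (8-13 Hz) activity is read
    as 'relax/pull', strong beta (13-30 Hz) as 'concentrate/push'."""
    alpha = band_power(signal, 8, 13)
    beta = band_power(signal, 13, 30)
    return "push" if beta > alpha else "pull"

if __name__ == "__main__":
    # One second of fake EEG: a 20 Hz (beta) rhythm plus noise.
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    fake_eeg = np.sin(2 * np.pi * 20 * t) + 0.5 * np.random.randn(SAMPLE_RATE)
    print(classify(fake_eeg))  # likely "push"
```

Real systems train a classifier per user rather than hard-coding thresholds, but the principle is the same: different mental states leave different spectral fingerprints in the signal.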
This has immense potential to benefit the lives of the severely disabled. With the help of a brain-computer interface, ‘locked-in’ patients who are paralysed except for eye movements could control a wheelchair, compose a message or even operate a robot. This possibility is becoming a reality thanks to technologies such as ‘BrainAble’ and ‘BrainGate’. Although helping the disabled has been the driving force behind brain-computer interfaces, the technology already has more frivolous applications. For example, a ‘mind-reading’ gaming headset is already on the market. The ‘neuroheadset’ allows the player to control basic on-screen movements such as push/pull and lift/drop using thought alone. The headset also detects facial expressions using motion-sensing technology, which is used to project the player’s emotions onto their on-screen character.
Imagine where the technology could take us: an iPod that shuffles to different tunes according to the mood your headset picks up from your neural signals. And how about a smartphone operated by thought commands? Rather than talking aloud to your phone as with the iPhone 4S (“what’s the weather like in London?” etc.), which can be rude or embarrassing in some contexts, you could find out more discreetly using thought commands. A rival technology hoping to offer smarter, more discreet information access is Google’s augmented reality glasses. Location-specific information would be projected onto the lenses – for example, warning the user of tube disruption as they approach a station entrance.
Let’s not get carried away – there’s a reason the gaming character could only carry out simple actions. By the time the neural signals reach a headset they have had to pass through quite a lot of bone and tissue, and are therefore weakened and distorted. For disabled patients, electrodes can be surgically inserted into, or onto the surface of, the brain to get a clearer signal, but few gamers would go that far for their hobby. It’s also unlikely that an external headset could achieve accurate cursor placement, so the mouse is not likely to be rivalled by brain-controlled cursors any time soon. That said, a thought-controlled cursor has been demonstrated using electrodes placed on the brain surface rather than in the brain tissue. Once such brain-computer interfaces are commercially available, they could have real potential to help paralysed patients.
More nuggets from the blogs…
Twitter to show photos and videos in the stream: Twitter has been experimenting with inline multimedia, though the ‘Tweet Media’ setting was only an experiment. http://mashable.com/2010/07/26/tweet-media/
What do you get from participating?: Understanding the benefit of joining online communities. http://flux.futurelab.org.uk/2010/07/28/what-do-you-get-from-participating/
Infographic – The Social Landscape: A graphic showing each social website and how it is rated with marketing objectives. http://www.dontwasteyourtime.co.uk/social-network/infographic-the-social-landscape/
Spreading the message – Growth opportunities in text: Text messaging is still the universal lowest common denominator in communication using a phone other than voice. http://www.msearchgroove.com/2010/07/28/guest-column-spreading-the-message-why-the-major-growth-opportunities-are-still-in-text-messaging/
Hidden YouTube game: New ‘secret’ addition to YouTube allows users to play game while videos load. http://www.socialtimes.com/2010/07/youtube-easter-egg-play-snake-while-you-watch/
Creativity’s a funny thing. Not only is it often thought of as an intangible quality bestowed on a rare fortunate few, but we are somewhat used to thinking that those rare few work alone, or that they, at the very least, call the shots. Creative agencies have people called ‘creatives’, whose job it is to be creative and to direct other people who aren’t creative.
Now of course we have partnerships like Lennon and McCartney, Simon and Garfunkel, Morecambe and Wise, Adam and Joe – examples of people who were on the same wavelength to such an extent that they could produce things which are wonderfully more than the sum of their parts.
But lately I’ve got thinking that creativity itself is starting to take a different turn. Permit me to take you on a tangential dive into one of my pet loves.
Those who know me will know that I go on about gaming a lot. Too much, perhaps. And not in a l33t-speak, last-weekend-I-played-CoDMW2-til-my-eyes-bled kind of way, but in a way which acknowledges that gaming’s move into the mainstream is an event of real cultural significance, and that entertainment and art may never be the same again.
I have also been, for some time, fairly convinced by the analogy between a game having a designer and a novel having a writer – great novels can be crafted into works of art because they are often written by people with singular visions, who have control over every line, word and punctuation mark (to a degree – I realise this is a somewhat naive conception of the contemporary publishing world, at least).
As gaming, and the means by which to create games, have become popularised over the last, say, 20 years, it has become more and more possible for the creators of computer games to exhibit an analogous level of control over their creations. Picture lone programmer-designers, hunched over their machines in the late hours, just as the penniless artist might hunch at their desk, furiously scribbling, painting or typing in the throes of an idea on a dark night, until everything is Just. Right. I believed that if the trend continued, we would eventually get games which were just as honed, just as artful, as great novels.
However, having worked at a digital agency for some time now, it hit me the other day that that vision is unlikely to be the future of computer games. I’m not discounting the possibility that single individuals can produce captivating gaming experiences – people like Jason Rohrer and Daniel Benmergui. But the thing about games is that they can be so complex, so full of variables, and require so many different skills, that the creativity you need to produce a great game is of a very different kind. Some games, like Aquaria, are created by designer-programmer collaborations, so you get a kind of Lennon-McCartney partnership; many more are created by small teams, like a band jamming to thrash out a song; and others are created by vast studios, like an entire orchestra getting together and saying ‘hey guys, shall we write a concerto? Dave, you take violin.’
To give an example: Bioshock contains innumerable imperceptible touches contributing to the feel of the game as a whole – the way that desks are left open when they’re searched; the way that Houdini splicers teleport in a plume of blood-red mist; the way that lone enemies talk to themselves in wrecked corridors as a manifestation of their insanity.
Now, although it’s entirely possible that the same person came up with all of these little ideas, is it really likely? Is it likely that they were all dictated by the same person who came up with the Ayn Rand-inspired dystopia that is Bioshock’s setting? Is it even likely that whoever decided to set the game in a decrepit, dripping, labyrinthine art deco city under the sea was an individual, rather than a group of writers?
Or is it more plausible that all of these things fell out when a group of people threw everything they had into a Magimix and pressed ‘On’? For the record, I don’t know who came up with those ideas. Perhaps not even the people who came up with them know. Or maybe it was in fact all one person with a savant-like ability to describe the minutiae of a nightmare they had after finishing Atlas Shrugged in a single sitting.
To bring it back here, the point I’m making is that digital experiences are now so complex, so involved, that relying on one person to call all of the creative shots would be a nightmare. I’ve produced websites with little touches which I could never have foreseen and told a developer to implement up front – these decisions come out of discussion and collaboration, and that’s where creativity lies now. We’ve all heard about megalomaniacal directors or musicians dictating absolutely everything on the projects in which they’re involved – but that’s a very difficult thing to do with a digital experience, more so than with anything else, I would venture.
And as digital experiences become increasingly common, and increasingly admired, perhaps that will change our conception of creativity. I’m not for a moment suggesting that there’s no room for an individual’s vision, or for the leadership of a creative team, but perhaps there will be less emphasis on “genius” as applied to an individual – perhaps what will matter most will be people’s capacity to interact with one another. If games (and digital experiences in general) become significant contributions to culture, and many of those games are produced by teams, perhaps some of the most valuable contributions to culture in times to come will be put forth by groups rather than lonely artists. Your thoughts, ladies and gents?
So a recent Geek Dad post on Wired.co.uk asks whether children are getting spoiled by touch screen technology. It raises an interesting point, particularly in the context of Adam Standing’s description of catching his son “smearing his hands all over the TV screen in a bizarre fashion. It turned out that he was trying to change channel the same way he had seen me select music on my iPhone by scanning the Cover Flow system.”
It also interestingly echoes Matthew Robson’s glib declamation to Morgan Stanley last summer that ‘anything with a touch screen is desirable’.
With the developments in gesture-based control on the Wii, DS, PlayStation Move and Natal, not to mention the iPad and MS Tablet, it’s not all that wild a speculation to suppose that the not-too-distant future may see office workers in their cubicles flinging images and files around à la Minority Report, like a weird and solemn mass game of charades. But is it spoiling children? Are we setting them up for a fall? Well, no. At least, no more than the development of the telephone or the automobile did (which some might argue is a great deal). Let’s imagine that someone who grew up believing touch screen control was completely ubiquitous was presented with a 1980s Game Boy. At best it might excite their curiosity, in the same way that floppy disks or Betamax do for current adolescents. At worst, it might earn dripping disdain.
I would, however, be very surprised to see them left clueless and paralysed, pawing ineffectually at its two-tone screen. But then, I rather naively think I could get along alright without my mobile phone.
What is worth noting is that Adam’s son smearing his hands on the television set may be a harbinger of a real step change in UI design. As people’s expectations shift to the point where they assume every screen is a touch screen, UI design will have to keep up. The UI designer’s skillset may be virtually unrecognisable in a couple of years.
Wednesday 13th January sees this year’s BETT Show roll into town. Housed within London’s cavernous Olympia and playing host to 600 exhibitors and almost 30,000 visitors, BETT is the largest educational technology conference in the world.
Every year BETT gives teachers and those involved in education the opportunity to enhance their knowledge of learning through technology. We will be there, catching up with friends, partners and clients – and investigating some of the new developments at the start of an exciting new decade for ICT in education.
The central theme that seems to be coming out of the build up to BETT 2010 is playfulness. Professor Stephen Heppell will be running a new feature at the expo entitled ‘Playful Learning’ – an interactive area where visitors can immerse themselves in educational gaming at its best and use fun technology to overcome learner engagement issues.
Prof Heppell points out that “survey after survey suggests that our UK schoolchildren may be some of the least happy in Europe” and thinks he has the solution: “Playful learning is great fun and has re-energised classrooms, rekindled school-parent relationships and re-engaged brains.”
Other new features for BETT 2010 include the Future Learning Spaces area, which will give visitors a glimpse of what classrooms could look like in several years’ time, and TeachMeet Takeover – thirty-minute slots in which vendors hand over their stalls to informal, teacher-led discussions.
BETT 2010 looks set to reflect the trends and developments of the past year. The last twelve months have seen the continued rise of social media, and particularly the explosion of Twitter into the mainstream. There has been a degree of acceptance that these media are valid forms of communication for children and young people, with suggestions that they can improve confidence and literacy.
The prominence of these topics is reflected in the seminar programme at the event. Other significant issues of the past year include augmented reality (AR) and eSafety. The former is represented by Futurelab’s Spark, a mobile exhibition which uses 2D AR markers to enhance pupils’ experience in the classroom. Meanwhile Roar Educate’s Us Online seeks to educate pupils on safety, security and good citizenship in the online world.
The Government’s Home Access scheme is being formally launched at BETT 2010. A trial of the scheme – which will seek to remedy the ‘digital divide’ by providing 270,000 low income homes with computers and internet access – “went like a rocket” according to Becta, the government agency in charge of it. The scheme is exciting news for all those working with ICT in education – but it is likely to cause controversy given the state of the economy as a general election approaches.
We will be helping our good friends at QCDA. Since last year’s event we have been working hard together on mycurriculum.com, a website which allows teachers to connect and collaborate with each other by discussing best practice and sharing resources, activities and examples of pupils’ work. QCDA will be showcasing the site on two of their four ‘pods’ so come and check it out at Stand J30.
See you there!