Steven Spielberg has got a lot to answer for.

I’m not referring to the increase in shark killings following the worldwide release of 1975’s Jaws (indeed, the film has arguably been responsible for a ramping up of shark protection policies over the last few decades).

Rather, the movie auteur is at least partly responsible for our collective high-bar expectations of what can be achieved using gesture interaction, as a result of that sequence in 2002’s Minority Report when Tom Cruise dons his finger mittens and starts searching through a database of clips from the ‘crime precogs’ to establish when and where a murder is likely to occur.

Spielberg is also partially to blame for an over-referencing of his movie by journalists the world over in relation to all things gesture and touch controlled (ahem) – although, to be fair, Minority Report has become a convenient shorthand for these solutions; an easy way to visualise the hands-swiping, transparent screen brilliance of Mr Cruise’s future tech.

So, just how far are we from reaching Spielberg’s high bar for gesture interaction and re-enacting Tom Cruise’s sequence – or are we already on a par with the movie’s 2054 setting in terms of tech interaction?

And, more to the point, how is this technology helping the commercial integration market to convert futuristic tech into revenue?

Touch technology is, arguably, already ahead of Hollywood in terms of what’s possible. The same projected capacitive tech used in smartphones and tablets is now upping the ante on performance and durability, while MultiTouch Ltd’s MultiTaction solution uses cameras (Computer Vision Through Screen) to add faster response times (200fps), unlimited simultaneous touch points and object recognition.
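
MultiTaction’s exact pipeline isn’t public, but the broad principle of camera-based touch is simple enough to sketch. The snippet below is purely illustrative – a hypothetical blob-detection pass over an infrared camera frame using OpenCV, not MultiTouch Ltd’s actual code – and it shows why such systems have no hard ceiling on simultaneous touches: every bright blob in the frame is a candidate touch point.

```python
# Purely illustrative sketch of camera-based touch detection -- not
# MultiTaction's actual pipeline. Assumes an infrared camera frame
# (greyscale numpy array) in which fingertips appear as bright blobs.
import cv2

def detect_touches(ir_frame, min_area=20):
    """Return (x, y) centroids of bright IR blobs: candidate touch points."""
    _, mask = cv2.threshold(ir_frame, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    touches = []
    for contour in contours:
        m = cv2.moments(contour)
        if m["m00"] >= min_area:  # skip noise specks; m00 is the blob area
            touches.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return touches  # note: no fixed ceiling on simultaneous touches
```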

However, despite the fact that projected capacitive displays effectively mean you can be in control without touching, your fingers have to be so close to the screen that you might as well touch – and you will!

The only way to truly create gesture control using touch-screen displays is – rather obviously – to integrate a gesture control camera into the installation; something like Kinect or Leap.

But what other gesture control solutions are available or seeding?

We talk about Oblong Industries’ established Mezzanine solution elsewhere on CI Europe – so I won’t discuss it here.

Prior to working with Oblong, CEO John Underkoffler was appointed by Spielberg as science and technology adviser on Minority Report, so you can imagine he’s got plenty to say on the subject!

John Anderton (Tom Cruise) would possibly be happier with start-up Thalmic Labs’ Myo armband.

Initially aimed at smaller computing devices, but with scope to go large, Myo is designed to fit around a user’s forearm, detecting small muscle movements, rotations of the arm and electrical muscle impulses.

Eight sensors contact the user’s skin, reading electromyographic (EMG) signatures from the muscles. Currently, users can flick their wrists to move things across integrated displays, and clench their fists and rotate their arms to adjust volume.
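
To picture how those readings might become commands, here’s a deliberately simplified sketch – hypothetical thresholds, not Thalmic Labs’ actual classifier – mapping muscle activation and arm rotation to the clench, rotate and flick gestures described above.

```python
# Hypothetical sketch, not Thalmic Labs' API: mapping eight normalised
# EMG channels plus arm-rotation data to the gestures described above.
FIST_THRESHOLD = 0.6  # assumed activation level (scale 0..1) for a clench

def classify_gesture(emg, wrist_flick_rate, arm_roll_rate):
    """emg: eight normalised sensor readings; rates in radians/second."""
    mean_activation = sum(emg) / len(emg)
    if mean_activation > FIST_THRESHOLD:
        return "clench"          # e.g. grab an on-screen object
    if abs(arm_roll_rate) > 1.0:
        return "rotate"          # e.g. turn the volume up or down
    if abs(wrist_flick_rate) > 2.0:
        return "flick"           # e.g. throw content across a display
    return None                  # nothing deliberate detected
```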

Thalmic Labs has been careful to ensure that Myo integrates with smart eyewear technology such as Google Glass, Epson Moverio and Recon Jet – and, as you’d expect, Oculus Rift – and it’s easy to envisage an integrated solution providing excellent experiential fun at launch events, or something more boardroom-based.

Myo is still very much evolving, but pre-orders are priced at only $150 and it comes packaged with the ‘10-foot Experience’ app, which allows users to control displays from distances of 10 to 30 feet – so it’s perhaps worth a punt if you’re looking to add this sort of functionality to your integration offering.

Exhibited at CEATEC Japan in early October, Elliptic Labs’ ultrasonic technology has been developed to provide touch-less – and thus more intuitive and speedier – gesture control for smartphones and tablets, and this Batman-friendly solution would certainly fit happily into a Hollywood script.

It works by beaming Bat-like ultrasound from transmitter speakers onto a user’s hand and back to integrated microphones, recognising movement up to 180°.

In addition, ‘distributed sensing’ captures the operator’s hands from multiple angles, thus avoiding occlusion, while ‘range gating’ separates the user’s ultrasonic echoes from later echoes, preventing accidental recognition of non-operational gestures.
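
Range gating is the easiest of those tricks to illustrate. The sketch below uses assumed numbers, not Elliptic Labs’ specifications, and simply discards any echo whose round-trip time puts it beyond the gesture zone:

```python
# Hedged sketch of range gating with assumed numbers, not Elliptic Labs'
# specifications: echoes arriving too late come from objects beyond the
# operational zone, so they are simply discarded.
SPEED_OF_SOUND = 343.0  # metres per second, in air
MAX_HAND_RANGE = 0.5    # assume gestures are read within ~50 cm

def gate_echoes(echo_times):
    """Keep only echoes (round-trip seconds) from within the gesture zone."""
    cutoff = (2 * MAX_HAND_RANGE) / SPEED_OF_SOUND  # out to the hand and back
    return [t for t in echo_times if t <= cutoff]

def echo_to_distance(round_trip_time):
    """Convert a gated round-trip time into hand distance in metres."""
    return (round_trip_time * SPEED_OF_SOUND) / 2

# A 2 ms echo (~0.34 m) is kept; a 5 ms echo (~0.86 m) is gated out.
print(gate_echoes([0.002, 0.005]))
```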

It’s obvious why Elliptic Labs is targeting smartphone manufacturers – there’s plenty of money to be made there, and the tech should work well at close quarters – but this ingenious gesture solution could also integrate with existing touch-screen tech, augmenting the touch experience with gesture control when required.

I’m not sure how robust it would be on its own for large-screen displays, however; too much ultrasonic interference, methinks.

Back to Tom Cruise’s ‘gesture gloves’, and Bristol University’s ‘ultrahaptics’ certainly looks the part.

It also utilises ultrasonic technology – tiny speakers (‘ultrasonic transducers’) projecting waves of ultrasound through the source display, displacing the air and creating ‘acoustic radiation pressure’, which produces tiny vibrations on the user’s skin.

Users should therefore be able to sense varying degrees of vibration, and, in theory, use gesture control in a more precise way.
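
The focusing behind that pressure point is classic phased-array physics: fire each transducer at a slightly different moment so that all the wavefronts arrive at the target in phase. Here’s a minimal sketch with an invented array geometry, rather than the Bristol team’s actual hardware:

```python
# Minimal sketch of phased-array focusing, with an invented 4x4 array
# geometry: delay each transducer so every wavefront arrives at the
# focal point in phase, concentrating pressure at that spot.
import math

SPEED_OF_SOUND = 343.0  # metres per second

def emission_delays(transducer_positions, focal_point):
    """Per-transducer firing delays (seconds) to focus on focal_point."""
    distances = [math.dist(p, focal_point) for p in transducer_positions]
    farthest = max(distances)
    # The farthest element fires first (zero delay); nearer ones wait.
    return [(farthest - d) / SPEED_OF_SOUND for d in distances]

# A 4x4 array on a 1 cm grid, focused 20 cm above its centre.
positions = [(x * 0.01, y * 0.01, 0.0) for x in range(4) for y in range(4)]
delays = emission_delays(positions, (0.015, 0.015, 0.2))
```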

But that’s not the whole story.

Additional motion sensing tech is required to control the display, because the ultrasound cannot be communicated back to it. All good: we love a bit of integrated tech!

Fujitsu’s ‘glove’ device also seems in love with Minority Report. It can recognise gestures thanks to a gyro sensor and accelerometer in the glove’s wrist, allowing for basic movements – up, down, left, right, rotate-left, rotate-right.

The gesture control is enabled when the user’s wrist is bent back as if revving a motorbike, allowing the glove to distinguish between this operational gesture and regular arm movements.
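
In code, that arming step might look something like the hypothetical sketch below – thresholds invented for illustration, not Fujitsu’s firmware – with every reading ignored until the wrist is bent back far enough:

```python
# Hypothetical sketch of that arming step -- thresholds invented for
# illustration, not Fujitsu's firmware. Nothing registers until the
# wrist is bent back past the 'revving' angle.
ARMING_ANGLE = 40.0  # degrees of wrist extension assumed to arm the glove

def interpret(wrist_angle, accel_x, accel_y, roll_rate):
    """Accelerations in m/s^2 (x = right, y = up); roll_rate in deg/s."""
    if wrist_angle < ARMING_ANGLE:
        return None  # not armed: regular arm movement is ignored
    if abs(roll_rate) > 90.0:
        return "rotate-right" if roll_rate > 0 else "rotate-left"
    if abs(accel_x) > 1.5 and abs(accel_x) >= abs(accel_y):
        return "right" if accel_x > 0 else "left"
    if abs(accel_y) > 1.5:
        return "up" if accel_y > 0 else "down"
    return None
```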

So there are lots of new gesture-control solutions and ideas out there, clearly fuelled by Mr Spielberg’s visual extravaganza – but there’s one that the renowned director might also identify with.

The ingenious interactive RoboThespian robot, recently showcased at the FLUX Innovation Lounge in London, brings to mind another Spielberg future-fest, A.I. Artificial Intelligence.

Created by Engineered Arts Ltd for human interaction in public environments, RoboThespian is fully interactive, multilingual, and user-friendly, making it a ‘perfect device with which to communicate and entertain’.

Now in its third generation, with over six years of continuous development behind it, it has been used around the world in science centres, at visitor attractions, and by commercial users and academic research institutions – so it’s ideal for many commercial integrator projects.

A built-in camera in the robot’s chest records video and motion-tracks human movements, which the robot mimics using air pistons. Hold up your arms and wiggle your fingers and he follows suit!
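
Conceptually, the mimicry loop is straightforward: clamp each tracked human joint angle to the robot’s safe range, then drive the matching actuator. The sketch below invents an Actuator stand-in for illustration – Engineered Arts’ real control stack is, of course, rather more involved:

```python
# Loose sketch of a mimicry loop; the Actuator class is invented for
# illustration and Engineered Arts' real control stack is far richer.
class Actuator:
    """Stand-in for one pneumatically driven joint."""
    def __init__(self, name):
        self.name = name

    def move_to(self, angle_deg):
        print(f"{self.name} -> {angle_deg:.0f} degrees")

def mimic(tracked_angles, actuators, limits):
    """Clamp each tracked human joint angle to the robot's safe range."""
    for joint, angle in tracked_angles.items():
        lo, hi = limits[joint]
        actuators[joint].move_to(max(lo, min(hi, angle)))

# A visitor raises an arm to 170 degrees; the robot follows to its 160 limit.
arms = {"left_shoulder": Actuator("left_shoulder")}
mimic({"left_shoulder": 170.0}, arms, {"left_shoulder": (0.0, 160.0)})
```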

Okay, so you’re not so much controlling RoboThespian with gestures, as initiating his mickey-taking movements, but this is an ingenious use of motion tracking.

Indeed, the robot is obviously keen to keep the Spielberg references flowing.

He can also be pre-programmed with movement and voice, and his imitations aren’t limited to mocking you and me.

RoboThespian’s name is no accident – he’s a movie lover and a thesp, performing a number of actor imitations with aplomb, including a cracking rendition of Quint (played by Robert Shaw) delivering his Indianapolis monologue in Jaws: “You know the thing about a shark? He’s got lifeless eyes, black eyes, like a doll’s eye”. If RoboThespian sees the irony there, he doesn’t let on; just keeps on reacting to your gestures…

Rob Lane is founder/director of tech PR agency Bigger Boat PR Ltd and is – unsurprisingly! – a big fan of Jaws.
