Making wearables more useful and smart homes less of a chore.

Wearables might be set to get a whole lot more useful in future if research being conducted by Carnegie Mellon University’s Future Interfaces Group is indicative of the direction of travel.
While many companies, big and small, have been jumping into the wearables space in recent years, the use-cases for these devices often feel superficial, with fitness perhaps the most compelling scenario at this nascent stage. Yet smartwatches have far richer potential than merely performing a spot of sweat tracking.
The other problem with the current crop of smartwatches is that the experience of using apps on wrist-mounted devices does not always live up to the promise of getting stuff done faster or more efficiently. Just having to load an app on this type of supplementary device can feel like an imposition.
If the primary selling point of a smartwatch really is convenience/glanceability, the wearer does not want to be squinting at lots of tiny icons and manually loading apps to get the function they need in a given moment. A wearable needs to be a whole lot smarter to be worth wearing versus just using a smartphone.
At the same time, other connected devices populating the growing Internet of Things can feel pretty dumb right now, given the interface demands they also place on users. Take connected lightbulbs like Philips Hue, for example, which require the user to open an app on their phone just to turn a light on or off, or change the colour of the light.
Which is pretty much the opposite of convenient, and why we’ve already seen startups trying to fix the problems IoT devices are creating via sensor-powered automation.
“The fact that I’m sitting in my living room and I have to go into my smartphone and find the right application and then open up the Hue app and then set it to whatever, blue, if that’s the future smart home it’s really dystopian,” argues Chris Harrison, an assistant professor of Human-Computer Interaction at CMU’s School of Computer Science, discussing some of the interface challenges connected device designers are grappling with in an interview with TechCrunch.
But nor would it be good design to put a screen on every connected object in your home. That would be ugly and irritating in equal measure. Really there needs to be a far smarter way for connected devices to make themselves useful. And smartwatches could hold the key to this, reckons Harrison.
A sensing wearable
He describes one project researchers at the lab are working on, called EM-Sense, which could kill two birds with one stone: provide smartwatches with a killer app by enabling them to act as a shortcut companion app/control interface for other connected devices, and thereby also make IoT devices more useful, given their functionality would be automatically surfaced by the watch.
The EM-Sense prototype smartwatch is able to identify other electronic objects via their electromagnetic signals when paired with human touch. A user only has to pick up, touch or switch on another electronic device for the watch to identify what it is, enabling a related app to be automatically loaded onto their wrist. So the core idea here is to make smartwatches more context-aware.
Harrison says one example EM-Sense application the team has put together is a tooth-brushing timer: when an electric toothbrush is switched on, the wearer’s smartwatch automatically starts a timer app, letting them glance down to see how long they need to keep brushing.
“Importantly it doesn’t require you to modify anything about the object,” he notes of the tech. “This is the really key thing. It works with your refrigerator already. And the way it does this is it takes advantage of a really clever little physical hack, and that is that all of these devices emit small amounts of electromagnetic noise. Anything that uses electricity is like a little miniature radio station.
“And when you touch it, it turns out that you become an extension of it as an antenna. So your refrigerator is basically just a giant antenna. When you touch it, your body becomes a little bit of an antenna as well. And a smartwatch sitting on the skin can actually detect those emissions and, because they are fairly unique among objects, it can classify the object the instant that you touch it. And all of the smartness is in the smartwatch; nothing is in the object itself.”
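To make that classification step concrete, here is a minimal, hypothetical sketch of how such a system might work. The sample rate, feature extraction and nearest-centroid matching here are illustrative assumptions, not CMU’s published implementation:

```python
# Hypothetical sketch of EM-Sense-style object identification (an
# illustration, not the actual CMU implementation). Anything that uses
# electricity leaks a characteristic pattern of electromagnetic noise;
# touching the object couples that signal into the body, where an
# electrode on the watch can sample it.
import numpy as np

SAMPLE_RATE = 1_000_000  # assumed 1 MHz ADC behind the watch's electrode

def em_signature(samples, n_bins=64):
    """Reduce a raw voltage trace to a coarse, normalised EM spectrum."""
    windowed = samples * np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(windowed))
    # Pool the spectrum into coarse bands so small frequency drifts
    # between touches of the same object don't break the match.
    bands = np.array_split(spectrum, n_bins)
    feats = np.array([band.mean() for band in bands])
    return feats / (np.linalg.norm(feats) + 1e-9)

class EMClassifier:
    """Nearest-centroid matcher over enrolled per-object signatures."""

    def __init__(self):
        self.centroids = {}  # object label -> mean signature

    def enroll(self, label, traces):
        """Enroll an object from a few example touch recordings."""
        self.centroids[label] = np.mean(
            [em_signature(t) for t in traces], axis=0
        )

    def identify(self, samples, threshold=0.8):
        """Return the best-matching object label, or None if unsure."""
        sig = em_signature(samples)
        label, score = max(
            ((lbl, float(sig @ c)) for lbl, c in self.centroids.items()),
            key=lambda item: item[1],
        )
        return label if score >= threshold else None
```

In the toothbrush example above, identify() returning “toothbrush” would be the cue for the watch to auto-launch the brushing timer, with no modification to the toothbrush itself.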
While on the one hand it might seem like the EM-Sense project is narrowing the utility of smartwatches, by shifting focus from them as wrist-mounted mobile computers with fully featured apps to zero in on a function more akin to a digital dial/switch, smartwatches arguably sorely need that kind of focus. Utility is what’s lacking thus far.
And when you pair the envisaged ability to smartly control electrical devices with other extant capabilities of smartwatches, such as fitness/health tracking and notification filtering, the whole wearable proposition starts to feel rather more substantial.
And if wearables can become the lightweight and responsive remote control for the future smart home there’s going to be far more reason to strap one on every day.
“It fails basically if you have to ask your smartwatch a question. The smartwatch is glanceability,” argues Harrison. “Smartwatches will fail if they are not smart enough to know what I need to know in the moment.”
His research group also recently detailed another project aimed at expanding the utility of smartwatches in a different way: by increasing the interaction surface area via a second wearable (a ring), allowing the watch to track finger gestures and compute gesture inputs on the hands, arm and even in the air. Although convincing people they need two wearables seems a bit of a stretch to me.
A less demanding smart home
To return to the smart home, another barrier to adoption that the CMU researchers are interested in unpicking is the too-many-sensors problem — i.e. the need to physically attach sensors to all the items you want to bring online, which Harrison argues simply does not scale in terms of user experience or cost.
One answer the lab is exploring is the Info Bulb: a lightbulb-style device containing a projector, designed to turn the surfaces beneath it into app-powered displays.
“I think it’s going to be the new desktop replacement,” he tells TechCrunch. “So instead of a desktop metaphor on our desktop computer it will literally be your desktop.
“You put it into your office desk light or your recessed light in your kitchen and you make certain key areas in your home extended and app developers let loose on this platform. So let’s say you had an Info Bulb above your kitchen countertop and you could download apps for that countertop. What kind of things would people make to make your kitchen experience better? Could you run YouTube? Could you have your family calendar? Could you get recipe helpers and so on? And the same for the light above your desk.”
Of course we’ve seen various projection-based and gesture interface projects over the years. The latter tech has also been commercialized by, for example, Microsoft with its Kinect gaming peripheral and Leap Motion’s gesture controller. But it’s fair to say that uptake of these interfaces has lagged more traditional options, be they joysticks or touchscreens, so gesture tech feels more obviously suited to specialized niches (such as VR) at this stage.
And it also remains to be seen whether projector-style interfaces can make a leap out of the lab to grab mainstream consumer interest in future — as the Info Bulb project envisages.
“No one of these projects is the magic bullet,” concedes Harrison. “They’re trying to explore some of these richer [interaction] frontiers to envision what it would be like if you had these technologies. A lot of things we do have a new technology component but then we use that as a vehicle to explore what these different interactions look like.”
Which piece of research is he most excited about, in terms of tangible potential? He zooms out at this point, moving away from interface tech to an application of AI for identifying what’s going on in video streams, which he says could have very big implications for local governments and city authorities wanting to improve their responsiveness to real-time data on a budget. So, basically, as possible fuel for powering the oft-discussed ‘smart city’. He also thinks the system could prove popular with businesses, given the low cost involved in building custom sensing systems that are ultimately driven by AI.
This project is called Zensors and starts out requiring crowdsourced help from humans, who are sent video stills to parse in order to answer a specific query about what can be seen in the shots taken from a video feed. The humans act as mechanical turks, training the algorithms for whatever custom task the person setting up the system requires. All the while the machine learning is running in the background, learning and improving; as soon as it becomes as good as the humans, the system is switched over to the now-trained algorithmic eye, with humans left to do only periodic (sanity) checks.
“You can ask yes, no, count, multiple choice and also scales,” says Harrison, explaining what Zensors is good at. “So it could be: how many cars are in the parking lot? It could be: is this business open or closed? It could be: what type of food is on the countertop? The grad students did this. Grad students love free food, so they had a sensor running: is it pizza, is it Indian, is it Chinese, is it bagels, is it cake?”
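The training-to-handoff loop he describes can be sketched in a few lines. This is a hypothetical illustration rather than the actual Zensors codebase; the thresholds and the model interface (predict/update) are assumptions:

```python
# Hypothetical sketch of a Zensors-style human-to-machine handoff (an
# illustration; the real system's internals aren't described here). A
# fixed query, e.g. "how many cars are in the parking lot?", is answered
# by crowdworkers while a model trains in the background; once the model
# agrees with the humans often enough, it takes over, with occasional
# human spot-checks thereafter.
import random

AGREEMENT_TARGET = 0.95  # assumed switch-over threshold
SPOT_CHECK_RATE = 0.05   # assumed share of frames still sent to humans

def zensors_loop(frames, ask_human, model, window=200):
    """Yield one answer per video still for a single fixed query."""
    recent_matches = []
    model_active = False
    for frame in frames:
        prediction = model.predict(frame)  # assumed model interface
        if model_active and random.random() > SPOT_CHECK_RATE:
            yield frame, prediction        # trained "sensor" answers alone
            continue
        truth = ask_human(frame)           # crowdworker labels the still
        model.update(frame, truth)         # background learning continues
        recent_matches.append(prediction == truth)
        recent_matches = recent_matches[-window:]
        if len(recent_matches) == window:
            model_active = sum(recent_matches) / window >= AGREEMENT_TARGET
        yield frame, truth
```

The design point is that the crowd is scaffolding: paid human answers get the custom sensor working on day one, and the cost falls away as the model catches up.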
What makes him so excited about this tech is the low cost of implementing the system. He explains the lab set up a Zensors camera to watch over a local bus stop, recording when buses arrived and tallying that data against the city bus timetables to see whether the buses were running to schedule or not.
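As a hypothetical illustration of that tallying step (not the lab’s actual analysis; the five-minute tolerance and function names are assumptions), the detected arrivals could be scored against the timetable like so:

```python
# Hypothetical follow-on analysis: tally the arrivals a Zensors-style
# camera detected against the published timetable. The five-minute
# tolerance is an assumed definition of "on time".
from datetime import timedelta

ON_TIME_TOLERANCE = timedelta(minutes=5)

def punctuality(scheduled, observed):
    """Return the fraction of scheduled buses that arrived on time.

    Both arguments are lists of datetime objects.
    """
    remaining = sorted(observed)
    on_time = 0
    for slot in sorted(scheduled):
        # Match each timetable slot to the nearest unclaimed arrival.
        match = min(remaining, key=lambda t: abs(t - slot), default=None)
        if match is not None and abs(match - slot) <= ON_TIME_TOLERANCE:
            on_time += 1
            remaining.remove(match)
    return on_time / len(scheduled) if scheduled else 0.0
```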