Has UI and UX Innovation Plateaued?

Different strokes for different folks. That’s trite but true of how people interact with smartphones and tablets. It also sums up a big challenge for app developers, OS vendors and device manufacturers: designing a UI that each one believes is the ideal way to interact with a device, while accepting that many users will be confused by it or prefer an alternative.

Case in point: Steve Jobs famously dissed the stylus as a lousy substitute for the finger. But if most iPad and tablet users agreed that fingers alone suffice, there wouldn’t be such a healthy market for add-ons like Bluetooth keyboards. It’s easy to assume people buy them because they don’t like typing on glass, but that’s not the only reason.

“On many devices and within many apps, having a soft keyboard means not having the full real estate of the screen available,” says Daria Loi, Intel user experience (UX) innovation manager. “I have a never-ending list of users who report frustrations with their soft keyboard covering part of the screen.” [Disclosure: Intel is the sponsor of this content.]

UI and UX preferences also vary by culture, giving developers, OS vendors and device manufacturers another set of variables to accommodate. “I recently interviewed users in Japan who love using the stylus on their tablet or convertible as it enables touch without fingerprints,” Loi says. “Users in India told me that they love the hand for some applications but the stylus for others — in particular more complex apps such as Illustrator, Photoshop, 3D Studio Max and so on.”

One UI to Rule Them All?

Whether it’s touch, speech control or even eye tracking, a break-the-mold UI has to be intuitive enough that users aren’t daunted by a steep learning curve. With Metro, Microsoft takes that challenge to another level: its new UI spans multiple device types, and thus multiple use-case scenarios.

“All meaningful change requires learning,” says Casey McGee, senior marketing manager for Microsoft’s Windows Phone division. “The key is to expose people to as many relatable scenarios as possible and make learning an enjoyable and rewarding process of discovery. Microsoft is doing that by using similar UI constructs across several consumer products: phones, PCs and game consoles.”

Metro is noteworthy in part because if consumers and business users embrace it, then developers can leverage their work across multiple device types. “Windows Phone and Windows 8 in particular are both part of one comprehensive set of platform offerings from Microsoft, all based on a common set of tools, platform technologies and a consistent Metro UI,” McGee says. “The two use the same familiar Metro UI, and with Windows Phone 8, are now built on the same shared Windows core. This means that developers will be able to leverage much of their work writing applications and games for one to deliver experiences to the other.”

Metro gives developers a single framework for designing user experiences, with a set of controls for each form factor. “Developers building games based on DirectX will be able to reuse substantial amounts of their code in delivering games for both Windows and Windows Phone,” McGee says. “Developers building applications using XAML/.NET will be able to reuse substantial amounts of their business logic code across Windows and Windows Phone.”
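To make McGee’s point concrete, here’s a minimal sketch of that shared-core pattern, written in TypeScript as a stand-in for the XAML/.NET stack he describes. The names (CartService, Item) are hypothetical, not Microsoft APIs; the idea is that business logic lives in one platform-neutral module while each device’s UI layer stays thin.

```typescript
// Hypothetical shared "core" module: platform-neutral business logic.
// CartService and Item are illustrative names, not Microsoft APIs.
export interface Item {
  name: string;
  priceCents: number;
}

export class CartService {
  private items: Item[] = [];

  add(item: Item): void {
    this.items.push(item);
  }

  // Totals in cents, so each UI layer decides how to format currency.
  totalCents(): number {
    return this.items.reduce((sum, i) => sum + i.priceCents, 0);
  }
}

// A phone UI module and a PC UI module would each import CartService
// and bind it to their own controls; only the presentation differs.
```

Each form factor then wires this same logic to its own controls, which is the kind of reuse McGee says the shared Windows core makes possible.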

Speak Up for a Better UX? Or Look Down?

The growing selection of speech-controlled apps and UIs, including Google Now, Siri and Samsung’s S Voice, shows that some developers and vendors believe voice is a viable alternative to the finger, at least for some tasks. When people can simply say what they want, the theory goes, it’s less daunting and confusing than having to learn and remember that, say, swiping two fingers in a circle zooms in on the page.

But adding speech control doesn’t by itself guarantee an intuitive user experience. Implemented poorly, it can make the experience worse.

One common pitfall is assuming that users will stick to industry lingo rather than using vernacular terms. For example, a travel app that expects users to say “depart” instead of “leave” might frustrate them by responding with: “Not recognized. Try again.” That’s also an example of the difference between simple speech recognition and what’s known as “natural language understanding”: The former looks for a match in its database of terminology, while the latter tries to understand the user’s intent.
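The gap between the two approaches is easy to sketch. The TypeScript below uses a hypothetical travel-app intent (none of these names come from a real product): exact keyword matching rejects “leave,” while even a crude synonym table recovers the user’s intent. Production natural language understanding is statistical rather than a lookup table; this toy only illustrates the distinction.

```typescript
// Toy contrast between keyword matching and intent matching.
// "book_departure" is a hypothetical travel-app intent.
type Intent = "book_departure" | "unknown";

// Speech-recognition-style matching: only the expected industry
// term is in the database, so vernacular phrasing fails.
function matchKeyword(utterance: string): Intent {
  return utterance.toLowerCase().includes("depart")
    ? "book_departure"
    : "unknown";
}

// A first, crude step toward understanding intent: map vernacular
// synonyms onto the same intent before matching.
const SYNONYMS: Record<string, Intent> = {
  depart: "book_departure",
  leave: "book_departure",
  "take off": "book_departure",
};

function matchIntent(utterance: string): Intent {
  const text = utterance.toLowerCase();
  for (const [term, intent] of Object.entries(SYNONYMS)) {
    if (text.includes(term)) return intent;
  }
  return "unknown";
}

console.log(matchKeyword("I want to leave on Friday")); // "unknown": the frustrating case
console.log(matchIntent("I want to leave on Friday"));  // "book_departure"
```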

“The correct answer for voice is to get more and more intelligent like humans so that you don’t have to get the right word,” says Don Norman, co-founder of the Nielsen Norman Group, a consultancy that specializes in UX.

Eye tracking is another potential UI. It could be a good fit for messier settings: think turning pages in a cookbook app without coating the tablet or smartphone screen with flour. But like voice and touch, eye tracking will see myriad implementations as the industry casts about for the best approach.
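As a thought experiment, here’s one way the cookbook scenario might be wired up: a dwell-based trigger that turns the page when the gaze rests near the screen’s right edge. The GazeSample type and the thresholds are assumptions for illustration, not any vendor’s eye-tracking API.

```typescript
// Sketch of a dwell-based page turn. GazeSample and the thresholds
// below are assumptions for illustration, not a real eye-tracking API.
interface GazeSample {
  x: number;           // horizontal gaze position, normalized 0..1
  timestampMs: number; // when the sample was captured
}

const DWELL_MS = 800;  // gaze must rest this long before acting
const EDGE_ZONE = 0.9; // rightmost 10% of the screen turns the page

let dwellStart: number | null = null;

function onGaze(sample: GazeSample, turnPage: () => void): void {
  if (sample.x >= EDGE_ZONE) {
    dwellStart ??= sample.timestampMs; // start timing the dwell
    if (sample.timestampMs - dwellStart >= DWELL_MS) {
      turnPage();
      dwellStart = null; // reset so one dwell turns exactly one page
    }
  } else {
    dwellStart = null;   // gaze left the zone; cancel the dwell
  }
}
```

The design question Loi raises applies even to a toy like this: the dwell time and trigger zone only make sense if eye tracking actually fits the task, rather than being squeezed in because the technology is new.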

“Eye tracking is an interesting thing,” Loi says. “I can see usefulness and some interesting usages. My only concern is that we, the industry, might fall in love blindly with it and start to believe it can do anything. Do you remember the initial attitude toward gestural controls? All those unlikely UIs to be navigated through unlikely new gestural languages? Sometimes the industry gets very excited about a new technology, and the first reaction is to squeeze it into every device and context, without carefully considering why and what for exactly.”

In the case of voice, the user experience could get worse before it gets better simply because there’s a growing selection of solutions that make it relatively easy for developers to speech-enable their apps. That freedom to experiment means users will have to contend with a wide variety of speech UI designs.

“I consider this an exciting, wonderful period,” Norman says. “But for the everyday person trying to get stuff done, it will be more and more frustrating.”