Taming Android UI Fragmentation

More than 55 manufacturers currently offer Android smartphones and tablets. That’s good news for mobile developers, since a broad, deep device selection helps attract more buyers and thus a bigger pool of potential users for Android apps.

The bad news is this device diversity also means a large number of screen sizes, densities and resolutions, which vendors often play with to try to differentiate their products. Developers have to make sure their app’s UI provides a good experience across all of these devices -- or at least as many as possible. Just how big a challenge is that? Opinions vary among experienced developers.

“The biggest problem we’ve had with Android is the various different screen sizes and being able to scale our UI to those different screen sizes,” says Geoff Pado, lead developer at Newsy, whose apps provide news videos. “It’s one of the things that take up a lot of time when we’re building apps.”

Other developers say that planning ahead minimizes the amount of time and effort required. “It’s not hard to be resolution-agnostic if you plan for it ahead of time, especially because Google provides all kinds of interface tools for building UIs that can scale and stretch and change based on the current display,” says Chris Pruett, chief taskmaster at Robot Invader, which specializes in games. “The only time it becomes difficult is when you try to port an application that was written to target just one screen size and relies on hard-coded absolute pixel offsets and regions.

“Such applications are usually full of assumptions that were made about their original platform (such as ‘How many pixels tall is the screen?’) which are no longer true,” says Pruett. “Replacing all of those hard-coded bits with proper rules to scale or stretch UIs after the fact is a lot of work. A lot of complaints about Android screen size diversity boil down to ‘I didn’t consider this when I originally wrote this code.’”

Pay Attention to Densities and Documentation
The first step in ensuring a consistently great UI is to take advantage of Google’s documentation.

“The bible is here: http://developer.android.com/guide/practices/screens_support.html,” says Vincent Chavy, director of product management for mobile and desktop solutions at RADVISION, whose apps enable video conferencing. “This is a must-read for any Android developer willing to ensure an app will be great on any screen.”

As the Android OS has evolved, it’s gradually added features and documentation that help developers accommodate device diversity. “Android 3.0 introduced new elements called fragments,” says Volodymyr Kasyanenko, RADVISION’s lead architect for mobile and desktop solutions, pointing to the platform’s fragments guide. Android introduced fragments in Android 3.0 (API level 11) primarily to support more dynamic and flexible UI designs on large screens, such as tablets.
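
For developers who haven’t used them yet, the sketch below shows roughly what a fragment looks like in code. It is a minimal, hypothetical example -- the NewsListFragment name and the R.layout.news_list resource are illustrative only -- but it captures the idea: the same self-contained piece of UI can fill a phone screen on its own or sit beside other fragments in a tablet layout.

import android.app.Fragment;
import android.os.Bundle;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;

// A fragment is a reusable slice of UI hosted by an activity. On a phone it
// might occupy the whole screen; on a tablet the same class can sit in one
// pane of a multi-pane layout.
public class NewsListFragment extends Fragment {
    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container,
                             Bundle savedInstanceState) {
        // Inflate this fragment's own layout (hypothetical resource); the
        // hosting activity decides where it appears and what sits beside it.
        return inflater.inflate(R.layout.news_list, container, false);
    }
}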

Google’s documentation and tools can be particularly helpful for developers who are just starting out and are not yet comfortable with the idea of figuring out their own techniques and tricks. Experience aside, the role of Google’s documentation and tools also varies by the type of application.

“Since we are making games, we do not rely as heavily on Google’s UI tools as other sorts of applications do, because we render everything -- including our UI -- with our game engine,” says Pruett. “However, nongame applications should totally leverage Google’s tools to make size-independent UI.

“There’s tons of functionality already built into the OS to deal with this, and while we choose to roll our own solutions sometimes, it’s so that our codebase remains as platform-agnostic as possible,” says Pruett. “Just utilizing what’s already provided by the platform is the right way to solve this problem for most developers.”

It’s also important to understand the major ways that displays differ, including the pixel density and resolution.

“To deal with screen density, ensure that all the GUI elements will take the same space regardless of the density of the screen,” says Chavy. “If you do not ensure this, then your app will have different layouts depending on the density of the screen, and you do not want this!”
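
In practice, that means expressing sizes in density-independent pixels (dp) rather than raw pixels, so a 48 dp control occupies roughly the same physical space on a medium-density screen as on a high-density one. Here is a minimal sketch of the conversion in Java; the helper class and the 48 dp figure are illustrative, not taken from any app mentioned in this article.

import android.content.Context;
import android.util.TypedValue;

public final class Dimensions {
    private Dimensions() {}

    // Convert a size given in density-independent pixels (dp) into raw pixels
    // for the current screen, e.g., dpToPx(context, 48) for a touch target
    // that feels the same size on every device.
    public static int dpToPx(Context context, float dp) {
        return Math.round(TypedValue.applyDimension(
                TypedValue.COMPLEX_UNIT_DIP, dp,
                context.getResources().getDisplayMetrics()));
    }
}

The same principle applies in layout XML, where sizes written as 48dp rather than 48px scale automatically; hard-coded pixel values are exactly the kind of assumption Pruett warns about.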

The next step is to decide whether to optimize the app for devices with high-density screens. For example, for its SCOPIA Mobile app, RADVISION has separate sets of bitmaps for medium- and high-density screens.

“If you provide the same bitmap for every screen density, it will work, but Android will scale the bitmap to fit, and this will most likely produce a blurry or pixelated result,” says Chavy. “The same applies on iOS, where it’s better to provide one set of graphics for regular devices and another, higher-definition set of images for iOS devices supporting Retina displays.”
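
On Android, that selection is automatic once the differently scaled bitmaps are placed in density-qualified resource directories such as drawable-mdpi/ and drawable-hdpi/. The hypothetical snippet below simply logs which density bucket the current device reports, which can be useful when checking that the right set of assets is being picked up.

import android.app.Activity;
import android.os.Bundle;
import android.util.DisplayMetrics;
import android.util.Log;

public class DensityCheckActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        DisplayMetrics metrics = getResources().getDisplayMetrics();
        // densityDpi is roughly 160 on medium-density (mdpi) screens and 240
        // on high-density (hdpi) screens; Android loads bitmaps from the
        // matching drawable-mdpi/ or drawable-hdpi/ directory automatically.
        Log.d("DensityCheck", "Screen density: " + metrics.densityDpi + " dpi");
    }
}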

Understand the App’s and Device’s Use Case

Many vendors are pricing their Android tablets so they’re affordable for as many people as possible. If that strategy pays off, it will mean a large pool of Android tablets for developers to target.

But that doesn’t mean all developers need to start thinking about how to make their apps look equally good on tablets and smartphones. Just the opposite: Some developers can afford to ignore tablets if their target audience is unlikely to use those devices for their apps. A prime example is fitness, where toting a tablet isn’t practical.

“Our application is not suited for tablets, so we only care about the phone resolutions,” says Morten Keldebaek, head of mobile applications at Endomondo, which combines exercise with social networking.

An app’s use case can also affect how people hold their device and, as a result, what developers need to focus on UI-wise. “We force portrait mode on most screens, so we do not really care about landscape,” says Keldebaek. “We do it so that we give priority to making the application look as good as possible on the most-used screen sizes, and then we let Android scale to anything else. We have the 480 by 800 and the 320 by 480 resolutions as top priority, and then we ensure that the new 720 by 1280 looks good as well. Then, we ensure that QVGA ‘works,’ but it is really difficult to make it look nice.”
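
Locking an app to portrait is a one-line change. Below is a minimal sketch; the WorkoutActivity class and its layout resource are hypothetical, not Endomondo’s actual code.

import android.app.Activity;
import android.content.pm.ActivityInfo;
import android.os.Bundle;

public class WorkoutActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Lock this screen to portrait so rotating the phone never triggers a
        // landscape layout; only the portrait sizes listed above need polish.
        setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_PORTRAIT);
        setContentView(R.layout.workout); // hypothetical layout resource
    }
}

The same restriction can also be declared per activity in the manifest with android:screenOrientation="portrait".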

Test and Test Again
Regardless of whether the target devices are tablets, smartphones or both, testing is critical for ensuring that the UI provides a consistently great experience. “The best way to handle scaling issues is to continue to test for all of these different platforms,” says Pado. “Between knowing Google’s best practices and then testing, testing, testing, that’s really all we can do about the fragmentation.”

Professional developers often have the advantage of a large staff and thus more people to test an app. Newsy, for example, has a staff of about 65 full- and part-timers. Smaller developers can use friends and family as testers, but they also shouldn’t overlook their app’s users as a way to identify UI fragmentation problems -- or other types of issues, for that matter. Checking reviews in the Google Play Store (formerly Android Market) and in third-party stores such as Amazon’s Appstore is an obvious place to start. Also consider creating a Google Alert for the app’s name and terms such as “screen.”

Finally, make it easy for users to provide feedback directly, such as via an email address that’s prominent in the app and on the website that the stores link to. After all, the worst time to find out about UI problems is when an app’s reputation has already been damaged.


What’s Next For OpenACC?

The OpenACC parallel programming standard emerged late last year with the goal of making it easier for developers to tap graphics processing units (GPUs) to accelerate applications. The scientific and technical programming community is a key audience for this development. Jeffrey Vetter, professor at Georgia Tech’s College of Computing and leader of the Future Technologies Group at Oak Ridge National Laboratory, recently discussed the standard. He is currently project director for the National Science Foundation’s (NSF) Track 2D Experimental Computer Facility, a cooperative effort that involves the Georgia Institute of Technology and Oak Ridge, among other institutions. Track 2D’s Keeneland Project employs GPUs for large-scale heterogeneous computing.

Q: What problems does OpenACC address?

Jeffrey Vetter: We have this Keeneland Project -- 360 GPUs deployed in its initial delivery system. We are responsible for making that available to users across NSF. The thinking behind OpenACC is that all of those people may not have the expertise or funding to write CUDA code or OpenCL code for all of their scientific applications.

Some science codes are large, and any rewriting of them -- whether it is for acceleration or a new architecture of any type -- creates another version of the code and the need to maintain that software. Some of the teams, like the climate modeling team, just don’t want to do that. They have validated their codes. They have a verification test that they run, and they don’t want to have different versions of their code floating around.

It is a common problem in software engineering: People branch their code to add more capability to it, and at some point they have to merge it back together again. In some cases, it causes conflicts.

OpenACC really lets you keep your applications looking like normal C or C++ or Fortran code, and you can go in and put the pragmas in the code. It’s just an annotation on the code that’s available to the compiler. The compiler takes that and says, “The user thinks this particular block or structured loop is a good candidate for acceleration.”

Q: What’s the impact on scientific/technical users?

J.V.: We have certain groups of users that are very sophisticated and willing to do most anything to port their code to a GPU -- write a new version of the code, sit down with an architecture expert and optimize it.

But some don’t want to write any new code other than putting pragmas in the code. They really are conservative in that respect. A lot of the large codes out there used by DOE labs just haven’t been ported to GPUs because there’s uncertainty over what sort of performance improvement they might see, as well as a lack of time to just go and explore that space.

What we are trying to do is broaden the user base on the system and make GPUs, and in fact other types of accelerators, more relevant for other users who are more conservative.

After a week of just going through the OpenACC tutorials, users should be able to go in and start experimenting with accelerating certain chunks of their applications. And those would be people who don’t have experience in CUDA or OpenCL.

Q: Does OpenACC have sufficient support at this point?

J.V.: PGI, CAPS and Cray: We expect they will start adhering to OpenACC with not too much trouble. What’s less certain is how libraries and performance analysis tools and debugging tools will work with the new standard. One thing that someone needs to make happen is to ensure that there is really a development environment around OpenACC.

OpenMP was a decade ago -- they had the same issue. They had to create the specification and the pragmas and other language constructs, and people had to create the runtime system that executes the code and does the data movement.

Q: What types of applications need acceleration?

J.V.: Generally, we have been looking at applications that have this high computational intensity. You have things like molecular dynamics and reverse time migration and financial modeling -- things that basically have the characteristic that you take a kernel and put it in a GPU and it runs there for many iterations, without having to transfer data off the GPU.

OpenACC itself is really targeted at kernels that have a structured block or a structured loop that is regular. That limits the applicability of the compiler to certain applications. There will be applications with unstructured mesh or loops that are irregular in that they have conditions or some type of compound statements that make it impossible for the compiler to analyze. Users will have to unroll those loops so an OpenACC compiler has enough information to generate the code.

Some kernels are not going to work well on OpenACC whether you work manually or with a compiler. There isn’t any magic. I’m supportive, but trust and verify.

Leading New Developments in Visual Computing

Advancements in computing technology involve more than simply increasing speed and reducing size and power consumption. Enhancements to features, capabilities and specifications go hand-in-hand with education of the development community and those teaching the trade.

Many universities and professors around the world are making outstanding contributions to computer science and information technology, including training fellow educators, reviewing beta curricula, creating case studies, blogging and more. Here, DIG highlights four standout projects.

The University of North Carolina at Chapel Hill
Distinguished professor of computer science Dinesh Manocha, along with co-investigator Ming Lin, is developing a method of rendering sound from the physics of objects in motion. He oversees projects exploring sound synthesis (which creates sounds from the principles of physics) and sound propagation (which explores the movement of sound once it is emitted).

Numerous practical applications for the study of sound exist, from creating virtual simulators to manufacturing aircraft and automobiles.

Manocha’s team used multicore processing to solve key problems. For the sound-synthesis project, they developed what Manocha calls “a very simple algorithm in terms of computation cost that will exploit a set of features of human perception -- what we hear well and what we can’t hear well.”

The second problem was associated with sound propagation: “How does the sound wave -- which starts from the speaker, hits walls and gets reflected, deflected and refracted -- eventually reach the receiver?” asks Manocha. For this hurdle, the team used a processing system with 16 cores -- or four quad-core chips.

University of California, Berkeley
The primary area of interest for associate professor of computer science James O’Brien is computer animation, with an emphasis on generating realistic motion using physically based simulation and motion-capture techniques.

Apple Mac Pro computers -- each with two quad-core processors -- are at the heart of O’Brien’s research to render simulations of a flexible needle used in the treatment of prostate cancer. Current practice employs rigid needles and ultrasound, a difficult approach that provides very little detail for the surgeon.

The flexible needle that O’Brien and his team are helping to develop has a beveled tip, allowing it to travel in a circular path and avoid vital organs and bones.

“The basic idea,” says O’Brien, “is to realistically model the process of inserting a needle into living tissue for procedures such as brachytherapy, where the physician wants to kill cancerous tissue without hurting healthy tissue. By rotating the base of the needle you can control what direction it curves in and steer it around parts of the body you don’t want to penetrate.”

There are also nonsurgical applications for his work. “One technique is a needle going into a human being, another is a Jedi warrior going around smashing things,” he says. “The two use very similar underlying simulation methods.”

St. Petersburg State Polytechnical University
Vladimir Belyaev was a technical lead at Driver-Inter Ltd., a Russian 3D-game developer affiliated with the St. Petersburg State Polytechnical University, where he worked on ways to improve the realism of simulated grass. He does most of his development on quad-core systems.

“Imagine waves across a grass field,” says Belyaev. “Then you see somebody in the grass and it divides, and you can see the trail behind the person. If you go down and see the trail close up, you’ll see each grass blade, with some lying flat where they were stepped on.”

Belyaev says handling all the variables -- whether dealing with storage, control or manipulation -- presented a challenge. “There is a lot of information you have to handle, because if you want to make this truly real, you have billions of values,” he says. “It was extremely challenging to make this all look seamless.”

NHTV University
Jacco Bikker is a Ph.D. student and a senior lecturer for the four-year vocational program International Game Architecture and Design (IGAD) at NHTV University of Applied Sciences in the Netherlands, where he teaches C/C++ and graphics programming.

A 10-year game industry veteran, Bikker is working on a real-time ray tracer called Arauna, which will render realistic 3D images of virtual worlds. The use of fast multicore processor technology is playing a significant role in helping to accelerate the engine.

“A sufficient frame rate for games is about 30 frames per second,” says Bikker. “For multiplayer, you want to see a higher rate. For resolution, we started with a low 400 pixels by 300 pixels because we anticipated we would have students developing on laptops.” With the latest processors, says Bikker, the engine achieves those frame rates at 1024 by 768.

The Arauna ray tracer was specifically built with games and performance in mind. But whether researchers are improving cancer treatments or animation, multicore processing is helping visual computing deliver ever more realistic real-life simulations.