Securing the Enterprise Better With Encryption Instructions

The popular encryption standard, the Advanced Encryption Standard (AES), was adopted by the U.S. government in 2001 and is widely used today across the software ecosystem to protect network traffic, personal data and corporate IT infrastructure. AES applications include secure commerce, data security in databases and storage, secure virtual machine migration, and full disk encryption. According to an IDC Encryption Usage Survey, the most widely used applications are corporate databases and archival backup. Full disk encryption is also receiving considerable attention.

In order to achieve faster, more secure encryption -- which makes the use of encryption feasible where it was not before -- Intel introduced the Intel Advanced Encryption Standard New Instructions (Intel AES-NI), a set of seven new instructions in the Intel Xeon processor family and the 2nd gen Intel Core processors:

  • Four instructions accelerate encryption and decryption.
  • Two instructions improve key generation and matrix manipulation.
  • The seventh aids in carry-less multiplication.

By implementing some complex and costly sub-steps of the AES algorithm in hardware, AES-NI accelerates execution of AES-based encryption. The results include substantial performance improvements and optimized cryptographic libraries that independent software vendors (ISVs) can use in place of basic AES routines.

This hardware support speeds up execution of the AES encryption and decryption algorithms, removing one of the main objections to using encryption to protect data: the performance penalty.

To be clear, AES-NI doesn’t implement the entire AES algorithm. Instead, it accelerates just parts of it. This is important for legal classification purposes because encryption is a controlled technology in many countries. AES-NI adds six new AES instructions: four for encryption and decryption, one for the inverse mix columns step, and one to assist in generating the next round key. These instructions speed up the AES operations in the rounds of transformation and assist in the generation of the round keys. AES-NI also includes a seventh new instruction: CLMUL. This instruction can speed up AES-GCM and binary Elliptic Curve Cryptography (ECC), and assists in error-correcting codes, general-purpose cyclic redundancy checks (CRCs) and data de-duplication. It performs carry-less multiplication, also known as “binary polynomial multiplication.”
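
Applications don’t typically invoke these instructions directly; they rely on a cryptographic library or runtime that maps standard AES calls onto AES-NI when the processor supports it. The sketch below -- in Java, assuming a JDK (version 8 or later) whose HotSpot runtime can map javax.crypto AES-GCM operations to AES-NI (and GHASH to CLMUL) on supported CPUs -- shows how transparent the acceleration is to application code:

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;
    import java.security.SecureRandom;

    public class AesGcmSketch {
        public static void main(String[] args) throws Exception {
            // Generate a random 128-bit AES key.
            KeyGenerator keyGen = KeyGenerator.getInstance("AES");
            keyGen.init(128);
            SecretKey key = keyGen.generateKey();

            // GCM requires a unique 96-bit IV for every message.
            byte[] iv = new byte[12];
            new SecureRandom().nextBytes(iv);

            // AES-GCM pairs the AES rounds (accelerated by AES-NI) with
            // GHASH authentication (accelerated by CLMUL).
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ciphertext = cipher.doFinal("sensitive record".getBytes("UTF-8"));

            // Decrypt with the same key and IV; GCM also verifies integrity.
            cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
            System.out.println(new String(cipher.doFinal(ciphertext), "UTF-8"));
        }
    }

The same code runs unchanged on processors without AES-NI; the runtime simply falls back to a software implementation.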

Besides the performance benefit of these instructions, executing them in hardware provides some additional security by helping prevent software side-channel attacks. Software side channels are vulnerabilities in the software implementation of cryptographic algorithms. They emerge in multiple processing environments (multiple cores, threads or operating systems). Cache-based software side-channel attacks exploit the fact that software-based AES holds its encryption blocks, keys and lookup tables in memory. In a cache collision-timing side-channel attack, a piece of malicious code running on the platform could seed the cache and use timing measurements to infer information about the keys. Because AES-NI performs these operations in hardware rather than through memory-resident lookup tables, it closes off that avenue. For more information on the AES new instructions, see this report. For more information on the CLMUL instruction and its handling of carry-less multiplication, see this explanation.

Encryption Usage Models

There are three main usage models for AES-NI: network encryption, full disk encryption (FDE) and application-level encryption. Networking applications use encryption to protect data in flight with protocols such as SSL/TLS, IPsec, HTTPS, FTPS and SSH. AES-NI also assists the FDE and application-level models, which use encryption to protect data at rest. All three models gain improved performance -- which can enable the use of encryption where it might otherwise have been impractical due to the performance impact.

In today’s highly networked world, Web servers, application servers and database back-ends all connect via an IP network through gateways and appliances. SSL is typically used to deliver secure transactions over the network. It’s well-known for providing secure processing for banking transactions and other ecommerce, as well as for enterprise communications (such as an intranet).

Where AES-NI provides a real opportunity is in reducing the computation impact (load) for those SSL transactions that use the AES algorithm. There is significant overhead in establishing secure communications, and this can be multiplied by hundreds or thousands, depending on how many systems want to concurrently establish secure communications with a server. Think of your favorite online shopping site during the holiday season. Integrating AES-NI would improve performance by reducing the computation impact of all these secure transactions.
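
To see where that computation lives, consider a minimal Java sketch (example.com is just a placeholder host) that opens an HTTPS connection and reports the negotiated cipher suite. When that suite uses AES, the bulk encryption on both ends is exactly the work AES-NI accelerates:

    import javax.net.ssl.HttpsURLConnection;
    import java.net.URL;

    public class TlsCipherCheck {
        public static void main(String[] args) throws Exception {
            // The handshake sets up keys; bulk data then flows under a
            // symmetric cipher -- commonly AES -- chosen by negotiation.
            URL url = new URL("https://example.com/");
            HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
            conn.connect();

            // Prints something like TLS_RSA_WITH_AES_128_CBC_SHA, where
            // the AES portion is what AES-NI speeds up on capable hosts.
            System.out.println("Negotiated cipher suite: " + conn.getCipherSuite());
            conn.disconnect();
        }
    }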

With the growing popularity of cloud services, secure HTTPS connections are getting increased attention -- and use. The growth in cloud services is putting enormous amounts of user data on the Web. To protect users, operators of public or private clouds must ensure the privacy and confidentiality of each individual’s data as it moves between client and cloud. This means instituting a security infrastructure across their multitude of service offerings and points of access. For these reasons, the amount of data encrypted, transmitted, and decrypted in conjunction with HTTPS connections is predicted to grow as clouds proliferate.

For cloud providers, the performance and responsiveness of transactions, streaming content and collaborative sessions over the cloud are all critical to customer satisfaction. Yet the more subscribers cloud services attract, the heavier the load placed on servers. This makes every ounce of performance that can be gained anywhere incredibly important. AES-NI and its ability to accelerate encryption/decryption can play a significant role in helping the cloud computing movement improve the user experience and speed up secure data exchanges.

Most enterprise applications offer some kind of option to use encryption to secure information. It is a common option for email, and for collaborative and portal applications. ERP and CRM applications with a database backend also offer encryption in their architectures. Database encryption offers granularity and flexibility at the data cell, column, file system, tablespace and database levels. Transparent data encryption (TDE) is a feature on some databases that automatically encrypts data when it is stored to disk and decrypts it when it is read back into memory. Retailers can use features like TDE to help address PCI-DSS requirements. Universities and health care organizations can use it to automatically encrypt their data, safeguarding social security numbers and other sensitive information on disk drives and backup media from unauthorized access. Since AES is a supported algorithm in most enterprise application encryption schemes, AES-NI provides an excellent opportunity to speed up these applications and enhance security.
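
As an illustration of the application-level model, the hedged sketch below (the class and method names are invented for the example) encrypts a single field before it is written to a database, prepending the IV so each stored value is self-contained:

    import javax.crypto.Cipher;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;
    import java.nio.ByteBuffer;
    import java.security.SecureRandom;

    public final class ColumnCrypto {
        private static final SecureRandom RNG = new SecureRandom();

        // Encrypt one column value before the INSERT; the result goes
        // into a single BLOB/VARBINARY column as IV || ciphertext.
        public static byte[] encryptField(SecretKey key, String value)
                throws Exception {
            byte[] iv = new byte[12];
            RNG.nextBytes(iv);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ct = c.doFinal(value.getBytes("UTF-8"));
            return ByteBuffer.allocate(iv.length + ct.length)
                             .put(iv).put(ct).array();
        }

        // Reverse the process when the row is read back.
        public static String decryptField(SecretKey key, byte[] stored)
                throws Exception {
            ByteBuffer buf = ByteBuffer.wrap(stored);
            byte[] iv = new byte[12];
            buf.get(iv);
            byte[] ct = new byte[buf.remaining()];
            buf.get(ct);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
            return new String(c.doFinal(ct), "UTF-8");
        }
    }

TDE pushes the same work down into the database engine instead of the application; either way, AES-NI accelerates the underlying cipher.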

Full disk encryption (FDE) encrypts every bit of data that goes on a disk or disk volume. While the term FDE is often used to signify that everything on a disk is encrypted, including the programs that boot OS partitions, the master boot record (MBR) typically is not, so this small part of the disk remains unencrypted. FDE can be implemented either through disk encryption software or a self-encrypting hard drive. Direct-attached storage (DAS) commonly consists of one or more Serial-Attached SCSI (SAS) or SATA hard drives in the server enclosure. Since there are relatively few hard disks and interconnects, the effective bandwidth is relatively low. This generally makes it reasonable for a host processor to encrypt the data in software at a rate compatible with the DAS bandwidth requirements.
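
One rough way to sanity-check that claim is a quick throughput measurement -- a sketch only, ignoring JIT warm-up and garbage collection effects -- comparing software AES speed against the bandwidth a DAS array delivers:

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;
    import java.security.SecureRandom;

    public class AesThroughput {
        public static void main(String[] args) throws Exception {
            KeyGenerator keyGen = KeyGenerator.getInstance("AES");
            keyGen.init(128);
            SecretKey key = keyGen.generateKey();
            byte[] iv = new byte[12];
            new SecureRandom().nextBytes(iv);

            byte[] buffer = new byte[16 * 1024 * 1024];  // 16 MB of dummy data
            int rounds = 32;                             // ~512 MB total

            long start = System.nanoTime();
            for (int i = 0; i < rounds; i++) {
                // Reusing the IV is unsafe in production; it is tolerated
                // here only because we measure speed, not protect data.
                Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
                c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
                c.doFinal(buffer);
            }
            double seconds = (System.nanoTime() - start) / 1e9;
            double mbPerSec = rounds * (buffer.length / 1e6) / seconds;
            System.out.printf("AES-GCM throughput: %.0f MB/s%n", mbPerSec);
        }
    }

If the measured rate comfortably exceeds the storage interconnect’s bandwidth, host-based software encryption won’t be the bottleneck.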

In addition to protecting data from loss and theft, full disk encryption facilitates decommissioning and repair. For example, if a damaged hard drive has unencrypted confidential information on it, sending it out for warranty repair could potentially expose its data. Consider, for instance, the experience of the National Archives and Records Administration (NARA). When a hard drive with the personal information of around 76 million servicemen malfunctioned, NARA sent it back to its IT contractor for repairs. By failing to wipe the drive before sending it out, NARA arguably created the biggest government data breach ever. Similarly, as a specific hard drive gets decommissioned at the end of its life or re-provisioned for a new use, encryption can spare the need for special steps to protect any confidential data. In a data center with thousands of disks, improving the ease of repair, decommissioning and re-provisioning can save money.

In summary, these AES-NI capabilities make performance-intensive encryption feasible and can be readily applied across all of these usage models.


Photo: @iStockphoto.com/deepblue4you

Help for Clearing the Online App Store Submission Hurdle

The last stage of any journey may prove the hardest.

That observation seems to hold true for app development. Some mobile technologists cite the online app store submission process as one of the more difficult parts of app development. The process can take a while, particularly if an app needs to be resubmitted for running afoul of submission guidelines.

“It can be very frustrating, especially when you pay the submission fee and then are required to make numerous revisions that can take several weeks of correspondence and tweaking prior to getting accepted,” says Chris Vendilli, founder of ProFromGo, a Pittsburgh-based Internet marketing firm that specializes in mobile application development.

William McCarthy, director of app development at Mobile Magnus, a mobile app maker with a development team in Ireland and the U.S., says the app submission task isn’t so much arduous as it is time consuming. “I wouldn’t say the process is difficult, but it is long-winded,” notes McCarthy. He says the process can take five to 10 days, adding, “If you mess up and have to do it again, it takes another five to 10 days.”

Timing was particularly important for Mobile Magnus’ Leapin’ Leprechaun Lite, which targeted a St. Patrick’s Day release. The game -- with some help from an email to Apple -- made it into the App Store on March 17.

Each app store submission process -- from Apple’s App Store to Google Play to BlackBerry App World -- requires developers to follow different criteria and poses its own set of challenges, says a spokesman for Verivo Software, a Waltham, Mass., company that offers a mobile enterprise application platform. “One of the items that becomes difficult for nonplatform developers is re-submitting to app stores following every change made to their app, forcing them to repeat the process numerous times over the course of an app’s development and management.”

In listening to developer feedback and trying to make the submission process as painless as possible, some stores have changed how they accept apps. For instance, Intel AppUp, a digital store designed for PCs, has redesigned its onboarding process and changed some of the requirements so developers with existing applications can more easily submit without making code changes. [Disclosure: Intel is the sponsor of this content.]

Avoiding Trouble
The app submission routine seems simple enough: It generally requires completing a form that describes the app and uploading the binary code. But there are pitfalls to avoid and, for developers, the task is finding ways to make things run as smoothly as possible.

Where to begin? For starters, app makers should avoid any obvious infractions that bring out an app store’s rejection notice.

“There are certain functionalities that almost always get rejected -- anything that interferes with the native functions and operations of the phone will get shot down every time,” says Vendilli. “For example, if you were to try and use a ‘Call Now’ feature that attempted to use your own VoIP protocol to place the call to the business, Apple will never allow it because they want you to use their built-in functionality for making calls.”

Earlier this month, ProFromGo announced a mobile app development service for iPhone and Android that aims to help Pittsburgh businesses roll out apps for their customers. The development service includes getting the businesses’ apps approved for the iPhone and Android app stores.

The company will focus on using a set of features that most business owners find valuable and that are also known to be easily accepted in the app store submission process, says Vendilli. “As app designers/developers, we’ve grown very familiar with what will fly and what will die ...”

Verivo, meanwhile, suggests that use of a mobile app development platform can help developers make a favorable app store impression. The company’s enterprise mobility platform steers developers in the right direction, matching UI and UX design standards set forth by multiple app stores, according to its spokesman.

App Testing
After the development phase, rigorous testing can help avoid app store trouble. Chris Eyhorn, executive vice president of the Testing Tools Division at Telerik, which provides developer and automated testing tools, says developers are under pressure to deliver “master golden copies” of their apps to app stores -- a situation he says recalls the days of physical software distribution.

“It goes back to where we need to make sure the quality of the app is super high,” says Eyhorn. “If we get an app with bugs in it, we will get nasty feedback and one-star reviews. It has a significant impact on downloads.”

That situation, says Eyhorn, underscores the need to put apps -- and software updates -- through a series of tests: unit tests that exercise the app’s atomic features, plus integration and functional tests that examine end-to-end scenarios.
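
As a small illustration of the unit-test layer, here is a sketch in JUnit 4; the CartCalculator class and its behavior are invented for the example, and inlined so the snippet is self-contained:

    import org.junit.Assert;
    import org.junit.Test;

    public class CartCalculatorTest {
        // Tiny class under test, inlined to keep the sketch runnable.
        static class CartCalculator {
            private final double taxRate;
            private double subtotal;
            CartCalculator(double taxRate) { this.taxRate = taxRate; }
            void addItem(String name, double price) { subtotal += price; }
            double total() { return subtotal * (1 + taxRate); }
        }

        @Test
        public void totalIncludesTaxForSingleItem() {
            CartCalculator calc = new CartCalculator(0.07);  // 7% tax rate
            calc.addItem("widget", 10.00);
            Assert.assertEquals(10.70, calc.total(), 0.0001);
        }

        @Test
        public void emptyCartTotalsZero() {
            Assert.assertEquals(0.0, new CartCalculator(0.07).total(), 0.0001);
        }
    }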

Eyhorn also emphasizes that developers should be careful about which build they test. An end-to-end test should be conducted against the final build that will be uploaded, not an earlier development build. “Take the final output of your compilation process and use that for test,” advises Eyhorn.

Is Near Field Communication a Near-term Opportunity?

Backed by household names such as AT&T, Google, Intel, MasterCard and Microsoft, Near Field Communication (NFC) seems poised to be the next big thing in mobile apps. Another reason? The ultra-short-range wireless technology can facilitate a wide variety of tasks, including brokering cashless payments, unlocking doors, validating IDs and making digital signage interactive.

Yet another reason why app developers should start getting up-to-speed on NFC is that it’s already built into dozens of mobile phone models, such as the Samsung Galaxy S II. Although it will take at least another year before the installed base is big enough to label NFC a mainstream technology, Touchanote is among the dozens of apps already available for platforms such as Android. Here’s an overview of NFC and how to add it to your app.

How Does NFC Work?
NFC signals have a range of roughly 4 to 20 centimeters. That’s much shorter than Bluetooth and Wi-Fi, and that design has a couple of benefits. First, it minimizes the chance of interference when multiple NFC devices are near one another, such as at checkout lanes or subway turnstiles. Less interference means fewer annoying glitches for users.

A second benefit is security. Eavesdropping on an NFC connection to harvest, say, credit card information requires either standing uncomfortably close to the user or carrying a large antenna -- two big red flags.

Another thing that differentiates NFC from Bluetooth and Wi-Fi is that NFC is designed to establish a connection and exchange information in less than 0.1 second. That speed makes it an obvious fit for quick transactions, such as waving a phone at a subway turnstile during rush hour.

NFC’s speed also makes it practical to exchange an extensive amount of information automatically rather than manually. For example, some digital signage allows passersby to use NFC to submit their names, addresses and phone numbers to enter a contest instead of having to type that information on their phones.

“One of NFC’s biggest premises is that you can take five clicks and turn them into one tap,” says Kent Helm, engineering manager in Intel’s Communications Architecture and Solutions Engineering unit. [Disclosure: Intel is the sponsor of this content.]

Although its signals are wireless, NFC doesn’t necessarily require power from the phone every time it’s used. Instead, the receiving device can use inductance to pull the information from the phone’s NFC chip. That design means it’s possible to create NFC-enabled apps with little or no battery drain even when they’re designed to be used frequently throughout the day.

Who’s Backing It?
NFC’s e-commerce potential is a major reason why so many companies are backing the technology. Two of the more high-profile initiatives are Google Wallet and Isis, which is a phone-based wallet created by AT&T Mobility, T-Mobile USA and Verizon Wireless.

Google Wallet and Isis are noteworthy for another reason: They’re examples of a power struggle between wireless carriers and other companies for control over -- or at least a financial benefit from -- transactions involving mobile phones. How that eventually plays out will determine factors such as whether a wireless carrier or another party handles billing on behalf of the app developer.

“There are a lot of e-wallets coming out that have nothing to do with a carrier,” says Helm. “At the end of the day, it’s not necessarily going to be tied to the carriers for e-commerce.”

So far, NFC is best known for tasks centered on a mobile phone. But Windows 8 will have native NFC stacks, and that support means a wider range of potential users and uses.

“If a developer has an NFC solution from Intel or anybody else, if they’re compliant with the Windows 8 logo requirements, then there’s no reason it shouldn’t work, according to the MSFT SDKs,” says Helm. “So it should all be a transferable experience between Windows 8 phones, laptops, tablets and desktops.”

How Do I Implement NFC in My App?
As the NFC ecosystem grows, so does the selection of tools for adding NFC to apps. One obvious place to start is with the operating system vendors.

For example, at Developer.Android.com, Google has an overview of Android’s Beam feature and how to leverage it to NFC-enable apps.
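
As a minimal sketch of that approach -- the MIME type and payload below are invented for illustration, and a real app would also declare the NFC permission in its manifest and target API level 14 or higher -- an activity can hand Android Beam an NDEF message to push when two devices tap:

    import android.app.Activity;
    import android.nfc.NdefMessage;
    import android.nfc.NdefRecord;
    import android.nfc.NfcAdapter;
    import android.os.Bundle;

    public class BeamActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);

            NfcAdapter adapter = NfcAdapter.getDefaultAdapter(this);
            if (adapter == null) {
                return;  // This device has no NFC hardware.
            }

            // Build an NDEF message carrying a MIME record that a
            // receiving app registered for this type would handle.
            NdefRecord record = new NdefRecord(
                    NdefRecord.TNF_MIME_MEDIA,
                    "application/vnd.example.contest".getBytes(),
                    new byte[0],
                    "entry:jane-doe".getBytes());
            NdefMessage message = new NdefMessage(new NdefRecord[] { record });

            // Android Beam sends the message when two devices are tapped.
            adapter.setNdefPushMessage(message, this);
        }
    }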

Another potential tool is under development at the Massachusetts Institute of Technology, whose App Inventor tool is designed to simplify Android development. Starting this summer, MIT researchers will develop a new feature for the tool that lets users add NFC functionality when creating an app. “It will be delivered sometime later in the year,” says Stephen Miles, co-chair of the NFC Cluster in the MIT Enterprise Forum. “That’s the plan.”

Migration to the Cloud: Evolution Without Confusion

The rapid rise of cloud computing has been driven by the benefits it delivers: huge cost savings with low initial investment, ease of adoption, operational efficiency, elasticity and scalability, on-demand resources, and the use of equipment that is largely abstracted from the user and enterprise.

Of course, these cloud computing benefits all come with an array of new challenges and decisions. That’s partly because cloud products and services are being introduced in increasingly varied forms as public clouds, private clouds and hybrid clouds. They also deliver software-as-a-service (SaaS), platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS) solutions, and come with emerging licensing, pricing and delivery models that raise budgeting, security, compliance and governance implications.

Making these decisions is also about balancing the benefits, challenges and risks of those cloud computing options against your company’s technology criteria. Many core criteria matter: agility, availability, capacity, cost, device and location independence, latency, manageability, multi-tenancy, performance, reliability, scalability, security, etc. And the available cloud options all vary widely in terms of each of these criteria -- not to mention, there are significant challenges integrating all of this with your existing infrastructure.

There are fundamentally challenging questions that companies will be forced to grapple with as they decide what cloud functionality suits them best. The central issues include security, cost, scalability and integration.

Public, Private or Hybrid?

There are a few differences among the three:

  • Public cloud services require the least investment to get started, have the lowest costs of operation, and their capacity is eminently scalable to many servers and users. But security and compliance concerns persist regarding multi-tenancy of the most sensitive enterprise data and applications, both while resident in the cloud and during transfer over the Internet. Some organizations may not accept this loss of control of their data center function.

  • Private cloud services offer the ability to host applications or virtual machines in a company’s own infrastructure, thus providing the cloud benefits of shared hardware costs (thanks to virtualization, the hardware is abstracted), federated resources from external providers, the ability to recover from failure, and the ability to scale depending upon demand. There are fewer security concerns because existing data center security stays in place, and IT organizations retain data center control. But because companies must buy, build, and manage their private cloud(s), they don’t benefit from lower up-front capital costs and less hands-on management. Further, their operational processes must be adapted whenever existing processes are not suitable for a private cloud environment. They are just not as elastic or cost-effective as public clouds.

  • Hybrid clouds combine at least one public cloud and at least one private cloud with your existing infrastructure. Hybrid cloud interest is powered by the desire to take advantage of public and private cloud benefits in a seamless manner. Hybrid combines the benefits and risks of both: the security, compliance and control of the enterprise private cloud for sensitive, mission-critical workloads, and the scalable elasticity and lower costs of the public cloud for apps and services deployed there.

That combination of operational flexibility and scalability for peak and bursty workloads is the ideal goal, but the reality is that hybrid cloud solutions are just emerging, require additional management capabilities and come with the same kind of security issues for data moved between private and public clouds.

Transformational Change or Legacy Environment?
The diversity of cloud offerings means businesses evaluating various cloud computing options must decide how to integrate cloud resources with their legacy equipment, applications, people and processes, and determine whether and how this will transform their business IT or simply extend what they have today and plan for the future.

The reality of cloud environments is that they will need to coexist with the legacy environments. A publicly traded firm with thousands of deployed apps is not going to rewrite them for the public cloud.

One determining factor may be whether the services being deployed to the cloud are “greenfield” (lacking any constraints imposed by prior work), or “brownfield” (development and deployment in the presence of existing systems). In the absence of constraints, greenfield applications are more easily deployed to the cloud.

Ideally, hybrid solutions allow organizations to create or move existing applications between clouds, without having to alter networking, security policies, operational processes or management and monitoring tools. But the reality is that, due to issues of interoperability, mobility, differing APIs, tools, policies and processes, hybrid clouds generally increase complexity.

The Forecast Is Cloudy, Turning Sunny
Where this is all headed is that, for the foreseeable future, many organizations will employ a mixed IT environment that includes both public and private clouds as well as non-cloud systems and applications, because the economics are so attractive. But as they adopt the cloud, enterprise IT shops will need to focus on security, performance, scalability and cost, and avoid vendor lock-in, in order to achieve overall efficiencies.

Security concerns will be decisive for many CIOs, but companies are increasingly going to move all but their most sensitive data to the cloud. Companies will weave together cloud and non-cloud environments and take steps to ensure security.

Non-mission-critical applications -- such as collaboration, communications, customer-service and supply-chain tools -- will be excellent candidates for the public cloud.

There’s a Cloud Solution for That

As hybrid cloud offerings mature, cloud capabilities will be built into a variety of product offerings, including virtualization platforms and system management suites. Vendor and service provider offerings will blur the boundaries between public and private environments, enabling applications to move between clouds based on real-time demand and economics.

In the not-too-distant future, hybrid cloud platforms will provide capabilities to connect and execute complex workflows across multiple types of clouds in a federated ecosystem.

Products from Amazon, HP, IBM, Red Hat, VMware and others offer companies the ability to create hybrid clouds using existing computing resources, including virtual servers, and in-house or hosted physical servers.

There are also hybrid devices designed to sit in data centers and connect to public cloud providers, offering control and security along with the cost savings of connecting to the cloud. For example:

  • Red Hat open-source products enable interoperability and portability via a flexible cloud stack that includes its operating system, middleware and virtualization. The company recently announced its own platform-as-a-service offering, OpenShift (for the public cloud), and an infrastructure-as-a-service offering, CloudForms, a private cloud solution.
  • VMware’s vCenter Operations combines the data center functionality of system configuration, performance management and capacity management. It also supports virtual machines that can be deployed either inside the data center or beyond the firewall in the cloud.

Are we there yet?

Firaxis Goes Back to the Future With XCOM: Enemy Unknown

It’s been 18 years since PC gamers took on an invading alien force in the original XCOM. A lot has changed since then. But 2K Games has enlisted Firaxis to update the classic strategy game using Unreal Engine 3 technology and designing it for today’s powerful PCs. In this exclusive interview, Jake Solomon -- lead designer of XCOM: Enemy Unknown -- talks about what’s in store for PC gamers in this new take on a classic.

How close do you stick to the original game?

XCOM is pretty heavily inspired by the original one, so the heart of that game is that transition between taking your soldiers into combat, fighting it out and then the additional strategy layer over the top of that. After combat, you return to base, where you make a bunch of interesting decisions and control the entire war. I think that’s the unique thing about XCOM.

What’s the PC gaming experience going to be like for those who turn up all the sliders and see the full visual fidelity?

I actually work on a 30-inch monitor when I play; I max it out and it’s just amazing. There’s the additional resolution that PC gamers will get. But we also have a completely separate UI for PC gamers and a different way to interact with the experience because it’s more tactical. We have different zoom levels designed for PC gamers.

How are you scaling the game for PC players who don’t have the most high-end laptops?

That’s one of the great things about Unreal Engine 3. The minimum specs are decent enough that gamers don’t need a dedicated gaming laptop to play XCOM. Obviously, we have the ability to scale down for that experience as well. There are a lot of things the game does -- with destruction and things like that -- that are pretty high-end. But it runs pretty well on some of our lower-spec machines.

What are the challenges of developing this game for a new generation of gamers while also remaining faithful to XCOM fans?

That’s definitely been the challenge: to take something that is sacred to a lot of people, myself included, but also introduce this game to a new audience. The industry has changed. Plus, we’re not remaking the original; we’re reimagining it for ourselves. I really am one of the biggest fans of the original game, so I know what things are important there and certainly want to stay true to that.

There’s still no game like XCOM, where you’re making all these epic decisions on the strategy layer. Then you’re going and making all these intimate decisions, turn by turn, with these individual soldiers on a combat layer. The hope is that if we make it accessible and add these new design elements, then that magic that was in the original game can translate to a modern audience. We don’t want to get rid of the core tenets of the original game, because we think that’s what made it special.

What’s something that today’s technology has opened up for your team?

One of the hallmarks of the original game is destructible environments. And we’ve been able to push that forward with Unreal Engine 3. Our environments are completely destructible: More than just being visually appealing, when an alien breaks through a wall, that changes the very dynamic of the gameplay. Shoot out the front wall and part of the roof of the diner and the dynamic fire will spread. Your strategy will evolve based on how the environments change.

This also ties into another key component to the game in that once your soldiers die, they are gone forever. There are real consequences for actions in this game. We’ve been able to add another layer of depth to the game through today’s technology.

What role will XCOM HQ play in this new game?

We’ve completely redone headquarters; it’s now a detailed 3D building that’s completely expandable and customizable. There’s a barracks, where your soldiers hang out. XCOM is a combat game, but it’s very open-ended, so the player can choose what to research in the lab. There are only three research options at the beginning of the game, but many more open up as the game progresses. Engineering is where all the theories from the labs become practice. This is where the player can now build any new items they’ve researched. And there are the hangars, where the jets await orders to go on strikes.

Photo: XCOM.com