What Lies Beneath the Sea: Shooting in Stereo 3D

Scott Cassell and Dave Faires are on a mission. Cassell is a wildlife filmmaker and underwater explorer. Faires is his director of photography. Together, they’re out to help marine researchers, educators, students and “citizen scientists” discover and safeguard what lies beneath the ocean. As Cassell puts it: “People are motivated to preserve and protect the things they understand and appreciate.”

To accomplish this, the duo has been documenting Cassell’s underwater adventures using cutting-edge digital video technology. So when Cassell attempted to break a world record by swimming underwater from Catalina Island to the California coast -- a dive of 30 miles -- they assembled a support crew and armed them with an array of 2D and S3D video cameras, which included Sony XDCAMs, Sony HXR-NX3D1Us, Panasonic AG-3DA1s, a Panasonic HDC-Z10000 and multiple GoPro 3D Hero rigs. In addition, they used Canon EOS 7D Digital SLRs to shoot both 2D still pictures and HD video. Their documentary, 30-Mile-Dive, is currently in production.

“We decided to complement traditional 2D video with stereoscopic 3D (S3D) footage because it has such a powerful effect on audiences,” says Faires. “We had cameras everywhere you looked. A Sony XDCAM caught the action above water from the deck of our boat. On the tow sled, we had 2D and S3D cameras covering Scott. The support divers used helmet cams.”

Underwater Shooting in S3D: The Challenges

The team plans to deliver the finished documentary as files for digital projection in theaters with RealD S3D technology. “Our goal has always been to produce, shoot, edit and finish 30-Mile-Dive in 2D and S3D for broadcast and theatrical release using cameras and lenses characteristically not designed for the cinema,” says Faires.

“We feel we have a compelling documentary on the declining state of the ocean and how we need to pay attention to her,” he adds. “If her health goes away, so do we.”


CES 2012: More Powerful PCs and New Ways to Game

It was a record year across the board for the 2012 International Consumer Electronics Show (CES). More than 150,000 people converged in Las Vegas to check out gadgets, computers and electronics from more than 3,100 exhibitors from around the globe. Plenty of new computer technology and games were spread across the 1.85 million net square feet of show floor in the Las Vegas Convention Center and neighboring hotels. And for good reason: consumer electronics sales are forecast to top $1 trillion in 2012 for the first time, including more than $202 billion in the U.S. alone.

Ultrabooks were everywhere during the show, opening up new gaming capabilities for those on the go. Dell debuted its XPS 13, an ultrabook with an 11-inch footprint that measures only 6 millimeters at its thinnest point and features a carbon fiber base that helps keep its weight under three pounds. Future ultrabooks will take on tablet features such as touch screens and voice recognition, along with longer battery life.

Innovation is always a key driver at CES, but this year there were plenty of leftover trends from the past few shows. 3D isn’t going away. In fact, there were more large-screen autostereoscopic (glasses-free) 3D devices than ever before from big companies like Sony, LG, Samsung and Panasonic.

Although it will still be years before price points on these devices come down for the mainstream, new laptops from Toshiba (Qosmio F755 3D) and new smartphones bring the third dimension to smaller screens at an affordable price. Stereoscopic 3D displays have seen price drops and have grown larger and thinner. There is also continued support for 3D content -- something that has been sorely lacking thus far -- in the form of new Blu-ray 3D movies from Hollywood studios and new games for PlayStation 3 and Xbox 360.

As the hardware matures and prices fall, 3D content will evolve as well.

All photos: Getty Images

Getting Acquainted With the 3D Generation

High-definition gaming is on the cusp of a visual evolution. The past year’s introduction and slow proliferation of 3D-enabled games, displays and laptops suggests that the next major frontier is on the horizon.

Whether 3D moves beyond a stylistic evolution and becomes a revolution, though, has yet to be seen. As some developers and players note, the unique visual effect of 3D -- the initial disorientation of viewing a scene with an illusion of depth while continuing to direct the action -- can take some getting used to. However, the PC games that have made the jump to 3D run the gamut, including StarCraft II, Call of Duty: Black Ops, World of Warcraft and Duke Nukem Forever.

Leading graphics card manufacturers have released platforms that comprise cards, drivers and glasses that allow developers to optimize their games in 3D, or players to apply 3D to their existing games. Studies show that game ratings measurably increase as new effects such as 3D are added to a game. So now that 3D is here, adding it to a game may only help.

To support 3D on the PC, developers typically only need to tweak the rendering effects in their games, rather than spend the several months it can take to rewrite a console game engine from the ground up. And Mick Hocking, a vice president at Sony Computer Entertainment Europe and the head of the company’s 3D initiative, says that while some of the technology used to produce high-quality 3D displays has existed for a long time, it’s only recently become available at a consumer price point.

With these things in mind, what do developers who are interested in 3D need to know?

Getting the Basics

A common misconception is that 3D only works for certain genres of games, like shooters that require judgments in depth, or slow-moving games that afford players more time to enjoy the view. But it’s more about figuring out how the specific aspects of 3D can be best applied to a given game.

“It’s not just about adding depth to a game,” says Hocking. The basic principle of 3D is displaying two separate images: one for the left eye and one for the right eye.

With a technique called full-frame dual-camera 3D, this means rendering the scene from two camera positions set a certain distance apart, with each eye’s view delivered through shutter glasses. Another technique -- called reprojection, 2D-to-3D conversion or virtual 3D -- creates the stereo pair by offsetting the pixels of the original game frame to the left and right.
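To make the two approaches concrete, here is a minimal Python sketch of the reprojection idea, assuming the renderer exposes the finished frame and a normalized depth buffer. The function name, the linear depth-to-disparity mapping and the pixel budget are illustrative only, not any particular engine’s API.

    import numpy as np

    def reproject_stereo(frame, depth, max_disparity_px=12):
        """Synthesize left- and right-eye views from one rendered frame.

        frame: (H, W, 3) array of pixel colors.
        depth: (H, W) array of normalized depth (0.0 = screen plane, 1.0 = far).
        max_disparity_px: largest horizontal offset, applied at the far plane.
        The linear depth-to-disparity mapping is an assumption for illustration;
        real implementations also fill the small gaps the shifting leaves behind.
        """
        h, w, _ = frame.shape
        # Deeper pixels are pushed further apart between the two views,
        # which the viewer reads as depth going "into" the screen.
        disparity = (depth * max_disparity_px).astype(int)

        left = np.zeros_like(frame)
        right = np.zeros_like(frame)
        cols = np.arange(w)
        for y in range(h):
            d = disparity[y]
            # Shift each row in opposite directions for each eye,
            # clamping at the frame edges.
            left[y, np.clip(cols - d, 0, w - 1)] = frame[y, cols]
            right[y, np.clip(cols + d, 0, w - 1)] = frame[y, cols]
        return left, right

Full-frame dual-camera 3D skips this approximation entirely and simply renders the scene twice from two offset camera positions, at roughly twice the rendering cost.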

Achieving a comfortable level of depth means separating the left and right images by the proper distances to achieve positive parallax (depth behind the screen) and negative parallax (depth in front of the screen). In the real world, the eyes are parallel when viewing an object at the horizon. Pushing distant images too far apart takes them beyond parallel and asks the eyes to diverge outward, while bringing an object too far out of the screen forces the player’s eyes to rotate inward. In terms of depth, a balance must be struck between excitement and comfort.
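Some rough arithmetic shows why the “never ask the eyes to diverge” rule depends on the physical screen. The 63 mm interocular distance and the example screen widths below are common reference figures, not numbers from Hocking, and in practice developers typically stay well inside this hard limit.

    # Largest on-screen separation for objects at "infinity" before the
    # viewer's eyes would have to diverge beyond parallel. The 63 mm eye
    # separation and the screen sizes are illustrative assumptions.

    INTEROCULAR_MM = 63.0  # average adult eye separation

    def max_positive_parallax_px(screen_width_mm, horizontal_resolution):
        mm_per_pixel = screen_width_mm / horizontal_resolution
        return INTEROCULAR_MM / mm_per_pixel

    # A 50-inch 1080p TV is roughly 1100 mm wide; a 24-inch monitor about 530 mm.
    print(round(max_positive_parallax_px(1100, 1920)))  # ~110 px of separation
    print(round(max_positive_parallax_px(530, 1920)))   # ~228 px of separation

The same 63 mm limit translates into fewer pixels of separation on a larger screen, which is one reason depth settings tuned on a desktop monitor can become uncomfortable on a big living-room display.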

“Mostly when we produce 3D, we have the main object of interest near the plane of the screen,” says Hocking. “We have a nice sense of depth going in the screen, which is typically how you’ll play most of the game.”


Moving Into the Next Dimension

PC monitors have an advantage over TVs in being able to display 3D at 1080p60. Although glasses-free monitors and televisions are emerging, and passive polarized glasses present a less bulky option, the current standard is set by combinations of active shutter glasses and 120Hz 3D displays.

Hocking says that glasses-free screens currently suffer from limited viewing angles -- meaning that the viewer needs to be sitting in a sweet spot for the effect to work -- limited depth, and artifacts like shimmering or ghosting. There is still some debate about the level of eye strain caused by an active shutter setup. But passive 3D displays that use polarized glasses -- as in movie theaters -- cut the resolution in half, so games that rely on details such as text, maps and items will suffer from the more garbled image.
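A quick back-of-the-envelope comparison of the two approaches, using standard figures rather than any specific product:

    # Active shutter: the panel alternates full-resolution frames per eye.
    panel_refresh_hz = 120
    per_eye_hz = panel_refresh_hz / 2          # each eye sees 60 frames per second

    # Passive polarized: alternating lines are polarized for each eye.
    panel_lines = 1080
    passive_lines_per_eye = panel_lines // 2   # 540 lines per eye, hence the
                                               # halved resolution noted above

    print(per_eye_hz, passive_lines_per_eye)   # 60.0 540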

Anti-ghosting is another important consideration. A monitor with an insufficiently fast response time leaves a double image in the eye, a sort of shadow effect that stresses the player. According to developer Phil Nowell of Ready at Dawn Studios, ghosting is also much less prominent on passive displays than on active displays. Considering that image resolution is especially important in many PC games, however, the extra ghosting of an active display may be a necessary tradeoff. Artistic decisions can also affect ghosting: Jim Van Verth, an engine programmer at Insomniac Games, found that ghosting often occurs when a bright section of the screen sits next to a very dark section.

3D as an Art Form

3D presents a number of creative challenges and questions, which will only increase as more developers use it. Convergence -- where the focal point of a scene is, determining its range of depth -- affects both gameplay and cut scenes. The specific camera implementation in a game -- whether it’s a fully controllable first-person camera, a third-person camera with a fixed distance to the avatar, or a static isometric camera -- naturally makes this more or less complicated.

Using negative parallax to suddenly bring an image out of the screen is perhaps the archetypal 3D effect. Hocking recommends restricting this to dramatic moments, as it isn’t comfortable to view for long; it may be ideal for cut scenes, where users can’t control the camera. Bringing the HUD or other important UI elements just out of the screen can also be a simple but effective use of 3D.
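As an illustration of that HUD trick, the sketch below gives a UI element a small, fixed negative parallax by crossing the two per-eye positions. The function and the pixel amount are hypothetical, not taken from any shipping engine.

    def hud_offsets(base_x, pop_out_px=4):
        """Per-eye horizontal positions for a HUD element drawn at base_x.

        Placing the left-eye image slightly to the right of the right-eye
        image (negative parallax) floats the element just in front of the
        screen. The pop-out is kept small because HUD text is on screen
        constantly and large negative parallax quickly becomes tiring.
        """
        half = pop_out_px / 2.0
        left_eye_x = base_x + half   # left eye's copy nudged right
        right_eye_x = base_x - half  # right eye's copy nudged left
        return left_eye_x, right_eye_x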

Another basic principle is easing the player into various levels of depth with subtle transitions. “We found that when we did a camera cut from a really deep scene, we needed to just flatten everything and slowly let it expand,” says Nowell. “Otherwise people go, ‘Ah, that was a camera cut; I’m playing a game.’” Objects protruding out from the sides of the screen are also visually disruptive, as they call attention to the real-world borders of the image.
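One way to implement that flatten-then-expand transition is to ramp the stereo camera separation back up over a short interval after each cut. The smoothstep curve and one-second ramp below are assumptions, offered purely as a sketch.

    def eye_separation_after_cut(t_since_cut, target_separation, ramp_seconds=1.0):
        """Ease stereo camera separation back in after a hard camera cut.

        At the cut the scene renders flat (zero separation) and depth expands
        smoothly over ramp_seconds, hiding the jarring jump in depth. The
        smoothstep shape is one reasonable choice, not the only one.
        """
        u = min(max(t_since_cut / ramp_seconds, 0.0), 1.0)
        smooth = u * u * (3.0 - 2.0 * u)   # gentle start and finish
        return target_separation * smooth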

Poorly implemented 3D “feels horrible to view,” says Hocking. “It can have many problems. There can be uncomfortable depth to view, and there can be misalignment between the images. There could be a poor-quality screen that’s being used. It could be adding 3D where 3D doesn’t really need to be added to the experience.”

On the other hand, when used the right way, even intrinsic aspects of 3D can make the difference between a promising scenario and an immersive encounter. A greater sense of depth has a tangible benefit to the player in a baseball or tennis game, for example, while a clearer sense of scale can change how a massive building or boss creature is perceived.

It’s a common sentiment that 3D game development, as a creative approach, is in its very earliest stage. Experts and developers speculate on using 3D to deliver a true sense of vertigo by controlling the rate of change of convergence planes or amplifying its shock value in survival horror games. In one innovative use of 3D technology, Sony is experimenting with allowing active shutter wearers to play together on one screen by having one player view the 2D left image and one player view the right. When it comes to 3D development, the horizon’s the limit.

Photo Credit: ©iStockphoto.com/domeniko

Watching Nations Fall

The MMORPG is a classic PC game genre, one with a long and nuanced history. The biggest and arguably greatest of them is World of Warcraft, an RPG born from real-time strategy beginnings.

In Trion’s forthcoming MMO, End of Nations, the goal is to get back to those roots and create the very first massively multiplayer online RTS game. It’s an ambitious goal, one backed up with some incredibly detailed visuals and a powerhouse engine.

DIG had the opportunity to talk to End of Nations’ executive producer David Luehmann about the game’s development and his hopes for the future.

DIG: What is End of Nations?

David Luehmann: In a nutshell, End of Nations is a massive online, persistent, real-time strategy game. The game is set in a near future in which society as we know it has continued on the downward spiral until ultimately it fails and billions of lives are lost in the chaos that follows the collapse. 

DIG: So how is the gameplay for an MMORTS going to work?

D.L.: Internally, we actually think of it as an RTSMMO. Our canon is that it’s a great RTS that utilizes MMO features in a manner that improves upon the core RTS gameplay.

So in most ways it will be familiar to RTS players. There are two playable factions that have different units and abilities. The user interface will also be quickly recognizable and familiar to RTS players. The gameplay is best described as more tactical in focus, and there will still be resources that need to be managed, but players won’t have to optimize around build-order queues.

However, unlike traditional RTS games, everything is online, always online and persistent. For example, much like MMOs, there really isn’t a simple single-player campaign. There is a campaign mode, but it is very PVE/co-op focused and players will be bound to see other users as they play through the campaigns.

We also utilize other beneficial design constructs from MMOs, like the concept of leveling. So as users go through missions, they will earn persistent resources that can be used to unlock technology trees, new unit types, and new abilities -- and customize their units uniquely for each faction -- which in turn can then be used in both campaign and massive PVP battles.

DIG: What are some of the challenges you have faced in developing a massively multiplayer real-time strategy game?

D.L.: At a high level, the challenges fit into two categories: gameplay and technology. From a gameplay perspective, we need to focus on large-scale, moment-to-moment gameplay and avoid big build-order-based gameplay, as that won’t be fun for 50-plus players online together. We also want to be cautious of not turning it into an RPG with full loot dropping and character paper-dolls. Again, it’s an RTS game and we don’t want to muddy that focus.

For the technology side, the challenges really revolve around the core network architecture common in RTS games, which is typically peer-to-peer. In a peer-based system, you are running a local copy of the game that is networked to other peers who are all doing the same thing, and the world state is shared amongst all players.

End of Nations is a pure client-server-based technology. You aren’t playing the game on your home computer, you are playing the game on a server in a data center, and your computer is just the client that is interfaced into the server. Another way of saying this is that your computer is a window through which you are seeing the game. This type of architecture is common for MMO games, as it allows for much larger numbers of users and helps with a bunch of anti-cheat challenges as well.
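The sketch below illustrates that “window onto the server” idea in miniature: the server holds the only authoritative world state, while clients merely send commands and render snapshots. Every class and field name here is invented for illustration; none of this is Trion’s actual code.

    class GameServer:
        def __init__(self):
            self.world = {}  # unit_id -> position; the only authoritative copy

        def handle_command(self, player, command):
            # Commands are validated on the server, which is what makes the
            # architecture resistant to modified (cheating) clients.
            unit_id, dx, dy = command
            x, y = self.world.setdefault(unit_id, (0, 0))
            self.world[unit_id] = (x + dx, y + dy)

        def snapshot(self):
            # The state that gets streamed back to every connected client.
            return dict(self.world)

    class GameClient:
        """Holds no authoritative state: it forwards input and draws snapshots."""

        def __init__(self, player, server):
            self.player = player
            self.server = server

        def move_unit(self, unit_id, dx, dy):
            self.server.handle_command(self.player, (unit_id, dx, dy))

        def render(self):
            for unit_id, pos in self.server.snapshot().items():
                print(f"{self.player} sees unit {unit_id} at {pos}")

    server = GameServer()
    alice, bob = GameClient("alice", server), GameClient("bob", server)
    alice.move_unit("tank_1", 2, 0)
    bob.render()  # both players see the same world the server computed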

DIG: What are some of the key features we can expect to see in the game?

D.L.: There are three big feature buckets:

1. Scale. Large-scale co-op and competitive battles like you have never seen in an RTS game before, with lots of ways to team up with friends.

2. Persistence. Everything you do counts, both in the campaign mode and in the larger meta-game battles for territory. There will be thousands of players fighting for control of the world, and whether you are part of a big assault or are keeping the base safe, what you do will matter.

3. Customization. This changes both aesthetics and gameplay. The choices you make in building out your army, equipping it and upgrading units and abilities will be a big part of the strategy found in this game.

DIG: How are you balancing the MMO aspects with the RTS aspects?

D.L.: We address balancing through a couple of methods. The first is via a smart matchmaking/rewards system that takes rank, skill, clan, group and other player preferences into account in the big competitive battles. The second is really about embracing the differences between newer and more veteran players and employing design concepts in which there is a symbiotic relationship between new players and veterans.

DIG: How do you see the world and mechanics developing past launch?

D.L.: That’s very difficult to predict. First we’ll listen to our customers. We think of this as a service and, if there are particular features or needs that our customers have, we’ll want to address those.

Beyond that, we’ll certainly introduce new units, mods, areas, missions and new stories. Getting wilder, we could potentially release new factions -- or even wilder still, persistent player bases and the like.

DIG: End of Nations packs some serious visual firepower. What technology did you use to develop it?

D.L.: Everything you see is born from proprietary tech created by our development partner Petroglyph or by our platform team here at Trion.

DIG: Has it been difficult to scale End of Nations? Are there any specific things you have done or used to ensure the game will run on legacy machines?

D.L.: Yes and yes! It has been difficult, and there are many things we’ve done to keep the barrier to entry as low as possible. Technically, we have a really solid rendering engine that can scale the complexity of all the visuals down to different levels of detail appropriate for older machines, and we’ve made design and platform decisions that offload many of the logic needs to the server. In this model the clients don’t need to hold the entire world state in memory or calculate all the math, which really lowers the overhead on CPU and RAM.
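As a simple illustration of that kind of detail scaling, a renderer might choose a mesh level from the camera distance and a machine quality tier, switching to cheaper models sooner on legacy hardware. The tiers, thresholds and names below are invented for the example and are not End of Nations’ actual settings.

    def pick_lod(distance_to_camera, quality_tier):
        """Choose a mesh detail level from camera distance and machine tier.

        quality_tier: 0 = legacy hardware ... 2 = high-end. Lower tiers switch
        to cheaper models at shorter distances, which is one straightforward
        way an engine can scale visual complexity down for older machines.
        """
        thresholds = {0: (20, 60), 1: (40, 120), 2: (80, 250)}
        near, far = thresholds[quality_tier]
        if distance_to_camera < near:
            return "high_detail_mesh"
        if distance_to_camera < far:
            return "medium_detail_mesh"
        return "low_detail_mesh"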

DIG: When all is said and done, what is the one core thing you hope to accomplish with End of Nations?

D.L.: There’s a bunch of little goals all tied into this, but at the core I want to see us deliver a game that finds fans who think it simply kicks ass!

Transformational Force Meets Threaded Objects

There’s a revolution going on, and the good guys are winning. They’re not setting up barricades of burning tires, so don’t be alarmed. They aren’t after your stash of survival seeds, either. All they want to do is democratize interactive 3D technology across every major device and market. And they apparently won’t stop until every entrepreneur who has an idea for a cool new game or application has downloaded the Unity Engine and put it to the test.

Millions of Internet users have already used the Unity Web Player to experience a netbook app or play a game on a handheld device. Paying customers for the Unity Engine include Coca-Cola, Microsoft and NASA, while hundreds of thousands of individuals have signed up for the free individual license.

The Unity Engine is also good enough to attract top studios -- such as Electronic Arts (EA), publisher of Tiger Woods Online -- and flexible enough to support Web browsers, smartphones, netbooks, laptops, desktops and more. The same code base can compile across multiple markets, yielding potentially lucrative hits and relatively inexpensive misses.

Based in San Francisco, Unity has experienced tremendous growth since its founding in 2006. By September 2010, Unity 3.0 had debuted to wide acclaim among its enthusiastic user base.

A Transformational Force
Unity 3.0 was a major step forward for users, with built-in Beast lightmapping and occlusion culling, a debugger, a full editor overhaul, and stunning performance gains. Unity optimized the graphics pipeline and achieved performance increases on the order of 40 to 60 percent across the board, and the company added dozens of new features and more than 100 enhancements.

The release was celebrated in the growing user community, as revealed on the Unity discussion boards. It was almost a statement as much as a release: “With Unity 3, we’re demonstrating that we can move faster than any other middleware company,” says Steffen Toksvig, the development director at Unity Technologies.

The release brought so many new features and upgraded technologies to bear that it moved the needle on application building in general. Toksvig says it showed a fundamental commitment to the industry and the customer: “We’re serious about the long term, because high technology made simple is a transformational force.”

“With Unity 3, we spent a lot of time refactoring our code to make it easy to add new platforms that we can publish to while keeping a single authoring environment. We split up the runtime code in such a way that we can do platform-specific optimizations and make sure that Unity runs optimally everywhere,” says Toksvig.

The Hits Keep Coming
Unity 3.1 appeared in late 2010 and regular, predictable updates are guaranteed to follow. That cadence has fueled tremendous growth in the company. “We’ve been seeing hypergrowth,” says Toksvig. “We’re now nearing 300,000 developers and 40 million installs of our player.”

The flagship feature of Unity 3.1 is the Unity Asset Store. Accessed directly within Unity, it is the way developers get assets for their games. The store launched with around 70 packages, and from now on it will be the prime repository for art, tutorials, scripts and libraries.

What’s Your Idea?
“We had great dreams when we started Unity,” says Joachim Ante, one of Unity’s founders and its chief technology officer. “We had the vision of democratizing game development and enabling everyone to create rich, interactive 3D.” Ante and his co-founders, David Helgason and Nicholas Francis, knew such a powerful tool could completely disrupt the game engine market, but that was part of the idea.

Unity is still growing. User downloads happen continually, 24-7, because that entrepreneurial spirit is a powerful global dream. Create the right app, and you could laugh your way to the bank. Figure out what the market needs before it knows what to look for, and you can guide a new industry. It used to be called the “American Dream,” but now it’s gone viral. Racing, golf, puzzles, bird identification or volcano snooping -- there’s no way to predict the next killer app.

“We have a remarkable community of developers,” says Ante. “They range from 14-year-old kids creating amazing content to the EAs of the world creating super-polished products. This is what continues to blow my mind -- that it is actually possible to create a platform that supports such a wide range of users.”

Photo Credit: http://unity3d.com/