Very good trip, but not a huge ‘groundbreaking’ year of new technologies.
- Courses were really good and had a lot of good information from top-notch game developers/houses. Most of the courses included a review/update on ‘what is the current state of the art’ which is good for those who aren’t following those niches closely on a regular basis.
- Voxels and voxel methods are all the rage again – but now for things like visibility, ambient occlusion, and AI pathfinding.
- Lots of industry leaders questioning direction and disparate energies trying to figure out where the market and graphics are going. API developers and HW vendors desperately trying to find out what the next platform of choice will be (laptop/mobile phone/pad/etc) so they can be positioned to capitalize on it when it becomes clear. Nobody seems to know.
- MLAA was the star tech of the show as far as people showing up with implementations and praise of it.
- Many talks stressed the fact that fast content creation and rapid artist pipelines are *the* driver of technologies and tools, and that most coding/engine requirements are subservient to the needs of the content pipeline.
- While the show floor was very much smaller than the California years, 3D printing and 3D capture was big. You could get your face scanned at one booth and 3D print it at another one in just an hour or so.
- Intel’s booth was really well attended with lots of interest in onloading techniques. I had a number of good talks with developers – some of whom wanted to know if we’d be bringing Larrabee back. :)
Course – Destruction and Dynamics for Film and Game Production
This all-afternoon session started with a great and thorough overview of current destruction techniques: the current ways to break up a mesh (Voronoi, Boolean, FEM convex decomposition, and tetrahedral creation), the methods and problems of collision detection after the breakup, and constraint solving. The focus now is not so much on the math of the constraint solvers, but on how to use them in ways that are intuitive and fast for the artists.
- Use the best freely or commercially available solver for your purpose. Constraint solvers are now very clever and (all things considered) computationally cheap.
- Don’t futz with the solvers to get what you want – use tons of constraints to get the effects you want. Many movie models (like the buildings in the earthquake scenes of the movie 2012) used literally thousands of constraints per building. By using constraints, you can scale the effect up and get nearly the same look/feel as when you did it on the low-poly version. Fiddling with the solver just gets you more and more headaches, as low-poly pre-vis versions will collapse very differently than high-poly versions and vice versa.
- Treat the solvers as black boxes and use different activation systems to achieve desired effects. The methods below are easy and intuitive for artists to use, and nobody needs to experiment with the super-fiddly parameters of the constraint solvers for each kind of breaking they want.
- Dynamic activation = when an impulse exceeds a threshold, you break the constraint holding the piece in place.
- ‘Glue objects’ = objects that simply hold sets of other constraints together to get ‘clumping’ effects; the set breaks at a higher impulse threshold than the individual pieces would.
- Cascaded activation = you don’t even consider objects as breakable until it becomes an edge piece or is touching an edge piece. Gives nice ‘collapsing wall’ effects.
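As a hedged sketch (my own illustration, not code from the course), the dynamic-activation and glue-object ideas above boil down to impulse thresholds at two granularities:

```python
# Illustrative only: the class names and thresholds here are my own invention,
# not the presenters' code or any particular engine's API.

class Constraint:
    """Dynamic activation: the constraint breaks when an impulse
    exceeds its threshold."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.broken = False

    def apply_impulse(self, impulse):
        if not self.broken and impulse > self.threshold:
            self.broken = True
        return self.broken

class GlueGroup:
    """Glue object: holds a set of constraints together so pieces 'clump';
    the whole set breaks only at a higher group-level threshold."""
    def __init__(self, constraints, group_threshold):
        self.constraints = constraints
        self.group_threshold = group_threshold

    def apply_impulse(self, impulse):
        if impulse > self.group_threshold:
            for c in self.constraints:
                c.broken = True
```

Cascaded activation would then be a gating layer on top: only feed impulses to constraints whose piece is (or touches) an edge piece.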
Course – Advances in Real-Time Rendering – Parts I & II
- Went to an all-day course with presentations of new real-time techniques by such big-wigs as the CryEngine 3, Halo R&D, Little Big Planet, and Disney teams, among others. It appears that voxel techniques are making a comeback for the Halo and Little Big Planet guys.
- The upcoming Halo title had a fascinating technique for occlusion via voxels. They collaborated with Umbra Software to develop a new kind of voxel-based visibility and portal system when their existing watershed algorithm started breaking down with the new outdoor content.
- They use a BSP tree of voxel areas to discretize the scene into a hierarchy of voxels and connect the polygon geometry with the voxels, then use that unique structure to generate portals automatically, do fast runtime visibility, and do AI pathfinding.
- They also changed their engine’s threading system to be job-based instead of a more manual load balancing that didn’t scale. They found novel ways to avoid synchronizing the game engine between threads and dramatically reduce memory copies.
- For their new Cars 2 game, Disney used a unique uniform-grid system to lay out light probes for their spherical-harmonic lighting system. This led to some great optimizations and easy authoring.
- Little Big Planet was using voxels for spherical harmonics and ambient occlusion lighting. They also used a single particle system with really clever screen-space blending in Little Big Planet 2 to generate all their special water/explosion/fire effects.
- The CryEngine 3 guys talked about how they use previous frames with re-projection to get fast occlusion culling and local reflections. They also used a lazy technique for updating cascaded shadow maps, with farther-out maps getting updated much less often.
- Frostbite used a rhombus-shaped bokeh technique which takes more passes, but is separable and therefore runnable on multiple threads. They also used cascaded maps for culling, not just shadowing.
- Treyarch talked about techniques they used on Call of Duty: Black Ops. They allowed only one dynamic light per surface, but unlimited lightmaps/lightgrids and environment probes per object, to always maintain a 60fps goal while keeping great lighting effects.
- A new developer in ‘stealth mode’ talked about Real-Time Image Quilting, which allows real-time texture quilting without seams for expansive environments.
- The Gears of War 3 guys talked about overcoming lighting constraints on the PS3 by combining the light vectors into a single combined vector and doing tricks to extract it in the shader.
- The final talk was by a fellow who did super-accurate, highly realistic facial rendering with subsurface scattering. His technique improved on the nVidia one in many ways (speed/quality), but still isn’t real-time. Still, it was an interesting conversation about the difficulties of rendering realistic skin.
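The lazy cascade-update idea is simple enough to sketch. This is my own toy illustration of the scheduling (not CryEngine code): cascade i refreshes every 2^i frames, so near shadow maps update every frame while distant ones update much less often:

```python
# Hedged sketch: the power-of-two refresh schedule is an assumption of mine;
# the actual engine's update policy was not spelled out in the talk.

def cascades_to_update(frame, num_cascades=4):
    """Return the cascade indices that should be re-rendered this frame.
    Cascade 0 (nearest) updates every frame; cascade i every 2**i frames."""
    return [i for i in range(num_cascades) if frame % (1 << i) == 0]
```

On frame 4, for example, cascades 0, 1, and 2 refresh while cascade 3 keeps its stale (but distant, so rarely noticeable) shadow map.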
Course – Beyond Programmable Shading I + II
All-day course that talked about the direction of graphics and emerging trends. I alternated between this and looking at posters during the day.
- Raja Koduri from Apple did a great talk on how power is managed in current devices. He basically echoed what Intel has been saying: power isn’t scaling with Moore’s law, and simple programming mistakes can save or kill a battery. He talked about the need for dialog between hardware and software folks to develop APIs for power-usage feedback, and about current APIs that might need to be reworked for low-power futures.
- There was talk about how the scheduling methods used internally by graphics processors might need to be exposed in the future for greater programmability, and what possibilities that might open.
- There was a talk about how nVidia’s OptiX SIMD-izes its ray tracing pipeline.
- Decent talk on current state of the art on bokeh, motion blur and DOF effects.
- There was a good talk by Intel’s Marco Salvi on their OIT methodology along with other recent efforts.
- At the end, there was a SUPER open-panel discussion with the speakers about where DirectX/OpenGL/OpenCL are going. The consensus is that there is no consensus. Things are in real flux right now, with radically competing demands: console devs want absolute control (open the gfx API command buffer to us!), mobile devs want low power/long battery life, desktop devs still want the best quality, and the ‘platform of the future’ is really unknown (will we all be on iPads/consoles/laptops/etc.?). The only thing they seem to agree on is that DirectX/OpenGL are basically still the best solutions we have unless we want to really bifurcate the API landscape – and it’s unclear whether we want to do that. There’s a lot of opportunity for new technologies and new movers to make a big impact.
The poster session was pretty good with lots of entrants. Nothing groundbreaking, but there were several topics that I now want to look up. (I don’t have my notes with me today.)
- Spent a good part of the morning wandering the show floor, checking out some of the art and book displays, and seeing the show reels in the digital theater. The afternoon was my turn at booth duty.
- The show floor and hiring area were noticeably smaller than at the California venue. Big things this year: 3D printers were really big, with lots of people doing super-cool live demos. 3D image capture was also big, with some great facial/body capture systems. Lots of 2–3 year computer art degree programs were really pushing their schools. I worry that there is going to be a glut of artists in about 3–4 years when these masses of graduates get out.
A grab bag day of talks as the all-day courses were mostly over.
Hiding Complexity (occlusion culling):
- For Alan Wake’s almost entirely outdoor scenes, the team used a method of culling shadows that looks from objects to their visible lights. They then used scissor rects for each cascaded shadow layer to avoid filling pixels where they weren’t needed, and didn’t even render certain cascades if they were empty. They also used temporal cues/temporal bounding boxes to avoid recalculating cascades that didn’t change much.
- Latest Need for Speed – the classic SIMD tale. They achieved a 40x speedup by just throwing out their fancy data structures, simply SIMD-izing their culling system, and arranging the data by the track layout.
- Killzone 3 culling – many details, but basically they use a quick software rasterizer distributed across many PS3 SPUs to build the list of objects to draw, rasterizing a 640x360 depth buffer in 16-pixel-high strips. In the visibility step, they assume all objects visible last frame are still visible, then test individual objects against the depth buffer in parallel: a simple sphere reject first, then a bounding-box test if the sphere test passes. To determine good occluders automatically, they use the low-poly physics mesh of the object, picking objects that have a fairly large bounding box and whose coverage area is near their bounding-box area (actual coverage area is stored in metadata).
- Mostly non-real-time smoke simulation talks. The best talk was on capturing thin features in smoke simulations. They simulated with a moderately low-resolution voxel field plus particles, then did a clever pass over the data to ‘condition’ the voxel data to maintain superb fine details. It turned out to be almost 10x faster than the traditional method and didn’t suffer from the problem of simulations looking very different at higher and higher simulation densities.
Discrete differential geometry
- Some pretty heady talks on discrete Laplacians and circular-arc structures. Most of it was above my head – but there was an interesting talk about how to identify whether a curvy building could be made up of manufacturable, uniform pieces (like a geodesic dome).
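For reference, the simplest discrete Laplacian (the uniform ‘umbrella’ operator, as opposed to the fancier cotangent-weighted versions these talks build on) is easy to write down:

```python
# Illustrative sketch, not from the talks: the umbrella Laplacian at a vertex
# is the average of its neighbors minus the vertex itself. It vanishes where
# the mesh is locally flat/evenly spaced, which is why it measures curvature.

def umbrella_laplacian(positions, neighbors):
    """positions: list of (x, y, z) vertex positions.
    neighbors: maps vertex index -> list of adjacent vertex indices."""
    out = []
    for i, p in enumerate(positions):
        ns = neighbors[i]
        avg = tuple(sum(positions[j][k] for j in ns) / len(ns)
                    for k in range(3))
        out.append(tuple(avg[k] - p[k] for k in range(3)))
    return out
```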
Filtering Techniques for AA
- Great talk on AA techniques which really turned into the ‘Why you should convert your game to MLAA today’ cheering session. It started with presentations on how people implemented MLAA in God of War III and in various Xbox 360 and PS3 games.
- Good talk on FXAA and SRAA as alternatives from nVidia – but over half the room had left by this point.
- One guy from nVidia got up and asked whether, if they could give you MSAA or the like for nearly free, people would convert back. The speaker said, “No, I’d still keep MLAA on the CPU and then use the GPU for something else”.
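For anyone who hasn’t looked at MLAA: its first pass is just a discontinuity search, sketched here in plain scalar Python (the threshold value and function shape are my own assumptions; real implementations vectorize this on CPU/SPU, and the later passes classify edge-run shapes and blend):

```python
# Hedged sketch of MLAA's edge-detection pass: mark an edge wherever adjacent
# pixels' luminance differs by more than a threshold. Vertical edges separate
# horizontal neighbors; horizontal edges separate vertical neighbors.

def find_edges(lum, threshold=0.1):
    """lum: 2D list of luminance values in [0, 1].
    Returns (vertical_edges, horizontal_edges) as sets of (x, y) coords."""
    h, w = len(lum), len(lum[0])
    vert = {(x, y) for y in range(h) for x in range(1, w)
            if abs(lum[y][x] - lum[y][x - 1]) > threshold}
    horiz = {(x, y) for y in range(1, h) for x in range(w)
             if abs(lum[y][x] - lum[y - 1][x]) > threshold}
    return vert, horiz
```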