
Booting from a 4.0TB raid drive in Windows 7

Woot!  I just set up my system over the last few nights with *three* 2TB drives connected in a RAID 5 setup as a single 4.0TB partition – AND I am using that huge bit bucket as my primary boot drive.  Now I have speed from the data striping and redundancy from the parity.  It’s quite nice so far, though I’m still getting all my old software installed.  However, up until about a month ago, this wasn’t possible with the standard Intel storage controller that comes with almost all the new 2nd-generation Core systems (or with most RAID controllers).  Getting to this magical land involves a lot of new information and a good bit of bleeding-edge fighting.

What you need to know:
Getting a RAID-ed boot partition larger than 2.2TB involves several pieces that need to work together.  If you only have the first two, you can still boot from a single drive larger than 2.2TB, but you won’t be able to use it in a RAID setup with large partitions.

  1. A motherboard with EFI support
  2. An OS that can boot in EFI mode and supports GPT partitions
  3. A RAID controller that supports creating and booting from partitions larger than 2.2TB.

First off, you need a motherboard with an EFI (Extensible Firmware Interface) BIOS.  EFI is the successor to the 20+ year old CMOS BIOS.  You might remember CMOS as that awful text screen that came up when you hit the power button, counted your memory, and said ‘Press [DEL] to enter setup’.  Well, Intel, along with a lot of other companies, is finally, happily, triumphantly putting the nails in what had become a painful bit of computing legacy by throwing out CMOS and inventing EFI.  Apple has had an EFI BIOS in their machines for a few years now, and some of the first-generation Core i3/i5/i7 motherboards had EFI support too – but it was spotty and very few things seemed to take advantage of it.  Which brings us to #2 – needing an OS that supports EFI booting and GPT.

If you have a motherboard that supports booting in EFI mode, you still need an OS that supports it and knows how to boot from partitions created with GPT (the GUID Partition Table).  I had an EFI board on my old system, but you couldn’t actually boot in EFI mode unless you used an early Linux distribution with EFI support, or something in the Windows Vista SP1/Win7 range.  Not only that, but EFI booting only worked on 64-bit versions of Vista/Win7.  If you’d bought the 32-bit version, you were out of luck.  The Vista route was also painful, because unless you had an install disk with the EFI boot files on it, you couldn’t actually install Vista in EFI mode.  Early versions of the Vista install CDs (at least the one I had) didn’t have the EFI boot files, and I never learned whether someone with one of those early disks could return it for one with EFI boot support.  The reason all this matters is that if you couldn’t boot in EFI mode, you were stuck with the 2.2TB CMOS MBR boot limit.  If you weren’t in EFI mode during the install process, Vista/Win7 would refuse to create a boot partition larger than 2.2TB, because it knew an MBR system couldn’t boot from it – and you couldn’t override this in the installer.  GPT booting was usually only supported when the BIOS booted in EFI mode.  So if your OS install disk couldn’t boot in EFI mode, or couldn’t make GPT boot partitions in CMOS mode, you were still out of luck.  As you’ll see, booting in EFI mode is actually more than just having a disk with the EFI files on it.  It’s a two-step approach, but those details will come later.

Finally, Windows 7 64-bit comes along and seems to solve our problems.  The OS install disk has EFI boot support on it, and the installer is able to make bootable GPT partitions larger than 2.2TB.  Awesome.  But until this last month, you weren’t able to get your Intel RAID controller to make a bootable RAID set larger than 2.2TB, because Intel hadn’t finished writing the EFI BIOS support for it.  You have to check the motherboard manufacturer’s website for a BIOS update – but you should now see new BIOSes with the large-partition RAID booting support.  That was the final key to the puzzle.

So, without further ado, here is how one does this with the following equipment:

  • Asus P8P67 rev 3.1 motherboard (socket 1155 with 8 SATA ports and embedded Intel matrix storage controller) with BIOS flashed to version 1704 or higher
  • Three 2.0TB Hitachi drives all plugged into the Intel SATA ports
  • Windows 7 64-bit install DVD
  1. Back up everything
    The way I did it IS a destructive process and your drives will get erased.  There is no way that I know of to migrate from an MBR partitioning scheme to GPT.  There might be, but fiddling with this sort of stuff in a RAID setup is voodoo, and you’ve been warned.
  2. Get and install the BIOS patch
    Be sure you are backed up at this point.  Asus BIOS flashing always seems to forget my RAID setup when I flash.  In other words: poof – they’re gone.
    I went to Asus and found the newest BIOS patch. In my case it had the very clearly named:
    P8P67 (REV 3.1) 1704 BIOS – 2.2TB or larger HDD can be supported under RAID mode.
    I downloaded the patch file and put it on a USB key.  I then rebooted the machine and entered the BIOS setup.  Asus has a really nice built-in BIOS flashing utility in their new EFI BIOS.  I was able to point it at the USB key, find the image, flash the BIOS, and reboot.  If the flashing forgot your RAID drives, you’ll likely be greeted by a ‘no boot device found’ error.  You cannot safely re-create them if this happened – the data on them is gone.
  3. Reboot your machine, and create your raid set
    With your BIOS patched, you should now be able to enter the Intel storage manager portion of the boot-up cycle via CTRL-I and create a large RAID set as bootable.  In my case, I selected: create a new set, selected the 3 drives I had plugged in, set the RAID configuration to RAID 5, 64k blocks, and made it bootable.  Save the changes and reboot.
    I find it extremely helpful (and safer) to shut off the machine and physically unplug any extra drives you don’t want accidentally erased while manipulating RAID setups.  When you create the RAID set, you simply select the drives by serial number.  If you accidentally include a drive (let’s say your backup drive), the moment you add it to the RAID set, its data is gone.  Be careful.
  4. Insert your EFI bootable Linux/Win7/Vista DVD, or USB key
    This is where you’ll need to consult your individual motherboard’s docs – and where things can get a little hairy.  For EFI booting on my ASUS board, I have to insert the EFI bootable media (in my case it was the Windows 7 x64 install disk), then reboot/turn the machine on.  Then I had to:

    1. press [DEL] to enter the CMOS setup during boot (with my Windows 7 disk in the drive)
    2. Go to the ‘boot’ menu in the CMOS
    3. Scroll down to list all the bootable devices (CD-Rom/Volume0 RAID set/etc)
    4. I saw the CD-Rom drive I wanted listed at the top, but that’s NOT the one you want.
    5. Keep scrolling down, and at the bottom you’ll see the device listed AGAIN, but with the word EFI printed at the start of it.  THAT’S the one to choose.  It tells the system to boot from that device in EFI mode.  If you do NOT see your device listed a second time with the letters EFI in front of it, that means the BIOS has not been able to find the key EFI boot files on the media you’re using.  In my case, my old Vista x64 disk wouldn’t give me the option.  Why Asus doesn’t let you just manually set the mode to EFI-only booting, I’ll never know.  I think it’s stupid they don’t – my old Intel DP35DP board let you do that…
    6. Select the device with the EFI bootable disk in it and tell the bios to boot.  You’ll notice that the fonts are different on startup, and that the cursor will do a funny indented thing during the boot cycle.  This tells you it’s in EFI booting mode.
  5. Boot the Windows install disk and create your partitions
    You should see the Windows installer start just as normal.  When you get to the partitioning menu, you can select the auto-create option, and that should work.  However, this step is the MOST finicky, and the place where you’ll find out if you really are in EFI mode or not.  I chose manual creation of the partitions and told it to make me the biggest one.  Sure enough, it said 3.7TB.  By default, you will get 2 other 100MB partitions created by Windows for recovery purposes.  You can live without them, and they do not show up as drive letters in your system after it’s up.  Instructions on how to create your system without those extra partitions are here.  The important thing is that you make SURE the partition is the right size at this point.  If it won’t create a 2.2+TB partition, or if it says it did but the size is 2.2TB, then you’re not really in EFI mode, or the disk you have isn’t EFI bootable.  After you see that beautiful, full-sized partition, you might get a warning about the partition being potentially unbootable due to its size – but you can safely ignore it.  Just make sure that it actually created the partition.
  6. Finish the OS install as normal
    From here on out, follow the Windows install as normal.  On reboot, the motherboard will detect the GPT partition and properly boot in EFI mode automatically from here on out.  You’ll never need to do the selective EFI bios boot again.
  7. Install the Intel matrix storage manager software
    This really excellent software should be on your system.  If you have a UPS, I highly recommend turning the write buffering on.  It allows drive reads/writes to be buffered in system memory.  It is dangerous without a UPS, because a power loss means anything in the system buffer is lost, but it delivers a noticeable speed improvement if you do have one.  Also, make sure to ‘Initialize’ the drives.  When you’ve just fresh-installed Windows, this is instantaneous.  Finally, this software is great because it’ll report any SMART errors your drives start throwing.  If a drive starts dying, it’ll warn you (hopefully) long before it finally goes bad.  I’ve seen this software work exactly as advertised when one of my own drives started failing in a previous setup.  It also tells you which drive is failing so you know which one to pull, and shows you the status of any rebuild when you swap in a new one – which can happen WHILE you’re actively using the system.  How cool is that?
  8. Done!  When you get into windows, you should have one large C: drive with all your space and no extra ‘boot’ drives!

I know many Linux aficionados cringe at big bit buckets like this.  Put your swap on another partition!  Make a partition for your program files!  Make a tiny one for your boot partition!  This is all well and good if you like micro-managing.  I hate micro-managing.  I hate extra drive letters.  We’re in the f-ing 2000’s now, people!  3TB drives are $150, and 2TB drives are $65.  RAID comes built into boards for free.  Storage is a commodity.
I use Windows for most of my daily game playing/tv watching/etc and don’t want to futz with the headaches of multiple partitions.  I want to play games, watch tv, and surf the web.  I don’t want to worry about how big to make a partition for my program files or data files, or how big my boot partition is.  What if I over- or under-guess?  I get to either re-install, test my luck with a partition resizer, or have some programs in one place and others somewhere else.  Or the fun of every. single. time. I install a program, having to select a custom install and pick a different drive letter.  Windows patches often install automatically into the system32 directory – what if a new service pack is too big to fit on my tiny boot partition?  It makes no difference for security if my boot files are on a different partition than my data or programs.  And creating a separate partition on the same drive just for the swap file gains you nothing – unless it’s on another physical drive, you’re not gaining any speed in Windows.  Even then, I have 16GB of memory in my machine (and you can too – it’s ridiculously cheap at $60 per 8GB of high-quality DDR3 RAM), so I never swap anymore.

In short, I never have to worry about my space or what I’m putting where.  We’re not in the 70’s anymore, when drives were real investments.  They’re cheap, replaceable commodities now.  I have a RAID 5, which means I’m getting both speed and redundancy in one place, and its details are taken care of by the hardware.  What more could one ask for?

Gotchas of using ID3D11ShaderReflection

So, I am working on something that requires me to programmatically know what’s in a DX11 shader file.  One of the cool things you can use to figure out how many constant buffers there are, get string names for position streams, etc. – is the DirectX shader reflection system.  It gives you the ability to query a loaded shader blob for names/types/etc.  However, it’s not quite as straightforward to use as one might expect, and the docs were pretty incomplete up until the DX11 version.  Here are the basics of the ID3D11ShaderReflection system:

  1. Load your vertex/pixel shaders.  Get the actual shader blob – or extract it from the FX system if you used that.
  2. Bind it to a shader reflection object using D3DReflect():
    // Create the shader as usual from the compiled blob...
    pd3dDevice->CreatePixelShader( pPixelShaderBuffer->GetBufferPointer(), pPixelShaderBuffer->GetBufferSize(), g_pPSClassLinkage, &g_pPixelShader );
    // ...then hand the same blob to the reflection API
    ID3D11ShaderReflection* pReflector = NULL;
    D3DReflect( pPixelShaderBuffer->GetBufferPointer(), pPixelShaderBuffer->GetBufferSize(), IID_ID3D11ShaderReflection, (void**)&pReflector );
  3. Query the reflection object for whatever constant buffer, input/output stream, or other info you want using the many methods available.  It’s a GREAT way to test your shaders at load time so you don’t get cryptic runtime errors later when you try to actually draw objects.
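As a concrete sketch of step 3 (my own illustrative loop, not from any official sample, though these are the documented ID3D11ShaderReflection methods), here’s how you might walk the constant buffers and the variables inside them:

```cpp
// Walk every constant buffer in a compiled shader blob and look at the
// name/size info for each variable inside it.
#include <D3D11Shader.h>
#include <D3DCompiler.h>

void DumpConstantBuffers( ID3DBlob* pShaderBlob )
{
    ID3D11ShaderReflection* pReflector = NULL;
    D3DReflect( pShaderBlob->GetBufferPointer(), pShaderBlob->GetBufferSize(),
                IID_ID3D11ShaderReflection, (void**)&pReflector );

    D3D11_SHADER_DESC shaderDesc;
    pReflector->GetDesc( &shaderDesc );

    for( UINT i = 0; i < shaderDesc.ConstantBuffers; ++i )
    {
        // Note: the reflection sub-objects are NOT COM objects -- don't Release() them
        ID3D11ShaderReflectionConstantBuffer* pCB = pReflector->GetConstantBufferByIndex( i );
        D3D11_SHADER_BUFFER_DESC cbDesc;
        pCB->GetDesc( &cbDesc );     // cbDesc.Name, cbDesc.Size, cbDesc.Variables

        for( UINT v = 0; v < cbDesc.Variables; ++v )
        {
            ID3D11ShaderReflectionVariable* pVar = pCB->GetVariableByIndex( v );
            D3D11_SHADER_VARIABLE_DESC varDesc;
            pVar->GetDesc( &varDesc );   // varDesc.Name, varDesc.StartOffset, varDesc.Size
        }
    }
    pReflector->Release();           // the reflector itself IS refcounted
}
```

GetInputParameterDesc()/GetOutputParameterDesc() work the same way for the input/output streams.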

Now, the gotchas:

  1. You must also include this:
    #include <D3DCompiler.h>
    Or you’ll get a compile error with D3DReflect() – despite the fact that the official Microsoft docs seem to say you should (only) include:
    #include <D3DShader.h>
  2. Finally, you MUST have the DirectX SDK include directory listed before the Windows include directory in the Visual Studio compiler include directory list, or you’ll get compile errors in D3DShader.h.  This is apparently because the Windows headers that come with Visual Studio have a few DX definitions in them that conflict with the DX SDK header definitions.
    In Visual Studio, check: Tools/Options -> Projects and Solutions -> VC++ Directories, and make sure the Windows SDK include & library paths appear AFTER the DirectX include & library paths.  See here for the thread.
iPhone programming – Learnings

From the wayback machine…

I found a half-completed post from when I was first learning to program on the iPhone.  I think these things are all still valid…
So, I’ve embarked on an ambitious programming project – one that is now 1/3 of the way complete (the first third was the ‘heaviest lifting’ part of the project and took about a month of outsourced grunt work).  The next 1/3 involves programming an app on the iPhone/iTouch.  I thought that would be the easy part, but as it turns out, there is a substantial learning curve that one should be aware of before starting this endeavor.  Here are some of the learnings I’ve found so far, having just gotten the bare bones of my own app up and running.

  1. Apple’s documentation is very robust – but frustratingly ‘useless’ at times.  They provide many very detailed and excellent vertical stacks of information.  Want to learn about pooled allocators?  They have a document that gives you all the information you could ever want – but not in the context of how you’d actually use them, or why you hit the various problems you hit.  You’ll find yourself sifting from document to document trying to find out why your NSString object is throwing an exception when returning from a function, only to find the real reason buried in the memory management section of the Cocoa programming guide – nowhere near the NSString documentation or its samples.
  2. You need to learn Objective-C and Cocoa – and learn them in THAT order.  It’s annoying, but try to find a good Objective-C book first.  There aren’t many of them.  There are lots of bad ones, though.  Learn how to write a class, add methods with multiple parameters, how to make calls on classes, and how allocation works.  That last one is key.  Really spend time learning why:
    [[NSString alloc] init]
    is the right way to allocate and initialize an object and then learn why:
    NSMutableString *s = [[NSMutableString alloc] initWithString:@"hello"];
    [s appendString:@"this string now leaks"];
    leaks memory (the string is alloc-ed but never released – note it also has to be an NSMutableString to be appendable).  And learn how the autorelease pools work/don’t work.  This is all huge and the first, biggest set of gotchas you get hit with right off the bat.
  3. You’ll struggle to get even a 5-line program working at first.  Your first Objective-C/Cocoa programs will be very painful.  The simplest things will feel like fighting a brick wall.  I tried to do a simple enum and kept getting burned by an invalid class definition.  Wha?  The Objective-C compiler on the iPhone requires you to typedef your enums and structs if you want to use the bare name as a type:
    enum newEnum { a, b, c, d }; (NOPE – using ‘newEnum’ alone as a type name will cause a compiler error)
    typedef enum { a, b, c, d } newEnum; (yes – works)
    Just millions of little things like that.  Again, logical if you put your old C/gcc hat on, but it’s been a while since I last had that hat on, and it had some dust on it…
  4. The GUI IDE is great – if you do things the way THEY want and only for straightforward designs.  The interface for GUI controls follows much the same philosophy as Win32/X programming, albeit with very different syntax.  Xcode comes with its own WYSIWYG GUI editor, but I found it very limited if you’re going to come up with any kind of innovative interface.  If you just want some standard buttons/etc., you’re probably fine.  But if you want some interesting scrolling effects or the like, you’d best be ready to spend another few days learning how the UI control systems work underneath and experimenting.  The built-in GUI editor reminds me of the old VS2005 GUI editor: you could drag-n-drop controls, compile, and see the results.  If you want any dynamic elements, I haven’t found a way to do that in their GUI editor.  And if you want to embed controls on scrolling panels and some other more modern effects, you’ll be tossing the GUI editor altogether.  Trouble is, you spend a good bit of time learning how the built-in GUI system works, find out it won’t do the thing you need without tons of digging through forums and ‘tricking’ it into doing what you want, then give up and have to learn a bunch of NEW stuff when you decide to just dynamically hand-generate the controls yourself.  Frustrating.


Windows 7 blue-screens due to stdriver64.sys

I recently started getting blue-screens with stdriver64.sys. In doing my blue-screen debug, very little useful information was given:


SYSTEM_THREAD_EXCEPTION_NOT_HANDLED (7e)
This is a very common bugcheck.  Usually the exception address pinpoints
the driver/function that caused the problem.  Always note this address
as well as the link date of the driver/image that contains this address.
Arguments:
Arg1: ffffffffc0000005, The exception code that was not handled
Arg2: fffff8800970bbe8, The address that the exception occurred at
Arg3: fffff8800676d7f8, Exception Record Address
Arg4: fffff8800676d050, Context Record Address

Debugging Details:
------------------

EXCEPTION_CODE: (NTSTATUS) 0xc0000005 - The instruction at 0x%08lx referenced memory at 0x%08lx. The memory could not be %s.

FAULTING_IP:
stdriver64+1be8
fffff880`0970bbe8 49              dec     ecx

EXCEPTION_RECORD:  fffff8800676d7f8 -- (.exr 0xfffff8800676d7f8)
ExceptionAddress: 0000000000000000
ExceptionCode: c0000005 (Access violation)
ExceptionFlags: 00000000
NumberParameters: 158383080
Parameter[0]: fffffffffffff880
Parameter[1]: 0000000000000002
Parameter[2]: 0000000000000000
Parameter[3]: 0000000000000000
Parameter[4]: 0000000000000000
Parameter[5]: 000000000022dba8
Parameter[6]: 0000000000000000
Parameter[7]: 0000000000000000
Parameter[8]: 0000000000000000
Parameter[9]: 0000000000000018
Parameter[10]: 0000000000000000
Parameter[11]: ffffffffffffffff
Parameter[12]: 000000000000007f
Parameter[13]: 0000000000000000
Parameter[14]: 0000000000000000
Attempt to execute non-executable address 0000000000000002

CONTEXT:  fffff8800676d050 -- (.cxr 0xfffff8800676d050)
Unable to read context, NTSTATUS 0xC0000147

DEFAULT_BUCKET_ID:  VISTA_DRIVER_FAULT

BUGCHECK_STR:  0x7E

CURRENT_IRQL:  0

LAST_CONTROL_TRANSFER:  from 0000000000000000 to 0000000000000000

STACK_TEXT:
00000000 00000000 00000000 00000000 00000000 0x0

STACK_COMMAND:  .bugcheck ; kb

FOLLOWUP_IP:
stdriver64+1be8
fffff880`0970bbe8 49              dec     ecx

SYMBOL_NAME:  stdriver64+1be8
FOLLOWUP_NAME:  MachineOwner
MODULE_NAME: Unknown_Module
IMAGE_NAME:  Unknown_Image
DEBUG_FLR_IMAGE_TIMESTAMP:  0

BUCKET_ID:  INVALID_KERNEL_CONTEXT

I dug around on my support forums, and exception 7E has been related to loads of different problems: bad Logitech mouse drivers, CardBus adapters, audio drivers, failed USB drivers when it happens on wake from hibernation, etc.  Basically, any and all services seem to be known to cause it.  Having the module and image names come back completely unknown wasn’t very hopeful either.  I was considering stepping into the memory location specified until I found this write-up.  It mentions the program SoundTap as a very common source.  I had recently gotten SoundTap as part of a kit including the excellent Switch sound converter.  I uninstalled SoundTap, and sure enough – the blue-screens went away.  SoundTap installs an audio driver to divert playback so you can do raw rips of anything you play on your pc (think sound ripper for youtube/shoutcast streams/shockwave players/etc).
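For what it’s worth, even when the automatic analysis comes back with MODULE_NAME: Unknown_Module like the dump above, a few generic WinDbg commands can usually still tie the faulting address to a loaded driver (the annotations in parentheses are mine, not WinDbg syntax):

```
lm                        (list all loaded modules with their address ranges)
lmvm stdriver64           (verbose details for one module: image path, link timestamp)
.exr 0xfffff8800676d7f8   (re-dump the exception record from Arg3 above)
```

Matching the faulting address fffff880`0970bbe8 against the module address ranges is how you confirm it really lives inside stdriver64.sys.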

So, it’s always a good idea to keep track of what you’ve changed on your pc and to suspect even unlikely things, such as a simple software install.  That, and googling for others with the same problems. 🙂

Successful iPhone 3GS battery replacement

Well, just shy of 2 years in, the battery in my iPhone 3GS had gotten pretty sad.  While it would charge up to 100%, it would run down really fast.  I wasn’t able to get more than half a day of good use out of it before it needed recharging.  Even sitting on my desk at work doing nothing, it would lose 20% in an 8-hour stretch, and it seemed to be getting worse every day.

I call up an Apple store and ask.  Battery replacement on an iPhone 3GS?  $199.  Really?  $199, sir.  Hang up.  I look around on the internet.  Lots of folks are selling iPhone batteries, most for $5–$15.  I start looking for the most reputable ones and find iFixIt.com.  They have lots of great step-by-step diagrams, descriptions, videos, and pictures.  Even better, they have a comments section with literally hundreds of folks posting tips/gotchas/etc.  This is looking good.

They sell a 3GS battery kit with all the tools you need for $15 – which includes a tiny screwdriver and a small plastic pry tool that they claim is the same one made for Apple.  You can also buy other helpful tools like a suction cup for removing the 3GS screen.  I decide to take the risk and have it all shipped – $25 total.

The kit comes in the mail, looks well packaged, and has all the tools I need.  I open it up and go to the website for instructions.

I follow the instructions pretty closely.  This isn’t like swapping the battery in your remote.  Mine took 16 different steps, including the removal of no fewer than 10 tiny screws and 7 tiny ribbon cables.  You need to remove the screen, logic board, camera, and… well… basically everything, because the battery is at the very bottom, *glued* to the back plastic panel.  I remove it all, install the new battery, and reverse the 16 steps.

I flip it on, *blip!* – it makes the normal startup sound and the screen backlights, but NO image/icons/etc.  🙁  I plug it into my computer, and it syncs and sees the phone just fine.  I can even see my pictures and music on it – but no image.  I take it apart 2 more times and try re-seating the 3 display cables.  Same result.  Ugh.  I go to check iFixIt.com’s forums, but the site is down for maintenance – right in the middle of all this!  I remember from a previous look at the forum that one guy had to restore factory defaults to fix a problem with his phone not charging properly.  I figure what the heck and tell iTunes to restore to factory defaults.  Turns out there was an update due, so it took 30 minutes to download the patch and another 15–20 to restore the defaults.  Sheesh.  But right after the restore of factory defaults – I get my display back!  WOOT!  I restore everything.

I make a phone call, test the buttons, test the camera/screen/music/etc., and everything seems to work great.  Why a software reset is needed to get your display back after a simple battery replacement is beyond me – but there it is.

So, I don’t recommend the procedure to everyone.  It took about 30 minutes of steady concentration and equally steady hands – but the phone seems to be working like a total champ.  It just finished charging to 100%, so we’ll see how well this battery lasts, but it already seems much better.

My own market direction speculation from GDC 2011

Here are my guesses at market trends, based just on what I’ve been seeing/hearing:

  • Mobile computing (i.e. smartphones) is a powerful new force that is here to stay and is really pushing computing and gaming in new directions.  Since mobile is just now entering its ‘teenage’ years, there is still a lot of rushing around and ‘land-grabbing’ going on, trying to align markets, business models, programming models, distribution, and revenue streams.  There is very good money to be made for the savvy, but it’s also risky.  Software companies that enter and establish a good name/brand will likely do very well, but they will almost certainly be smaller, more nimble companies.  Until things settle, almost nobody is making a killing on phone apps, and very few can quit their day jobs and live solely off that revenue stream.
  • Consoles will continue to be developers’ target platform because they have a stable user base and a known revenue stream/model – but they are in severe danger of being rendered impotent.  They need to solve:
    • Aging graphics.  With no new consoles in the works for a few years, things are already looking dated, and it will only get worse.  There was absolutely no talk of any new consoles – which means they’re at least 2+ years away.  Microsoft appears to be trying to figure out its own strategy, as there was no info on a new version of DirectX and a lot of effort clearly going into Windows Phone 7 adoption/features/developers.
    • Terrible loading times, frequent updates, etc. – all a terrible experience and a hindrance to keeping people on their consoles.  I already know several people for whom their Xbox 360 is really just a Netflix box.  And with all those features coming to smart TVs, that selling point will soon disappear.
    • There were a number of rants about 40+ hour games and how people simply don’t finish them or want to play one game that long anymore (except for AAA titles like Call of Duty/etc).
    • No digital distribution of whole games as solid as on PC/Mac, where you can get entire games online (Steam/etc.).
    • A higher entry bar for indie developers versus PC/Mac.
    • No instant on/off like every laptop and phone has.  Waiting 5 minutes to boot your console and get past splash screens/intro crap/etc. is intolerable in an age when they’re competing against smartphone games.
  • Smaller, faster casual games will grow like wildfire, but generating a sustainable (and livable) revenue stream from them will still be getting hammered out over the next few years.  Studios that develop them will stay smaller and live leaner – but will likely deliver a lot of the innovative new gameplay for casual markets.  We’re in the exuberant pre-teen days of this movement, so its direction is still very malleable.  Yet there will be a point at which the limitations get felt out and these studios hit their stride.  It’s already happening, and the signs are very positive.
  • The indie developers will continue to come out with buckets of games.  Following the 80/20 rule, 80% of them will be garbage, but 20% will do well.  The top 1-5% will be phenomenons (i.e. Minecraft) and those 1-5% will really move gaming in a newer direction. That direction being:
    • Game developers will develop with off-the-shelf engines and middleware, not programming stuff themselves.
    • Games will focus on simpler and more creative elements.
    • It will keep the industry from going into stagnation and death, but take it in a new direction very different than the old guard.
  • Big houses will become fewer but more powerful, and it will likely be lonelier at the top (à la movie studios).  As the cost of AAA games rises each year, consolidation and shake-out will happen.  That means they’ll likely become more conservative and more entrenched in their franchises – which probably means less innovation on the IP front, though the games will be cutting-edge beautiful.
  • PSPs and handheld gaming will just about disappear in the next 5 years, as smartphones become ubiquitous and just as powerful, with many more options.  Nobody will pay for two wireless plans just to connect their PSP/DS.  This is bad news for Nintendo – and somewhat for Sony – who draw a large portion of their revenue from handheld gaming devices.  Sony, at least, appears to see the threat, with a ‘certification’ program for mobile devices.
GDC 2011 Trip report

It’s been about 4 years since I last attended GDC, and some interesting trendlines seem to have solidified (or gone away) since the last time I was there.  Winning mobile developers was THE topic of the show.  Sessions continued to be very good – but I noticed that almost half (if not more) of them are now related to art, gameplay, or business concerns rather than technical ones.  The indie game scene appears to be moving beyond closet developers and becoming a big, energetic movement in the industry.  Hiring appears to be back, judging by the energy and sheer number of companies interviewing on the show floor.  Attendance seemed to be at record levels, at 19,000 attendees.

 Details:

  • Attended Wednesday thru mid-Friday sessions on my own accord – so no booth duty/official duties on that front.
  • Notable sessions I attended:

1.       Data Management For Modern Game Pipelines – Two fellows from Autodesk/Maya went over the state of current content pipelines from Maya to game engine.  They are apparently hard at work at Autodesk trying to make these converters and content back-and-forth between engine and Maya easier with a project called DNA.  It is a system of metadata and a database that, once integrated into your engine, allows you to quickly get assets out of Maya into your game engine, and back into Maya quickly.  It was likely too cumbersome for most game developers, but it was a good recognition on their part of the needs of the industry. 

2.       Noon poster sessions.  Real-time music generation was well done (for what it was), and the ’10 things to know about usability testing’ poster was a good list of resources for those trying to understand what usability testing involves.

3.       The one-hour, ten-speaker session had a lot of great ideas.  One of the best was on why consoles are failing: 1. In an age of instant-on mobile devices, waiting 15 minutes to get into a game is intolerable.  2. Too many cutscenes, license agreements, boot screens, etc. – players want instant-on, wake-from-sleep access to their game, just like a laptop.  3. No more 40+ hour games – they’re just too long.  4. Update sizes and frequency are ridiculous – you should have at most one update per quarter.  Period.  Many other good thoughts if we ever want to go into that realm.

4.       Multicore Memory Management in Mortal Kombat.  EXCELLENT talk on a multi-threaded memory manager.  Takeaway: it took them 11 months to get it done (3 months of that just to build the multi-threaded/lock-free library they needed), but it’s a fantastic system they’ll be using in all their games going forward, with some amazing speed, efficiency, and debugging features.

5.       DICE talk on Data-Oriented Design.  Very good talk with solid analysis and results.  Takeaway: throw away your fancy data structures and line your data up for SSE manipulation.  It’s far more performant than non-cache-friendly data structures – orders of magnitude faster – and can easily be threaded.

6.       Kinect Skeletal Tracking Deep Dive.  Very interesting talk on the problems (and solutions) unique to skeletal tracking on Kinect, given by a Microsoft AE.  Some good general solutions for multi-threaded timing issues.

7.       Halo Reach Effects.  Excellent talk on the special effects in Halo Reach: a new way to do dynamic particle systems that interact with world geometry, shield effects, and a few other visually stunning, and very realtime, techniques.

8.       Mega-Meshes – modeling/rendering worlds of 100 billion polygons.  Very interesting talk along the lines of last year’s id Software SIGGRAPH talk on streaming massive geometry.  The second half covered getting pretty decent spherical harmonic lighting techniques working on Xbox and other consoles.  Lots to digest – so I’ll likely look at the slides when they’re published.

9.       Experimental gameplay – 10 or so people positing or showing interesting gameplay techniques and ideas.  This session really showed what the Indie scene is about – trying to create unique experiences.

10.    Marble Madness, Pitfall, and Doom postmortems

  • Trend speculations based on what I saw:

1.       Mobile – the real energy this year was clearly behind winning developers into each camp’s mobile and tablet platforms.  Unity showed extensively.  Free hardware galore, tons of sessions, and big parties being thrown.  With years left before another console refresh, there were mostly just quiet, incremental changes from the big players like Microsoft, Sony, and Nintendo.  With smartphones getting such sophisticated graphics hardware, it does make me seriously wonder if the days of PSP- or DS-like gaming devices are numbered, which could spell big problems for the console companies that see such large revenue streams from them.

2.       Monetizing – lots of financial companies on the show floor, and session talks focused on micropayments and new revenue streams.  The fact that there were old-school financial and credit card companies on the show floor was a real shock.

3.       Indie culture – the indie area of the show floor was packed every time I went by.  Lots of young energy, and it’s more than just hype, judging by the awards Minecraft won.  I attended 2 classic game post-mortems, which really clarified in my own mind the shift in game-dev culture.  Programmers used to rule the roost in game development, but most of the current developers in the indie scene seem content to use off-the-shelf engines/tools and focus 90% of their attention on gameplay and creating unique experiences, as opposed to chasing the newest/greatest tech.

4.       Hard-core technical talks seemed to be diminishing in number, but not quality.  Excellent work is still being done, but I wonder if the large trend towards younger developers using off-the-shelf engines and simpler mechanics will create a two-tiered system: the majority of smaller indie games using low-end, runs-on-just-about-anything techniques, while AAA titles continue to get more technically impressive but (with fewer studios able to afford them) smaller in number and more insular.  Do we turn into a world where there are a few engine makers, and game houses are primarily programmers writing games in scripts on top of them?

SQLite on iPhone is byte compatible with Windows

SQLite on iPhone is byte compatible with Windows

I was working on a good write-up of how to use SQLite in C# (since the most popular SQLite package for C# is kinda broken in Visual Studio 2010), but thought this might be a good data point for folks in the meantime.

So, the short answer?  Yes, you can build SQLite databases on Windows, copy them across to your Mac, and then use them in iPhone applications without any issues.
You should open the database with the proper character encoding – the plain sqlite3_open( ) call expects a UTF-8 path and database ( sqlite3_open([dbPath UTF8String], &database) ) –
but other than that the SQLite files can be copied straight across the devices from Win 7 -> Mac -> iPhone, and the same select/delete/etc. commands work like a champ.

How do I know?  I did it last night. 🙂

O’Reilly book list contest

O’Reilly book list contest

O’Reilly publishes a lot of nerdy books, and right now they’re running a contest to win up to $500 worth of them.  You register on their website and publish your wish list to your blog (like this), and you’re entered for a chance to win those books!

Here’s my list (which adds up to $497.83) 🙂  I sure hope I win!

  • Programming Interactivity, 1Ed
  • Intel Threading Building Blocks, 1Ed
  • The Art of Concurrency, 1Ed
  • Building Embedded Linux Systems, 2Ed
  • Designing BSD Rootkits, 1Ed
  • Linux System Administration, 1Ed
  • Linux Kernel in a Nutshell, 1Ed
  • Unicode Explained, 1Ed
  • flex & bison, 1Ed
  • Learning SQL, 2Ed
  • Getting Started with Arduino, 1Ed
  • Mastering Regular Expressions, 3Ed
  • Regular Expression Pocket Reference, 2Ed
Fedora Core 14 on VMWare

Fedora Core 14 on VMWare

Error during Install:
If you try to install Fedora Core 14 on VMWare using the wizard that automatically sets up the VM’s settings, you’ll hit a fatal error shortly after booting the install DVD:

Section does not end with %%end

In VMWare, for the Fedora VM:
Go to Settings -> CD/DVD -> (there will be two CD/DVD devices listed) delete or disconnect the one whose disc image is autoinst.iso.

Reboot the install, and all should go as expected.

VMTools

Also, VMWare Tools (VMTools) won’t install correctly by default.  You’ll need to follow this procedure:
http://www.sysprobs.com/fedora-14-vmware-install-vmware-tools-fedora-14