Randy,
As Sean said, "this month, the dual-core Xeons" are top of the line. But being on the bleeding edge means the consumer pays the initial R&D and first-run production costs.
I have several servers for crunching JPGs at shows; my first ones were based on dual Athlon MPs, followed by dual HT Xeons, and my latest build is dual Opterons. My disclaimer here is that I don't use C1 - I rarely shoot RAW - but I do process up to 90,000 JPGs per day - a simple rotate and shrink. I would cede to Nill, who recently built a box for C1 processing.
What I can say about my dual 2.8GHz HyperThreaded Xeons versus my dual 2.0GHz dual-core Opterons is that the Opterons aren't a little slower than the Xeons, and they aren't a little faster - the Opterons smoke the Xeons, even though the Xeons' clock speed is 40% higher. Everything else in the boxes is the same: RAID10 w/hot spare data drives, RAID1 w/hot spare OS/program drives, 2GB of ECC memory, 4 Gbit NICs - yet the Opterons can crunch through images about 40% faster than the Xeons!
My scripting does take advantage of multiple threads - as many as I need - and having multiple RAID arrays helps keep the images fed.
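In Python terms, the fan-out idea looks something like the sketch below. This is a minimal illustration only - the paths, worker count, and the stubbed-out image step are placeholders I made up, not my actual script:

```python
# Sketch of a multi-threaded rotate-and-shrink pass over a batch of JPGs.
# SRC, DST, and WORKERS are placeholder values; the real image work
# (e.g. PIL's rotate() and thumbnail()) is stubbed out in comments so
# the thread-pool fan-out pattern itself stays clear.
import os
from concurrent.futures import ThreadPoolExecutor

SRC = "/data/originals"   # main data array (placeholder mount point)
DST = "/data/web"         # separate output array (placeholder mount point)
WORKERS = 4               # "as many as I need" - tune per box

def rotate_and_shrink(name):
    src_path = os.path.join(SRC, name)
    dst_path = os.path.join(DST, name)
    # Real version would be something like:
    #   img = Image.open(src_path)
    #   img = img.rotate(-90, expand=True)
    #   img.thumbnail((800, 800))
    #   img.save(dst_path, quality=85)
    return dst_path

def crunch(names):
    # Each worker thread grabs the next image; I/O-bound work like this
    # overlaps nicely even under the old GIL.
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        return list(pool.map(rotate_and_shrink, names))
```

The point is just that each image is an independent job, so the box can keep all CPUs and both arrays busy at once.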
What I can say for your purposes really depends on your workflow:
When you have a C1 batch running, are you doing anything else? PS-CS? Downloading? Flight sim?
What are the sizes of your images?
What is the destination? (file format, size, location)
The answers to those questions will determine whether you need:
- Extra memory
- More CPUs (either now or later)
- Multiple drive channels - whether independent or RAID
With my workflow, I need just over 1GB of memory, so 2GB fits me just fine right now on my servers that are just processing the images.
On my PS-CS box, though, I can never have enough. If I were to build a new box now, it would have 6-8GB of memory running x64 and CS2, so memory management gets handled dynamically and intelligently.
If C1 can use two CPUs, and you are doing more things at once, you may want to get more CPUs. Or at least get a motherboard you can add a second CPU to later if you find yourself maxed out on horsepower.
Multiple drive arrays are where I've found the critical bottleneck to be. I started off with RAID10 for performance and redundancy, and it worked well. For a couple of weeks I ran RAID5 to get more drive space, but the array couldn't keep up with simultaneously downloading new images and processing the originals into web-sized ones. I also have a secondary RAID1 array that I dump the processed images to - that way, I'm not overtaxing the main array with additional writes on top of simultaneous reads across all the drives.
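A cheap sanity check along those lines - my own habit, not a requirement, and the directory names are whatever your arrays are mounted as - is to confirm the source and destination really sit on different devices before a big run:

```python
# Verify that two directories live on different devices/arrays, so a
# batch job's reads and writes won't fight over the same spindles.
import os

def on_different_devices(src_dir, dst_dir):
    # st_dev identifies the device a path lives on; different values
    # mean reads and writes hit physically separate arrays.
    return os.stat(src_dir).st_dev != os.stat(dst_dir).st_dev
```

If that returns False, the "output" directory is just another folder on the same array, and you lose the read/write separation.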
Another reason I recommend RAID - although it doesn't replace the need for backups (which is why I have two servers - actually two of everything) - is that when (not if) a drive goes down, you are not sitting still. With a single drive and a good backup, you still have to replace the dead drive and reload the backup before you can continue working. With RAID, you simply keep working. For me, hot spares automatically kick in when a drive goes bad. There's no need to take everything down and swap the drive to start rebuilding; I replace the drive when I get back to the office, knowing everything is still running fine. Granted, my demands on hardware don't mirror most people's, but even as a hardware junkie, I don't want to spend my time troubleshooting and swapping hardware when I need to get images to customers.
Anyway, some things to think about - again, based on your workflow, demands on your time, and respect for redundancy.
Brian.