1. Great post and very much needed. Areca SUCKS as far as support goes, but the product is great. I’ve got a 1260 and am using Linux. I’m pulling my hair out trying to get to 1.5TB, so at least your posting gives me the general direction.

    Again thx

  2. Thanks for documenting the above. I had figured out all the same except I didn’t know about the diskpart.exe tool. Thanks. I did hit another problem, though: I was going from 2TB to 2.5TB and diskpart said, “Diskpart cannot extend partitions beyond the 2TB mark on large MBR disks.” Bummer. Time to do some more research. Still, thanks!

  3. I work in a Linux environment; regardless, this article is still helpful. F Mack is on point: Areca tech support is far, far beyond awful. It is so bad that it is unlikely I will ever purchase from them again, and so bad that I will work to convince everyone I interact with on a professional basis never to buy Areca.

    Areca’s US office is neglected by the Taiwan office [where all of the engineers are located]; if Areca wants to forgo the US market, then they should continue to act the way they are currently acting. I love visiting a knowledge base on a website where the most recent post is three months old [sarcasm]. I love it when I email Areca support and the rep suggests a solution to my problem before I’ve even described the problem…

    Does it make sense at this point to create a website that warns people never to use Areca products in a datacenter?

  4. Well, I guess I don’t have quite the same complaints with Areca as you have. I think the manual is decent and their tech support (in Taiwan) did answer my e-mail. However, there is no substitute for knowledgeable and immediate tech support over the phone. I would purchase another Areca card without hesitation, but I guess I’ve gotten past any potential learning curve.

  5. This is GREAT Carlton, but here is the million $ question:

    How do you expand a GPT disk that is already larger than 2TB?

    If what I read is correct, DiskPart.exe will not work beyond 2TB!!! Is that correct? Has anyone expanded a 2+TB GPT partition even further? My setup takes 16 drives and I have 8 500GB drives in now. If I add more drives, I need to make sure I can expand without having to back everything up to tape first! While my Areca 1160ML will expand the RaidSet & VolumeSet just fine, it is Server 2003 SP1 I am worried about.

  6. The answer to the million dollar question is YES! You can use diskpart to expand a partition beyond 2TB with a GPT table. This is currently the only way I know of to expand a GPT partition (even Paragon’s server product sees my GPT as unallocated space and can only expand/merge FAT and NTFS).

    I have a RAID 5 on a 3ware 9650SE-16ML and had four 750GB drives in there (over 2TB as a single volume, so it’s GPT) and just added a fifth (with a sixth on the way). It took almost 72 hours to migrate the drive into the array (using the web utility), but once that was over, the diskpart command took only about 10 seconds to complete.

    After the reboot, the change showed up instantly in Windows. This was, however, my first experience adding a hard drive to an existing array, and I’m a bit curious because the drive I added was previously NTFS and I never changed the partition format, so I’m guessing the migration wrote over all partition information (and any data on the drive).

    I’m doing a verify unit now just to make sure everything is fine with my array, but so far it seems great.

    Thanks and hopefully this info helps =)
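
    For anyone landing here from a search: the thread never spells out the exact commands, so here is a sketch of the diskpart sequence being described (the volume number is just an example; list first and use your own, and the disk must already be GPT to extend past 2TB):

```text
C:\> diskpart
DISKPART> list volume        (find the number of the RAID volume)
DISKPART> select volume 2    (example number; substitute yours)
DISKPART> extend             (grows the volume into the new unallocated space)
DISKPART> exit
```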


  7. Hi – I found this thread and have been dealing with the same thing. I’m running Windows 2003 Advanced Server / SP1. I have an Adaptec SATA RAID controller that I started out with 5 500GB drives. I added the remaining 3 drives and expanded the array; it now shows as a 3.2TB array / volume. I went into the OS, ran the Diskpart.exe utility, and performed the steps to expand the partition. It worked, but only expanded to 2TB. I have an additional 1.2TB of space showing on the drive and can’t seem to do anything with it whatsoever. Yes, it is an MBR partition, so does that mean I’m screwed and have to remove the data and recreate the partition as GPT (GUID Partition Table)?

  8. Pleblanc: You can only convert between MBR and GPT if the partition is empty. Here is an excerpt from the Microsoft Technet Article referenced in post #4:

    You can convert an MBR disk to a GPT disk and vice versa only if the disk is empty.
    Link to article.
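
    The 2TB wall that keeps coming up here is baked into the MBR format itself: partition start and length are stored as 32-bit sector counts, and with standard 512-byte sectors that tops out at exactly 2TiB. A quick sanity check of the arithmetic (plain math, nothing platform-specific):

```python
# MBR partition entries store start and length as 32-bit sector counts (LBA).
SECTOR_BYTES = 512
MAX_SECTORS = 2**32            # largest count a 32-bit field can hold

max_mbr_bytes = MAX_SECTORS * SECTOR_BYTES
print(max_mbr_bytes)           # 2199023255552 bytes
print(max_mbr_bytes / 2**40)   # 2.0 TiB exactly -- hence "cannot extend beyond 2TB"
```

    GPT stores 64-bit sector counts, which is why converting to (or starting out as) GPT sidesteps the limit entirely.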

  9. Great. Now the question is how do I offload / back up 2TB of data so I can recreate the array as GPT, which supports over 2TB. I wish I had known that before I created it in the first place. Now I have over 1TB of space on the array that I can’t do anything with whatsoever; I can’t even create a separate partition. So expanding it was a huge waste of time / money (it took something like 24 days to expand; yes, you read that right, DAYS!!!).

  10. 24 days?!?! Using diskpart? Or migrating the additional units onto the unit?

    Migration for me takes about 3 days with an additional 750GB; I guess if you do multiple drives it gets exponentially longer. As for diskpart, that literally took 5-10 seconds to complete (I’ve added two drives at different times to this array and am going to add another shortly).

    As for how to offload 2TB of data, I’d go to best buy and pick up 2 of those external terabyte drives and then return them once you’re done =)


  11. Sorry if I wasn’t clear: the expansion of the array took something like 24 days. This was adding two 500GB drives. I previously added one 500GB drive, which took something like 12 days, so I guess that makes sense, but the array was online and fully accessible. I was supposed to have a “serious performance hit” but it was completely unnoticeable. When I was able to expand the original expanded array to 2TB, it took only seconds – WAY less than 10!

  12. I wonder why it takes you 12 days to migrate a 500GB drive when it only takes me 3 days to do a 750GB drive (my drives are 7200 RPM, but I’m sure yours are too). I also didn’t notice a performance difference. I have my controller set to fastest rebuild/migrate rather than I/O priority. I thought 3 days was an insane amount of time (and it made me think RAID 6 might be the better option).
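
    For what it’s worth, the two reports above work out to very different effective rates. This is a naive per-added-drive figure (a real migration re-stripes the whole array, which this ignores), but it makes the gap concrete:

```python
def naive_rate_mb_s(added_gb: float, days: float) -> float:
    """Effective throughput if migration time scaled only with the added capacity."""
    return added_gb * 1000 / (days * 86400)  # GB -> MB, days -> seconds

rate_500 = naive_rate_mb_s(500, 12)   # 500GB drive migrated in 12 days
rate_750 = naive_rate_mb_s(750, 3)    # 750GB drive migrated in 3 days
print(round(rate_500, 2))             # ~0.48 MB/s
print(round(rate_750, 2))             # ~2.89 MB/s
print(round(rate_750 / rate_500, 1))  # 6.0x difference
```

    Controller settings (rebuild priority vs. I/O priority), array width, and whether the OS lives on the array could all plausibly account for a gap that size.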

  13. Keep in mind that if you have a drive fail, a rebuild of the parity drive will be a whole lot faster than expanding. Expanding a RAID 5 array (which is what I think we are all talking about) involves a lot more than just rebuilding a failed drive. I think a rebuild will take a day or so, which seems reasonable. Obviously, if another drive fails in that time frame you are screwed, which does make RAID 6 a little more attractive; it has a performance trade-off but offers a little more security. I’m not mission critical, and even though 2 drives failing within a short amount of time is possible (since they were bought in batches), I think RAID 5 is good for me for now. I might end up buying some mass storage devices to back up my data and then returning them. I really don’t like doing that, but in this case I don’t really have a choice / need to keep them.

  14. The big difference between a rebuild and an expansion is the amount of data being relocated. With a rebuild, only one drive’s worth of data is reconstructed from parity (500GB for a 500GB drive) and no parity info is recreated. For an expansion, the entire data set is migrated across all of the drives in the array (4->5 500GB drives results in 2.5TB of data being moved in total and 500GB of parity being re-created), so migrations take much longer. For Areca controllers, adding another disk and expanding does not cause the loss of parity data, so if a drive fails during the migration, it will not result in data loss. Only if two drives fail will there be an issue, just as in standard RAID 5 operation.
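
    Putting numbers on that difference, using the same 500GB-drive, 4-to-5 expansion described above (a sketch of the accounting, not any particular controller’s actual algorithm):

```python
DRIVE_GB = 500

# Rebuild: reconstruct exactly one failed drive's contents from parity.
rebuild_gb = DRIVE_GB

# Expansion 4 -> 5 drives: every stripe is rewritten across the new layout,
# so all five drives' worth of data is relocated and fresh parity is generated.
new_drives = 5
expansion_moved_gb = new_drives * DRIVE_GB  # 2500 GB relocated in total
new_parity_gb = DRIVE_GB                    # one drive's worth of new parity

print(rebuild_gb)          # 500
print(expansion_moved_gb)  # 2500 -- five times the I/O of a rebuild
```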

  15. Same with Adaptec Controllers – Once you apply the new configuration to the controller (Expansion no matter how many new drives) it is still a RAID 5 Array and can withstand a single drive failure.

  16. Does anyone know if 3ware is like that (i.e., whether the parity bits are preserved if a drive fails during migration)? I’m assuming it’s the same as the other brands.

    Well, 2 drive failures is rare in my opinion; what is more likely to happen is a bad parity bit. That’s really why I like the look of RAID 6 (to me, a natural disaster will ruin my array before 2 of my Seagates fail on the same day). I do maintenance regularly (once every 5 days), but it still worries me that a bit could be bad, and when a drive fails, that bit could be linked to something I need. Out of curiosity, how often do you guys do maintenance?

    In my case though, since it’s for personal storage and not some enterprise solution, RAID 5 is the right answer. I have never had to do a rebuild (I love Seagates) so I’m not sure how long it would take, but again, I am very interested in why a migration for you with a smaller drive takes 4x longer than it does for me with a larger drive. I’m curious because whatever variable that is would influence my next RAID controller purchase.

    Can you provide me the details of your raid setup?

    Mine is this

    3ware 9650SE-16ML PCI Express x8 SATA II Controller Card
    Seagate Barracuda 7200.10 ST3750640AS 750GB 7200 RPM
    TYAN S5160G2NR-RS Socket T (LGA 775) Intel E7230 ATX Server Motherboard
    Intel Pentium D 925 Presler 3.0GHz
    2 Sticks of Kingston 1GB 240-Pin DDR2 SDRAM DDR2 667 (PC2 5300) ECC Unbuffered Server Memory

    And my installation of Windows is on a separate 160GB drive. I wonder if part of the reason it takes you so much longer is that you have your OS installed on your array.


  17. 3Ware is like this according to their website; they support array expansion as well as RAID level migration. Multiple drive failures can certainly occur, especially if they are due to a manufacturing issue like the drive bearings of years gone by. This is why I mentioned I bought my drives in batches. If we are talking about a single drive that fails for whatever reason, then yes, a second failure in the time it takes to get a replacement drive (if one isn’t on hand), swap it into the array, and rebuild is a risk I’m willing to take at this point **knock on wood** 🙂

    I don’t do any maintenance on my array.

    At the same time, I don’t know why it took so long for my controller to expand the array. When I built the array, it only took about 10 hours to create / initialize it; I thought expansion would take that long or maybe a little longer. I didn’t expect days to weeks for this to happen.

    I do understand the controller needs to recalculate the parity / data to utilize the new drives in the expansion process, which I can see being pretty time-consuming. Obviously, it would have taken a whole lot less time to back up the data and rebuild the array, but since I don’t have that kind of spare storage kicking around, it wasn’t an option. Now it looks like I have no choice but to do something like that to be able to use the additional 1TB of space I’ve added to the array.

    My setup is an Adaptec 2820SA SATA II controller in an ASUS Pro workstation motherboard (PCI-X 133MHz) with 1GB of memory and an AMD Sempron 3GHz. All of this is installed in a Supermicro server chassis with 8 hot-swap drive bays and a SATA II backplane. Oh yeah, I also have a separate hard drive dedicated to the OS.

    Interestingly enough I have a second controller in another system that I had an External Array on but recently sold. Ultimately I think I will be going with a 16 Port 3Ware controller utilizing a 16 Port Hotswap / Backplane Chassis I picked up off ebay.

  18. Why don’t you ever do any maintenance? I thought this was an important task that should be done regularly on the array.

  19. Hey guys,

    I thought id just share my experiences…

    I have 2 x Highpoint 2320s. I very, very highly recommend them; I’ll explain why shortly.

    The first card has 8 x 320GB in RAID 5, or about 2.24TB (unformatted). Initially I was running XP SP2, so I could only see 1.99TB (the 2047GB issue). Recently I purchased the second card and 5 x 500GB drives.

    I created another RAID 5 array showing 2000GB (unformatted) of space, which was about 1900GB in Windows.

    A few days ago I purchased 3 x 500GB drives and added them to the array. The Highpoint cards offer the OCE/ORLM option, so I added the 3 drives to my server and expanded the array. Silly me moved the server into my bedroom while I was installing the new drives. It took about 26 hours to expand the array to include the new drives.

    This is damn impressive, as these cards are about half the price of the Areca ones.
    Only one sleepless night (a server with 20 HDDs makes a lot of noise).

    Anyway, when I went to expand the NTFS partition I ran into massive issues. I couldn’t get it to expand beyond the 2TB limit. After a lot of reading, etc., I decided to upgrade to Server 2003 SP1 (I decided against 64-bit XP). The 320GB-drive RAID 5 array was able to see the extra space (although I couldn’t expand it beyond 2TB).

    However, I had converted the 500GB-drive array to a dynamic disk to get past the limit.

    It looks as though my only option now is to format the 500GB array and restart it as a GPT drive….

    Just need to find somewhere to back up 2TB of data :(.

    Hopefully this helps someone else, and I recommend looking at the Highpoint cards… they are damn good.


  20. Hey guys, I thought I would check back in and give you an update on my setup. I’ve gotten away from the Adaptec controllers and have now embraced 3Ware controllers. I originally picked up a 3Ware 9500 12-port, but since then I’ve settled on a 3Ware 9550-16ML 16-port controller and started with 6 750GB Seagate drives.

    I’m running Windows 2003 Advanced Server and made this into a GPT disk so I can get past the 2TB limitation.

    Since building the array I’ve added a 7th 750GB drive. The expansion took a little under 4 days, and I didn’t notice any significant performance hit while expanding. Once the expansion was complete, I ran Diskpart.exe and expanded the volume to the full 4TB capacity. This is obviously WAY better than the 28 days it took to expand my old Adaptec setup.

    I also built another server for a friend who is utilizing the 9500 12-port that I sold him. He put in 4 320GB drives to start, then expanded that with a 5th; he said it took about 2 days for that expansion. He also put in 4 750s, then added a 5th a while later, and the same thing: about 2-3 days to expand. He is running Windows XP Pro on this machine and has “Auto Carving” enabled and set to 2TB. Once he went past 2TB, the additional space showed up as a separate drive within the OS. Maybe not the ideal setup, but hey, a whole lot less painful than paying for Server 2003!!!
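
    As a sketch of what auto-carving does with the numbers above (a 5 x 750GB RAID 5 has 4 x 750GB = 3TB usable; the carve size and behavior here are as described in these posts, not taken from 3ware documentation):

```python
def carve(total_tb: float, carve_tb: float = 2.0) -> list:
    """Split an array into volumes no larger than carve_tb, auto-carving style."""
    volumes = []
    remaining = total_tb
    while remaining > 0:
        volumes.append(min(carve_tb, remaining))
        remaining -= carve_tb
    return volumes

# 5 x 750GB RAID 5 -> 4 x 750GB = 3.0TB usable
print(carve(3.0))  # [2.0, 1.0] -> XP sees a 2TB drive plus a 1TB drive
```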

    On the 3Ware controllers I DO run the maintenance as it only takes about 4 hours for it to go through and check the parity which is sweet. The Adaptec was SO slow I couldn’t deal with it. I also think that the 3Ware controllers offer better transfer speed than the original Adaptec 2820SA I was running.

    These are by far the best servers I’ve ever built. I found an awesome case that I customized to give me 15 hot-swap drive bays plus the DVD drive and 2 internal 3.5″ bays: one for my boot / OS drive and the other for my 16th drive if / when I get there.

    If anyone is looking for a TON of storage I’d be more than happy to build more 🙂

  21. Since you guys are talking about GPT disk…

    I have a server with HighPoint 2320 (PCI eX), 8 x 500GB Seagate, RAID5 in GPT mode (o/s is XP 64 bit, installed on a 80gb hd)
    I’d like to “clone” the RAID5 array & expand to 15 x 750GB (keeping the existing MB & o/s)
    The MB only has 1 PCI eX slot.

    I need to be able to:
    1) “clone” & “move” the data (approx 1.25TB worth) over the network from the RAID5 array and store it in my NAS
    2) install a 16 channel 3Ware or Areca RAID6 card on the existing mb, after I remove the Highpoint
    3) add 15 x 750 drives to the new controller
    4) transfer the “cloned” data back into the new RAID6 array

    Is this possible? How do I make this happen?


  22. I’m sorry to say that you will not be able to use a GPT disk with Windows XP. GPT is only supported in Windows 2003 and possibly in the new Home Server edition; however, I cannot confirm that since I haven’t specifically played with that version yet.

    This is the pain that I went through… However, you CAN still use all of the disk space if you enable Auto Carving on the 3Ware controller and set it to no higher than 2TB. This will report each 2TB increment of RAID space to the OS, even though they are all still physically on the same array.

    Other than that, the plan you have sounds perfectly doable, unless I’m missing a question you are asking? If so, let me know.

  23. Hey dexter,

    My advice would be to purchase a board with 2 X PCIe ports. Install the second card, copy the data etc, then sell the board. You’ll probably make a $20-50 loss on the board, but considering the cost of the raid cards and hdds its a small price to pay.


  24. Errol: XP 64-bit does support GPT. If you have the free space available on your NAS, I see no problem with copying your data over to that server (copy / paste), installing the new card and drives, creating the new array, and copying the data back. Since it is only 1.25TB, you could also simply back it up to 2 of the 750GB drives, create a 13 x 750GB array, copy the data from those 2 drives to the new array, then expand the array with the two 750GB drives. You have a lot of options and I don’t see a problem with any of them.

    Funhouse69: Thanks for the info on the 3Ware controllers; auto carving is an interesting feature. It sounds like the automated equivalent of creating multiple 2TB volume sets on a single raid set.

  25. Errol: You are correct. A mb with 2 x PCIe would be a small price to pay.
    If I take this route, how do I:
    1) “copy” or “move” the o/s, C drive (installed on an 80gb hdd) to the new mb?
    2) “copy” or “move” the RAID5 array over to the new RAID6 array?

  26. Carlton Bale: Since there is an app on the C drive that “talks” to the Highpoint controller, I assumed that the data from the existing RAID needed to be cloned instead of copied/pasted to the NAS?

    If I were to copy/paste like you suggest, wouldn’t the app on the C drive be “looking” for a specific RAID controller?

  27. Dexter: There is a driver that allows Windows to mount the volume set on the RAID array as a drive letter. After that, it is just a standard drive letter; the data on it can be treated the same way the data on any hard drive would be. Windows sees the data on the RAID array just as it would the data on any hard drive, and this is how you’ll want to copy and back up the data. You seem to be talking about making an image of the RAID array or something; there’s no need to worry about this. Just back up your data to another drive, then restore it to the new array.

  28. Carlton Bale: Thx for the advice. I was probably “overthinking” the RAID + imaging. Copy/pasting the data, certainly makes the operation a lot easier.

  29. Plec, you have an almost identical setup to mine, and I’m glad to hear it takes about the same amount of time. I’m actually migrating my 8th Seagate 750 into the array now, which should top the total space available to over 5TB. I have a few questions for you though.

    Is the 9550 capable of OCE (Online Capacity Expansion, I think it stands for)? I didn’t think it was, meaning that if you added drives at a later time you couldn’t add them to that specific array; you’d have to make a new RAID 5 or whatever on the card. This was a big downside for me and is why I stuck with the 9650.

    Also, can you let me know exactly what kind of case you have? When I purchased mine it said 17 bays and I never bothered to personally count. I realize now that it’s actually only 14 bays (they counted 3 of them twice, as both 3.5″ and 5.25″ or something stupid; I fell for their trick!), so I’m going to need a new case here eventually. I’m actually looking for a case that can hold 18 drives (16 in the array + 1 CD-ROM + 1 HD for the OS).


  30. About the case, I’d prefer an 18 bay case, but the 17 you have will suffice since I can just pull out the cd rom and use an external one if I ever need it.


  31. Nate – I have a 9550-16ML that I got for a really great price. I was using Adaptec controllers and I have to say that 3Ware has impressed the heck out of me from the second I installed my first one. Because of that I ended up selling off my inventory of Adaptec controllers.

    That said, the 3Ware 9550 absolutely supports OCE. I’m not just saying that because the manual or spec sheet says so, but because I have expanded my array successfully a few times now. I also know that the 9500 controllers support it, as another system I built with a 9500-12 has already been expanded a few times with no issues.

    When I say expanded, I mean pop in another 750 Seagate, add it to the existing array, and when the expansion (3ware calls it migration) is complete, the OS will see the additional space. From there, run Diskpart.exe and extend the volume and you’ve got the full space.

    The only thing I can see that the 9650 offers that the 9550 does not is RAID 6. While it would be nice to have RAID 6, I don’t think **knock on wood** that I will suffer multiple HD failures at the same time, so I will probably stick with RAID 5. At this point, getting a replacement 750 from Newegg takes only a day, so I guess I will take my chances; in my experience, Seagate certainly stands by their product.

    As for a case well I don’t really know what your options are at this point. I haven’t seen any cases to my knowledge with 17 external bays. However, if you are looking for a really nice case and don’t mind spending the money then pick up a Super Micro SC836 which has 16 Hotswap SATA Drive Bays and a redundant power supply included. They also just released a 24 bay unit that looks really sweet as well. While I am a very big fan of SuperMicro I think that they are quite pricey for what I am looking to do.

    With that in mind, I went with a case that has 10 external 5-1/4″ bays and added three 5-bay hot-swap enclosures, which gives me a total of 15 hot-swap bays plus the 10th bay for my DVD drive. The case also has two internal 3-1/2″ bays, one of which I use for my boot / OS drive; the other will be used for my 16th drive if / when I fill up the hot-swaps. Although this did require a little customization, I have to say I am thrilled with the results. This case is one of the best-made I’ve ever worked with, right up there with SuperMicro, which in my experience makes the best server cases.

    Here is a picture of two of them that I have built.

  32. Pheblanc:
    Could you please give me a part #/model # for the Black hard drive tray in your pic?

    Wow, that’s really impressive (the case). And thanks for the company name on the 16 & 24 bay cases; I’ll have to check them out.

    I have the 9650-16ml and yeah, it supports RAID 6, but I (like you) decided that the chances of two HD failures are pretty slim, so I have mine set up for RAID 5 (like I said in previous posts, I think the chance of an HD failure AND a bad parity bit is much higher).

    As for the migration / diskpart, yeah, I’ve expanded my configuration about 4 times now (I started with 4 HDs and now I’m up to, I think, 8 or 9).

    When you’re expanding your array, do your hard drives make a lot of noise? Mine make one where it sounds like the heads are crashing. The first time I heard it while expanding I was scared, but I’ve noticed it makes those awful noises (not just regular reading noises) every time I expand. Quite discomforting, since my server sleeps in my bedroom closet, so I hear what it’s up to when I’m lying down.

    I do have 1 more question for you. On your server, do you have anything that manages temperature / heat and shuts down the server if it gets too high? As I said, my server is in my closet (so not the best ventilation) and I’m always paranoid about it overheating. I have a temperature monitor installed that came with my motherboard (Tyan System Monitor or whatever), but if it starts to get too hot it doesn’t have an option to shut itself down. I’ve seen some 3rd-party software for this, but was wondering if you’ve had any personal experience with it.

    And again great job on the case.

    Thanks again,


  34. My Seagate drives have a huge amount of head-seek noise that is very apparent during expansions, etc. My Samsung drives are pretty much inaudible regardless of what is going on. The Areca controller monitors drive temps and logs an error if they are too high (sounds an alarm, sends e-mail, etc.).

  35. Dexter – As you can see in the pictures, I am using 3 Super Micro enclosures. These are SATA II enclosures with built-in fans and temp monitors with alarms. I’ve been using these enclosures for quite some time (since back in the SCSI days). I’ve tried others in the past and keep going back to Supermicro, as I think they just make a high-quality product compared to some of the cheap stuff that is out there.

    The Part Numbers are:

    CSE-M35T-1B (Black)
    CSE-M35T-1 (White)

    These are made to fit in to a specific Supermicro chassis but just so happen to be the same size as 3 5-1/4″ Drive Bays which is what led me to this latest creation.

    You can find these in several places but believe it or not one of the cheapest places I’ve found them at is Buy.com although they almost never seem to have the black ones.

  36. Nate – I’ve copied several TB of data onto my arrays, and I can’t say the sound is any different from when I expand the array. That said, I don’t feel that the drives I’m using (exclusively Seagate) are that loud. One of the other units I have built has been expanded using other drives (mix and match, whatever is on sale), and I feel that has lowered the performance a little and does seem a little louder.

    When it comes to cooling, I’ve had one of those “live and learn” experiences with my custom rig. On one of the hottest days of the year, I was notified by one of my HDD enclosures that I was reaching a temp threshold. Granted, it was well over 100 degrees in the room it was in, and very humid. Silly me was using the crappy stock fans that came with the case (3 90mm & 1 120mm).

    Since then, I’ve replaced all of the fans with high-volume / high-velocity fans and connected them to my ASUS motherboard, which has the “Q-Fan” option that lets the motherboard control the speed of the fans depending on CPU / MB temp, and I have to say it works wonderfully. The Supermicro enclosures also have fairly decent 90mm fans in them, so airflow hasn’t been a problem since. It is great when you reboot the system: all of the fans go to full velocity before the MB takes over the speed control (just like HP DL580s do), and it sounds like the thing is going to lift off, and then it is so nice and quiet.

If you have a comment or question, please post it here!