1. Hi, nice article.
    Have you tried any of the options in your list?
    And have you looked into expanding any of these partitions?

  2. Yes, I did try all three. Unfortunately, I didn’t plan ahead and had to use option 3 (create a second volume set) for my production data. It was a standard NTFS/MBR partition, so I couldn’t exceed 2TB with it. I expanded it from 500GB->1TB->1.5TB->2TB, then added another 500GB volume set for 2.5TB total. I used the instructions I documented in this article about Windows RAID array capacity expansion and it was painless.

    I really wanted to switch to Linux, but I had zero confidence around expanding partitions. GParted is easy to use, but it gave me errors viewing large partitions (2.0TB total, combination of NTFS and ext3), so I gave up on it. I looked into command line tools, but I felt certain I’d be more likely to erase everything than successfully expand the intended partition. Windows does have some ease-of-use advantages.

  3. XP Service Pack 2 doesn’t support GPT disks?

  4. Nanashi:

    From the original post: “You will need Windows XP Service Pack 2 or later or Windows Server 2003 Service Pack 1 or later for GPT support.”

  5. Suggest you go back and double check that advice about 32-bit Windows XP Service Pack 2. It does NOT support GPT disks.

    Microsoft explicitly said this when they announced the new limits for Windows Server 2003 and Windows XP 64-bit.

  6. Richard: Thanks for pointing that out. I’ve corrected the post to say Windows XP x64 Edition in place of Windows XP SP2. I’d like to say it was my original intention to say x64 instead of SP2, but to tell you the truth I can’t remember what I was thinking.

    For additional reference, here is a Microsoft GPT FAQ document that clearly states XP x64 is the only version of XP that supports GUID Partition Tables.

  7. Hi, I could not make this work under Linux. I have 4x1TB with Adaptec RAID 5. I created a 200GB RAID volume, left the rest alone, and installed CentOS 5 x64. Although it installs, GRUB panics after the initial reboot and cannot load the OS. Jonathan

  8. I think the problem is with GRUB; I’ve had similar problems. I was viewing a 2 TB partition in GParted and it would not display correctly, much less resize. I’m far from a Linux expert, but I do know that large volumes are better managed from a terminal window. I spent some time investigating this but can’t remember the exact command. Personally, I’d much rather have a well-designed GUI that minimizes my chances of doing something really stupid – like deleting all my partitions just because I typed a “b” instead of a “d”. Unfortunately, I don’t think that is an option.

  9. Thanks for this information; I think you summed everything up nicely. I’ve been meeting a few customers here and there who have hit the 2TB limit, but I imagine we’ll be seeing more.

  10. Hi and thanks for the info. I’m setting up a Linux server with 2.5TB worth of space (6 500GB HDs RAID 5), so it’s always good to have information.

    Jonathan: When I was first installing Linux on my desktop system with a RAID 0 configuration, I had the same issue at first. Just a suggestion, if you are using DMRAID, make sure BEFORE you install GRUB but AFTER you get the OS installed that you first install libdevmapper and DMRAID in a chroot of your system. Also, once GRUB is installed, edit the menu.lst to remove ANY lines of “savedefault”, as that will prevent it from loading as well.

    If you really want or have any problems, email me (pyrodrake1134 {at} gmail {dot} com). I wrote out a tutorial for a friend of mine, and it works fairly well using Ubuntu 7.04.
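
    For anyone following the dmraid suggestion above, the chroot steps might look roughly like this on an Ubuntu 7.04-era system (the /mnt mount point and the device-mapper name below are assumptions; adjust them for your own setup):

    ```shell
    # From a live CD / rescue shell, with the new root mounted at /mnt
    sudo mount --bind /dev /mnt/dev
    sudo mount --bind /proc /mnt/proc
    sudo chroot /mnt /bin/bash

    # Inside the chroot: install device-mapper and dmraid, then reinstall GRUB
    apt-get install dmraid                      # pulls in libdevmapper as a dependency
    grub-install /dev/mapper/nvidia_xxxxxxxx    # hypothetical dmraid device name
    # Finally, delete any "savedefault" lines from /boot/grub/menu.lst
    ```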

  11. I heard that you can already use more than 2 terabytes on a single hard disk if it was using NTFS since FAT32 is the one that has the 2 terabyte limit. NTFS has a limit of 256 terabytes. Correct me if I’m wrong.

  12. UnknownMan: Read the article; you are wrong. NTFS has a huge theoretical limit, but the partition tables used by default in all versions of Windows have a much lower limit (2TB). You have to have a version of Windows that supports GUID partition tables, and you have to format the “big drive” with the NTFS file system and specify that it use GUID instead of MBR partition tables. Without GUID, the limit for NTFS is 2 TB.
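
    (For anyone wondering where the 2TB number comes from: MBR stores a partition’s starting sector and length as 32-bit sector counts, and drives of that era used 512-byte sectors, so the arithmetic is easy to check in a shell:)

    ```shell
    # Largest partition addressable by MBR: 2^32 sectors of 512 bytes each
    echo $(( 2**32 * 512 ))           # 2199023255552 bytes
    echo $(( 2**32 * 512 / 2**40 ))   # = 2 (TiB)
    ```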

  13. This thread is EXTREMELY helpful for our situation where we’re producing over 2.5TB of data per week and we’re stuck with Windows on both the client and server side due to lab equipment vendor requirements. It’s not practical for us to keep moving data around and/or have multiple 2TB mount points etc. We have 6 scanners (fancy confocal microscopes) producing data, potentially simultaneously, theoretically continuously, but as a practical matter fewer and not all the time. The Windows workstations controlling these scanners and processing these images expect to be able to write the data to a Windows share. So my question is, if we built a Windows fileserver with >2TB filesystem, could we expose it as a normal Windows share and mount it from the lab workstations as usual? or would there be some complications/barriers that aren’t self-evident? THANKS!

  14. Lee: In general, there shouldn’t be any issues sharing a >2TB volume with clients. The server takes care of the file system, which is transparent to the client.

  15. Yes, one single logical volume is limited to 2 TB, but we can have multiple 2 TB logical volumes on one single storage array. What we can do is use Windows “Disk Management” to “span” (combine) as many 2TB logical volumes as we want, extending them into one single >2TB unit.

  16. Note that to boot from a GPT volume you must have EFI firmware. Intel Macs have it, but even then you must make sure every OS installed on the HD boots through EFI, instead of booting through the CSM, which would require an MBR.

    1. No, you can boot from GPT on BIOS. There’s no secret to it, really. Debian for example does that just fine.
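
      (For reference, GRUB 2 handles this on BIOS machines by stashing its core image in a small, unformatted “BIOS boot” partition. A rough sketch with parted — /dev/sda is only an example device, and these commands are destructive, so treat this as an illustration rather than a recipe:)

      ```shell
      # Create a GPT label plus a ~1 MiB BIOS boot partition for GRUB's core image
      parted /dev/sda mklabel gpt
      parted /dev/sda mkpart grubcore 1MiB 2MiB
      parted /dev/sda set 1 bios_grub on
      # The rest of the disk can then hold ordinary partitions
      grub-install /dev/sda
      ```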

  17. I have an HP server running Windows Server 2003 SP2 on a 32-bit system.

    I have an MSA attached with eight 750GB SATA hard drives and want this to be all one drive for storage. I went through and tried option 1 but can’t get more than 2TB of drive space to be recognized. Anyone have the same issues?


  18. This is all great info… however, I have an IP-SAN with >2TB shares on it, and I realize that I can’t see these in XP 32-bit. My next question is: if I connect my IP-SAN to a Win2k3 R2 server, can I share partitions >2TB out to my XP 32-bit stations?

    I can’t seem to locate this info…
    Thanks for your responses!

  19. How I created a 2095.55GB NTFS partition (larger than 2TB) in Windows Vista 64 (3x750GB RAID0):
    Go into Disk Management and right-click on the new disk on the left-hand side. ->>Convert to GPT<<- (the disk must be empty and unformatted.) Please read up on what you’re doing BEFORE you do this, I CAN’T OVER-EMPHASIZE THIS, PERIOD. You can now create partitions larger than 2 TiB. 🙂 Originally, the drive was split in two with a 2048GB partition and a smaller, almost 50GB partition. I should buy a RAID card. Best of luck
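
    (For what it’s worth, the same conversion can be scripted with diskpart on Vista — again, only on an empty disk; “disk 1” is just an example, and the clean step wipes the disk:)

    ```
    select disk 1
    clean
    convert gpt
    create partition primary
    ```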

  20. I have created a server with Vista Ultimate. It has a 250GB boot drive and 15 1TB drives in a RAID 5 configuration using a 3Ware RAID card, and it works wonderfully. This article and thread were extremely useful in helping me do this, so my thanks to all concerned. I can share the 13.5TB volume in Vista, and XP Home clients have no problem accessing the data. I just thought I’d share my experience here.

  21. For anyone battling to do so (as we just have been), it seems that you CAN generate single-drive letter volumes well over 2TB under 32 bit Windows XP – without needing GPT.

    Proceed as follows (this is a specific example of a machine we have just built):

    1. Use a RAID controller such as the 3Ware 9650 to handle a number of drives (six x 1TB in our case).

    2. DO NOT use the RAID features in the card; configure it so that each Unit is one physical drive. So as you exit the RAID controller setup, it reports (in our example) six individual 1TB units (931GB in fact).

    3. Enter Logical Disk Manager from Administrative Tools, and the six (in our case) logical drives show up as “Basic” (MS parlance) disks. Right-click in the left-hand panel beside each and choose “Convert to Dynamic Disk” – you are offered tickboxes, so check all that apply and click OK.

    4. Once those logical drives are converted to ‘Dynamic’, you can then right-click in the unallocated space of any one of them and elect to create a spanned volume: choose to include all the logical drives you just made Dynamic (5 x 931GB in our case), and one new span volume is created with a single drive letter (5.45TB in our case).

    This one volume then appears in Windows Explorer as a single drive.

    You need current WinXP 32 bit (we used SP3 though SP2 should be fine) to do this.

    Credit to my head of software for spotting this.

    Good luck all.
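
    (The Disk Management steps above can also be scripted with Windows’ diskpart if you prefer; the disk numbers below are placeholders for the data disks, so treat this as a sketch rather than a tested recipe:)

    ```
    rem span.txt -- run with: diskpart /s span.txt
    select disk 1
    convert dynamic
    select disk 2
    convert dynamic
    rem (repeat "convert dynamic" for each remaining data disk)
    create volume simple disk=1
    extend disk=2
    rem (repeat "extend disk=N" for each remaining disk)
    assign letter=E
    rem then format the new volume NTFS from Explorer
    ```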

    1. Too bad you did it this way. You will suffer the following problems.
      1. You are throwing away the capability of the 9650 by using software RAID. It doesn’t hurt, it just slows you down.
      2. You are adding load onto your processor to do the parity calculations that are NOT being handled by the 9650.
      3. You could have saved the price of the 9650 and used 1-3 cheap $20 software-RAID cards to accomplish the exact same thing.

      What you seem to have missed is that this article is about how to make the low-level hardware (the 9650 card and BIOS) present the RAID array as a single drive. You are talking to the drives individually (thereby bypassing the MBR problem by running each physical disk separately) and then using Windows to combine them together in software. As the author pointed out, the Windows limit for NTFS is 256TB, so you can add a lot more drives using your method. However, you will still suffer the waste of not using the hardware parity capability of the 9650, adding that workload to the CPU.
      Best of luck….

  22. I like Mike HH’s idea, but I would think that if you combine option 3 with Mike HH’s idea, you would end up with RAID 5 error protection AND a drive larger than 2TB.

    I’m currently attempting option 3 right now, so we’ll see how it goes. Initializing 2TB RAID 5 arrays takes hours.


  23. The problem with spanned Dynamic Disks is that a failure of one drive can result in loss of all of the data; there is no redundancy. But if you keep backups or if you just need a very large temp drive, this is an easy option.

  24. This is great, but what about Windows Vista?
    Are there any limitations in ANY of Vista’s editions? For example, do Home Basic or the other low-end editions support GPT volumes over 2 TB? I can’t seem to find any info on this, and I have called Microsoft pre-sales and they have proven to be *surprise* stupid.

    Any info would be appreciated about >2TB volume / GPT support in all versions of Vista.
    I suspect that Vista Enterprise / Ultimate 32 and 64 do support this, but what about the lower-end versions?

  25. I’m not positive, but I believe all versions of Vista support GPT. It’s part of the core OS.

  26. Response to #22 and #24:
    Combine hardware/pseudo-hardware RAID 5 with Dynamic Disks and you can break the 2TB limit in 32-bit Windows XP without changing anything.

    You also don’t sacrifice reliability, because that’s handled by RAID 5.

    The only cost is a compatibility liability when you upgrade to Vista. I would recommend backing up and rebuilding the whole RAID stack if you had to upgrade.

    I have this setup. I have 9 x 1TB drives (10 actually, since I set one as an online hot spare for my RAID card), giving me a total RAID 5 workable space of 8TB, which I then split into four 2TB RAID 5 volumes.

    So XP sees four 2TB drives.

    Then I proceed to convert all four into dynamic drives and span them; and voilà, I get an 8TB single drive letter in 32-bit XP! I broke the 2TB barrier!

    Additionally, my card supports OCE, so if I wanted to, I could add another two 1TB drives to the stack, do the OCE, and create another 2TB volume for XP to extend the dynamic spanned volume into.

    However, I probably won’t go beyond 2 more drives on my 9-drive RAID. RAID 5’s reliability starts going down once you get to 12 drives or more. In the 15+ drive realm you’ll need RAID 6, and my card doesn’t support that right now.

    Network file sharing works; FTP works; even “scandisk” works. Windows even successfully recovered lost clusters from this drive when I didn’t shut down properly (the OS crashed). Everything works.

    Obviously, I can’t upgrade to Vista without a huge problem, and I can’t go to Linux either, so you have to be sure before you start down this path.

    Additionally, if you follow this, know that I now have a *HUGE* problem trying to back up 8TB of data reliably. I don’t have a good solution, so for now I’m only backing up key data and not the whole volume.

    🙁 I know, it’s sad, but how do you back up 8TB cost-effectively in 2008?

  27. For those interested…

    One possible idea for volumes larger than 2TB on 32-bit XP, while still maintaining some level of redundancy: when you configure the RAID arrays, break the individual drives up into sections.

    Since all 6 drives are still used in each partition/array, there is no performance loss for read/writes.

    For example: if you have 6 x 1TB drives, create a RAID 5 array **with spare** using the first 250 GB of each of the 6 drives. Repeat the same process with the next 250 GB of each drive and so forth, making sure to set the same physical drive as the hot spare for each array.

    At the end of it, you will have 4 partitions in RAID 5, each showing 1.00 TB free (250 GB used for parity in each partition). You can then use Windows Disk Management to create a spanned set with all 4 partitions for a total of 4.00 TB. You lose 1TB of space for parity and 1TB for the hot spare, but it’s 4 TB of usable space with RAID 5 protection. If you need more space, simply use more drives, reducing the amount of space used on each physical drive so that each RAID 5 array stays under the 2TB maximum.

    If one drive does happen to fail, the RAID 5 spare will keep your data in place, because a single drive failure will be managed by the RAID 5 redundancy across all of the created partitions. The spanned array *might* temporarily become unavailable (I haven’t tested that) until the rebuild completes, but once completed, the Windows spanned array should remount as normal.

    Be sure to replace any defective drive ASAP in case a second one decides to die without a spare available.

    Although…if 2 drives fail, you may lose data. But this would occur even if you were able to use the entire physical disks in a RAID5 array.

    Just an idea that might work for those who are using the 32-bit version of XP.

  28. I am having trouble with Option 3. Perhaps this is semantics. My end goal is to have a single RAID 5 Array with 6 x 1TB drives and have 5TB of it total be usable in Windows XP 32 bit. However, I do not need the array all to be a single volume. A single 1TB volume and two 2TB volumes would be acceptable, for example.

    I am using a Highpoint RocketRAID 2320 eight-channel card. I have been using them for years. I created the single RAID 5 array at the BIOS screen, but have NO idea how to create the “Volume Sets.”

    Volume Sets are created by the RAID controller and reside on top of RAID Sets. A Volume Set is presented to the operating system as a single, virtual disk drive. This is a little confusing, but the RAID level (RAID level 5 in this example) is determined when the Volume Set is created (not when the RAID Set is created). It is possible to have multiple Volume Sets residing on the same RAID Set, and the Volume Sets may even use different RAID levels.

    Any idea where to start looking to create these volume sets? When I go to Disk Manager in Windows XP Pro, it still only sees the 2TB maximum on the single volume that is there. I do not know where to go to create additional volume sets. Can someone provide step-by-step instructions on this? Do only specific controller cards support “Multiple Volume Sets”? If so, can somebody provide a part number for their RAID controller and perhaps some instructions specific to that model?

    I can say for certain there is no way to create multiple volume sets within the RAID controller’s BIOS. Under the create menu there are only options for:

    RAID 0
    RAID 1
    RAID 1+0
    RAID 5
    RAID 5+0

    And there is nothing mentioning volumes anywhere else. There is a delete menu for deleting arrays, a view menu for viewing the drive / array configuration, an initialize menu that ALWAYS says NO DISK AVAILABLE regardless of the RAID configuration, and a settings menu that lets you set the bootable array if you have more than one. That’s it.

    So, essentially, being a bit of a retard (apparently) I’m looking for something akin to this:

    1) Insert Drives
    2) Go into RAID Bios
    3) Create RAID 5 array
    4) Do some magic (maybe?)
    5) Install Windows
    6) Do some other magic here (maybe?)

    Etc… If I see that I cannot complete, say, step 4, then I know I need another RAID controller or another approach.

    Thanks for the help!

    1. My older 3ware card is unable to create any RAID partitions over the original 2TB size. Not only that, but it could only create the single 2TB partition and no more. You should check the newer controller cards and verify they can handle more than 2TB.

  29. Volume sets are created within the RAID controller firmware before any drive / partition is presented to the operating system. It could be that your RAID controller can’t create multiple volume sets. I performed the setup in the RAID controller BIOS setup prior to booting to Windows.

  30. I have an Infortrend external RAID. Does anyone know if the Adaptec 29320 card should be able to see the RAID 5 disk at the BIOS level (under the Ctrl+A Adaptec utility)?

    It sees the IFT device, but when I go to disk utils it says this is not a disk.


  31. Well, I have read all of the threads and it looks as though I am the only person using RAID 6…. We just upgraded our NAS boxes to 6 x 1TB drives, and I wanted RAID 6 since our data is too valuable to lose to a failure of more than one drive. I have had this happen to me in the past on a RAID 5, and it’s not fun explaining how two drives died and all the data is gone.

    I have already gotten our boxes into this configuration, and I was hoping that someone can help me determine which way to go with this, while staying in a RAID 6 config.

    1. RAID 6 is handled at the hardware level, and whether an array is RAID 5 or 6 should be invisible to the OS. So create the RAID Set in the RAID controller BIOS, then create a single 4TB Volume Set, and then format that in the OS.

  32. First: Very good article!

    My storage partition is already a Volume Set on RAID 5, and now I want to extend it from 2TB to 3TB (4 HDs, 1TB each), so I ran into this problem (and found your page).

    My question: Is there still no way to convert an MBR partition to a GPT partition? I mean, it’s a hard job to export the 2TB of data to another drive first, just to convert the partition scheme and copy the whole thing back…

    It would be great if there is a tool (like Acronis Disk Director) or similar that can do that…

  33. Hello Carlton.

    Thank you for the informative article.

    I just recently setup a new Raid 5 array and I had a few questions as I want to ensure I won’t run into any problems if I try to add new HDDs in the future.

    System Specs:

    Motherboard: Asus P5N-T Deluxe (Nvidia 780i Chipset)
    HDDs: Three 1 TB Drives (Western Digital WD1001FALS Caviar Black 1TB SATA2 7200RPM 4.2MS)
    OS: WIN XP x64 (SP2)

    I am using the latest BIOS and Nforce drivers which I have read (not tested yet) support 64-bit LBA addressing so I am hoping I won’t have any problems with trying to get the Array greater than 2 TB.

    I created a new Raid 5 array (set) using all 3 drives (1.81TB in total) from within the Bios pressing F10 to get into the RAID Utility.

    WDCWD1001FALS – 931.51 GB – Channel SATA 0.0
    WDCWD1001FALS – 931.51 GB – Channel SATA 0.1
    WDCWD1001FALS – 931.51 GB – Channel SATA 1.0

    I installed WIN XP x64 successfully onto the C: drive. I used nLite and slipstreamed the Nvidia nForce chipset drivers onto the XP x64 disc, which was fairly easy to do. (I don’t have a floppy drive installed in this PC.)

    WIN XP (C:) 45 GB
    WIN 7 (D:) 45 GB
    Data (E:) 1.8 TB

    I only have Win XP installed for now, but in the future I would like to dual boot with Win 7 so I would like to stay away from Dynamic Disks and GPT as I’ve read I won’t be able to dual boot if I do so.

    My question is: what would be the easiest way to add another two drives to my setup?

    Physically install the 2 HDDs, add the 2 drives to the RAID array (set), and then hopefully the BIOS will let me create a 2 TB RAID volume, which should show up in Windows as an un-partitioned logical drive that I can just format with NTFS while keeping it as a Basic Disk? I don’t mind that I will have an additional drive letter (partition listed in My Computer), as this is my personal data so I don’t have any clients, but I do use this PC as a media server streaming everything to my PS3.

    Or, am I better off converting to dynamic disks and just don’t dual boot?

    I have all my old data on two 500GB Seagate drives right now, but I am holding off on transferring the data over just in case I have to rebuild this array, losing all my data.

    I am also thinking of just ordering another 2 WD 1 TB drives right now instead of doing it 6-12 months from now.

    My main reason for RAID 5 is redundancy, instead of just using the drives separately and gaining 3TB instead of 2TB. I don’t back up my data; I want to rely on this RAID setup instead, so that if a drive does fail, I can just pop in a replacement drive and I’m good to go.

    Should I buy 2 different HDDs and create a separate RAID array for the OS instead? E.g., maybe I can create a mirrored array for the OS, and create a RAID 5 array for my data.

    Do you have any suggestions? I am pretty excited about Raid, but extremely worried at the same time.



    1. I would leave the RAID array formatted as a basic drive for maximum flexibility. There is really no advantage to other options.

      The thing to keep in mind with a RAID 5 array is that data is striped over multiple disks, so no single drive is readable if the array fails (beyond the one-at-a-time drive failure allowed by RAID 5). RAID 1 mirrors the data between 2 drives, so either is readable if the other drive or the controller card fails. And you will not be able to expand your RAID 5 array with your built-in controller. All things to consider.

  34. Hi,

    Interesting discussion here… my issue is that I have bought a 3TB Lacie Big Disk Quad that I want to use with my xp 32-bit workstation. I was advised by Lacie to do the formatting on a Vista machine so this is what I did:

    1) (VISTA) Disk was Mac formatted so I initialised the 3TB drive causing 1 partition of 2TB and 1 partition of 1TB to appear (in Disk Management console). Both partitions were “unallocated” at this time;

    2) (VISTA) I formatted the first 2TB partition as NTFS with MBR (since I want to use the drive with XP 32-bit, GPT is not an option). It formatted fine.

    3) (VISTA) Now the problem I’m having is that I can’t do anything with the remainder (unallocated) part of the drive. All I can do is get Properties, there is no way to create and format a partition.

    My idea was to create the second partition and then span the NTFS partitions together to get a virtual 3TB drive.

    Any suggestions getting round 3) above would be greatly appreciated.


    1. Garret: Unfortunately, it doesn’t matter how it’s partitioned; the OS still has to be able to address the entire drive before recognizing the partitions. And if you are not using GPT, you will not be able to address beyond 2TB, no matter how many partitions you have. So use GPT and then set up the partitions however you want; that’s the only way to go beyond 2TB in your setup.

  35. EP45-UD3R v.1.1 need help with 64bit and RAID - Page 5 - TweakTown Forums
