How to Break the 2TB (2 TeraByte) File System Limit

In 2006, a RAID array linking together multiple 500GB drives was required to reach the 2TB (2,000 GB) limit and operating systems were just starting to implement solutions to break through this barrier.  Now, with single hard drives exceeding capacities of 4 TB (4,000 GB), the need to break this 2TB limit is extremely common, and your operating system may not be able to handle it. If you don’t plan ahead, your operating system will only be able to address the first 2 TB and all that extra storage beyond 2 TB will be unusable. Here is an overview of some of the methods you can use to get around the 2 TB limit.
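
The 2TB figure isn't arbitrary: the Master Boot Record (MBR) partition scheme stores sector addresses in 32-bit fields, so with standard 512-byte sectors the largest addressable volume works out to 2 TiB (about 2.2 TB in decimal units). A quick sketch of the arithmetic:

```shell
# MBR stores the starting LBA and sector count as 32-bit values.
# With 512-byte sectors, the largest addressable volume is:
#   2^32 sectors * 512 bytes = 2 TiB
echo $(( 2**32 * 512 ))            # 2199023255552 bytes
echo $(( 2**32 * 512 / 1024**4 ))  # 2 (TiB)
```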

Operating System Requirements:

  • Windows XP 32-bit: If you are using Windows XP 32-bit, you are out of luck for native OS support. XP was designed well before this barrier was approached and was not designed to exceed it. Some drive manufacturers offer driver software that allows the full capacity of large drives to be accessed under XP 32-bit (Hitachi GPT Disc Manager, Seagate Disc Manager), but user beware: this support is not native to the operating system and can lead to reliability and data recovery issues. Your best bet is to upgrade to a modern operating system.
  • Windows XP 64-bit, Windows Vista, Windows 7, Windows 8, or later: All of these operating systems support GUID Partition Table (GPT), a newer partitioning scheme required to address more than 2TB of drive space.
    • If using the disk as an additional data drive, either the 32- or 64-bit versions of Vista/7/8 can be used.
    • The 64-bit versions of Vista/7/8 are required to use large drives as the boot disk (the computer must also be equipped with a newer-style EFI / UEFI BIOS.)
  • Mac OS X 10.6 and later is required to access large volumes. GUID Partition Tables (GPT) are required.
  • Linux requires kernel version 2.6.x or later (the logical volume size limit is 16TB on 32-bit CPUs and 8EB on 64-bit CPUs.) The kernel must also be compiled with CONFIG_LBD enabled, which is almost always the default. GUID Partition Tables (GPT) are required.

Breaking 2TB Option 1 – Use Appropriate Version of Windows with NTFS and GUID Partition Table (GPT) partitions. It is possible for Windows to use NTFS partitions larger than 2TB as long as they are configured properly. Windows requires that GUID Partition Tables be used in place of the standard Master Boot Record (MBR) partition tables. You will need Windows XP x64 Edition or Windows Server 2003 Service Pack 1, Windows Vista, Windows 7, Windows 8, or later for GPT support. (It is possible to mount and read existing GPT partitions under Windows XP and 2000 using GPT Mounter from Mediafour; however, their MacDrive product does not support GPT partitions.) There are a few stipulations for GPT disks:

  • First, the system drive on which Windows is installed can’t be a GPT disk unless you have the 64-bit version of Windows Vista/7/8 and a UEFI system BIOS; otherwise, it is not possible to boot from a GPT partition.
  • Second, an existing MBR partition can’t be converted to GPT unless it is completely empty. You must delete everything and then convert, but at that point you might as well just create a new GPT partition. Read this Microsoft TechNet article for more details on GPT. To create GPT partitions, use the diskpart.exe command-line utility or right-click in the Disk Management Console (click here for more details.)
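
As a concrete sketch, an interactive diskpart.exe session looks roughly like this — assuming the large array shows up as Disk 1 and holds no data (the clean step erases the disk, so double-check the disk number from "list disk" first):

```
DISKPART> list disk
DISKPART> select disk 1
DISKPART> clean
DISKPART> convert gpt
DISKPART> create partition primary
DISKPART> format fs=ntfs quick
DISKPART> assign
```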

Breaking 2TB Option 2 – Use Linux with CONFIG_LBD enabled. Most Linux file systems are capable of partitions larger than 2 TB, as long as the Linux kernel itself is. (See this comparison of Linux file systems.) Most Linux distributions now have kernels compiled with CONFIG_LBD enabled (Ubuntu 6.10 does, for example.) As long as the kernel is configured/compiled properly, it is straightforward to create a single 4TB ext3 (or similar) partition. In general, this applies to Linux kernels 2.6.x and later.
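
On the Linux side, a minimal sketch uses parted (which, unlike the classic fdisk, handles GPT labels and >2TB partitions). This assumes the array appears as /dev/sdb — verify the device name first, since mklabel destroys the existing partition table:

```
# Confirm the kernel supports large block devices (usually the default):
grep CONFIG_LBD /boot/config-$(uname -r)

# Label the disk GPT, create one partition spanning it, and format it:
parted /dev/sdb mklabel gpt
parted -a optimal /dev/sdb mkpart primary ext3 0% 100%
mkfs.ext3 /dev/sdb1
```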

Breaking 2TB Option 3 – Use Standard Partitions and Create Multiple Volume Sets within a RAID array. A RAID array itself can be larger than 2 TB without presenting a volume set larger than 2 TB to the operating system. This way, you can use older file systems (that support only 2TB) and still have RAID 5 protection and more than 2 TB of total storage. To do this, put all 5 drives into a RAID set and create a 2 TB RAID Level 5 volume set — this will leave 2TB of the RAID set unused. Then create a second 2 TB RAID level 5 volume set. Boot into your operating system, create a partition on each of the 2TB virtual drives, and format each of the two 2TB virtual drives. The disadvantage is that there is not one single, large 4TB partition. The advantages are that 1) the file system and partitions remain backwards compatible and 2) both volume sets are part of a RAID 5 array, so they are protected from single-drive failures and only one drive’s worth of storage is sacrificed for RAID parity data.

  • Example: 1 RAID array of five 1TB Drives -> 2 RAID level 5 Volume Sets that are 2TB each -> 2 standard NTFS (or any other) partitions that are 2TB.

Background info on RAID Requirements:

  • Hardware RAID controller capable of 64-bit LBA addressing (for volume sizes greater than 2 TB). For this example, I’ll use an Areca ARC-1230 RAID card.
  • Several hard drives to connect to the RAID controller to create a RAID array. For this example, I’ll assume (five) 1000GB SATA drives.
  • Drives must be configured in a RAID level 5 Volume Set. For the first two examples, I’ll assume all five drives are members of the same RAID level 5 volume set. RAID level 5 requires the space of 1 drive to be allocated for parity data, so total available storage space for 5 drives will be 4 drives x 1000GB = 4TB.
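
The capacity arithmetic above generalizes: a RAID 5 volume yields (number of drives − 1) drives’ worth of usable space. A quick sanity check for the five-drive example:

```shell
# RAID 5 usable capacity = (number of drives - 1) * drive size
drives=5
size_gb=1000
usable=$(( (drives - 1) * size_gb ))
echo "$usable GB usable"      # 4000 GB usable
echo $(( usable / 2000 ))     # fits 2 volume sets of 2000 GB each (Option 3)
```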

RAID Required Background Information: I’m assuming you already have an understanding of RAID 5, its benefits, and requirements. If not, read this Wikipedia article. Now, let’s discuss the difference between RAID sets, Volume Sets, and Operating System Partitions.

  • RAID Sets are groups of drives that a RAID controller groups together to act as one single array. The individual disks are not visible to the operating system but rather are controlled by a hardware RAID controller.
  • Volume Sets are created by the RAID controller and reside on top of RAID Sets. A Volume Set is presented to the operating system as a single, virtual disk drive. This is a little confusing, but the RAID level (RAID level 5 in this example) is determined when the Volume Set is created (not when the RAID Set is created.) It is possible to have multiple Volume Sets residing on the same RAID Set, and the Volume Sets may even use different RAID levels.
  • Partitions are created by the Operating System and reside on top of Volume Sets. (Volume Sets appear as virtual disk drives to the operating system.) You can have the Operating System create one or more formatted partitions on top of a Volume Set.

Note 1: RAID Capacity Expansion. If your RAID card supports online capacity expansion, it is possible to expand any of the configurations above. For options 1 and 2, expand the RAID Set, then expand the Volume Set, then expand the Operating System partition. For option 3, expand the RAID Set, create a third RAID level 5 Volume Set, and then create a third operating system partition. To learn more about expanding a RAID array on an Areca controller running Windows, read this article.
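
For the final partition-expansion step on Windows, diskpart can grow an NTFS volume in place once the Volume Set beneath it has grown. A rough sketch of the session (volume numbers vary, so check the "list volume" output first):

```
DISKPART> list volume
DISKPART> select volume 2
DISKPART> extend
```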

Note 2: Software RAID. Software RAID adds an additional level of complexity to RAID. For that reason, I recommend using a hardware RAID controller. Having said that, I think everything mentioned above is technically possible if you are using software RAID, but I’ve never messed with it, so I’m not positive.

Filed in: Tech | Last updated: 2014-May-27

106 Comments

  • Torbjørn Moen says:

    Hi, nice article.
    Have you tried any of the options in your list?
    And have you looked into expanding any of these partitions?

  • Carlton Bale says:

    Yes, I did try all three. Unfortunately, I didn’t plan ahead and had to use option 3 (create a second volume set) for my production data. It was a standard NTFS/MBR partition, so I couldn’t exceed 2TB with it. I expanded it from 500GB->1TB->1.5TB->2TB, then added another 500GB volume set for 2.5TB total. I used the instructions I documented in this article about Windows RAID array capacity expansion and it was painless.

    I really wanted to switch to Linux, but I had zero confidence around expanding partitions. GParted is easy to use, but it gave me errors viewing large partitions (2.0TB total, combination of NTFS and ext3), so I gave up on it. I looked into command line tools, but I felt certain I’d be more likely to erase everything than successfully expand the intended partition. Windows does have some ease-of-use advantages.

  • nanashi says:

    XP Service Pack 2 doesn’t support GPT disks?

  • Carlton Bale says:

    Nanashi:

    From the original post: “You will need Windows XP Service Pack 2 or later or Windows Server 2003 Service Pack 1 or later for GPT support.”

  • Richard Steven Hack says:

    Suggest you go back and double check that advice about 32-bit Windows XP Service Pack 2. It does NOT support GPT disks.

    Microsoft explicitly said this when they announced the changing of the limits for Windows 2003 Server and Windows XP 64-bit.

  • Carlton Bale says:

    Richard: Thanks for pointing that out. I’ve corrected the post to say Windows XP x64 Edition in place of Windows XP SP2. I’d like to say it was my original intention to say x64 instead of SP2, but to tell you the truth I can’t remember what I was thinking.

    For additional reference, here is a Microsoft GPT FAQ document that clearly states XP x64 is the only version of XP that supports GUID Partition Tables.

  • Jonathan says:

    Hi, I could not make this work under Linux. I have 4x1TB with Adaptec RAID 5. I created a 200GB RAID volume and left the rest alone and installed CentOS 5 x64, and although it installs, GRUB panics after the initial reboot and cannot load the OS. Jonathan

  • Carlton Bale says:

    I think the problem is with GRUB; I’ve had similar problems. I was viewing a 2 TB partition in GParted and it would not display correctly, much less resize. I’m far from a Linux expert but I do know that large volumes are better managed from a terminal window. I spent some time investigating this but can’t remember the exact command to do so. Personally, I’d much rather have a well-designed GUI that minimizes my chances of doing something really stupid – like deleting all my partitions just because I typed a “b” instead of a “d”. Unfortunately, I don’t think that is an option.

  • Stephen says:

    Thanks for this information; I think you summed everything up nicely. I’ve been meeting a few customers here and there who have hit the 2TB limit, but I imagine we’ll be seeing more.

  • PyroDrake says:

    Hi and thanks for the info. I’m setting up a Linux server with 2.5TB worth of space (6 500GB HDs RAID 5), so it’s always good to have information.

    Jonathan: When I was first installing Linux on my desktop system with a RAID 0 configuration, I had the same issue at first. Just a suggestion, if you are using DMRAID, make sure BEFORE you install GRUB but AFTER you get the OS installed that you first install libdevmapper and DMRAID in a chroot of your system. Also, once GRUB is installed, edit the menu.lst to remove ANY lines of “savedefault”, as that will prevent it from loading as well.

    If you really want or have any problems, email me (pyrodrake1134 {at} gmail {dot} com). I wrote out a tutorial for a friend of mine, and it works fairly well using Ubuntu 7.04.

  • UnknownMan says:

    I heard that you can already use more than 2 terabytes on a single hard disk if it was using NTFS since FAT32 is the one that has the 2 terabyte limit. NTFS has a limit of 256 terabytes. Correct me if I’m wrong.

  • Carlton Bale says:

    UnknownMan: Read the article; you are wrong. NTFS has a huge theoretical limit, but the partition tables used by default for all versions of Windows have a much lower limit (2TB). You have to have a version of Windows that supports GUID partition tables and you have to format the “big drive” with the NTFS file system and specify that it use GUID instead of MBR partition tables. Without GUID, the limit for NTFS is 2 TB.

  • Lee Watkins says:

    This thread is EXTREMELY helpful for our situation where we’re producing over 2.5TB of data per week and we’re stuck with Windows on both the client and server side due to lab equipment vendor requirements. It’s not practical for us to keep moving data around and/or have multiple 2TB mount points etc. We have 6 scanners (fancy confocal microscopes) producing data, potentially simultaneously, theoretically continuously, but as a practical matter fewer and not all the time. The Windows workstations controlling these scanners and processing these images expect to be able to write the data to a Windows share. So my question is, if we built a Windows fileserver with >2TB filesystem, could we expose it as a normal Windows share and mount it from the lab workstations as usual? or would there be some complications/barriers that aren’t self-evident? THANKS!

  • Carlton Bale says:

    Lee: In general, there should be no issues sharing a >2TB volume with clients. The server takes care of the file system, which is transparent to the client.

  • anoclon says:

    Yes, one single logical volume is limited to 2 TB, but we can have multiple 2 TB logical volumes on one single storage array. What we can do is use Windows “Disk Management” to “span” together as many 2TB logical volumes as we want and “extend” as many 2TB volumes as we want in order to have one single >2TB unit.

  • Yuhong Bao says:

    Note that to boot from a GPT volume you must have EFI firmware. Intel Macs have it, but even then you must make sure every OS installed on the HD uses EFI to boot, instead of booting through the CSM, which would require an MBR.

    • Bob says:

      No, you can boot from GPT on BIOS. There’s no secret to it, really. Debian for example does that just fine.

  • Richard Unrein says:

    I have an HP server running Windows Server 2003 SP2 on a 32-bit system.

    I have an MSA attached with (8) 750GB SATA hard drives and want this to be all one drive for storage. I’ve gone through and tried option 1 but can’t get more than 2TB of hard drive space to be recognized… anyone have the same issues?

    Thanks

  • Scott Z says:

    This is all great info… however, if I have an IP-SAN with >2TB shares on it, I realize that I can’t see these in XP 32-bit. My next question is: if I connect my IP-SAN to a Win2k3R2 server, can I share partitions >2TB out to my XP 32-bit stations?

    I can’t seem to locate this info…
    Thanks for your responses!

  • Kirk Mears says:

    How I broke out with a 2095.55GB NTFS partition (larger than 2TB) in Windows Vista 64 (3x750RAID0):
    Go into Disk Management and right-click on the new disk on the left-hand side, then choose “Convert to GPT” (the disk must be empty and unformatted). Please read what you’re doing BEFORE you do this, I CAN’T OVER-EMPHASIZE THIS, PERIOD. You can now create larger-than-2TiB partitions. 🙂 Originally, the drive was split in two with a 2048GB partition and a smaller, almost-50GB partition. I should buy a RAID card. Best of luck

  • Andrew McKay says:

    I have created a server with Vista Ultimate and it has a 250GB boot drive and 15 1TB drives in a RAID 5 configuration using a 3Ware RAID card, and it works wonderfully. This article and thread were extremely useful in helping me do this, therefore my thanks to all concerned. I can share the 13.5TB volume in Vista and XP Home clients have no problem accessing the data. I just thought I’d share my experience here.

  • Mike HH says:

    For anyone battling to do so (as we just have been), it seems that you CAN generate single-drive letter volumes well over 2TB under 32 bit Windows XP – without needing GPT.

    Proceed as follows (this is a specific example of a machine we have just built):

    1. Use a raid controller such as the 3Ware 9650 to handle a number of drives (six x 1TB in our case).

    2. DO NOT use the RAID features in the card; configure it so that each Unit is one physical drive. So as you exit the RAID controller setup it reports (in our example) six individual 1TB units (931GB in fact).

    3. Enter Logical Disk Manager from Administrative Tools, and the six (our case) logical drives show up as “Basic” (MS parlance) drives. Right click on the lefthand panel beside each and choose “Convert to Dynamic Disc” – you are offered tickboxes, so check all that apply and click on OK.

    4. Once those logical drives are converted to ‘Dynamic’, you can then right-click in the unallocated space of any one of them and elect to create a spanned volume: choose to include all the logical drives you just made Dynamic (5 x 931GB in our case), and one new span volume is created with a single drive letter (5.45TB in our case).

    This one volume then appears in Windows Explorer as a single drive.

    You need current WinXP 32 bit (we used SP3 though SP2 should be fine) to do this.

    Credit to my head of software for spotting this.

    Good luck all.

    • Calvin Thomas says:

      Too bad you did it this way. You will suffer the following problems.
      1. You are throwing away the capability of the 9650 by using software RAID. Doesn’t hurt, just slows you down.
      2. You are adding load onto your processor to do the parity calculations that are NOT being handled by the 9650.
      3. You could have saved the price of the 9650 and used 1-3 cheap $20 software RAID cards to accomplish the exact same thing.

      What you seem to have missed is that this article is about how to make the low-level hardware (9650 card and BIOS) talk to the RAID array as a single drive. You are talking to the drives individually (thereby bypassing the MBR problem by running each physical disk separately) and then using Windows to combine them together in software. As the author pointed out, the Windows limit for NTFS is 256TB. You can add a lot more drives if you want using your method. However, you will still suffer the waste of not using the hardware parity capability of the 9650 and adding that workload onto the CPU.
      Best Luck….

  • Jerry says:

    I like Mike HH’s idea, but I would think that if you combine option 3 with Mike HH’s idea, you would end up with Raid 5 error protection AND a drive larger than 2Tb.

    I’m currently attempting option 3 right now, so we’ll see how it goes. Initializing 2TB RAID 5 arrays takes hours.

    ~Regards

  • Carlton Bale says:

    The problem with Dynamic Discs is that a failure of one drive can result in loss of all of the data; there is no redundancy. But if you keep backups or if you just need a very large temp drive, this is an easy option.

  • nick says:

    This is great, but what about windows vista?
    Are there any limitations in ANY of Vista’s editions? For example, do Home Basic or other low-end editions support GPT volumes over 2 TB? I can’t seem to find any info on this and I have called Microsoft pre-sales and they have proven to be *surprise* stupid.

    Any info would be appreciated about >2TB volume / GPT support in all versions of vista.
    I suspect that vista enterprise / ultimate 32 and 64 do support this, but what about the lower end versions?

  • Carlton Bale says:

    I’m not positive, but I believe all versions of Vista support GPT. It’s a part of the core OS.

  • nick says:

    Thanks carlton.

  • hc says:

    Response to #22 and #24:
    Combine hardware/pseudo hardware RAID 5 with Dynamic Disk and you can break 2tb limit in windows xp 32 bit without changing anything.

    You also don’t sacrifice reliability because that’s handled by raid 5.

    Only cost is compatibility liability when you upgrade to vista. I would recommend backup/rebuilding the whole raid stack if you had to upgrade.

    I have this setup. I have 9 x 1TB drives (10 actually, since I set 1 as an online hot spare for my RAID card), giving me a total RAID 5 workable space of 8TB, which I then split into four 2TB RAID 5 volumes.

    So XP sees I have 4 2TB drives.

    Then I proceed to convert all 4 into dynamic drives, span them, and voilà, I get an 8TB single drive letter in 32-bit XP! I broke the 2TB barrier!

    Additionally, my card supports OCE; so if I wanted to, I can add another 2 1TB drives to the stack, do the OCE, and create another 2TB volume for XP to extend the dynamic spanned volume into.

    However, I probably won’t go beyond 2 more drives from my 9-drive RAID. RAID 5’s reliability starts going down once you get to 12 drives or more. In the 15+ drive realm, you’ll need RAID 6, and my card doesn’t support that right now.

    Network file sharing works; FTP works; even “scandisk” works. I even had Windows successfully recover lost clusters from this drive when I didn’t shut down properly (OS crashed). All works.

    Obviously, I can’t upgrade to vista without a huge problem; and I can’t go to Linux either, so you have to be sure before you start this path.

    Additionally, if you follow this, know that I now have a *HUGE* problem trying to back up 8TB of data reliably. I don’t have a good solution, so it’s only backing up key data and not the whole volume for me right now.

    🙁 I know, it’s sad, but how do you backup 8TB cost effectively in 2008?

  • Pixels303 says:

    Must Read:
    http://technet.microsoft.com/en-us/library/cc773268.aspx
    For Win2K-based OSes (likely a hacked XP x86 as well), dynamic disks can handle no more than 32 2TB drives in a RAID 5 or RAID 0 array.

  • Sparkster says:

    For those interested…

    One possible idea for volumes larger than 2TB on 32-bit XP while still maintaining some level of redundancy: when you configure the RAID arrays, break the individual drives up into sections.

    Since all 6 drives are still used in each partition/array, there is no performance loss for read/writes.

    For example: If you have 6 x 1TB drives. Create a Raid 5 array **with spare** using the first 250 GB of each of the 6 drives. Repeat the same process with the next 250 GB of each drive and so forth making sure to set the same physical drive as hot spare for each array.

    At the end of it, you will have 4 partitions in RAID5 showing 1.00 TB free (250 GB used for parity on each partition). You can then use Windows Diskmanager to create a spanned set with all 4 partitions for a total of 4.00 TB. You lose 1TB of space for parity and 1TB for the hotswap spare, but it’s 4 TB of usable space with RAID5 protection. If you need more space, simply use more drives, reduce the amount of space used on each physical drive so that each RAID5 array is under the 2TB maximum.

    If one drive does happen to fail, the RAID5 spare will keep your data in place because a single drive failure will be managed by the RAID5 redundancy across all of the created partitions. The spanned array *might* temporarily become unavailable (haven’t tested that) until the rebuild completes, but once completed, the Windows spanned array should remount as normal.

    Be sure to replace any defective drive ASAP in case a second one decides to die without a spare available.

    Although…if 2 drives fail, you may lose data. But this would occur even if you were able to use the entire physical disks in a RAID5 array.

    Just an idea that might work for those who are using 32 bit version of XP.

  • Eric B says:

    I am having trouble with Option 3. Perhaps this is semantics. My end goal is to have a single RAID 5 Array with 6 x 1TB drives and have 5TB of it total be usable in Windows XP 32 bit. However, I do not need the array all to be a single volume. A single 1TB volume and two 2TB volumes would be acceptable, for example.

    I am using a Highpoint RocketRaid 2320 eight-channel card. I have been using them for years. I created the single RAID 5 Array at the BIOS screen, but have NO idea how to create the “Volume Sets.”

    FROM DEFINITIONS ABOVE:
    Volume Sets are created by the RAID controller and reside on top of RAID Sets. A Volume Set is presented to the operating system as a single, virtual disk drive. This is a little confusing, but the RAID level (RAID level 5 in this example) is determined when the Volume Set is created (not when the RAID Set is created.) It is possible to have multiple Volume Sets residing on the same RAID Set, and the Volume Sets may even use different RAID levels.

    Any idea where to start looking to create these volume sets? When I go to Disk Manager in Windows XP Pro, it still only sees the 2TB maximum on the single volume that is there. I do not know where to go to create additional volume sets. Can someone provide step-by-step instructions on this? Do only specific controller cards support “Multiple Volume Sets”? If so, can somebody provide a part number for their RAID controller and perhaps some instructions specific to that model?

    I can say for certain there is no way to create multiple volume sets within the RAID controller’s BIOS. Under the create menu there are only options for:

    RAID 0
    RAID 1
    RAID 1+0
    RAID 5
    RAID 5+0
    JBOD

    And there is nothing mentioning volumes anywhere else. There is a delete menu for deleting arrays, a view menu for viewing drive / array configuration, an initialize menu that ALWAYS says NO DISK AVAILABLE regardless of RAID configuration, and a settings menu that lets you set the bootable array if you have more than one. That’s it.

    So, essentially, being a bit of a retard (apparently) I’m looking for something akin to this:

    1) Insert Drives
    2) Go into RAID Bios
    3) Create RAID 5 array
    4) Do some magic (maybe?)
    5) Install Windows
    6) Do some other magic here (maybe?)

    Etc… If I see that I cannot complete, say, step 4, then I know I need another RAID controller or another approach.

    Thanks for the help!

    • Calvin Thomas says:

      My older 3ware card is unable to create any RAID partitions over the original 2TB size. Not only that, but it could only create the single 2TB partition and no more. You should check the newer controller cards and verify they can handle more than 2TB…

  • Carlton Bale says:

    Volume sets are created within the RAID controller firmware before any drive / partition is presented to the operating system. It could be that your RAID controller can’t create multiple volume sets. I performed the setup in the RAID controller BIOS setup prior to booting to Windows.

  • Pete says:

    I have an Infortrend external RAID. Does anyone know if the Adaptec 29320 card should be able to see the RAID 5 disk at BIOS level (under the Ctrl-A Adaptec utility)?

    It sees the IFT device, but when I go to disk utils it says this is not a disk.

    Thanks.

  • MP says:

    Well, I have read all of the threads and it looks as though I am the only person using RAID 6… We just upgraded our NAS boxes to 6 x 1TB drives and I wanted RAID 6 since our data is too valuable to risk losing to more than one drive failure. I have had this happen to me in the past on a RAID 5 and it’s not fun explaining how two drives died and all the data is gone.

    I have already gotten our boxes into this configuration and I was hoping that someone can help me determine which way to go with this, while staying in a RAID 6 config.

    • Carlton Bale says:

      RAID 6 is handled at the hardware level, and whether an array is RAID 5 or 6 should be invisible to the OS. So create the RAID Set in the RAID controller BIOS, then create a single 4TB Volume Set, and then format that in the OS.

  • dataCore says:

    First: Very good article!

    My storage partition is already a RAID 5 Volume Set and now I want to extend it (from 2TB to 3TB) (4 HDs -> 1TB each), and so I ran into this problem (and found your page *g*)

    My question: Is there still no way to convert an MBR partition to a GPT partition? I mean, it’s a hard job to export the 2TB of data to another drive first, just to convert the partition logic and copy the whole thing back…

    It would be great if there were a tool (like Acronis Disk Director) or so that can do that…

  • razy says:

    Hello Carlton.

    Thank you for the informative article.

    I just recently setup a new Raid 5 array and I had a few questions as I want to ensure I won’t run into any problems if I try to add new HDDs in the future.

    System Specs:
    ————-

    Motherboard: Asus P5N-T Deluxe (Nvidia 780i Chipset)
    HDDs: Three 1 TB Drives (Western Digital WD1001FALS Caviar Black 1TB SATA2 7200RPM 4.2MS)
    OS: WIN XP x64 (SP2)

    I am using the latest BIOS and Nforce drivers which I have read (not tested yet) support 64-bit LBA addressing so I am hoping I won’t have any problems with trying to get the Array greater than 2 TB.

    I created a new Raid 5 array (set) using all 3 drives (1.81TB in total) from within the Bios pressing F10 to get into the RAID Utility.

    WDCWD1001FALS – 931.51 GB – Channel SATA 0.0
    WDCWD1001FALS – 931.51 GB – Channel SATA 0.1
    WDCWD1001FALS – 931.51 GB – Channel SATA 1.0

    I installed WIN XP x64 successfully onto the C: drive. I used nLite and slipstreamed the Nvidia nForce chipset drivers onto the XP x64 disc, which was fairly easy to do. (I don’t have a floppy drive installed on this PC.)

    Partitions:
    ————
    WIN XP (C:) 45 GB
    WIN 7 (D:) 45 GB
    Data (E:) 1.8 TB

    I only have Win XP installed for now, but in the future I would like to dual boot with Win 7 so I would like to stay away from Dynamic Disks and GPT as I’ve read I won’t be able to dual boot if I do so.

    My question is, what would be the easiest way to add another 2 drives to my setup?

    Physically install the 2 HDDs, add the 2 drives to the RAID Array (set), and then hopefully the BIOS will let me create a 2TB RAID Volume, which should show up in Windows as an un-partitioned logical drive that I can just format using NTFS while keeping it as a Basic Disk? I don’t mind that I will have an additional drive letter (partition listed in My Computer) as this is my personal data, so I don’t have any clients, but I do use this PC as a media server streaming everything to my PS3.

    Or, am I better off converting to dynamic disks and just don’t dual boot?

    I have all my old data on 2 500GB Seagate drives right now, but I am holding off on transferring the data over just in case I have to rebuild this array, losing all my data.

    I am also thinking of just ordering another 2 WD 1 TB drives right now instead of doing it 6-12 months from now.

    My main reason for RAID 5 is redundancy, instead of just using the drives separately and gaining 3TB instead of 2TB. I don’t back up my data; I want to rely on this RAID setup instead, that way if a drive does fail, I can just pop in a replacement drive and I’m good to go.

    Should I buy 2 different HDDs, and create a separate Raid Array for the OS instead? Ex. Maybe I can create a Mirrored Array for the OS, and Create a Raid 5 array for my Data.

    Do you have any suggestions? I am pretty excited about Raid, but extremely worried at the same time.

    Thanks.

    Regards,
    Razy

    • Carlton Bale says:

      I would leave the RAID array formatted as a basic drive for maximum flexibility. There is really no advantage to other options.

      The thing to keep in mind with a RAID 5 array is that data is striped across multiple discs, so no single drive is readable if the array fails (other than the 1-at-a-time drive failure allowed by RAID 5.) RAID 1 mirrors the data between 2 drives, so either is readable if the other drive or the controller card fails. And you will not be able to expand your RAID 5 array with your built-in controller. All things to consider.

  • razy says:

    Thanks for the info!

  • garret says:

    Hi,

    Interesting discussion here… my issue is that I have bought a 3TB Lacie Big Disk Quad that I want to use with my xp 32-bit workstation. I was advised by Lacie to do the formatting on a Vista machine so this is what I did:

    1) (VISTA) Disk was Mac formatted so I initialised the 3TB drive causing 1 partition of 2TB and 1 partition of 1TB to appear (in Disk Management console). Both partitions were “unallocated” at this time;

    2) (VISTA) I formatted the first 2TB partition as NTFS with MBR (since I want to use the drive with XP-32, GPT is not an option). It formatted fine.

    3) (VISTA) Now the problem I’m having is that I can’t do anything with the remainder (unallocated) part of the drive. All I can do is get Properties, there is no way to create and format a partition.

    My idea was to create the second partition and then span the NTFS partitions together to get a virtual 3TB drive.

    Any suggestions getting round 3) above would be greatly appreciated.

    Thx.

    • Carlton Bale says:

      Garret: Unfortunately, it doesn’t matter how it’s partitioned, the OS still has to be able to address the entire drive before recognizing the partitions. And if you are not using GPT, you will not be able to address beyond 2TB, no matter how many partitions you have. So use GPT and then setup the partitions how ever you want; that’s the only way to go beyond 2TB in your setup.


