expanding 4+TB GPT partition after expanding hardware raid-5

cpg
Administrator
Posts: 2618
Joined: Wed Dec 03, 2008 7:40 am

Re: expanding 4+TB GPT partition after expanding hardware raid-5

Postby cpg » Fri Sep 25, 2009 1:49 pm

makes sense. fedora 10 is now 9 months old, so the tools there are probably a year behind; the live version is fresh. what i've heard about xfs is that it has fast deletes (like hfs on the mac). deleting a large file, say a big iso, stalls the processor on ext3; xfs doesn't, i hear.

i was hoping to move to ext4 next. i see it's now the default in f11, though support isn't fully complete (makes sense to go out like that).
My HDA: Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz on MSI board, 8GB RAM, 1TBx2+3TBx1

boomerang56
Posts: 29
Joined: Mon Aug 31, 2009 2:36 pm

Re: expanding 4+TB GPT partition after expanding hardware raid-5

Postby boomerang56 » Fri Sep 25, 2009 2:15 pm

Yeah, I read that too. I don't know what they mean by large files. I have some 4 GB DVD ISO images, and I noticed that when I deleted one from an Amahi share via Windows it was slow, maybe 2 seconds or so for the delete.

Hopefully I can get everything migrated to XFS this weekend and retry. Live and learn.

Thanks for still talking to me...
--production HDA : 2.8GHz core-2 duo, Asus P5BV-M mobo, 2GB RAM, Rosewill R901-P case, 2x iStarUSA BPU-340SATA hot-swap bays, Areca 1220 8-drive raid controller, 5 Seagate ST315005N1A1AS 1.5 TB drives

boomerang56
Posts: 29
Joined: Mon Aug 31, 2009 2:36 pm

Re: expanding 4+TB GPT partition after expanding hardware raid-5

Postby boomerang56 » Sat Sep 26, 2009 2:07 am

Well, no luck there either, but this time the error made sense. When I used (live) gparted to create a new XFS partition, it refused to use all the space showing as available. Trying to expand the new XFS partition to fill the space gave an error saying, in effect, that the partition table needed to be expanded to fit the available space. Which makes sense, because there's 1.5 TB more than when I first wrote the GPT.

So I guess I need to be asking "How do I expand the GPT after expanding the RAID"? Sheesh.

moredruid
Expert
Posts: 791
Joined: Tue Jan 20, 2009 1:33 am
Location: Netherlands

Re: expanding 4+TB GPT partition after expanding hardware raid-5

Postby moredruid » Sat Sep 26, 2009 11:47 am

Sorry to come in so late, but how is the RAID mounted? As one volume? Have you set it up as a logical volume?

By default Linux is certainly capable of some very large setups, whether with ext3 (or even ext2), ocfs2, gfs, reiser, or some other filesystem type. I've run a couple of servers with 48TB+ of storage (in volumes); the ext3 capacity limit is 16TB, but it depends on block size (usually 2k or 4k blocks, so possibly less than 16TB). For single files I thought the limit was around 16GB, at least with 1k blocks; it's larger with bigger block sizes.

reference: Ext3 size limits

This is apparently one of the things that LVM manages pretty well. I think it's because no single disk then has to be larger than the maximum size at the default block size; LVM handles the scaling for ya.
I've read somewhere that for a Linux 2.6.x kernel on a 64-bit CPU, the maximum LV (logical volume) size is 8EB, and the VG (volume group) can be even bigger :o ... that should be enough to fill your need I hope :D
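As a rough sketch of how an LVM setup would handle this kind of growth (device name, VG/LV names, and mount point are all hypothetical here; adjust for your own RAID volume):

```shell
# Hypothetical layout: the hardware RAID appears as /dev/sdb,
# and the filesystem is mounted at /mnt/data.

# One-time setup: make the RAID volume an LVM physical volume,
# put it in a volume group, and carve out one logical volume.
pvcreate /dev/sdb
vgcreate vg_storage /dev/sdb
lvcreate -l 100%FREE -n lv_data vg_storage
mkfs.xfs /dev/vg_storage/lv_data

# Later, after the hardware RAID is expanded:
pvresize /dev/sdb                              # let LVM see the new space
lvextend -l +100%FREE /dev/vg_storage/lv_data  # grow the logical volume
xfs_growfs /mnt/data                           # XFS can grow while mounted
```

With this layout there is no GPT on the RAID volume at all, so there is no partition table to fix after an expansion.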
echo '16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D2173656C7572206968616D41snlbxq' | dc
Galileo - HP Proliant ML110 G6 quad core Xeon 2.4GHz, 4GB RAM, 2x750GB RAID1 + 2x1TB RAID1 HDD

boomerang56
Posts: 29
Joined: Mon Aug 31, 2009 2:36 pm

Re: expanding 4+TB GPT partition after expanding hardware raid-5

Postby boomerang56 » Sat Sep 26, 2009 2:53 pm

Heh heh, LVM sounds awesome. I definitely need to read up on it.

For now though I *finally* solved this.

Finally. Such a pain but I got it.

The magic trick was to just run "parted /dev/sdb" against the whole disk. I'd always been running it as "parted /dev/sdb1" against the partition.

When I ran it against /dev/sdb and did a print command, it asked "do you want to fix the GPT?" I had to tell it twice to fix the GPT, and that was that.

Then I was able to install the XFS tools for Fedora, use Gparted to make an XFS partition taking up just the new space, and then use Gparted to combine the space into one partition. I'm impressed with how fast that went compared to doing anything with ext3.
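For anyone hitting the same wall, the fix looks roughly like this (the device name is from this setup; parted's exact wording may differ by version):

```shell
# Point parted at the whole disk, NOT at a partition:
parted /dev/sdb

# At the (parted) prompt, printing the table makes parted notice that
# the backup GPT header is no longer at the end of the (now larger)
# disk, and it offers to repair it:
#   (parted) print
#   Warning: ... the backup GPT table is not at the end of the disk ...
#   Fix/Ignore? Fix          <- answer Fix (it may ask more than once)
#   (parted) quit

# After that, gparted (or parted) can create and merge partitions in
# the newly visible space.
```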

For anyone that wants to use XFS, here is how to get the tools for Fedora.

First add the RPM Fusion repositories. Amahi has a great app for that, although I used the command-line instructions here:

http://www.howtoforge.com/the-perfect-d ... dora-10-p3

Then I was able to use the Add/Remove Software app to download the xfsprogs rpm and its dependencies.
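From the command line, the same result can be sketched like this (the RPM Fusion release-package URL is from memory and may have changed; on many Fedora releases xfsprogs is also in the stock repos, so the first step may not even be needed):

```shell
# Enable the RPM Fusion "free" repository via its release package:
rpm -Uvh http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-stable.noarch.rpm

# Install the XFS userspace tools (mkfs.xfs, xfs_growfs, xfs_repair, ...)
# and their dependencies:
yum install xfsprogs
```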

The data is migrating to the new XFS partition now. Finally! At least now, when I fill this up and can afford to add more drives, it will be a no-brainer.

boomerang56
Posts: 29
Joined: Mon Aug 31, 2009 2:36 pm

Re: expanding 4+TB GPT partition after expanding hardware raid-5

Postby boomerang56 » Mon Sep 28, 2009 9:25 am

By the way, CPG - large file deletions on the XFS partition are much faster than they were on EXT3.

I deleted a 30GB file over the network and it disappeared instantly, as opposed to deleting the DVD images on EXT3 and staring at the busy cursor for seconds.
