expanding 4+TB GPT partition after expanding hardware raid-5

boomerang56
Posts: 29
Joined: Mon Aug 31, 2009 2:36 pm

expanding 4+TB GPT partition after expanding hardware raid-5

Postby boomerang56 » Wed Sep 23, 2009 9:42 am

Hi all -

I have an Areca 1220 hardware RAID controller with five 1.5 TB drives connected (eventually it will have eight).

The controller supports online capacity expansion.

When I first created the array, I built it with four drives as RAID-5, partitioned GPT. Obviously it's not being used as a boot device; the RAID volume contains only the Amahi shares.

So yesterday I added a new 1.5 TB drive and ran the procedure to expand the RAID.

So far so good. The array is now 6 TB and shows up as such from Linux.

The problem occurs when I try to grow the single ext3 partition to fill the new space. GParted runs for hours (most of that time checking the file system), but it comes back with an error and can't grow the partition.

What am I forgetting to do? Surely this must be possible.

Thanks in advance for any suggestions.

This is the output from GParted:
*********************************

GParted 0.4.4

Libparted 1.8.8
Grow /dev/sdb1 from 4.09 TiB to 5.46 TiB 02:37:26 ( ERROR )

calibrate /dev/sdb1 00:00:00 ( SUCCESS )

path: /dev/sdb1
start: 34
end: 8789049044
size: 8789049011 (4.09 TiB)
check file system on /dev/sdb1 for errors and (if possible) fix them 01:18:39 ( SUCCESS )

e2fsck -f -y -v /dev/sdb1

Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

16252 inodes used (0.01%)
5227 non-contiguous inodes (32.2%)
# of inodes with ind/dind/tind blocks: 9836/7428/17
421344823 blocks used (38.35%)
0 bad blocks
36 large files

15184 regular files
1059 directories
0 character device files
0 block device files
0 fifos
0 links
0 symbolic links (0 fast symbolic links)
0 sockets
--------
16243 files
e2fsck 1.41.3 (12-Oct-2008)
grow partition from 4.09 TiB to 5.46 TiB 00:00:00 ( ERROR )

old start: 34
old end: 8789049044
old size: 8789049011 (4.09 TiB)
check file system on /dev/sdb1 for errors and (if possible) fix them 01:18:47 ( SUCCESS )

e2fsck -f -y -v /dev/sdb1

Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

16252 inodes used (0.01%)
5227 non-contiguous inodes (32.2%)
# of inodes with ind/dind/tind blocks: 9836/7428/17
421344823 blocks used (38.35%)
0 bad blocks
36 large files

15184 regular files
1059 directories
0 character device files
0 block device files
0 fifos
0 links
0 symbolic links (0 fast symbolic links)
0 sockets
--------
16243 files
e2fsck 1.41.3 (12-Oct-2008)
grow file system to fill the partition 00:00:00 ( SUCCESS )

resize2fs /dev/sdb1

resize2fs 1.41.3 (12-Oct-2008)
The filesystem is already 1098631126 blocks long. Nothing to do!

libparted messages ( INFO )

Unable to satisfy all constraints on the partition.

========================================

Screenshots of gparted just in case:

http://picasaweb.google.com/kclark56/Ar ... 9227608034

http://picasaweb.google.com/kclark56/Ar ... 2344618130
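Reading the log: the failure is at libparted's "grow partition" step ("Unable to satisfy all constraints on the partition"), so the partition never actually changed size, and the later resize2fs run correctly reports "Nothing to do!". The filesystem-growing half of the job works on its own, as this throwaway sketch on a scratch image file shows (no RAID or root required; a real run would target /dev/sdb1 once the partition itself has been enlarged):

```shell
# Throwaway demo of the resize2fs half of the job on a scratch image;
# the real run would target /dev/sdb1 after the partition is enlarged.
IMG=$(mktemp)
truncate -s 64M "$IMG"            # stand-in for the old partition size
mke2fs -q -F -t ext3 "$IMG"       # format it as ext3
truncate -s 128M "$IMG"           # stand-in for the enlarged partition
e2fsck -f -p "$IMG" > /dev/null   # resize2fs wants a clean fsck first
resize2fs "$IMG"                  # grows the fs to fill the new size
rm -f "$IMG"
```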
--production HDA : 2.8GHz core-2 duo, Asus P5BV-M mobo, 2GB RAM, Rosewill R901-P case, 2x iStarUSA BPU-340SATA hot-swap bays, Areca 1220 8-drive raid controller, 5 Seagate ST315005N1A1AS 1.5 TB drives

boomerang56
Posts: 29
Joined: Mon Aug 31, 2009 2:36 pm

Re: expanding 4+TB GPT partition after expanding hardware raid-5

Postby boomerang56 » Wed Sep 23, 2009 6:35 pm

Update - I tried parted from the command line and got this message: "Error: File system has an incompatible feature enabled".

[root@localhost amahi]# parted /dev/sdb1
GNU Parted 1.8.8
Using /dev/sdb1
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) resize 1 0 6000
Error: File system has an incompatible feature enabled.
(parted) quit
[root@localhost amahi]#


Can someone please help troubleshoot this?

I searched for this error with Fedora and haven't found a single definitive answer anywhere.

Doesn't anyone use large volumes with Amahi?

If you do, do you ever resize them?

If so, what filesystem do you use? Did I screw the pooch by choosing ext3?
--production HDA : 2.8GHz core-2 duo, Asus P5BV-M mobo, 2GB RAM, Rosewill R901-P case, 2x iStarUSA BPU-340SATA hot-swap bays, Areca 1220 8-drive raid controller, 5 Seagate ST315005N1A1AS 1.5 TB drives

User avatar
cpg
Administrator
Posts: 2618
Joined: Wed Dec 03, 2008 7:40 am
Contact:

Re: expanding 4+TB GPT partition after expanding hardware raid-5

Postby cpg » Wed Sep 23, 2009 8:47 pm

see what features it has. mine is plain vanilla and it has this:

Code: Select all

$ sudo dumpe2fs /dev/sda1 | grep features
dumpe2fs 1.41.4 (27-Jan-2009)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
(sweet rig, btw! ... says an envious cpg ...)
My HDA: Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz on MSI board, 8GB RAM, 1TBx2+3TBx1

boomerang56
Posts: 29
Joined: Mon Aug 31, 2009 2:36 pm

Re: expanding 4+TB GPT partition after expanding hardware raid-5

Postby boomerang56 » Wed Sep 23, 2009 8:58 pm

Thanks, it's a media server. Holds lots of music and DVD images.

[root@localhost ~]# dumpe2fs /dev/sda1 | grep features
dumpe2fs 1.41.3 (12-Oct-2008)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super
--production HDA : 2.8GHz core-2 duo, Asus P5BV-M mobo, 2GB RAM, Rosewill R901-P case, 2x iStarUSA BPU-340SATA hot-swap bays, Areca 1220 8-drive raid controller, 5 Seagate ST315005N1A1AS 1.5 TB drives

boomerang56
Posts: 29
Joined: Mon Aug 31, 2009 2:36 pm

Re: expanding 4+TB GPT partition after expanding hardware raid-5

Postby boomerang56 » Thu Sep 24, 2009 6:15 pm

There's a lot of talk about the "resize_inode" feature being the culprit.

Unfortunately I'm blocked from testing this. When I try to remove that feature there's yet another error.

# tune2fs -O resize_inode /dev/sdb1
tune2fs 1.41.3 (12-Oct-2008)
Setting filesystem feature 'resize_inode' not supported.
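As written, that command asks tune2fs to add resize_inode, and setting that feature on an existing filesystem is what's unsupported. Clearing a feature needs a leading caret, quoted so the shell doesn't interpret it. Whether removing resize_inode actually unblocks the parted resize is a separate question, but the syntax can be sketched on a scratch image (the real target would be /dev/sdb1):

```shell
# The -O syntax to CLEAR a feature needs a leading ^; without it,
# tune2fs tries to ADD the feature, which is what produced the
# "not supported" error above. Demo on a scratch image.
IMG=$(mktemp)
truncate -s 16M "$IMG"
mke2fs -q -F -t ext3 "$IMG"
tune2fs -O '^resize_inode' "$IMG"   # note the caret: clear, not set
e2fsck -f -y "$IMG" > /dev/null     # tune2fs asks for an fsck afterwards
dumpe2fs -h "$IMG" 2>/dev/null | grep 'Filesystem features'
rm -f "$IMG"
```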


I'm beginning to think large file systems are not supported on Linux, or if they are, "it's experimental and use at your own risk".

Amahi guys, Amahi is a great app, but before you release it for a new distro whether it be Fedora 11 or Ubuntu or whatever, *please* test large partitions on a raid, and please test resizing. This is a real world usage case, not a weird isolated lab corner-case.

Not being able to use large partitions is a real limiting factor.

As far as I'm concerned, Amahi can't be certified for large volumes until and unless there's a way to resize reliably. It's too bad. I thought Linux was enterprise ready. But how can it be with this crazy mickey-mouse nonsense.

What I don't get is why it would stop at 4 TB. A 2 TB limit I would understand, but the theoretical limit of ext3 is a lot bigger.

So let me ask again - does ANYONE use Amahi with large file systems??? Surely I can't be the only one to try this. I can't believe that.
--production HDA : 2.8GHz core-2 duo, Asus P5BV-M mobo, 2GB RAM, Rosewill R901-P case, 2x iStarUSA BPU-340SATA hot-swap bays, Areca 1220 8-drive raid controller, 5 Seagate ST315005N1A1AS 1.5 TB drives

boomerang56
Posts: 29
Joined: Mon Aug 31, 2009 2:36 pm

Re: expanding 4+TB GPT partition after expanding hardware raid-5

Postby boomerang56 » Thu Sep 24, 2009 8:08 pm

There is also an "MSFTRES" flag on the partition. According to Google, that's often put there by a bug in GParted...?

So what about XFS - anyone?

Would I likely have better luck (growing partitions as the RAID expands) with XFS? Does anyone here actually use XFS?
--production HDA : 2.8GHz core-2 duo, Asus P5BV-M mobo, 2GB RAM, Rosewill R901-P case, 2x iStarUSA BPU-340SATA hot-swap bays, Areca 1220 8-drive raid controller, 5 Seagate ST315005N1A1AS 1.5 TB drives

User avatar
cpg
Administrator
Posts: 2618
Joined: Wed Dec 03, 2008 7:40 am
Contact:

Re: expanding 4+TB GPT partition after expanding hardware raid-5

Postby cpg » Fri Sep 25, 2009 2:49 am

boomerang56 wrote:

When I try to remove that feature there's yet another error.

# tune2fs -O resize_inode /dev/sdb1
tune2fs 1.41.3 (12-Oct-2008)
Setting filesystem feature 'resize_inode' not supported.

I'm beginning to think large file systems are not supported on Linux, or if they are, "it's experimental and use at your own risk".
well, certainly there are many many many linux systems out there with very large disks and sophisticated setups. i personally have not dealt with more than 1tb at a time. i personally do not like raid much either.
boomerang56 wrote:

Amahi guys, Amahi is a great app, but before you release it for a new distro whether it be Fedora 11 or Ubuntu or whatever, *please* test large partitions on a raid, and please test resizing. This is a real world usage case, not a weird isolated lab corner-case.

Not being able to use large partitions is a real limiting factor.

As far as I'm concerned, Amahi can't be certified for large volumes until and unless there's a way to resize reliably. It's too bad. I thought Linux was enterprise ready. But how can it be with this crazy mickey-mouse nonsense.

What I don't get is why it would stop at 4 TB. A 2 TB limit I would understand, but the theoretical limit of ext3 is a lot bigger.

So let me ask again - does ANYONE use Amahi with large file systems??? Surely I can't be the only one to try this. I can't believe that.
i have seen a couple of people using amahi in large volumes, but i do not recall the details of who did this.

now, for your sweeping statements, amahi is not enterprise ready as of yet, and we'd like to support large file systems, of course.

however, your point is well taken. if we had, or manage to get, the hardware to test it, we would definitely test for this in future releases.

in the meantime, this is not an amahi-specific issue, however we can try to help in the irc+forum, or perhaps you can try the #fedora channel or even the file system channels on freenode, or the corresponding forums.
My HDA: Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz on MSI board, 8GB RAM, 1TBx2+3TBx1

boomerang56
Posts: 29
Joined: Mon Aug 31, 2009 2:36 pm

Re: expanding 4+TB GPT partition after expanding hardware raid-5

Postby boomerang56 » Fri Sep 25, 2009 8:39 am

CPG - Sorry about the sweeping statements, let myself get too frustrated.

It's not Amahi's fault that Linux apparently doesn't have mature tools for dealing with large file systems yet.
--production HDA : 2.8GHz core-2 duo, Asus P5BV-M mobo, 2GB RAM, Rosewill R901-P case, 2x iStarUSA BPU-340SATA hot-swap bays, Areca 1220 8-drive raid controller, 5 Seagate ST315005N1A1AS 1.5 TB drives

User avatar
cpg
Administrator
Posts: 2618
Joined: Wed Dec 03, 2008 7:40 am
Contact:

Re: expanding 4+TB GPT partition after expanding hardware raid-5

Postby cpg » Fri Sep 25, 2009 12:35 pm

no problem, i understand how it can be frustrating. i can assure you that many shops i have seen, from pixar to startups, handle large volumes just fine. it's just that the tools in fedora 10 may not be up to date.

why don't you try fedora 11 (with the amahi fedora 10 repo)?

there are some minor issues with f11 and amahi, though we are taking some time to do a full update because we want to roll out a bunch of new features we have ...
My HDA: Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz on MSI board, 8GB RAM, 1TBx2+3TBx1

boomerang56
Posts: 29
Joined: Mon Aug 31, 2009 2:36 pm

Re: expanding 4+TB GPT partition after expanding hardware raid-5

Postby boomerang56 » Fri Sep 25, 2009 1:24 pm

Thanks. Amahi itself is working fine so I would treat reinstalling as a last resort. I think I'll wait till Fedora 11 is supported. Amahi is awesome.

I downloaded the "live" GParted iso and followed the instructions to make it run from a USB flash drive.

There is a difference between what the live version supports and what the version included with Fedora supports - specifically, the Fedora version of GParted can't handle XFS at all.

I just tried the live version on a test drive that was formatted with 2 NTFS (dual boot) partitions. The live version of GParted was able to shrink/move both of the NTFS partitions and then let me add an XFS partition on the drive with no problems. This was a small 500GB drive though, pretty much the smallest we can buy these days.

When I get home I'll try it on the RAID. I'm hoping to shrink down the current ext3 partition, create an XFS one in the remaining space, move the data to the XFS partition, then delete the ext3 one and resize the XFS one a final time so that it fills the 6 TB RAID volume.
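The delicate part of that plan is the copy step: permissions, ownership, timestamps, and hard links all need to survive the move. A sketch of it with cp -a on scratch directories standing in for the two mount points (the paths are made up; rsync -aH would do the same job and can resume if interrupted):

```shell
# Sketch of the data-migration copy step, demonstrated on scratch
# directories standing in for the ext3 and XFS mount points.
OLD=$(mktemp -d)   # stand-in for the mounted ext3 partition
NEW=$(mktemp -d)   # stand-in for the mounted XFS partition
mkdir -p "$OLD/shares/music"
echo "some media file" > "$OLD/shares/music/track.txt"
cp -a "$OLD/." "$NEW/"   # -a preserves perms, times, and links
diff -r "$OLD" "$NEW"    # sanity-check the copy before deleting anything
rm -rf "$OLD" "$NEW"
```

After the copy checks out and the ext3 partition is deleted, growing the XFS partition and then running xfs_growfs on the mounted filesystem finishes the job (xfs_growfs only operates on mounted XFS filesystems).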

I expect the migration will take most of a day; I'll post back later.

So I guess the lessons I learned were 1) don't go off half-cocked making unfair sweeping statements, and 2) stay away from ext3 for very large partitions.

Here's a link for anyone needing GParted Live - use "option 2" (Unetbootin). Option 1 didn't work for me.

http://gparted.sourceforge.net/liveusb.php
--production HDA : 2.8GHz core-2 duo, Asus P5BV-M mobo, 2GB RAM, Rosewill R901-P case, 2x iStarUSA BPU-340SATA hot-swap bays, Areca 1220 8-drive raid controller, 5 Seagate ST315005N1A1AS 1.5 TB drives
