Transfer from landing to Pool Insanely slow

nncs20
Posts: 5
Joined: Mon Dec 06, 2010 11:37 am

Transfer from landing to Pool Insanely slow

Postby nncs20 » Mon Dec 06, 2010 11:41 am

I have a fresh install of Amahi with the Greyhole storage pool enabled. I'm using a 74GB Raptor for the OS hard drive and 4x1TB drives for data, three of which have been added to the storage pool. The problem is that all the data is filling up the 74GB Raptor and is only very, very slowly getting moved to the pool. I filled my shares with data until my transfer errored out because the destination drive was 'full'; at that point the 74GB drive was full and the 1TB storage pool drives were only maybe 3% used. I decided to leave it overnight to see what would happen. About 12 hours later I checked again, and only about 3GB had transferred from the Raptor to the pool. Any ideas?

Thanks!
Kyle

Info Requested:

2.6.32.26-175.fc12.x86_64
samba-3.4.9-60.fc12.x86_64
hda-greyhole-0.6.28-1.x86_64

[root@localhost ~]# yum -y install fpaste; fpaste /etc/samba/smb.conf; fpaste /etc/greyhole.conf
Loaded plugins: fastestmirror, presto, refresh-packagekit
Loading mirror speeds from cached hostfile
* fedora: mirror.steadfast.net
* rpmfusion-free: mirror.hiwaay.net
* rpmfusion-free-updates: mirror.hiwaay.net
* updates: mirror.steadfast.net
Setting up Install Process
Package fpaste-0.3.4-1.fc12.noarch already installed and latest version
Nothing to do
Uploading (4.9K)...
http://fpaste.org/1tCO/
Uploading (1.5K)...
http://fpaste.org/sn1k/
[root@localhost ~]# mount; fdisk -l; df -h; greyhole --stats
/dev/mapper/VolGroup-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
/dev/sdb1 on /var/hda/files/drives/drive7 type ext4 (rw)
/dev/sde1 on /var/hda/files/drives/drive9 type ext4 (rw)
/dev/sdc1 on /var/hda/files/drives/drive10 type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
gvfs-fuse-daemon on /home/kurtzk/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=kurtzk)

Disk /dev/sda: 74.4 GB, 74355769344 bytes
255 heads, 63 sectors/track, 9039 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xe8a6a63c

Device Boot Start End Blocks Id System
/dev/sda1 * 1 26 204799+ 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 26 9040 72407040 8e Linux LVM
Partition 2 does not end on cylinder boundary.

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x6bf16200

Device Boot Start End Blocks Id System
/dev/sdb1 1 121601 976760001 83 Linux

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x6bf16202

Device Boot Start End Blocks Id System
/dev/sdc1 1 121601 976760001 83 Linux

Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x6bf1620e

Device Boot Start End Blocks Id System
/dev/sde1 1 121601 976760001 83 Linux

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000a2850

Device Boot Start End Blocks Id System
/dev/sdd1 1 121602 976760832 83 Linux

Disk /dev/dm-0: 68.0 GB, 67968696320 bytes
255 heads, 63 sectors/track, 8263 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/dm-0 doesn't contain a valid partition table

Disk /dev/dm-1: 6174 MB, 6174015488 bytes
255 heads, 63 sectors/track, 750 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/dm-1 doesn't contain a valid partition table
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
63G 54G 5.6G 91% /
tmpfs 1.9G 364K 1.9G 1% /dev/shm
/dev/sda1 194M 24M 161M 13% /boot
/dev/sdb1 917G 75G 797G 9% /var/hda/files/drives/drive7
/dev/sde1 917G 79G 793G 9% /var/hda/files/drives/drive9
/dev/sdc1 917G 77G 794G 9% /var/hda/files/drives/drive10

Greyhole Statistics
===================

Storage Pool
Total - Used = Free + Attic = Possible
/var/hda/files/drives/drive7/gh: 917G - 74G = 796G + 1G = 797G
/var/hda/files/drives/drive9/gh: 917G - 78G = 792G + 1G = 793G
/var/hda/files/drives/drive10/gh: 917G - 77G = 794G + 0G = 794G

[root@localhost ~]# mysql -u root -phda -e "select * from disk_pool_partitions" hda_production
+----+-------------------------------+--------------+---------------------+---------------------+
| id | path | minimum_free | created_at | updated_at |
+----+-------------------------------+--------------+---------------------+---------------------+
| 1 | /var/hda/files/drives/drive7 | 10 | 2010-12-04 18:27:18 | 2010-12-04 18:27:18 |
| 3 | /var/hda/files/drives/drive9 | 10 | 2010-12-04 18:27:22 | 2010-12-04 18:27:22 |
| 4 | /var/hda/files/drives/drive10 | 10 | 2010-12-04 18:27:22 | 2010-12-04 18:27:22 |
+----+-------------------------------+--------------+---------------------+---------------------+
[root@localhost ~]# mysql -u root -phda -e "select concat(path, '/gh') from disk_pool_partitions" hda_production |grep -v 'concat(' | xargs ls -la | fpaste
Uploading (1.8K)...
http://fpaste.org/j4DZ/

gboudreau
Posts: 606
Joined: Sat Jan 23, 2010 1:15 pm
Location: Montréal, Canada

Re: Transfer from landing to Pool Insanely slow

Postby gboudreau » Mon Dec 06, 2010 3:18 pm

Try upgrading to the latest version:
(as root)

Code:

yum -y install php-mbstring; rpm -Uvh http://greyhole.pommepause.com/releases/hda-greyhole-0.7.5-1.`uname -i`.rpm
Then execute the following migration script (as root again):

Code:

/usr/share/greyhole/db_migration-sqlite2mysql.sh
Data should transfer much faster after that.

Let us know.
- Guillaume Boudreau

ajeffco
Posts: 1
Joined: Mon Dec 06, 2010 6:46 pm

Re: Transfer from landing to Pool Insanely slow

Postby ajeffco » Mon Dec 06, 2010 6:56 pm

I had the same problem... From iostat:

Code:

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              28.00      3112.00       344.00       3112        344
sdb               2.00         0.00        16.00          0         16
sdc               2.00         0.00        16.00          0         16
This was pretty consistent.

I tried installing the new version listed above and got:

Code:

php-mbstring is needed by hda-greyhole-0.7.5-1.x86_64
yum took care of that pretty quickly...

Now I'm seeing rates like this (it's inconsistent, though: it doesn't stay that high, it goes up and down):

Code:

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              27.00      6624.00         0.00       6624          0
sdb               2.00         0.00        16.00          0         16
sdc              14.00         0.00     12192.00          0      12192

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00   13.37   28.22    3.47    0.00   54.95

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              42.00      9112.00       496.00       9112        496
sdb              25.00         0.00     19616.00          0      19616
sdc               3.00         0.00        24.00          0         24

nncs20
Posts: 5
Joined: Mon Dec 06, 2010 11:37 am

Re: Transfer from landing to Pool Insanely slow

Postby nncs20 » Mon Dec 06, 2010 9:57 pm

Seems to be running much smoother now! Thanks!

ajeffco, what did you do to get that nice speed chart you showed? I would like to see what it is actually doing; right now all I know is that it is much more acceptable and usable.

gboudreau
Posts: 606
Joined: Sat Jan 23, 2010 1:15 pm
Location: Montréal, Canada

Re: Transfer from landing to Pool Insanely slow

Postby gboudreau » Tue Dec 07, 2010 3:57 am

He used the iostat command.

Other nice commands:
- iotop -o (yum -y install iotop to install it)
- greyhole --view-queue
- tail -f /var/log/greyhole.log
- Guillaume Boudreau

nncs20
Posts: 5
Joined: Mon Dec 06, 2010 11:37 am

Re: Transfer from landing to Pool Insanely slow

Postby nncs20 » Tue Dec 07, 2010 6:41 pm

Alright, so it turns out those updates helped, but not nearly as much as they should have. It still takes about 24 hours to transfer all the data off my 74GB Raptor to the storage pool. Every time I transfer data to the server, the transfer rate over the network far exceeds the transfer rate into the pool, and I have to wait those 24 hours before restarting my transfers. I'm attempting to transfer about 400GB worth of stuff to it. It seems to me that with a 10,000RPM Raptor and a Core 2 Duo with 4GB of RAM, the internal transfers should be able to far exceed the network transfers. Any thoughts?
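For perspective, a quick back-of-envelope check of the rate implied by this post (the ~63GB figure is the usable size of the root filesystem from the df output above; the numbers are assumptions taken from this thread, not measurements):

```shell
# Roughly 63 GB of landing-zone data drained in about 24 hours
mb=$((63 * 1024))        # landing-zone data, in MB
secs=$((24 * 3600))      # elapsed time, in seconds
echo "~$(( mb * 1024 / secs )) KB/s effective drain rate"
```

That works out to well under 1 MB/s, far below what either the Raptor or the 1TB drives can sustain, which supports the suspicion that the bottleneck is Greyhole's processing rather than the disks themselves.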

gboudreau
Posts: 606
Joined: Sat Jan 23, 2010 1:15 pm
Location: Montréal, Canada

Re: Transfer from landing to Pool Insanely slow

Postby gboudreau » Tue Dec 07, 2010 7:06 pm

With the above update, for the majority of the time Greyhole takes to process files, it's running rsync to move the files from your Raptor to the drives in your pool. If that isn't fast enough, the only thing that could help is increasing the process priority of Greyhole, so that it becomes more 'important' than other processes. You can use "renice -15 `cat /var/run/greyhole.pid`" to do that.
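If you want to see what renice does before pointing it at the daemon, here is a harmless sketch on the current shell (positive nice values need no root; the -15 in the quoted command does, and the PID-file path is the one from that command):

```shell
# Lower the current shell's priority and show the before/after nice values.
before=$(ps -o ni= -p $$ | tr -d ' ')
renice -n 5 -p $$ > /dev/null 2>&1
after=$(ps -o ni= -p $$ | tr -d ' ')
echo "nice: $before -> $after"
# For Greyhole itself (as root): renice -15 -p "$(cat /var/run/greyhole.pid)"
```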

Have you specified any extra copies in your shares settings? Anything above 0 (-) will have Greyhole copying new files multiple times, which will multiply the time it takes to process files...

Check greyhole.log (see above).
Look for a big file getting processed (anything above 500MB should do), and paste here all the lines about that processing, starting at "Now working on ..." and ending at that same line for the next file.
- Guillaume Boudreau

nncs20
Posts: 5
Joined: Mon Dec 06, 2010 11:37 am

Re: Transfer from landing to Pool Insanely slow

Postby nncs20 » Tue Dec 07, 2010 8:12 pm

Alright, so here is the plan. I did have some file duplication set up already (1 duplicate), but I have since turned it off for all shares. For all my initial file transfers I will leave it at 0 (-), and then turn it back on once all the files are in place for the shares I want it on. Also, I upped the priority to -15 as you recommended; that is one thing I had never checked before. Does it only use one process to copy files? It seems like spawning a child process or two would help, but then again I don't really know the underlying workings of this yet. Lastly, I am not going to post any of my logs just yet; I'll wait until everything gets to where it should be and then start a new transfer of a large file. Right now the log really just shows a bunch of Copy and Move commands for a bunch of small text files, which I didn't think would be very helpful. Sound like a start?

VirtualMe
Posts: 13
Joined: Sun Nov 28, 2010 5:27 pm

Re: Transfer from landing to Pool Insanely slow

Postby VirtualMe » Wed Dec 08, 2010 9:18 pm

As another data point, I just transferred 5GB to Amahi. It was to one share and is set to create 1 copy of the files.

Windows said it was transferring at about 25MB/s. I checked about 20 minutes later and the transfer had completed. I then checked Greyhole, and it was still shuffling files around. Checking again about 20 minutes after that, Greyhole was still transferring files. iotop showed it transferring at anywhere from 60KB/s to 230KB/s, and greyhole --view-queue has not gone below 900 in the write column over the past hour and a half.
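At the iotop rates quoted here, 5GB would indeed take hours to drain; a rough sketch of the arithmetic (the rates are the observed extremes, nothing more):

```shell
kb=$((5 * 1024 * 1024))   # 5 GB in KB
for rate in 60 230; do    # KB/s, the low and high rates seen in iotop
    echo "at ${rate} KB/s: ~$(( kb / rate / 3600 )) hours"
done
```

So even at the best observed rate the shuffle would run for hours after the network copy finished, which matches what the queue is showing.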

I am running Greyhole version 0.7.5-1. I don't know if I actually need to run the SQLite-to-MySQL migration script, but I'm going to run it and then re-run the test after erasing the two hard drives.

sgtfoo
Posts: 419
Joined: Sun Jul 18, 2010 8:27 pm

Re: Transfer from landing to Pool Insanely slow

Postby sgtfoo » Wed Dec 08, 2010 11:01 pm

just an FYI for the less savvy like me...

iostat requires installing sysstat...

as root,
for 64-bit systems (all I found):

Code:

yum install sysstat.x86_64
SgtFoo
HDA: VM inside oVirt FX-8300 95w (2 cores for HDA), 32GB RAM (2GB for HDA)
My PC: FX-8300, 16GB RAM, 3x 1TB HDDs, Radeon HD6970 2GB video; Win10 Pro x64
Other: PC, Asus 1215n (LXLE), Debian openZFS server (3x(2x2tb) mirrors)
Modem&Network: Thomson DCM475; Asus RT-AC66U; HP 1800-24G switch
