First, I wanted to say that GH is a great piece of functionality, and while it is a work in progress and there are some things to work through, I very much appreciate the effort of everyone here (and if I can participate in any way, I would be glad to).
Currently, I'm slowly migrating from WHS after spending a few months testing Amahi (GH) on an old box with 3 drives. My WHS box has 9 drives. I just installed Amahi, and I'm mounting each WHS data drive and using rsync to copy its data into my Amahi test box until I free up enough empty storage on the new Amahi install to rsync files directly to it. My plan is then to rsync everything remaining from the test box to the new production install to finalize the migration.
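For reference, each copy pass looks roughly like this (just a sketch; the device name, mount point, and paths below are examples, not my exact ones):
Code:
# mount one of the old WHS data drives read-only (example device and mount point)
mount -o ro /dev/sde1 /mnt/whs_disk
# copy one share's contents into the matching Amahi share, preserving attributes
rsync -avh --progress /mnt/whs_disk/shares/movies_home/ /var/hda/files/movies_home/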
In view of this, I have emulated the share structure from my WHS box to facilitate the rsync copy process. I have added the following shares:
-public
-videos
-users
-software
-movies_home
My most recent rsync was from one of the movies shares on one of the old WHS drives and was a little over 200 GB. It failed last night due to storage limits on the receiving Amahi box. My landing zone drive (not in the pool) is 100 GB, so I figured it was just a bottleneck and I'd need to restart the rsync on that folder to complete it. I checked the available pool storage on the receiving box, hoping to see that Greyhole had caught up and pushed all of last night's incoming data off the landing zone into the pool, but the drive still shows roughly half full. I then ran a --view-queue on the box and it showed no pending jobs. Then it struck me that only the default shares are listed in the --view-queue output. So, technically, it's possible there are still pending jobs in the queue, or that the data intended for these new shares isn't making it into the queue at all, or something else I haven't thought of.
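(For what it's worth, here's how I've been checking whether the landing zone is draining; /var/hda/files sits on the landing-zone drive in my setup.)
Code:
df -h /var/hda/files    # landing-zone usage should drop as Greyhole drains it
greyhole --view-queue   # pending Greyhole operations, per share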
Anyway, I hope this provides enough information to go on. I've also pulled the data requested in the sticky and placed it below. Thanks for any assistance that can be provided:
1. What version of Fedora, Samba & Greyhole are you running?
Code:
uname -r; rpm -q samba hda-greyhole
2.6.31.5-127.fc12.i686.PAE
hda-greyhole-0.6.28-1.i386
2. The content of the /etc/samba/smb.conf & /etc/greyhole.conf files (provide paste URLs):
Code:
yum -y install fpaste; fpaste /etc/samba/smb.conf; fpaste /etc/greyhole.conf
http://fpaste.org/fss3/
http://fpaste.org/WU9I/
3. The result of the following commands:
Code:
mount; fdisk -l; df -h; greyhole --stats
/dev/mapper/VolGroup-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
/dev/sdb1 on /var/hda/files/drives/drive1 type ext4 (rw)
/dev/sdc1 on /var/hda/files/drives/drive2 type ext4 (rw)
/dev/sdd1 on /var/hda/files/drives/drive3 type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
Disk /dev/sda: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x7c618e6f
Device Boot Start End Blocks Id System
/dev/sda1 * 1 26 204800 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 26 14593 117013441 8e Linux LVM
Disk /dev/sdb: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x514f55d2
Device Boot Start End Blocks Id System
/dev/sdb1 1 30402 244197376 83 Linux
Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xc87ee8a3
Device Boot Start End Blocks Id System
/dev/sdc1 1 121602 976760832 83 Linux
Disk /dev/sdd: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xfebc94f2
Device Boot Start End Blocks Id System
/dev/sdd1 1 30402 244197376 83 Linux
Disk /dev/dm-0: 118.2 GB, 118241624064 bytes
255 heads, 63 sectors/track, 14375 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/dm-1: 1577 MB, 1577058304 bytes
255 heads, 63 sectors/track, 191 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Disk /dev/dm-1 doesn't contain a valid partition table
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
109G 57G 47G 56% /
tmpfs 375M 0 375M 0% /dev/shm
/dev/sda1 194M 22M 163M 12% /boot
/dev/sdb1 230G 123G 96G 57% /var/hda/files/drives/drive1
/dev/sdc1 917G 415G 456G 48% /var/hda/files/drives/drive2
/dev/sdd1 230G 152G 67G 70% /var/hda/files/drives/drive3
Greyhole Statistics
===================
Storage Pool
Total - Used = Free + Attic = Possible
/var/hda/files/drives/drive1/gh: 229G - 122G = 96G + 0G = 96G
/var/hda/files/drives/drive2/gh: 917G - 415G = 455G + 2G = 457G
/var/hda/files/drives/drive3/gh: 229G - 151G = 67G + 2G = 68G
4. The list of drives in your storage pool (per the Amahi platform):
Code:
mysql -u root -phda -e "select * from disk_pool_partitions" hda_production
+----+------------------------------+--------------+---------------------+---------------------+
| id | path | minimum_free | created_at | updated_at |
+----+------------------------------+--------------+---------------------+---------------------+
| 1 | /var/hda/files/drives/drive1 | 10 | 2010-11-23 01:59:05 | 2010-11-23 01:59:05 |
| 2 | /var/hda/files/drives/drive2 | 10 | 2010-11-23 01:59:05 | 2010-11-23 01:59:05 |
| 3 | /var/hda/files/drives/drive3 | 10 | 2010-11-23 01:59:06 | 2010-11-23 01:59:06 |
+----+------------------------------+--------------+---------------------+---------------------+
5. A list of the directories on the root of the drives included in your storage pool, obtained with the following command (provide a paste URL):
Code:
mysql -u root -phda -e "select concat(path, '/gh') from disk_pool_partitions" hda_production | grep -v 'concat(' | xargs ls -la | fpaste
http://fpaste.org/e62J/
6. The Greyhole work queue:
Code:
greyhole --view-queue
Greyhole Work Queue Statistics
==============================
This table gives you the number of pending operations queued for the Greyhole daemon, per share.
Write Delete Rename
Books 0 0 0
Docs 0 0 0
Movies 0 0 0
Music 0 0 0
Pictures 0 0 0
Torrents 0 0 0
========
Total 0 + 0 + 0 = 0
SOLVED - New shares do not show in --view-queue display
Re: New shares do not show in --view-queue display
I'm not sure how you created your shares, but while they appear in smb.conf, they do not appear in greyhole.conf.
When creating a share using the Amahi Dashboard, the share should be added to both files, provided that you checked the Uses Pool checkbox for those shares! (My guess is that you missed that checkbox on your new shares.)
You can check the num_copies lines in greyhole.conf to see what shares Greyhole is configured to handle.
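For example, each pooled share gets a line like these in /etc/greyhole.conf (the share names below are just illustrative):
Code:
num_copies[movies_home] = 1
num_copies[videos] = 1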
When greyhole.conf is fixed, you'll need to run a --fsck to have GH start moving the data from the LZ to your drives:
Code:
greyhole --fsck
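You can then watch its progress in the Greyhole log (the path is set by the log_file option in greyhole.conf; /var/log/greyhole.log is the usual default):
Code:
tail -f /var/log/greyhole.log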
- Guillaume Boudreau
Re: New shares do not show in --view-queue display
That's true!
(palm forcibly applied to forehead)
I could have sworn I ticked those off when I created them...
Fixed those and re-ran greyhole --fsck.
Will provide a final update, but it sounds like we can mark this one solved.
Thank you!!