Unable to exclude root '/' from the Greyhole Pool

amielgoldberg
Posts: 34
Joined: Tue Mar 14, 2017 7:30 pm

Unable to exclude root '/' from the Greyhole Pool

Postby amielgoldberg » Mon Apr 17, 2017 6:55 pm

I need assistance configuring drive pooling on my HDA server. The problem is that the root directory (‘/’) is being used by the pool even though I have specifically NOT selected it for the pool. I suspect this is because the shares are set up on ‘/’, but I can’t confirm that. What thoughts do you have on resolving this issue?

Background: I’ve installed the Greyhole UI app with the goal of pooling two 1.8TB hard drives (/var/hda/files/drives/Drive1 & Drive2) together with the ‘/home’ space for use by my shares. I have NOT selected the ‘/’ (root) directory for the pool; however, ‘/’ is being used when I copy files to the HDA server.

See the attachment for screenshots of the HDA shares, drives, and pooling screens.
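
For what it's worth, a quick check along these lines should show which filesystem a given share actually sits on (I'm using my videos share path as an example; any share path would do):

# Show which mounted filesystem the share's directory lives on
df -h /var/hda/files/videos
# Compare against where the pool drives and /home are mounted
df -h /var/hda/files/drives/Drive1 /var/hda/files/drives/Drive2 /home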

Greyhole Troubleshoot Q&A:
Q1: What version of OS, Samba & Greyhole are you running?
A1: 4.8.13-100.fc23.x86_64
samba-4.3.12-1.fc23.x86_64
amahi-greyhole-0.10.6-1.x86_64
-----
Q2: The content of the /etc/samba/smb.conf & /etc/greyhole.conf files (provide paste URLs):
https://da.gd/Sts6r & https://da.gd/aZ4U
-----
Q3: The result of the following commands (mount, fdisk -l, df -h, and greyhole --stats, separated below by ----):
A3: sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=1007468k,nr_inodes=251867,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/sdc5 on / type xfs (rw,relatime,attr2,inode64,noquota)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=31,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=14744)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
tmpfs on /tmp type tmpfs (rw)
mqueue on /dev/mqueue type mqueue (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
/dev/sda1 on /var/hda/files/drives/Drive1 type ext4 (rw,relatime,data=ordered)
/dev/sdb1 on /var/hda/files/drives/Drive2 type ext4 (rw,relatime,data=ordered)
/dev/sdc1 on /boot type ext4 (rw,relatime,data=ordered)
/dev/sdc2 on /home type xfs (rw,relatime,attr2,inode64,noquota)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
none on /var/spool/greyhole/mem type tmpfs (rw,relatime,size=4096k)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=203884k,mode=700,uid=1000,gid=100)
----
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xffd0c17d

Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 3907028991 3907026944 1.8T 83 Linux


Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x2b6ced1f

Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 3907028991 3907026944 1.8T 83 Linux


Disk /dev/sdc: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xcb2e495b

Device Boot Start End Sectors Size Id Type
/dev/sdc1 * 2048 1026047 1024000 500M 83 Linux
/dev/sdc2 1026048 867717119 866691072 413.3G 83 Linux
/dev/sdc3 867717120 871913471 4196352 2G 82 Linux swap / Solaris
/dev/sdc4 871913472 976773119 104859648 50G 5 Extended
/dev/sdc5 871915520 976773119 104857600 50G 83 Linux

----
Filesystem Size Used Avail Use% Mounted on
devtmpfs 984M 0 984M 0% /dev
tmpfs 996M 0 996M 0% /dev/shm
tmpfs 996M 1000K 995M 1% /run
tmpfs 996M 0 996M 0% /sys/fs/cgroup
/dev/sdc5 50G 15G 36G 30% /
tmpfs 996M 32K 996M 1% /tmp
/dev/sda1 1.8T 73M 1.7T 1% /var/hda/files/drives/Drive1
/dev/sdb1 1.8T 16G 1.7T 1% /var/hda/files/drives/Drive2
/dev/sdc1 477M 139M 309M 31% /boot
/dev/sdc2 414G 16G 398G 4% /home
none 4.0M 0 4.0M 0% /var/spool/greyhole/mem
tmpfs 200M 0 200M 0% /run/user/1000
----

Greyhole Statistics
===================

Storage Pool
Total - Used = Free + Trash = Possible
/var/hda/files/drives/Drive2/gh: 1834G - 15G = 1725G + 15G = 1740G
/var/hda/files/drives/Drive1/gh: Offline
/home/gh: 413G - 16G = 397G + 15G = 413G
==========================================
Total: 2247G - 31G = 2122G + 31G = 2153G
-----
Q4: The list of drives in your storage pool (per the Amahi platform):
A4: id path minimum_free created_at updated_at
1 /var/hda/files/drives/Drive2 10 2017-04-12 20:33:13 2017-04-12 20:33:13
2 /var/hda/files/drives/Drive1 10 2017-04-12 20:33:14 2017-04-12 20:33:14
3 /home 10 2017-04-12 20:33:15 2017-04-12 20:33:15
-----
Q5: A list of the directories on the root of the drives included in your storage pool, obtained with the following command (provide a paste URL):
A5: https://da.gd/m3BeS
------

Q6: The Greyhole work queue:
A6: Greyhole Work Queue Statistics
==============================

This table gives you the number of pending operations queued for the Greyhole daemon, per share.

Write Delete Rename Check
Books 0 0 0 0
Docs 0 0 0 0
Movies 0 0 0 0
Music 0 0 0 0
Owncloud9 0 0 0 0
Pictures 0 0 0 0
Public 0 0 0 0
TV 0 0 0 0
Videos 0 0 0 0
=============================================
Total 0 0 0 0
Write Delete Rename Check

The following is the number of pending operations that the Greyhole daemon still needs to parse.
Until it does, the nature of those operations is unknown.
Spooled operations that have been parsed will be listed above and disappear from the count below.

Spooled 0
-----
Q7: Greyhole log:
Apr 16 04:11:53 INFO fsck: Starting fsck for /var/hda/files/tv
Apr 16 04:11:53 INFO fsck: fsck for /var/hda/files/tv completed.
Apr 16 04:11:53 INFO fsck: Now working on task ID 63: fsck /var/hda/files/videos/
Apr 16 04:11:53 INFO fsck: Starting fsck for /var/hda/files/videos
Apr 16 04:11:53 INFO fsck: fsck for /var/hda/files/videos completed.
Apr 16 21:42:05 WARN debug: Found a share () with no path in /etc/samba/smb.conf, or missing it's num_copies[] config in /etc/greyhole.conf. Skipping.
Apr 16 21:42:29 WARN debug: Found a share () with no path in /etc/samba/smb.conf, or missing it's num_copies[] config in /etc/greyhole.conf. Skipping.
Apr 16 21:54:54 INFO daemon: Greyhole (version 0.10.6) daemon started.
Apr 16 21:54:54 INFO daemon: Optimizing MySQL tables...
Apr 16 21:54:55 WARN daemon: Warning! It seems the partition UUID of /var/hda/files/drives/Drive1/gh changed. This probably means this mount is currently unmounted, or that you replaced this drive and didn't use 'greyhole --replace'. Because of that, Greyhole will NOT use this drive at this time.
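
One more note for completeness: the stats above show /var/hda/files/drives/Drive1/gh as Offline, and the last log line warns about that drive's partition UUID, so I also intend to double-check the drive with something like:

# Confirm the drive is still mounted where Greyhole expects it
mount | grep Drive1
# Show the partition's current UUID
blkid /dev/sda1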

bigfoot65
Project Manager
Posts: 11924
Joined: Mon May 25, 2009 4:31 pm

Re: Unable to exclude root '/' from the Greyhole Pool

Postby bigfoot65 » Tue Apr 18, 2017 5:45 am

Not all of the data you provided is needed. In the future, please attach it as a file rather than pasting it inline.

As for your issue, the root (/) partition is the LZ (Landing Zone), and that is why files are copied there initially. Files in shares that are Greyhole enabled will eventually be moved to the pool drives and replaced with symbolic links when Greyhole picks them up.

I recommend you read up on the Greyhole Landing Zone (LZ). It's important to understand how Greyhole works in order to manage expectations.
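
As a rough illustration only (the file name is made up; the paths are based on your setup), this is the kind of thing you would see in a Greyhole-enabled share once the daemon has processed a file:

# Right after copying, the file is a regular file sitting in the LZ (your / partition)
ls -l /var/hda/files/videos/example.mkv
# Once the Greyhole daemon runs, the data lives under a pool drive's gh directory
# and the LZ entry becomes a symlink, something like:
#   example.mkv -> /var/hda/files/drives/Drive2/gh/Videos/example.mkv
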
ßîgƒσστ65
Applications Manager

My HDA: Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz on MSI board, 16GB RAM, 1TBx1+2TBx2+4TBx2
