I have my LZ mounted on its own partition, and for some reason it is full.
I have searched the /var/hda/files mount point extensively and can find nothing but symlinks, yet it still says I have used all 120GB of the partition.
I have no idea what is taking up the space, or where else to look.
I recently removed a drive from my Greyhole pool (and from the system entirely), had all of its file copies moved to the other drives, and then ran a balance. Since then I have run fsck successfully, but I still can't find anything except symlinks on the mount:
[root@new-host files]# du -sh *
24K applications
63M backups
4.0K books
8.3M docs
5.9T drives
16K lost+found
3.6M movies
27M music
106M pictures
4.0K public
6.1M tv
1.6M videos
As you can see, other than the "drives" directory, whose subdirectories are all mounts of other filesystems, nothing here adds up to anywhere near 120GB.
[root@new-host drives]# du -sh *
2.0T drive01
8.0K drive02
2.0T drive03
2.0T drive04
28K drive05
And you can see that the drive I removed ("drive02") has nothing left in its directory that could be eating up the space.
I am at a loss. Is there anywhere else for me to look?
Thanks,
Eric
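For anyone debugging the same symptom, a sketch of checks that count only the LZ filesystem itself, not the pool drives mounted beneath it: `-x` keeps `du` on one filesystem, the dot-glob catches hidden entries that a bare `*` misses, and `lsof` (if installed) shows deleted-but-still-open files that keep holding blocks.

```shell
# Stay on the LZ filesystem only (-x), so the pool mounts under
# drives/ are not counted:
du -xsh /var/hda/files

# Include dotfiles, which "du -sh *" silently skips:
du -xsh /var/hda/files/.[!.]* /var/hda/files/* 2>/dev/null

# Deleted-but-still-open files hold their blocks until the process
# holding them exits (requires lsof to be installed):
lsof +L1 2>/dev/null | grep hda
```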
SOLVED: LZ full, root and boot are fine, but LZ partition is full despite successfully running FSCK
Re: LZ full, root and boot are fine, but LZ partition is full despite successfully running FSCK
Sorry, I should have added: Amahi 8 on Fedora 21.
Re: LZ full, root and boot are fine, but LZ partition is full despite successfully running FSCK
[root@new-host drives]# du -sh *
2.0T drive01
8.0K drive02
2.0T drive03
2.0T drive04
28K drive05
http://ur1.ca/odg7s
http://ur1.ca/odg7t
[root@new-host drives]# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=6040456k,nr_inodes=1510114,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/sde5 on / type ext4 (rw,relatime,data=ordered)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=28,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
mqueue on /dev/mqueue type mqueue (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
tmpfs on /tmp type tmpfs (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
/dev/sde1 on /boot type ext4 (rw,relatime,data=ordered)
/dev/sde2 on /var/hda/files type ext4 (rw,relatime,data=ordered)
/dev/sda1 on /var/hda/files/drives/drive01 type ext4 (rw,relatime,data=ordered)
/dev/sdc1 on /var/hda/files/drives/drive04 type ext4 (rw,relatime,data=ordered)
/dev/sdd1 on /var/hda/files/drives/drive03 type ext4 (rw,relatime,data=ordered)
/dev/sdb1 on /var/hda/files/drives/drive05 type ext4 (rw,relatime,data=ordered)
none on /var/spool/greyhole/mem type tmpfs (rw,relatime,size=4096k)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=1210148k,mode=700,uid=1000,gid=100)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
tracefs on /sys/kernel/debug/tracing type tracefs (rw,relatime)
[root@new-host drives]# fdisk -l
Disk /dev/sda: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: D52FCA80-097B-4414-9B78-B100F33F62E6
Device Start End Sectors Size Type
/dev/sda1 2048 5860532223 5860530176 2.7T Microsoft basic data
Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: DB073851-92F2-4242-9A56-E0D957986837
Device Start End Sectors Size Type
/dev/sdc1 2048 5860532223 5860530176 2.7T Linux filesystem
Disk /dev/sde: 186.3 GiB, 200049647616 bytes, 390721968 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00016332
Device Boot Start End Sectors Size Id Type
/dev/sde1 * 2048 1026047 1024000 500M 83 Linux
/dev/sde2 1026048 269072383 268046336 127.8G 83 Linux
/dev/sde3 269072384 306821119 37748736 18G 82 Linux swap / Solaris
/dev/sde4 306821120 390721535 83900416 40G 5 Extended
/dev/sde5 306823168 390709247 83886080 40G 83 Linux
Disk /dev/sdb: 4.6 TiB, 5000947524096 bytes, 9767475633 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: CAD6C287-4B15-4F36-A3B1-B6B921BD515D
Device Start End Sectors Size Type
/dev/sdb1 2048 9767475199 9767473152 4.6T Linux filesystem
Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 221F524A-6184-4AF3-B36B-77C4BB4BCEE9
Device Start End Sectors Size Type
/dev/sdd1 2048 5860532223 5860530176 2.7T Microsoft basic data
[root@new-host drives]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 5.8G 0 5.8G 0% /dev
tmpfs 5.8G 4.0K 5.8G 1% /dev/shm
tmpfs 5.8G 724K 5.8G 1% /run
tmpfs 5.8G 0 5.8G 0% /sys/fs/cgroup
/dev/sde5 40G 5.8G 32G 16% /
tmpfs 5.8G 32K 5.8G 1% /tmp
/dev/sde1 477M 93M 355M 21% /boot
/dev/sde2 126G 120G 0 100% /var/hda/files
/dev/sda1 2.7T 2.0T 617G 77% /var/hda/files/drives/drive01
/dev/sdc1 2.7T 2.0T 618G 77% /var/hda/files/drives/drive04
/dev/sdd1 2.7T 2.0T 657G 75% /var/hda/files/drives/drive03
/dev/sdb1 4.6T 63M 4.3T 1% /var/hda/files/drives/drive05
none 4.0M 0 4.0M 0% /var/spool/greyhole/mem
tmpfs 1.2G 0 1.2G 0% /run/user/1000
[root@new-host drives]# greyhole --stats
Greyhole Statistics
===================
Storage Pool
Total - Used = Free + Trash = Possible
/var/hda/files/drives/drive03/gh: 2751G - 1955G = 656G + 0G = 656G
/var/hda/files/drives/drive01/gh: 2751G - 1994G = 617G + 0G = 617G
/var/hda/files/drives/drive04/gh: 2751G - 1993G = 618G + 0G = 618G
/var/hda/files/drives/drive05/gh: 4621G - 0G = 4388G + 0G = 4388G
==========================================
Total: 12872G - 5941G = 6279G + 0G = 6279G
[root@new-host drives]# mysql -u root -phda -e "select * from disk_pool_partitions" hda_production
+----+-------------------------------+--------------+---------------------+---------------------+
| id | path | minimum_free | created_at | updated_at |
+----+-------------------------------+--------------+---------------------+---------------------+
| 2 | /var/hda/files/drives/drive03 | 10 | 2015-12-20 00:07:58 | 2015-12-20 00:07:58 |
| 3 | /var/hda/files/drives/drive01 | 10 | 2015-12-20 00:07:59 | 2015-12-20 00:07:59 |
| 5 | /var/hda/files/drives/drive04 | 10 | 2016-01-01 00:05:55 | 2016-01-01 00:05:55 |
| 6 | /var/hda/files/drives/drive05 | 10 | 2016-01-02 03:22:54 | 2016-01-02 03:22:54 |
+----+-------------------------------+--------------+---------------------+---------------------+
[root@new-host drives]# greyhole --view-queue
Greyhole Work Queue Statistics
==============================
This table gives you the number of pending operations queued for the Greyhole daemon, per share.
Write Delete Rename Check
Applications 0 0 0 0
Backups 0 0 0 0
Books 0 0 0 0
Docs 0 0 0 0
Movies 0 0 0 0
Music 0 0 0 0
Pictures 0 0 0 0
Public 0 0 0 0
TV 0 0 0 0
Videos 0 0 0 0
================================================
Total 0 0 0 0
Write Delete Rename Check
The following is the number of pending operations that the Greyhole daemon still needs to parse.
Until it does, the nature of those operations is unknown.
Spooled operations that have been parsed will be listed above and disappear from the count below.
Spooled 0
Re: LZ full, root and boot are fine, but LZ partition is full despite successfully running FSCK
It appears there is something in /var/hda/files that's using space. Look for hidden files.
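One way to do that sweep, sketched with the thread's paths: the `-xdev` flag keeps `find` from descending into the pool mounts under drives/, so only files physically on the LZ filesystem are listed, hidden or not.

```shell
# Every regular file that lives on the LZ filesystem itself,
# including hidden ones; -xdev stops at mount-point boundaries:
find /var/hda/files -xdev -type f -exec ls -lh {} +

# Or just the big ones:
find /var/hda/files -xdev -type f -size +100M
```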
ßîgƒσστ65
Applications Manager
My HDA: Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz on MSI board, 16GB RAM, 1TBx1+2TBx2+4TBx2
Re: LZ full, root and boot are fine, but LZ partition is full despite successfully running FSCK
Yes, that is certainly what it seems, but I can't identify any hidden files to speak of. I have scoured everything in that mount with ls -la, and it returned nothing but symlinks; running find -type f doesn't return anything either.
I might just move my LZ temporarily, format this partition and then move my LZ back. Something odd has happened and mucked with this partition.
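One thing `ls -la` and `find` cannot see from inside the mount: files written into a mount-point directory while a drive was unmounted. Those live on the LZ partition but are hidden once a drive is mounted on top of them. A bind mount exposes that underlying layer (a sketch; needs root, and assumes /mnt is free):

```shell
# Bind-mount the raw LZ filesystem somewhere else; the pool drives
# mounted under /var/hda/files do not follow the bind, so whatever
# appears under /mnt/drives/ is physically on the LZ partition:
mount --bind /var/hda/files /mnt
du -sh /mnt/drives/*
umount /mnt
```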
Re: LZ full, root and boot are fine, but LZ partition is full despite successfully running FSCK
Odd issue with Greyhole. Our support is limited, since it is an add-on for Amahi.
You might try contacting the Greyhole author through his support site.
ßîgƒσστ65
Applications Manager
My HDA: Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz on MSI board, 16GB RAM, 1TBx1+2TBx2+4TBx2
Re: LZ full, root and boot are fine, but LZ partition is full despite successfully running FSCK
Thanks for the help. Just to follow up: I did in fact move my LZ to another location. There is now nothing visible in /var/hda/files, and yet it still shows as 100% full. So weird.
Anyway, I will just format the partition and then move it back. I have to think that somehow adding two new drives to the server and the pool, and removing one smaller one, corrupted the LZ partition.
I am up and running, so I'm perfectly happy, but it is certainly a weird thing.
Thanks again.
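The move-and-rebuild described above, roughly, under the assumption that Greyhole and Samba are stopped first so nothing writes to the LZ mid-copy. Service names and the mkfs target here come from this thread's setup; double-check the device node before formatting anything.

```shell
# Sketch of the workaround: copy the LZ contents aside, rebuild the
# filesystem, copy back. cp -a preserves symlinks as symlinks.
systemctl stop greyhole smb
cp -a /var/hda/files /root/lz-backup
umount /var/hda/files/drives/*      # unmount the pool drives first
umount /var/hda/files
mkfs.ext4 /dev/sde2                 # the LZ partition in this thread
mount /var/hda/files                # assumes the fstab entry is intact
cp -a /root/lz-backup/. /var/hda/files/
systemctl start greyhole smb
```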
Re: LZ full, root and boot are fine, but LZ partition is full despite successfully running FSCK
Glad to hear it's all good now.
Happy to help. Closing this one as solved.
ßîgƒσστ65
Applications Manager
My HDA: Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz on MSI board, 16GB RAM, 1TBx1+2TBx2+4TBx2