I'm curious how Greyhole could accumulate that much backlog, unless those are fake / duplicate entries...
And I don't see how duplicate entries could be inserted...
Did you make that many (1,000,000+) file operations (file creates, updates, deletes) on your shares since you enabled 'Uses pool' on them?
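If you want to take a quick look yourself before sending anything, you can inspect the backlog directly in the Greyhole database. This is only a sketch: I'm assuming the spool table is called tasks and has action and full_path columns, which may differ depending on your Greyhole version.
Code:
# Rough backlog check (assumes a 'tasks' table with 'action' and
# 'full_path' columns; adjust if your schema differs).
sqlite3 /var/cache/greyhole.sqlite "SELECT COUNT(*) FROM tasks;"
# Break the backlog down by operation type
sqlite3 /var/cache/greyhole.sqlite "SELECT action, COUNT(*) FROM tasks GROUP BY action;"
# Look for duplicates: the same operation on the same file queued more than once
sqlite3 /var/cache/greyhole.sqlite "SELECT action, full_path, COUNT(*) AS n FROM tasks GROUP BY action, full_path HAVING n > 1 ORDER BY n DESC LIMIT 20;"
If that last query returns a lot of rows, it would point to duplicates; if not, the backlog is most likely real file operations.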
I'd like to get the following:
Code:
# Choose a work dir with enough free space...
WORKDIR=~/greyhole_debug
mkdir -p ${WORKDIR}

# Copy the Greyhole database (the work queue) and the relevant logs
cp /var/cache/greyhole.sqlite ${WORKDIR}/
grep -i greyhole /var/log/messages* > ${WORKDIR}/var_log_messages
cp /var/log/greyhole.log* ${WORKDIR}/
cp /var/log/monit* ${WORKDIR}/

# Package everything into a single archive
cd ${WORKDIR}/..
tar -zcf greyhole_debug.tar.gz ${WORKDIR}
Send me the resulting greyhole_debug.tar.gz file here:
http://pub.pommepause.com
If you need more detailed instructions, feel free to ask.
You can also reach me in the IRC channel; my alias is Mouton.
We have an updated greyhole RPM (version 0.6.8) that will prevent such big databases: it forces the greyhole daemon to stop parsing the /var/log/messages file after 10k rows, work on those queued operations for a while, and only then resume parsing the log file.
This should prevent such situations from happening in the future.
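To give you an idea of what changed, the new daemon works on the log in batches instead of parsing it all in one go. The sketch below is just shell-style pseudocode to illustrate the batching idea; it is not the actual daemon code, and the helper names (parse_next_log_line, queue_task, execute_queued_tasks) are made up for illustration.
Code:
# Simplified illustration of the batching behaviour (hypothetical
# helpers; the real daemon does this internally, in its own code).
BATCH_SIZE=10000
while true; do
    rows_parsed=0
    # Parse the log, but stop once BATCH_SIZE new operations have been queued
    while [ ${rows_parsed} -lt ${BATCH_SIZE} ] && parse_next_log_line; do
        queue_task                     # insert the parsed operation into the database
        rows_parsed=$((rows_parsed + 1))
    done
    execute_queued_tasks               # work on the queued operations for a while...
done                                   # ...then go back to parsing the log file
The point is that the queue can no longer grow without bound while the daemon is busy catching up on the log.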
But before I ask you to upgrade, I'd like to see how this happened, and also how best to get you back on track.
Thanks.