[SOLVED] fsck.ext4: Unable to resolve 'UUID=...' ??
Posted: Sun Nov 07, 2010 12:36 pm
Hi all,
I'm working with a fresh install of F12, and overall the install itself goes fine
until I try to get Fedora to mount 2 SATA drives that are on a Syba SD-SATA150R
at boot time. I've been following the instructions here:
http://wiki.amahi.org/index.php/Adding_ ... o_your_HDA. The
only thing I found a bit different during the install is that at the
"hda-diskmount" step, the fstab line it produced does not end in ".....
var/hda/files/drives/sdb1 ext4 defaults 1 2", but rather in
"var/hda/files/drives/drive1 ext4 defaults 1 2". Not sure if that's
important...
Now, I added 3 drives total: 1 is a PATA drive (legacy motherboard with PATA only) and 2
are SATA via the SD-SATA150R card (a SIL3512 card). The PATA drive appears to be unaffected by the issue. All 3 of these were added to /etc/fstab as follows:
FSTAB file:
"
#
# /etc/fstab
# Created by anaconda on Sun Nov 7 08:21:56 2010
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/VolGroup-lv_root / ext4 defaults 1 1
UUID=a86f7cc3-ba1b-4f59-b49d-ca8ce8850f5d /boot ext4 defaults 1 2
/dev/mapper/VolGroup-lv_swap swap swap defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
UUID=6e046a2c-2914-40b3-bf0e-913147a72121 /var/hda/files/drives/drive1 ext4 defaults 1 2
UUID=704011cc-9087-425e-8686-eadef780c3b0 /var/hda/files/drives/drive2 ext4 defaults 1 2
UUID=b67ffade-dbf0-4130-a094-5c53bf56f671 /var/hda/files/drives/drive3 ext4 defaults 1 2
"
The 2 SATA drives were partitioned in GParted and mount fine with the "mount -a" command at the tail end of the instructions. Unfortunately, at boot I get a filesystem error. The preceding screen detail and error I get at boot are as follows: "
....
Setting hostname localhost.localdomain [ok]
ERROR: no RAID set found
failed to stat() /dev/mapper/no
failed to stat() /dev/mapper/raid
failed to stat() /dev/mapper/sets
Setting up Logical Volume Management: 2 logical volume(s) in volume group "VolGroup" now active
Checking filesystems [OK]
/dev/mapper/VolGroup-lv_root: clean, 138027/2334720 files, 1005173/9332736 blocks
/dev/sda1: clean, 36/51200 files, 28532/204800 blocks
/dev/sdb1: clean, 11/15269888 files, 1006278/61049000 blocks
fsck.ext4: Unable to resolve 'UUID=704011cc-9087-425e-8686-eadef780c3b0'
fsck.ext4: Unable to resolve 'UUID=b67ffade-dbf0-4130-a094-5c53bf56f671'
[FAILED]
*** An error occurred during file system check.
*** Dropping you to a shell; the system will reboot when you leave the shell"
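(For anyone hitting the same message: as far as I understand it, "Unable to resolve 'UUID=...'" means fsck asked libblkid to map that UUID to a device node and no visible block device carried it at that point in boot. From the repair shell you can check which UUIDs resolve; the one below is mine, substitute your own:

findfs UUID=704011cc-9087-425e-8686-eadef780c3b0

findfs prints the matching device node, e.g. /dev/sdc1, or an error if nothing carries that UUID.)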
At that point my only option is to remount the filesystem with read-write privileges, comment out those 2 lines in /etc/fstab, and reboot. That's where I'm at right now.
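(For the record, the recovery from the repair shell was essentially:

mount -o remount,rw /
vi /etc/fstab      # comment out the drive2 and drive3 UUID lines
reboot

Nothing fancy, but noting it in case someone else lands at the same shell.)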
I've been messing with Linux on and off for around 10 years, my last foray being around 4 years ago. I'm not overly technical, but I'm not a newbie either. I've been running Windows Home Server (WHS) primarily as a media server for a couple of years, and I'm hoping I can run Amahi on this legacy machine in parallel for a year or so, in hopes that it will eventually replace WHS as the Greyhole drive-pooling function matures a bit more. Unfortunately I ran into this stumbling block, which might be a deal killer... I run 9 drives on my WHS server, several of which are on a SATA controller card, so I need Fedora (or whichever distro Amahi is running on) to reliably recognize these drives at boot.
Apologies for being a bit verbose with my explanation and the system output; I just want to make sure the issue is presented thoroughly and clearly. Assistance would be immensely appreciated!
Regards,
jbmia
*** Update ***
This ended up being an issue with stale RAID BIOS metadata on the drives I was installing... The drives came from another machine. After searching around, I found someone with a similar issue who had filed a bug with Fedora, and the devs recommended using the "dmraid -E" command to erase the metadata on the drives. That solved my problem.
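(For anyone else who ends up here, the cleanup went roughly like this; the device names are just examples, so be sure to point it only at the disks that came out of the old RAID machine:

dmraid -r               # list disks that dmraid thinks carry RAID metadata
dmraid -r -E /dev/sdc   # erase the stale metadata from a listed disk
dmraid -r -E /dev/sdd

After clearing the metadata and rebooting, the fsck errors were gone and all three fstab entries mounted at boot.)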