GNU bug report logs - #77086
Filesystems not unmounted on reboot
Hello guys,
so I started looking into this a bit (not promising any results, though),
and I can say pretty confidently that there is indeed an issue with the
unmounting.
I created a VM system and tried rebooting a few times, and it was fine.
However, I then tried reconfiguring, and on that run I got an error upon
reboot. Not only that, I can't boot it anymore :) the filesystem got
corrupted in a way that prevents booting. Welp.
I can't really say at the moment what is causing this, but the problem
is that the root device is busy, so it can't be unmounted.
I have a hypothesis based on the log I see: the root filesystem
is being unmounted first rather than last, as it should be.
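To illustrate what I mean by the expected ordering (just an illustration,
not shepherd's actual code): nested mount points have to be unmounted
before their parents, so "/" should always come last. A minimal Guile
sketch of such an ordering:

  ;; Illustration only: unmount children before parents, so the root
  ;; filesystem "/" always ends up last.
  (define (unmount-order mount-points)
    ;; A child mount point's path is always longer than its parent's,
    ;; so sorting by descending path length puts "/" at the end.
    (sort mount-points
          (lambda (a b)
            (> (string-length a) (string-length b)))))

  (unmount-order '("/" "/home" "/gnu/store" "/boot/efi"))
  ;; => ("/gnu/store" "/boot/efi" "/home" "/")

The log attached below suggests the opposite is happening.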
Could the reconfigure throw shepherd off? I am also CCing Ludovic;
I hope he won't mind.
I would also like to point out an e-mail sent recently to guix-devel:
someone got a timer service running after reconfigure, but not after
reboot. After a reboot, the timer module is not imported by default
unless it is added to the service's modules, yet after a reconfigure it
works. This gives me yet another reason to suspect that shepherd behaves
differently on reboot as opposed to a reconfigure reload.
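For reference, this is roughly what that workaround would look like in a
system configuration. It is only a sketch, assuming the Shepherd 1.0 timer
API in (shepherd service timer) (make-timer-constructor,
make-timer-destructor, calendar-event, command) and the modules field of
shepherd-service; the service itself is made up:

  (use-modules (gnu services shepherd)
               (guix gexp))

  ;; Hypothetical example service; the point is the `modules' field.
  (define my-timer-service
    (shepherd-service
     (provision '(my-timer))
     (requirement '(user-processes))
     ;; Explicitly import the timer module in the start/stop gexps instead
     ;; of relying on the default module set, which reportedly works after
     ;; `guix system reconfigure' but not after a fresh boot.
     (modules (cons '(shepherd service timer) %default-modules))
     (start #~(make-timer-constructor
               (calendar-event #:minutes '(0))   ;once an hour
               (command '("/run/current-system/profile/bin/true"))))
     (stop #~(make-timer-destructor))))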
I will try digging more, but I am not that knowledgeable about
shepherd yet, so it will take some time.
I am attaching both the log (starting after the reboot command)
and the configuration used for the VM.
[reboot_log (text/plain, attachment)]
[simple.scm (text/plain, attachment)]
Regards,
Rutherther