Hi yall,
Up until a few days ago, I (and I'm sure many others) believed that a safe shutdown with Cleanboot was only possible using the 'shutdown', 'reboot' or 'halt' commands from a shell session, and that the hardware power button could not do this simple task.
Then I read another article here where someone made a raidstop.patch file to add some lines to the raidstop script (which is called when the power button triggers a shutdown). I tried that first, but I didn't like the careless way all processes were terminated before the raidstop script was left to finish on its own. That might be fine for some, but I prefer the Cleanboot code: it is better thought out and takes a much more graceful approach, shutting down all of the processes before it calls the raidstop script near the end (see /ffp/share/cleanboot/cleanboot.sh for the details of the shutdown script that Cleanboot uses).
But the other night, a VERY SIMPLE way dawned on me to keep Cleanboot installed and in control of how shutdown occurs, EVEN WHEN USING THE POWER BUTTON.
Here's how I do it on my DNS-323:
First, I made a NEW file called raidstop and placed it in /ffp/share/cleanboot/ (contents below):
------------------------------------------------------------------------------------------------------------------------------
#!/bin/sh
#Run shutdown command using power button through raidstop command
echo "Please wait while the system prepares for safe shutdown..."
sleep 1
shutdown
-------------------------------------------------------------------------------------------------------------------------------
Now, open /ffp/share/cleanboot/cleanboot.sh in vi and go to line 146 (or wherever you find the one line containing this path): /usr/sbin/raidstop md0 md1 f
Then rename raidstop in that path so that the whole line looks like this: /usr/sbin/raidstop_new md0 md1 f
Now, copy the entire contents of the original D-Link /usr/sbin/raidstop file into a new file here: /ffp/share/cleanboot/raidstop_new
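If you'd rather not do the line-146 edit in vi, the substitution can be scripted. This is just a sketch: busybox sed on the DNS-323 may lack -i, so it writes to a temp file and moves it back. The demo below works on a throwaway copy in /tmp; on the NAS you would point CB at the real /ffp/share/cleanboot/cleanboot.sh.

```shell
#!/bin/sh
# Sketch: make the "line 146" edit without opening vi.
# For safety this demo edits a COPY in /tmp; set CB to the real
# /ffp/share/cleanboot/cleanboot.sh on the NAS when you are ready.
CB=/tmp/cleanboot_demo.sh
printf '%s\n' '# umount hard drives' '/usr/sbin/raidstop md0 md1 f' > "$CB"

# busybox sed may not support -i, so go through a temp file
sed 's|/usr/sbin/raidstop md0 md1 f|/usr/sbin/raidstop_new md0 md1 f|' \
    "$CB" > "$CB.tmp" && mv "$CB.tmp" "$CB"

grep raidstop_new "$CB"    # shows the rewritten line
```

Same idea, one substitution; everything else in the file is left untouched.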
####################################################################
# Ignore this section; skip to the EDIT added below, which replaces it in this how-to. #
####################################################################
#The next step is to open the /ffp/start/cleanboot.sh file and add these lines (I put them near the top, below the #!/ffp/bin/sh line, but it shouldn't really matter as long as you don't put them in the middle of some other code in there). So add these lines in there and save it:
#
#rm -f /usr/sbin/raidstop
#cp -a -f /ffp/share/cleanboot/raidstop_new /usr/sbin/raidstop_new
#cp -a -f /ffp/share/cleanboot/raidstop /usr/sbin/raidstop
#########################################################################################################
Once all the files are in the correct locations, just run the /ffp/start/cleanboot.sh start command and it will copy all the files to where they need to be in the system and set the proper permissions.
So by now most of you see what I'm doing, but for those who don't, it is relatively simple. Normally, when the power button is held for a few seconds, chkbutton triggers the shutdown sequence, which at some point runs the raidstop script.

My mod deletes the original D-Link version of that file (this happens at bootup, when Cleanboot is started from the /ffp/start folder) and copies that SAME EXACT D-Link file back into the same path under the name raidstop_new. Then, in Cleanboot's main shutdown script (NOT the startup file in /ffp/start/, but the actual shutdown script at /ffp/share/cleanboot/cleanboot.sh; don't confuse the two), the path on line 146 is changed to /usr/sbin/raidstop_new, since that script eventually wants to run the original D-Link raidstop file, which we still have, just renamed.

The NEW file called raidstop is now a simple trigger that runs the shutdown command. That makes Cleanboot run its shutdown script, which gracefully terminates all the programs and pids occupying resources that could prevent devices from being unmounted. When that script reaches line 146, it calls the replacement for the D-Link script (now called raidstop_new), which unmounts the drives and disassembles the RAID array.

See, I told you it was simple. I tried it and it is a working wonder. Absolutely no e2fsck errors of any kind after powering down the unit directly with the power button.
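To make the flow explicit, the whole chain can be mocked up as plain shell functions. The names are illustrative stand-ins for the real scripts, and in reality the wrapper reaches cleanboot.sh via the 'shutdown' command rather than a direct call:

```shell
#!/bin/sh
# Toy model of the call chain; each function stands in for a real script.
raidstop_new() { echo "3. raidstop_new: umount drives, stop RAID (D-Link original)"; }
cleanboot_sh() {
    echo "2. cleanboot.sh: gracefully kill processes and pids"
    raidstop_new    # this is the edited 'line 146' call
}
raidstop() {
    echo "1. raidstop (new wrapper): power button got us here"
    cleanboot_sh    # in reality this happens via the 'shutdown' command
}
raidstop
```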
This should be it. I'm really tired, but I think I got all the steps in here right. Hopefully someone can try it and confirm that it works for them based on the above instructions. I know it is working fine for me now.
By the way, I'm not a big expert on Linux, so if a Linux expert, or somebody here who was part of writing Cleanboot 2.1, sees this and thinks I didn't do things in the best way possible, I'm all ears for refinements to my methodology. I just did things in the only way I could figure out for the task at hand, which was to get Cleanboot to shut down when the power button is pressed.
.
.
-------------------------------------------------------------------------------------------
EDIT: 2-1-2010
-------------------------------------------------------------------------------------------
.
I did some cleanup on my cleanboot.sh startup file (located at /ffp/start/cleanboot.sh), and it is much better now. Instead of just adding lines wherever, I studied the correct commands and their proper locations, and an attachment of the result is below.
Now, since I fixed it, if you ever change any of the files (for example /ffp/share/cleanboot/raidstop or /ffp/share/cleanboot/cleanboot.sh), with this new /ffp/start/cleanboot.sh you can simply run /ffp/start/cleanboot.sh start and it will reload all the files the right way. I also added chmod commands for the new scripts that get copied over, to ensure their permissions are correct after copying. I basically added my stuff in the same format the authors used in the 'start' portion of the script. Looks cleaner, too. See attachment. Other than that portion of this tutorial, everything else listed above is the same as I said before.
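For anyone who can't grab the attachment, the gist of what the 'start' portion does (based on the commented-out lines earlier in this post plus the chmod step mentioned here) is roughly the following sketch. It is NOT the actual attachment; the SBIN/SHARE variables and the /tmp defaults are only there so it can be dry-run against scratch directories instead of the live system, and the printf lines just create placeholder files for the dry run. On the NAS you would use SBIN=/usr/sbin and SHARE=/ffp/share/cleanboot.

```shell
#!/bin/sh
# Rough sketch of the added 'start' logic, not the real attachment.
SBIN="${SBIN:-/tmp/demo_sbin}"
SHARE="${SHARE:-/tmp/demo_share}"

# scratch setup so the sketch can be dry-run anywhere (skip on the NAS)
mkdir -p "$SBIN" "$SHARE"
[ -f "$SHARE/raidstop" ] || printf '#!/bin/sh\nshutdown\n' > "$SHARE/raidstop"
[ -f "$SHARE/raidstop_new" ] || printf '#!/bin/sh\n# dlink original\n' > "$SHARE/raidstop_new"

rm -f "$SBIN/raidstop"                            # drop D-Link's original
cp -f "$SHARE/raidstop_new" "$SBIN/raidstop_new"  # the original, renamed
cp -f "$SHARE/raidstop" "$SBIN/raidstop"          # the shutdown trigger
chmod a+x "$SBIN/raidstop" "$SBIN/raidstop_new"   # ensure both are executable
```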
I've also done more testing last night to verify that these scripts are actually being called and run, and they are. I temporarily installed echo-to-file marker commands at various points in the chain, like in the raidstop script and in the /usr/sbin/cleanboot.sh script, then pressed the power button for 5 seconds and released. When I turned the unit back on, I opened my log file and verified that my markers were present, indicating the code ran through. Once it got far enough into the cleanboot.sh shutdown script, the markers stopped, since that script was busy shutting down pids, so at some point the echo markers died off. But after turning the unit back on, dmesg has reported no abnormalities with any of the file systems, and even the 3 USB partitions on an internal USB drive that I soldered onto the secondary Marvell USB host controller seem to be unmounting just fine. Also, all of my HDD storage and USB partitions are ext3. So far this thing is working well.
EDIT: To see the attachment, please go down to post #5 in this thread. I removed the attachment that was in this post since it has become outdated. Please still read this post for background, but then read my latest post, which contains the newest instructions for installing these scripts.
Last edited by ojosch (2010-02-15 12:36:10)
Offline
I get a "Can't open /etc/ffp.subr" error. When I looked at the source code, it's missing "/ffp":
#!/bin/sh
# PROVIDES: cleanboot
# BEFORE: LOGIN
. /etc/ffp.subr
so I'd change those lines to this:
#!/ffp/bin/sh
# PROVIDES: cleanboot
# BEFORE: LOGIN
. /ffp/etc/ffp.subr
Offline
Oh, I'm sorry about that. Here's my long-winded explanation of why I use that path: at the bottom of my fun_plug script, I have a copy command that moves that file to the ramdrive: cp -a -f /ffp/etc/ffp.subr /etc/ffp.subr

I am still working on moving everything I use so that it runs solely off of the slash-root ramdrive space. I want to be able to unmount all drives while the unit is fully up and running. I currently have an internal USB drive with three identical ext3 partitions. I can choose in the .bootstrap/setup.sh file which one to load ffp from, but in order to run e2fsck on the partition I'm currently running on, I have to go into setup.sh, point it at a different partition, reboot, and then run e2fsck on the first one. I keep 3 partitions so I have ample backups: if flash cells on one partition become corrupted, I can still run the system from another.

This unit is for my Mother, who lives 2 states away, so I am trying to build it to be very reliable, with fallbacks available in case problems come up that are not easily fixed from 2 states away. It's like sending a probe to the moon: it must have ample backup systems so that if and when a failure occurs, the NASA team can still bring a backup online and the mission isn't terminated. I will at least be able to log into this unit over a VPN router into the home network to service it remotely, but that doesn't help if I have a hardware failure.

So my next step is to make all the tools I use, anything ffp-related, load into the ramdrive space and execute from there, so as to minimize unnecessary writes to the USB stick; I believe this will help prolong the life of the drive (since I have no idea how reliable it is).
With the exception of a few binaries and misc scripts that run as dependencies, I have most everything running off of the ramdrive now, and when I check memory usage, it reports that I am still not touching swap and still have free RAM. So if I can figure out which remaining dependencies still run off the ffp stick and move just them too, I will have accomplished my goal. By the way, I still cannot dismount my ffp drive, so I know that some of the bins or scripts still depend on /ffp.

If I knew exactly which bins I needed to run my apps, I would thin the whole ffp drive down to bare bones and copy just those basic files over to the ramdrive and nothing more. Thinned down like that, I could replace the sym-linking in the fun_plug script with a command that copies the entire ffp directory to the ramdrive without it being too big. But right now it is way too large: if I copied the whole ffp folder, it would run into swap space because of its size. I only run NUT UPS tools, SSH, Cleanboot, some modified scripts and a few custom bins, and that's about it. I don't run anything too fancy; I just want to be able to SSH to it, and I need it to shut itself off when the UPS reports LOW BATTERY. So by the time I'm done, I want to do away with the sym-linking and just copy the entire ffp directory, once I get it small enough.
But if I can get it to do what I want (run ffp from ramdrive space), then I can easily make a simple script to stop the SMB/NMB services, dismount all partitions gracefully (without forced or lazy dismounts), and run an e2fsck scan on all partitions while the basic system is still running, all automatically. I can then create a crontab task to make it check itself once a week or so.
But anyhow, I apologize for not fixing that problem you described in the script that normal people run. You are correct: anyone running ffp in the conventional way must use the path you showed, which was /ffp/etc/ffp.subr
.
.
Last edited by ojosch (2010-02-06 09:40:43)
Offline
Hi ojosch,
Thanx for sharing your solution. I'm very interested in making the power button work in combination with fun_plug.
I tried your solution on a single-disk configuration on the Conceptronic CH3SNAS, but it does not seem to call 'raidstop' when pressing the button. Are you sure this solution works in a non-raid configuration?
I question this, because the /ffp/share/cleanboot/cleanboot.sh script shows the following on line 144:
# umount hard drives
if [ -e "/tmp/raidup" ]; then
/usr/sbin/raidstop_new md0 md1 f
else
/etc/hotplug/sataumount
fi
Shouldn't your solution do something similar for 'sataumount', just like you did for the 'raidstop' file?
Offline
##EDIT: OK, this thread is getting really messy now, which was not my intention, but nevertheless it has occurred. I sometimes wish I could just start over completely, but I guess life is full of patches and band-aids, so here it goes. Please read this whole thread to understand the full dialogue of events that brings me to this point.

I have only done full testing on the RAID 1 configuration, so this is the only mode of this mod that I can really support. I mistakenly made a presumption that led to a buggy script as a result of the previous post. I have corrected that bug in the cleanboot.sh script at the bottom of this post. I have gone back to my original mod, which only taps into the raidstop file and diverts the code to Cleanboot's shutdown command, which then follows the Cleanboot shutdown and runs D-Link's raidstop script afterwards. If you have RAID, this should work. If not, you may have to do some of your own testing to get this to work for you. Just read this whole thread and it should make sense at that point. Thanks for your understanding.
##PROCEDURE KNOWN TO WORK FOR ME:##
Make a script called raidstop with the text shown below (without the dashes above and below it) and place it into /ffp/share/cleanboot/:
------------------------------------------------------------------------------------------------------------------------------
#!/bin/sh
#Run shutdown command using power button through raidstop or sataumount command
echo "Please wait while the system prepares for safe shutdown..."
sleep 1
shutdown
-------------------------------------------------------------------------------------------------------------------------------
Now go to the /ffp/share/cleanboot/cleanboot.sh file, find the code snippet shown below around line 146, and change the raidstop path so it reads /usr/sbin/raidstop_new:
# umount hard drives
if [ -e "/tmp/raidup" ]; then
/usr/sbin/raidstop_new md0 md1 f
else
/etc/hotplug/sataumount
fi
Now run this command: cp /usr/sbin/raidstop /ffp/share/cleanboot/raidstop_new which will copy the original D-Link version of this file into the /ffp/share/cleanboot/ folder.
Then download the cleanboot.sh start script attached to this post (below) and place it in your /ffp/start/ folder, replacing the original cleanboot.sh start script. Run chmod a+x /ffp/start/cleanboot.sh so that it is executable when the /ffp/start/ scripts are run. If you don't want to reboot after all this, just run /ffp/start/cleanboot.sh start and it will apply all the changes needed to make this work on the next power-button shutdown.
For debugging, so you can see what gets called, I created my own path /ffp/var/log/ (if it does not exist already), and I added the following echo command into my modified raidstop script (anywhere in there), and into my /ffp/share/cleanboot/cleanboot.sh script (near the top, before all the process kills):
echo "The raidstop script has now been called... \"/mnt/HD_a2/ffp/var/log/messages2\"." 1>>/mnt/HD_a2/ffp/var/log/messages2
and to put in the /ffp/share/cleanboot/cleanboot.sh file:
echo "We now made it to the cleanboot shutdown script... \"/mnt/HD_a2/ffp/var/log/messages2\"." 1>>/mnt/HD_a2/ffp/var/log/messages2
Either of these commands will create a file called messages2 at that path (you can change the path to wherever you prefer) and write the echo statement to it. If messages2 already exists, the command simply opens it and appends the new statement without erasing any of the older statements printed there before.
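You can see the append behaviour for yourself with a throwaway file; this is the same 1>> redirection the marker commands above use:

```shell
#!/bin/sh
# Demonstrates that 1>> appends rather than overwrites (throwaway file).
LOG=/tmp/messages2_demo
rm -f "$LOG"
echo "first marker"  1>>"$LOG"
echo "second marker" 1>>"$LOG"
cat "$LOG"    # both lines are present; nothing was erased
```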
This is a simple way to know that these scripts were successfully called at shutdown. After reboot, simply open messages2 and verify that your echo statements exist; that means your code was run. Just a simple form of debugging. Once you know everything works right, you can comment out those echo commands or remove them from your scripts, and delete the messages2 file afterwards.
Hopefully this is all of what's needed to make this work for everybody now.
.
.
Last edited by ojosch (2010-02-15 12:32:23)
Offline
The first version of your script was working for 'shutdown', but now I am getting errors despite following your new instructions. I am seeing all kinds of errors on bootup:
**** fun_plug script for DNS-323 (2008-08-11 tp@fonz.de) ****
Fri Feb 12 19:05:20 GMT 2010
ln -snf /mnt/HD_a2/ffp /ffp
* Running /ffp/etc/fun_plug.init ...
* Running /ffp/etc/rc ...
rcorder: file `/ffp/start/cleanboot.sh' is before unknown provision `LOGIN '
rcorder: requirement `usb-storage.ko' in file `/ffp/start/usbmount.sh' has no providers.
rcorder: requirement `locations)' in file `/ffp/start/part_table.sh' has no providers.
rcorder: requirement `to' in file `/ffp/start/part_table.sh' has no providers.
rcorder: requirement `modification' in file `/ffp/start/part_table.sh' has no providers.
rcorder: requirement `(or' in file `/ffp/start/part_table.sh' has no providers.
rcorder: requirement `05.' in file `/ffp/start/part_table.sh' has no providers.
rcorder: requirement `fun_plug' in file `/ffp/start/part_table.sh' has no providers.
rcorder: requirement `fons' in file `/ffp/start/part_table.sh' has no providers.
* /ffp/start/usbmount.sh inactive
* /ffp/start/syslogd.sh inactive
* /ffp/start/SERVERS.sh inactive
* /ffp/start/portmap.sh inactive
* /ffp/start/unfsd.sh ...
Linking /etc/exports ...
Starting /ffp/sbin/rpc.portmap
Starting /ffp/sbin/unfsd -e /ffp/etc/exports
* /ffp/start/transmission.sh ...
Starting transmission-daemon
* /ffp/start/nfsd.sh inactive
* /ffp/start/ntpd.sh inactive
* /ffp/start/smbshares.sh ...
$Shutting down SMB services:
$Shutting down NMB services:
$Starting SMB services:
$Starting NMB services:
* /ffp/start/LOGIN.sh inactive
* /ffp/start/telnetd.sh inactive
* /ffp/start/sshd.sh ...
Starting /ffp/sbin/sshd
* /ffp/start/rsyncd.sh inactive
* /ffp/start/part_table.sh inactive
* /ffp/start/mediatomb.sh inactive
* /ffp/start/kickwebs.sh inactive
* /ffp/start/lighttpd.sh inactive
* /ffp/start/inetd.sh inactive
* /ffp/start/cleanboot.sh ...
/ffp/etc/rc: line 45: /ffp/start/cleanboot.sh: not found
* OK
Sorry, I am new to this; I know I should have kept a copy of the original before doing the edits... Is it maybe something simple I am missing? I did understand and follow the instructions in the previous post for the new revision of cleanboot.sh.
Last edited by caust1c (2010-02-13 05:29:29)
Offline
Not sure what went so wrong with the updated cleanboot.sh. I removed cleanboot-2.1-ffp05 and reinstalled it, and everything is working fine now. Shutdown works and FFP boots without errors in ffp.log.
Maybe I'll get brave again and try the new script, as I am using 2 independent discs/volumes on my DNS-323 and want to make sure they unmount properly.
Question: I haven't been brave enough to test the built-in UPS monitoring with fw 1.07. I don't understand the Cleanboot process well; is there any chance that if my UPS kicks in, the DNS will gracefully unmount FFP and shut down with cleanboot-2.1-ffp05 running?
Last edited by caust1c (2010-02-13 21:53:28)
Offline
If you want, you can use your own cleanboot.sh script; just study mine and add in the things that get deleted, copied and chmodded. Or back up your old cleanboot.sh and try mine again; if it fails, let me know and I'll check it over for problems. It works on mine, though.
Also, when you install the NUT UPS tools package, you can go into the upsmon.conf file and specify any shutdown command you want to run. I made mine run 'shutdown', which just runs the Cleanboot shutdown script anyway, and by default it shuts down on 'low batt' status, which is around 9% battery left. All those parameters are adjustable, though.
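For reference, the upsmon directive in question is SHUTDOWNCMD; the path in the comment is an assumption, since the exact location of upsmon.conf depends on how the NUT package was installed:

```
# in upsmon.conf (e.g. /ffp/etc/upsmon.conf; location varies by install)
SHUTDOWNCMD "shutdown"
```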
Last edited by ojosch (2010-02-14 08:26:47)
Offline
Hi ojosch,
I tried the solution with the sataumount / sataumount_new trick just as you described, but it didn't solve the problem. I'm still getting those "mounting unchecked fs" warnings after using the shutdown button. Cleanboot still works fine on reboot or shutdown from the command line.
Something interesting: I did some debugging using the 'echo' lines you described a few posts above (I prefixed them with the current date/time). The debug lines are in /usr/sbin/raidstop, /etc/hotplug/sataumount and /ffp/share/cleanboot/cleanboot.sh. It turns out that:
- both /usr/sbin/raidstop AND /etc/hotplug/sataumount are called on shutdown via the button. The sataumount trick seems unnecessary.
- cleanboot.sh is invoked twice...?
- Some echo debug lines I added just before the calls to '/usr/sbin/raidstop_new md0 md1 f' and '/etc/hotplug/sataumount_new' do NOT show up. Does this mean that umounting does not happen, or that the log file could not be updated?
The following lines were added to my messages2 log file after pressing the button:
Sun Feb 14 19:39:33 GMT 2010 The raidstop script has now been called... "/mnt/HD_a2/pkg/logs/messages2".
Sun Feb 14 19:39:34 GMT 2010 The sataumount script has now been called... "/mnt/HD_a2/pkg/logs/messages2".
Sun Feb 14 19:39:34 GMT 2010 We now made it to the cleanboot shutdown script... "/mnt/HD_a2/pkg/logs/messages2".
Sun Feb 14 19:39:35 GMT 2010 We now made it to the cleanboot shutdown script... "/mnt/HD_a2/pkg/logs/messages2".
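For anyone reproducing this, a date-prefixed marker like the ones above can be produced by expanding date inside the echo. The LOG variable is only there so the line can be tried anywhere; on the NAS the path was /mnt/HD_a2/pkg/logs/messages2:

```shell
#!/bin/sh
# Date-prefixed debug marker, as in the log excerpt above.
# LOG defaults to a throwaway path for trying this out.
LOG="${LOG:-/tmp/messages2_demo2}"
echo "$(date) The raidstop script has now been called... \"$LOG\"." 1>>"$LOG"
tail -n 1 "$LOG"
```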
I'm a little puzzled here. My current guess is that the shutdown command issued by the power button and the shutdown command issued by the raidstop (or sataumount) script interfere with each other in some way. My next step is to add additional debug lines in raidstop_new and cleanboot.sh to see what gets called in more detail.
Offline
Hmm, well now that I think about it, I never actually got to TEST mine with a power-button shutdown using the new scripts that invoke both sataumount and raidstop, since I'm in Mexico on vacation. I added in all the new changes and ran /ffp/start/cleanboot.sh remotely (that is the only access I have at the current time) without any errors from the startup script. I'm still here in Mexico, so I can't hit the power button to see which files get called. I don't want to shut it down and not be able to get back into it. Well, my brother is staying at my house, so I guess I could have him help me with it, but really, I just figured I'd mess with it when I get back home.

I had originally just been going on what you said before, that it wasn't calling the raidstop script, but now you are suggesting that it DOES call both. If that is the case, then I'm only going to recommend that people with RAID configured set up the scripts to work with raidstop, but not with sataumount at the same time. I've been wishing for the past few days that I were home while all of this developing and testing was taking place, so I could thoroughly test things on my own before posting anything more here about this topic. Oh well, I'll just have to post the new findings here as soon as I get it all ironed out.

Anyhow, I digress back to my original idea from the first post. I have only tested this using the raidstop file, since I know that it gets called on my box, so until I can test this issue for myself, that is the only method I can recommend for now. It's up to the individual DNS-323 user to figure out what works for his own setup. Sorry about this confusion, people; I just tried to make it work based on the info I had available to me while away from home.
I definitely DO know that it works using my procedure on only the raidstop file diverted through the cleanboot.sh and then back to raidstop_new for me.
And also, just so you know, rintje: the cleanboot.sh shutdown script kills many processes before it ever gets back to the original raidstop script from D-Link. I originally placed echo commands throughout that cleanboot script to figure out where they would die out, and they died about 1/3 of the way through cleanboot.sh, so it makes sense that you won't see any echo output after that point, once the kernel is running on bare bones. The important thing to note, though, is that the 'shutdown' command DID get called through raidstop, since cleanboot.sh ran its script. By the way, I received my serial cable kit the day before I left on my trip, and as soon as I get back I'm going to assemble it and attach it to the on-board serial connector, so I can have a closer look at things during shutdown and reboot, since echo commands can print to the serial console almost right to the end of the shutdown sequence.
So for now, just do away with my dual-mode script: figure out which file gets called when you hit YOUR power button, and add the shutdown call to that file. This should work for everyone. Then, if you change your RAID config and it breaks your power-button shutdown, you'll know to figure out and make the changes needed to get it working again.
And rintje, one more thing for you (or anyone): if you get the ext2 "running e2fsck is recommended" errors, make sure that, A: you successfully run e2fsck and clear those errors so you have a clean filesystem on ALL partitions, because they won't go away on their own; and B: if you still get the error, make sure it is actually valid for the partitions in question.

I converted all my partitions over to ext3 except for the / root partition; you can't convert that one, since the instructions that make it ext2 come off the flash image from D-Link. When my system boots, every partition on every drive except / root scans clean, but I always get a dmesg error from ext2 immediately after the root filesystem mounts, saying the system has reached the maximum number of mounts and recommending I run e2fsck on / root, which is impossible to do. There is a feature to stop counting mounts so it never recommends routine e2fsck runs, which I applied to all of the other partitions, but this is a D-Link bug: I cannot tell their firmware image to stop counting before it recommends the check. If I wipe the whole firmware and reformat everything, the error goes away, but after 20 boots or somewhere thereabouts, ext2 reports that error again after mounting the / filesystem. And like I said, any other dmesg message from the other partitions comes from ext3, since I've converted everything else to journaling ext3. You get the point: just make sure the error you see is really a valid error. By the way, I'm running the newest beta, 1.08, the current build as of this posting.
Please note that I am going to revert my cleanboot.sh download attachment and all other material in my last post back to the original instructions, and let non-RAID users do their own testing on safe power-down with the power button. This thread will be for RAID users, and for any volunteers who want to figure out and contribute non-RAID setup procedures for this task. I unfortunately do not have the time to test the non-RAID aspect of this project.
Oh, and by the way, rintje, thanks for what you have contributed so far. It has been very valuable info for the non-RAID group. Hopefully you have enough time to help piece together that end of things.
Last edited by ojosch (2010-02-15 19:15:20)
Offline
After some more testing, I finally found out why my hard drive refused to umount!
I run my fun_plug from a USB stick, using the solution described here. The default location specified by the script was, in my case, '/mnt/HD_a2/usbstorage'. This made a successful umount of my hard drive dependent on a successful umount of my USB stick. An unnecessary complication.
So I reconfigured the mount point of the USB stick to /mnt/USB and wow... it works! Umounting like a charm.
In the meantime, before writing this response, I switched to jcard's chkbutton2 solution for switching the unit off. In short, that solution features a nice front-panel LED sequence that lets you SEE how long you need to press the button to make the unit shut down.
To make the daemons/processes shut down gently when chkbutton2 executes, I additionally copied most of the code from the original '/ffp/share/cleanboot/cleanboot.sh' into the '/ffp/share/chkbutton2/cleanup.sh' script. (See the chkbutton2 thread for details about cleanup.sh.)
I prefer the chkbutton2 solution over the one described here, because it does not intervene halfway through the shutdown process via the raidstop script. That feels like a better approach to me.
It is likely that the solution in this thread works for my setup as well, now that I know the problem was caused by something else. I won't bother trying it, though.
Last edited by rintje (2010-03-04 23:06:57)
Offline