Hi all - I (re)applied firmware version 1.05 to my DNS-323 and a short while after I got a nice email from the device saying the left drive had failed.
I tested the unit with one drive, the other, and both drives in; they both spin up and the unit operates fine.
The DNS lacks any real status page, so it's hard to see that failure message - or, better still, its cause - anywhere other than the simple email I got.
All I have is the sync status of "degraded" - what does this mean exactly?!
2 drives are recognised on the status page, and RAID 1 is still configured...
Thanks in advance!
Offline
My understanding was that degraded likely means that one of the drives has failed. In your case, however, you noted that they both still spin up.
Perhaps one of the drives has failed in a way that still lets it spin up but prevents it from reading/writing. If you have RAID 1, I think you can safely remove one of the drives to test whether the other is still working, but you would need to check the forums for what others have done when a drive has failed, to be sure.
Since the E-mail told you which drive supposedly failed, you have some information upon which to test the drives.
Offline
It doesn't necessarily indicate a drive failure - consider degraded to mean that the information on the drives is no longer identical or synchronized.
Offline
If you have telnet, you can telnet in and then send these commands for the drive that is "degraded" (first make sure the left drive really is /dev/sdb2 - it should be if the unit booted properly):
/ # mdadm /dev/md0 -f /dev/sdb2 -r /dev/sdb2
mdadm: set /dev/sdb2 faulty in /dev/md0
mdadm: hot removed /dev/sdb2
/ # mdadm /dev/md0 -a /dev/sdb2
mdadm: hot added /dev/sdb2
/ #
What this does is set the drive to failed status (which it probably already is), remove it from the array, then add it back to the array. Re-sync should begin immediately. If you want, you can run e2fsck on it first to check the file system. You can also re-format it using mke2fs before adding it back.
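For instance, a minimal sketch of that optional check/re-format step, assuming the removed member really is /dev/sdb2 and carries the ext2/ext3 file system the stock firmware creates (adjust device names to your setup):

/ # e2fsck -f /dev/sdb2           # force a full file system check on the member while it is out of the array
/ # mke2fs /dev/sdb2              # or, instead of checking, wipe it and create a fresh file system
/ # mdadm /dev/md0 -a /dev/sdb2   # then add it back; the re-sync copies everything over from the good drive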
Offline
Thanks all for the responses.
BQ: I have PuTTY and tried to telnet: no connection - do I need to be running hacks etc.? I'm running a stock DNS-323..
Fordem - I think this is the case, as nothing has changed.. except applying the 1.05 that the UK support site had (as opposed to the US version I had before - don't ask!)
I think the update caused some de-synchronisation..
Offline
The update does not affect sync - according to D-Link, the F/W posted at each site is identical.
Yes, install ffp. You can remove it later. http://dns323.kood.org/howto:ffp
I use 0.5. It is easy to install: just drop 2 files in the root of your drive and reboot. Removal just involves erasing the files and rebooting.
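Once ffp is on and the telnet daemon is up (the howto above covers the login details), a quick sanity check of the array - assuming the data volume is /dev/md0, as in my example above - would be something like:

/ # cat /proc/mdstat           # shows md0, its members, and whether it is clean, degraded ([U_]) or re-syncing
/ # mdadm --detail /dev/md0    # per-member state (active / faulty / removed) plus the array's overall state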
Last edited by bq041 (2008-06-14 23:34:34)
Offline
Thanks for the responses guys.
BQ: it worked, thanks!
I didn't check first (I thought I could do it after, but can't). It took a while to re-sync (81 GB), but at the end it looks fine..
How can I run a check without removing the disk from the pairing and re-synching?
Any idea what causes (sudden) de-sync?
Anyway, issue solved (I hope) - thanks again to all.
DHD
Last edited by DarkHorseDre (2008-06-16 13:31:28)
Offline
The unfortunate thing is that there is no good way to check them without removing them from the pairing. As long as a drive is attached to the array, it will not check correctly. If it were me, I would remove it from the array (it is already in a failed status), check it with e2fsck, and then add it back to the array. But it is your choice. You could boot the unit with one drive, then insert the other one after it has booted and run the check program on it (it won't be part of the array at that point), but I'm not sure if it will re-attach to the array by itself on the next reboot. It degraded for a reason.
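Putting that into commands - just a sketch, assuming the suspect member is still /dev/sdb2 and the array is /dev/md0 (check cat /proc/mdstat first):

/ # mdadm /dev/md0 -f /dev/sdb2 -r /dev/sdb2   # mark the member failed and drop it out of the array
/ # e2fsck -f -y /dev/sdb2                     # full check on it while it is out, fixing anything it finds
/ # mdadm /dev/md0 -a /dev/sdb2                # put it back and let the re-sync run (watch /proc/mdstat)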
Offline
Well, I did add it back to the pairing before my previous post, so I was reluctant to go through that again.. but I unpaired and checked it today:
nothing seemingly critical: 2 inode errors and some bitmap errors that were all corrected.. I wonder if that could cause the unit to reject a drive... I'm thinking that it's a harsh outcome either way...
Offline
Once it is synced, remove and check the other drive.
Offline
DarkHorseDre wrote:
Well, I did add it back to the pairing before my previous post, so I was reluctant to go through that again.. but I unpaired and checked it today:
nothing seemingly critical: 2 inode errors and some bitmap errors that were all corrected.. I wonder if that could cause the unit to reject a drive... I'm thinking that it's a harsh outcome either way...
I don't see much point in checking it after it has successfully resynched - with other RAID devices I know it is possible to generate a degraded status simply because the data on the drives differs - something that would be corrected by resynching (and if it's not a data error, I would not expect the degraded condition to be resolved by resynching).
Offline
Ah yes, but by your (collective) admissions, we do not know 'for sure' what causes this spurious message and 'degraded' status... especially when I compare to traditional RAID technologies and their management/status interfaces.
I'd rather be sure about the condition of the drive than let some error (bad cluster?) persist..
Offline