Hey all -
One of my disk lights went out sometime in the last week or so, not really sure when. I have my NAS set up for RAID-1.
When I reboot my DNS-323 I can hear both disks spin up, but the light for the right disk will not turn on. I am running firmware 1.03. I set up the box with fun_plug, and when I telnet in, dmesg shows nothing unusual (disks okay, setting up raid device md0, etc.). The web interface says the array is okay as well. I never received an email warning, and my mail settings are correct (the test message reaches me). However, if I telnet in via the fun_plug telnet daemon (which logs you in as root on the firmware filesystem), /mnt contains only HD_a2 (no HD_b2). Am I misremembering the two disks having distinct mount points under RAID-1?
I hear that the DNS-323 is pretty bad about reporting disk failures. Is there a prescribed, safe way to test the disks using only the NAS, without risking my data, so I can be uber-sure that the dark light is just a dead LED? Sadly I have no computer with SATA ports available to me...
Thanks muchly for any help,
Reid
Pull the left-side drive - if your data is still available, you know the right-side drive is functional.
The unit should not prompt you to rebuild when you reinstall the removed disk, but if it does, as long as you don't write to the array it should not matter which disk it rebuilds from.
Disclaimer - it's been a looooong time since I tried this - so don't blame me for any data loss - or - back up the data just to be safe.
telnet to the box and type:
cat /proc/mdstat
This will show the status of the RAID array from the perspective of md, the software RAID subsystem.
You can post your results here.
This command on my box is posted here http://dns323.kood.org/forum/p6680-2007 … html#p6680
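For comparison, a healthy two-disk mirror normally looks something like this (an illustrative example, not output from your box - device names and block counts will differ):

Personalities : [linear] [raid0] [raid1]
md0 : active raid1 sdb2[1] sda2[0]
      486544512 blocks [2/2] [UU]

unused devices: <none>

The [UU] at the end of the md0 line is the part to look at - it means both halves of the mirror are up. [U_] or [_U] means one disk has dropped out of the array.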
Last edited by mig (2007-10-18 07:02:11)
The array appears to be okay, judging both by /proc/mdstat and, finally, by the 'yank' test. Thank goodness!
Now that it's getting dark here, I can see that the LED for the second drive is actually lit, but very very very (did I mention very?) dimly. I didn't know this kind of failure was possible in a diode. I'll have to get a replacement LED and get out my soldering iron (is D-Link actually good about their warranty?).
Also, a curious side question: I notice dmesg reports that the disks need to be fsck'd. If I run fsck -n /dev/md0 there is quite a lot of junk that needs cleaning up. Is there a documented way to run fsck at boot (while the RAID filesystem is still unmounted)? I realize it's reasonably safe to just shut down all the remote services (smb, etc.), unmount the array, run fsck, and leave it alone until it finishes, but I'd prefer the safest option for my data if there is one... Roughly what I have in mind is sketched below.
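Something like this is what I mean (untested, and I am guessing at the daemon names on the stock firmware - the mount point and the fsck commands are just what I already see on my box):

# stop anything that might still have files open on the array
killall smbd nmbd
# unmount the RAID volume (the mirror is mounted at /mnt/HD_a2 here)
umount /mnt/HD_a2
# read-only pass first to see the damage, then the real repair
fsck -n /dev/md0
fsck /dev/md0
# remount, or simply reboot afterwards
mount /dev/md0 /mnt/HD_a2

If umount complains that the volume is busy, something still has a file open on it, and I would rather reboot than force it.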
Thanks a billion,
Reid