This is the second time I have a white LED on the left drive of my DNS-323. I have two Western Digital 400GB drives in the device, and had this white LED on 1.03 and now on 1.04B84. The setup is a 320GB RAID1 volume and a 90GB JBOD. In the web management, the status page says "degraded". I had this problem once before, and after copying everything to another HD I did the 1.04 update, thereby reformatting the drives (I wanted to change the size of the RAID1 volume). After that, the white LED was gone for a month or so...
The funny thing is: it is still working! Both the RAID1 volume and the JBOD one are normally usable! I do regular copies from my desktop computer to the DNS-323, and my laptop syncs with it (offline files in Windows). The JBOD volume is used sparsely, just for exchanging some files between the computers, or to send something home by FTP if I'm not at home. But it would still be a nuisance to have to reformat the whole lot again, simply because it costs me a lot of time...
I have a rather simple fun_plug running:
#!/bin/sh
#
# Simple fun_plug
#
dmesg > /mnt/HD_a2/dmesg.out #write current info to a file
/sys/crfs/LPRng/lprm all #empty the printspooler directory
/mnt/HD_a2/starttelnet.sh #start telnet
Can someone help me with this issue? Can I start a manual sync of the RAID1? How? Maybe an fsck?
Offline
Hmm, I think I have the same problem - maybe we should "compare notes".
One HDD - always the right one in my case - is dropping out of the RAID1 (white LED).
I first had to add it back manually with the command:
mdadm /dev/md0 --add /dev/sda2
so I think it is safe for you to do it as well, but check your setup first! (in your case it may be: mdadm /dev/md0 --add /dev/sdb2)
At least you will then have two disks in the array (even if they resync on reboot).
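For anyone else trying this over telnet, this is roughly the sequence I use - the device names are from my own box, so double-check yours with fdisk -l and mdadm -D before running anything:
# see which member has dropped out of the mirror
cat /proc/mdstat                  # a degraded RAID1 shows [U_] instead of [UU]
mdadm -D /dev/md0                 # the missing partition is listed as removed/faulty
# re-add the data partition that fell out (sda2 here - may be sdb2 on your unit)
mdadm /dev/md0 --add /dev/sda2
# watch the resync progress
cat /proc/mdstat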
(I did use tune2fs utility to schedule automatic disk checks - but I doubt that this is the cause.....)
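(For completeness, scheduling the checks was nothing fancy - roughly the following, run against the md device that holds the ext2 volume; adjust the device and the counts to taste, and note that tune2fs has to be available on the box, e.g. via fun_plug:)
tune2fs -l /dev/md0               # show the current mount-count / check-interval settings
tune2fs -c 30 /dev/md0            # force a filesystem check every 30 mounts
tune2fs -i 1m /dev/md0            # and at least once a month regardless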
The partition setup looks fine to me (except for the comment "Partition table entries are not in disk order"):
It occurred to me to check the message log:
/mnt/HD_a2/wspolny # dmesg
software to use new ictls.
md: mdadm(pid 1802) used obsolete MD ioctl, upgrade your software to use new ictls.
md: mdadm(pid 1808) used obsolete MD ioctl, upgrade your software to use new ictls.
md: mdadm(pid 1836) used obsolete MD ioctl, upgrade your software to use new ictls.
..... lots of messages
there is more about it here: http://dns323.kood.org/forum/t1476-DNS- … -2007.html
but I am not sure if this is directly relevant to the problem
/mnt/HD_a2/wspolny # fdisk -l
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 1 66 530113+ 82 Linux swap
/dev/sda2 131 60702 486544590 83 Linux
/dev/sda4 67 130 514080 83 Linux
Partition table entries are not in disk order
Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 66 530113+ 82 Linux swap
/dev/sdb2 131 60702 486544590 83 Linux
/dev/sdb4 67 130 514080 83 Linux
Partition table entries are not in disk order
But now, after EVERY reboot, the DNS is auto-forcing a full RAID resync:
/mnt/HD_a2/wspolny # mdadm -D /dev/md0
/dev/md0:
Version : 00.90.01
Creation Time : Sat Feb 9 13:53:03 2008
Raid Level : raid1
Array Size : 486544512 (464.01 GiB 498.22 GB)
Device Size : 486544512 (464.01 GiB 498.22 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Wed Feb 13 08:50:44 2008
State : active, resyncing
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Rebuild Status : 68% complete
UUID : 6d1cd9f1:f568efed:4a8904db:e3086e15
Events : 0.115361
Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 8 18 1 active sync /dev/sdb2
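(If others want to check the same thing after a reboot, this is roughly what I look at from the telnet shell - nothing here changes anything, it only reads status:)
cat /proc/mdstat                   # [UU] plus a progress bar while the resync is running
mdadm -D /dev/md0 | grep -i state  # "clean" / "active" vs. "active, resyncing"
dmesg | grep md                    # kernel messages about the array being started or resynced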
Last edited by ahors (2008-02-13 13:25:10)
Offline
Ardjan wrote:
This is the second time I have a white LED on the left drive of my DNS-323. [...] In the web management, the status page says "degraded". [...] Can someone help me with this issue? Can I start a manual sync of the RAID1? How? Maybe an fsck?
Officially there is no "white" LED - it's either blue for a good drive, or amber for a failed drive. Now, in my experience the amber is not really amber, it's a strange color, but I wouldn't have called it white either.
BUT - if the LED has changed from blue to some other color AND you have a degraded status on the admin status page, I would assume you have some sort of drive problem.
IF it's the same drive (same side LED, assuming you haven't physically swapped the drives), I would assume a flaky drive and consider a replacement.
Offline
Wow fordem, mentioning possible faulty disks in a mirror without suggesting they make sure they have a complete backup? Are you feeling ok??
Offline
HaydnH wrote:
Wow fordem, mentioning possible faulty disks in a mirror without suggesting they make sure they have a complete backup? Are you feeling ok??
As a matter of fact - no - but I didn't know you cared
Offline
For the record - I doubt that the hard drive is faulty (both drives are new) - and I did run a full fsck on it: no bad sectors.
As far as the white color goes --- hmmm, not blue for sure (the other drive is blue) - maybe a very, very light shade of amber (-;
I did have a problem initially when creating the array - the formatting stopped at 94% and stayed there for a long time - so I rebooted the DNS, reformatted it as two single drives, and then back to RAID1.
Also, the RAID1 starts to operate correctly once the synchronization is complete (at least according to mdadm), albeit the white/amber LED stays on.
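(In case anyone wants to repeat the check: I did it roughly like this over telnet, with the shares idle - the device and mount point are from a RAID1 setup on 1.04, so verify yours first, and the -c bad-block scan takes a long time on a big drive:)
umount /mnt/HD_a2                 # the volume must be unmounted for the check
e2fsck -f -c /dev/md0             # -f forces a full check, -c also scans for bad blocks
mount /dev/md0 /mnt/HD_a2         # remount when it is done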
Offline
ahors
It's your drive, it's your money, it's your data, it's your time - it's your decision.
For what it's worth, I've seen new drives fail, so, from my point of view, the fact that the drives are new has nothing to do with them being faulty - but - for some reason, your DNS-323 is sensing a problem with one particular drive, and I'd consider that cause for concern.
The problem could of course be the DNS-323 itself - if you want, you could consider swapping the drives around (with a RAID1 config that should not cause a problem, but I'd back up the data just to be safe), and if the white/amber LED follows the drive, then you KNOW it's the drive.
Offline
To me it really seems white. Maybe an off-white, but I never thought of calling it 'amber': http://666kb.com/i/aw3akk7rexufpazfp.jpg (sorry for the bad quality, but with flash it doesn't work... :-)
About backups: yes, I have the originals of all the data on the drive on my desktop, and that one also has a RAID1... :-)
The funny thing is: it all works! If I hadn't noticed the white LED, I would never have known that something is wrong. The RAID1 volume works, and so does the JBOD volume. No degradation in speed noticed, no other startup sounds than I am used to.
I am currently moving the contents of the JBOD drive to the RAID1 volume, but after that I will pull the drive out and test it externally. After that, I will check the rebuild function...
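(For the external test I would hook it up to a Linux PC and run something like the following - the drive showing up as /dev/sdb is just an assumption, and smartctl needs the smartmontools package:)
smartctl -a /dev/sdb              # SMART health, reallocated and pending sector counts
smartctl -t long /dev/sdb         # start the drive's own long self-test (check the result later with -a)
badblocks -sv /dev/sdb            # read-only surface scan of the whole disk (slow)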
Offline
One suggestion - when you pull the drive, be sure to delete ALL the partitions on it before re-installing it.
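(If you do that from a Linux PC rather than through the web interface, something along these lines wipes both the old RAID metadata and the partition table so the DNS-323 sees a blank drive - the /dev/sdb name is an assumption, triple-check it, because this is destructive:)
mdadm --zero-superblock /dev/sdb2            # clear the old RAID superblock from the data partition
dd if=/dev/zero of=/dev/sdb bs=512 count=1   # zero the MBR, which removes the partition table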
Offline
orbitaudio wrote:
Ardjan wrote:
To me it really seems white. Maybe an off-white
Interesting, mine are orange... I was wondering what all you guys were talking about. I'll have to take a photo of mine
Mine is orange too ... very orange.
Jaya
Offline
Doesn't orange mean amber? And doesn't amber mean the drive is failing / about to fail / has failed?
Offline
mealto wrote:
Doesn't orange mean amber? And amber means the drive is failing / about to fail / failed?
Personally I'd call amber a light orange. Anyway, yes, amber/orange means the drive has failed, but I just turned mine on with fonz's DNS323-utils for the purpose of the photo.
Last edited by orbitaudio (2008-02-15 22:31:12)
Offline
I was able to make a whitish colour on one of the HD LEDs by doing some experiments. It is a two-colour LED, and when both the blue and the amber parts are lit, you get a whitish colour.
In my case, this is how I made that happen. I pulled out one HD while the unit was running, to simulate an HD failure. The LED for that HD turned off immediately, but the status page said I had two disks and the sync was completed. Then a couple of minutes later the LED lit amber, although the status page still said the sync was completed.
Pushing the HD back in, the LED lit both blue and amber at the same time and showed a whitish colour. Power cycling let the status page realize the RAID1 was degraded, and the LED colour changed to blue, but rebuilding did not start. A couple of minutes later the LED went back to white, and still no array rebuilding.
Power off, pull out the bad (?) HD, power on, wait for a full reboot, power off, put the HD back, power on. Now I can see the 323 is rebuilding the array. It seems the white LED occurs when it knows something is not right (amber) but also knows there is a healthy HD (blue) to play with, and either is not sure what to do or does not want to rebuild the array, whatever the reason.
I tried this with both sides, and the result was the same. The test was done with only a 99GB RAID1 array out of two 1TB HDs. (Strangely, I cannot create a RAID1 array bigger than 99GB, while I can create a 2TB RAID0 or two 1TB individual disks no problem. But that issue belongs in another thread.) I am glad there was no data loss and the array rebuilt properly.
HW: 1A, FW:1.04, HD: WD10EACS x 2, No mod, yet.
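(By the way, the same failure/rebuild cycle can be simulated in software from the telnet shell instead of physically pulling a drive - just a sketch, and the member partition sdb2 is an assumption, check your own layout with mdadm -D first:)
mdadm /dev/md0 --fail /dev/sdb2    # mark one member as failed
mdadm /dev/md0 --remove /dev/sdb2  # take it out of the array
mdadm /dev/md0 --add /dev/sdb2     # add it back and let the rebuild run
cat /proc/mdstat                   # watch the rebuild progress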
Offline
ahors wrote:
For the record - I doubt that the hard drive is faulty (both drives are new) - and I did run a full fsck on it: no bad sectors. [...]
If you want to be sure there aren't problems with the drive, run SpinRite 6 on it. It is the only program - and I mean the only one - that can do what it does; nothing compares. Period.
Offline
What do you do when you are colour-blind?
Personally, I have one LED which is less blue than the other one. For a month it was off, then I updated the firmware to the new 1.04 and now it is on. It just shines less brightly (about half as much as the other one) and my hard drives are both fine. I don't know what is happening.
Seeing your posts about your "amber LED", I am quite happy with my problem.
Good luck
Last edited by bodbod (2008-02-19 01:45:46)
Offline
I too have this amber light problem, and coincidentally it is also with WD 400GB disks. But when it is reading or writing, the blue LED blinks on and off (normal hard disk activity, I think) while the amber light stays solid.
Last edited by Cazaril (2008-02-22 16:28:09)
Offline