Hi Philipcs,
philipcs wrote:
The RAID volume is synchronizing now. Please wait for 1766.3 minute(s).
The RAID volume is synchronizing now. Please wait for 6476.3 minute(s).
Increased again
Eventually the minutes remaining should go down until the RAID is active again. Let us know if/when that happens and the status of the amber light. You may need to reboot after sync is complete to get rid of the amber light.
I have attached dmesg.out from after the light turned amber.
I had a look; here are the relevant lines:
md: md0 stopped.
md: bind<sdb2>
md: bind<sda2>
md: kicking non-fresh sdb2 from array!
md: unbind<sdb2>
md: export_rdev(sdb2)
raid1: raid set md0 active with 1 out of 2 mirrors
md: md1 stopped.
md: bind<sdb3>
md: bind<sda3>
In your case, I expect that you will be able to get back to normal operation after sync is completed. Do keep us informed as to what really happens.
Jaya
I have let the sync process run overnight, but I still see the amber light. However, I have since shut down the NAS. I will reboot again when I am back home.
I have tried one thing. I have 2 volumes:
Volume 1 = RAID1
Volume 2 = JBOD
When I unplugged the HDD with the amber light, my Volume 2 was gone and I couldn't access its data. But when I put the HDD back, I could access my data again.
Doesn't that mean my HDD is still healthy, even with the amber light?
Hi Philipcs,
philipcs wrote:
I have tried one thing. I have 2 volumes:
Volume 1 = RAID1
Volume 2 = JBOD
When I unplugged the HDD with the amber light, my Volume 2 was gone and I couldn't access its data. But when I put the HDD back, I could access my data again.
Doesn't that mean my HDD is still healthy, even with the amber light?
Volume_1 being RAID1 will continue to work when one member disk is faulty.
Volume_2 being JBOD requires both disks to be online for it to work.
This is just how RAID1 and JBOD are designed to work.
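The distinction can be sketched in a few lines of shell. This is a toy model only (the member and volume names are illustrative, not read from the device): a RAID1 volume needs any one member online, while a JBOD concatenation needs all of them.

```shell
#!/bin/sh
# Toy model: why Volume_1 (RAID1) survives a pulled disk but Volume_2
# (JBOD/concatenation) does not. "sdb" plays the amber-light drive
# that was unplugged, leaving only "sda" online.
online="sda"

raid1_members="sda sdb"     # RAID1: any one surviving member is enough
jbod_members="sda sdb"      # JBOD: every member is required

raid1_ok=no
for m in $raid1_members; do
    [ "$m" = "$online" ] && raid1_ok=yes
done

jbod_ok=yes
for m in $jbod_members; do
    [ "$m" = "$online" ] || jbod_ok=no
done

echo "Volume_1 (RAID1) usable: $raid1_ok"
echo "Volume_2 (JBOD)  usable: $jbod_ok"
```

With the disk put back (`online` covering both members), both volumes come up again, which matches what Philipcs observed.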
Jaya
PS: I gather this is how you ended up with the orange light. In this case, I am sure that it will all get back to normal when you reboot your DNS-323.
Hello!
I'm not very experienced with mdtools, and I'm not a very experienced DNS-323 user either.
I've only used the DNS-323 for a very short time, and I've had no problems with my RAID-1 partition
(RAID-1 + JBOD).
But I have one idea: after upgrading to version 1.04, two new options appeared
in the web interface under the ADVANCED / NETWORK ACCESS menu: OPLOCKS and MAP ARCHIVE.
If nobody has tried this yet, maybe change the state of the OPLOCKS checkbox for the RAID-1 partition
and observe whether the "amber LED" problem appears again.
Of course this is only a theory, I know. Just try it; maybe it solves the "amber LED" problem?
Sorry for my English,
Have a nice weekend
/Slash
Last edited by slash (2008-03-15 01:10:06)
Hello Slash,
slash wrote:
After upgrading to version 1.04, two new options appeared
in the web interface under the ADVANCED / NETWORK ACCESS menu: OPLOCKS and MAP ARCHIVE.
If nobody has tried this yet, maybe change the state of the OPLOCKS checkbox for the RAID-1 partition
and observe whether the "amber LED" problem appears again.
In my case (with three DNS-323s) both OPLOCKS and MAP ARCHIVE are enabled. Whenever the amber light developed on these boxes, I have always traced it to either an improper shutdown, or hot removal followed by hot insertion.
I was able to recover from these by running mdadm /dev/md0 -a /dev/sdx2 through the CLI, where x is a or b for the right-hand or left-hand disk, respectively.
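The recovery described above can be sketched as follows. This is a sketch only: /dev/md0 and /dev/sdb2 are taken from the dmesg output earlier in the thread ("kicking non-fresh sdb2"), and the script defaults to a dry run that merely prints the commands, so nothing is touched until you set DRY_RUN=0 on the actual box with the device names verified.

```shell
#!/bin/sh
# Hedged sketch: re-add a dropped RAID1 mirror half on the DNS-323.
# Assumes the degraded array is /dev/md0 and the kicked partition is
# /dev/sdb2 -- confirm against your own dmesg before changing DRY_RUN.
DRY_RUN=1   # left at 1 so the sketch is safe to run anywhere

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run cat /proc/mdstat                 # confirm md0 shows [U_] (one mirror missing)
run mdadm /dev/md0 -a /dev/sdb2      # re-add the partition; resync starts
run cat /proc/mdstat                 # status should now show a recovery progress bar
```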
Hope this helps.
Jaya
Just back from my home town.
Turned on my NAS - no more amber light, hooray!!!
However, the status still shows Sync Time Remaining: 250.2 minute(s).
Is this normal?
slash wrote:
Hello!
I'm not very experienced with mdtools, and I'm not a very experienced DNS-323 user either.
I've only used the DNS-323 for a very short time, and I've had no problems with my RAID-1 partition
(RAID-1 + JBOD).
But I have one idea: after upgrading to version 1.04, two new options appeared
in the web interface under the ADVANCED / NETWORK ACCESS menu: OPLOCKS and MAP ARCHIVE.
If nobody has tried this yet, maybe change the state of the OPLOCKS checkbox for the RAID-1 partition
and observe whether the "amber LED" problem appears again.
Of course this is only a theory, I know. Just try it; maybe it solves the "amber LED" problem?
Sorry for my English,
Have a nice weekend
/Slash
I don't think the oplocks and/or map archive checkboxes have anything to do with it - I've seen the white/amber light once, and that was caused by my pulling and then reinstalling a drive in a JBOD configuration (no RAID) whilst working on some "data loss scenarios" - please note - I did not hot plug anything.
Apart from that my unit, currently in a RAID1 configuration with a pair of Seagate 7200.9 Barracudas, does a pretty good job, sitting in the corner, sharing files, and the occasional print job - I have fooled around with both the oplocks and the map archive (which still doesn't work) but no amber lights resulted.
Has anyone considered compiling a list of what drives were in the units that had amber lights? Is there any similarity?
philipcs wrote:
Just back from my home town.
Turned on my NAS - no more amber light, hooray!!!
However, the status still shows Sync Time Remaining: 250.2 minute(s).
Is this normal?
Try unplugging the NAS from the network to avoid any data being written to the HDD. I suspect this is why the timer keeps moving up: new data is constantly being written.
Let the sync complete without any interruption and then check back to see if the drives are running fine again.
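One way to watch that sync without the web GUI is to read /proc/mdstat, which reports the kernel's own progress and ETA. A minimal sketch, run here against a sample status block (the numbers are illustrative, not from this NAS); on the box itself you would point MDSTAT at the real /proc/mdstat.

```shell
#!/bin/sh
# Hedged sketch: extract resync progress and ETA from /proc/mdstat.
# The here-doc is a stand-in sample; override with MDSTAT=/proc/mdstat
# on the actual device.
sample=$(mktemp)
cat > "$sample" <<'EOF'
md0 : active raid1 sda2[0] sdb2[1]
      485151808 blocks [2/2] [UU]
      [==>..................]  resync = 12.5% (60643976/485151808) finish=250.2min speed=28260K/sec
EOF
MDSTAT="${MDSTAT:-$sample}"

# Pull the percentage and the kernel's own ETA out of the status line.
progress=$(grep -o 'resync = [0-9.]*%' "$MDSTAT")
eta=$(grep -o 'finish=[0-9.]*min' "$MDSTAT")
echo "progress: $progress   eta: $eta"
rm -f "$sample"
```

The "finish=" field is the same figure the web GUI renders as "Sync Time Remaining", which is why it climbs when writes keep landing on the array during the resync.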
Thanks Adubin, I will try tonight
Sorry if this took so long: I got a new job and had less time the last two weeks. I also wanted to try a downgrade to 1.03, so I had to do a complete backup of the NAS.
jayas wrote:
Hi Ardjan,
Sorry if I am asking you to repeat what you have already done, but your situation is baffling and I like to have a go at nailing it down. Starting afresh, here is what I like you to do and report if you can:
1/ TELNET in and add the drive back to the RAID thus:
mdadm /dev/md0 -a /dev/sdb2
And during this sync the white LED came on again: 'degraded'... :-(
But something is really baffling: the last few days I saw a warning on the logon screen that I should reformat the drive as ext2. Well, today I finally did that, and somehow it rebuilt the RAID1, keeping all the data!
At the moment I have a DNS-323 without white/amber LED, and both of the volumes working. I put a little strain on it today (copying some GB's around during a virusscan and so on), but nothing changed. It looks all healthy again... Current dmesg output as attachment, relevant part here:
ext3: No journal on filesystem on sda2
VFS: Can't find ext3 filesystem on dev sda3.
VFS: Can't find an ext2 filesystem on dev sda3.
VFS: Can't find ext3 filesystem on dev sda3.
VFS: Can't find an ext2 filesystem on dev sda3.
ext3: No journal on filesystem on sdb2
EXT2-fs warning: mounting unchecked fs, running e2fsck is recommended
ext3: No journal on filesystem on sdb3
EXT2-fs warning: mounting unchecked fs, running e2fsck is recommended
ext3: No journal on filesystem on sda4
EXT2-fs warning: mounting unchecked fs, running e2fsck is recommended
ext3: No journal on filesystem on sdb4
EXT2-fs warning: mounting unchecked fs, running e2fsck is recommended
md: md0 stopped.
md: bind<sdb2>
md: md0: raid array is not clean -- starting background reconstruction
raid1: raid set md0 active with 1 out of 1 mirrors
md: md1 stopped.
md: bind<sda3>
md: bind<sdb3>
EXT2-fs warning: mounting unchecked fs, running e2fsck is recommended
EXT2-fs warning: mounting unchecked fs, running e2fsck is recommended
ext3: No journal on filesystem on sda4
EXT2-fs warning: mounting unchecked fs, running e2fsck is recommended
ext3: No journal on filesystem on sdb4
EXT2-fs warning: mounting unchecked fs, running e2fsck is recommended
Link Layer Topology Discovery Protocol, version 1.05.1223.2005
dev is <NULL>
Position 1 - Firmware 1.03 and below, both LEDs Blue. All okay.
Position 2 - Since 1.04 with no formatting, left LED is off, all functionality good, Web GUI reports all okay with the RAID1 volume.
Position 3 - happened yesterday - left LED off, and right LED orange; even the On/Off switch is orange. All functionality good, Web GUI okay too.
If I reboot the device it goes back to Position 2; by next morning I am back at Position 3.
I am going to fordem the data [i.e. back up!], swap the disks from left to right, and reformat. If all is not well, then I will at least have discovered whether the disk with the orange light is faulty or whether it's the drive bay with the issue. I will revert to firmware 1.03 if necessary.
This is the first trouble I have had with this box, and I am reluctant to think it's the disks.
What if I end up putting the dodgy disk on the side where the LED does not light? Then if things do go awry I will have to rely on the email alerts (which I haven't set up...)
Last edited by index monkey (2008-04-10 12:37:03)
I've encountered similar problems to those described in this thread, i.e.:
- email alert that a drive has failed
- one of the drive lights was pink/amber
- web interface says sync degraded
- data on the drives remains accessible
I told D-Link Customer Service; they asked me to bring the NAS in, tested it, found no problem with the drives, and replaced the NAS!
Bits wrote:
I told D-Link Customer Service; they asked me to bring the NAS in, tested it, found no problem with the drives, and replaced the NAS!
Where did you go to talk to D-Link Customer Service in person?
I have the same problem.
- I got two 500 GB WD Caviar SE16 HDDs
- Installed 1.04 without reformatting
- Noticed that when I log onto the web interface it wants me to reformat -> which I skip*
- Got the "amber" light (more like pink in my case :-) on the left drive
- All data remains accessible
- Status: "Sync Time Remaining: Degraded"
- - -
I've tried to follow and understand this thread but my UNIX/telnetting abilities are limited. Can I solve this problem without funplugging and stuff like that?
- - -
* I fear having to back my 200 GB up - that's why I got the NAS in the first place: to use it as storage I didn't need to back up :-)
Last edited by bagmanden (2008-04-14 12:11:01)
Oh, I now took the time to actually read the 'reformat message'.
It says:
"Click Next to begin formatting the replacement drive. Re-synch will take place after the restart"
So it's not a complete reformat like I thought.
I guess I could do that without the data on the working drive being erased, and then it can start syncing again?
Or would you still advise me to back up? (asking the question, I can somehow predict the answer :-)
Last edited by bagmanden (2008-04-14 12:20:24)
I'm half the world away in S'pore... I emailed their support first and got:
"Thank You for the email.
The light indicated pink or amber; such a situation cannot be duplicated at our side.
We are suspecting that the issue could be faulty HDD or end-user did not perform the format of HDD correctly.
May we know what is the Model of the HDD and what is the RAID you did on the DNS-323."
After I told them that I have 2 Seagate ST3750330AS 750 GB drives running in RAID1, which I reformatted after upgrading the firmware to 1.04, and that both are less than 1 month old, they told me
"It is best that you bring in to the Main Service centre so that the engineer here can try to duplicate your case."
which I did, and there they replaced my DNS-323. Now it's running nicely again with blue lights a-twinkling. But I did have to reformat the drives as they became un-RAIDed in the new DNS-323.
Last edited by Bits (2008-04-14 13:06:00)
bagmanden wrote:
Oh, I now took the time to actually read the 'reformat message'.
It says:
"Click Next to begin formatting the replacement drive. Re-synch will take place after the restart"
So it's not a complete reformat like I thought.
I guess I could do that without the data on the working drive being erased, and then it can start syncing again?
Or would you still advise me to back up? (asking the question, I can somehow predict the answer :-)
Theoretically, it's quite safe to let it format the replacement drive and resync - practically, it seems to be a different thing.
Personally I have not lost data in RAID failure & rebuild simulations; however, other forum members have reported problems, and it seems that the common factor has been replacement disks that contained partitions and/or data.
One possible reason why I may never have experienced the problem (with the DNS-323) is that I learned many years ago (mid-90s), in a Novell NetWare lab simulation, that a resync could go in either direction if the system was unable to determine which disk had valid data, and the easiest way to ensure a good resync was to eliminate the source of confusion.
I would suggest backing up (just to be on the safe side) and making sure that the replacement disk has no partitions and/or data.
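A hedged sketch of that last step: blanking a replacement disk so no stale partition table or data can confuse the resync. It runs against a scratch temp file here; on real hardware DISK would be the replacement drive (e.g. /dev/sdb, an assumption you must verify yourself), and zeroing it is destructive.

```shell
#!/bin/sh
# Hedged sketch: zero the first 1 MiB of a replacement disk (MBR plus old
# partition table) before letting the NAS resync onto it. Demonstrated on
# a throwaway file; on hardware, set DISK to the verified device node.
DISK=$(mktemp)                        # stand-in for the replacement disk
printf 'OLDPARTITIONDATA' > "$DISK"   # pretend there is stale data on it

dd if=/dev/zero of="$DISK" bs=1024 count=1024 2>/dev/null

# Verify nothing non-zero survives in the wiped region.
if od -An -v -tx1 "$DISK" | grep -qv '^[ 0]*$'; then
    wiped=no
else
    wiped=yes
fi
echo "wipe ok: $wiped"
rm -f "$DISK"
```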
my 2c.
After upgrading to 1.04 my 2nd drive went JBOD and the print server stopped working.
jiggysmb wrote:
my 2c.
After upgrading to 1.04 my 2nd drive went JBOD and the print server stopped working.
The print server issue is well documented - but the second drive went JBOD ?!?
What disk configuration did you have before the upgrade and what is your definition of JBOD?
I know some people think of JBOD as each disk being handled separately, but on this device JBOD is specifically a single concatenated volume spanning the two disks.
fordem wrote:
I would suggest backing up (just to be on the safe side) and making sure that the replacement disk had no partitions &/or data.
I'll back up before doing the format; I was just hoping I didn't have to copy 200 GB over USB/ethernet...
But the thing about making sure the replacement disk doesn't have data/partitions is that A) I didn't replace the drive - that's just what the DNS thinks happened after the firmware upgrade (which makes me nervous that something like that could happen again), and B) how do I erase the drive without having access to a motherboard/enclosure to hook the drive into (I'm on a Mac laptop)?
Last edited by bagmanden (2008-04-14 18:42:14)
I can't really comment on the DNS-323, so to speak, losing track of the drive after (or as a result of) the firmware upgrade, I haven't had it happen personally, perhaps because I wasn't running RAID when I did my upgrade - unfortunately, firmware upgrades can be like that, and it may or may not happen again.
As to your second question - several companies make relatively inexpensive USB to SATA/PATA disk adapters; the last one I bought probably cost me $19.95. There's no enclosure, just a power adapter and a little box with a USB cable on one side and three disk connectors on the others - you could try one of those. I bought mine for use as a troubleshooting tool.
I backed up and did the reformat.
The first time around, the box rebooted and went into amber/pink from the start, asking me again to reformat, which I did.
This time everything went smooth and the box is now back to normal.
The only difference between the first reformat and the second is that my laptop had mounted the drive the first time and not the second.
- - -
I'm still wondering why the firmware upgrade rendered my second drive unformatted. Could it be some handshake/initialization during booting that became corrupted because only the primary HDD was told it was now running FW 1.04? Like in this layman-style dialogue:
Booting before the FW upgrade:
HD1: "Hi, I'm HD1 on a DNS-323-FW1.03"
HD2: "Hi, I'm HD2 on a DNS-323-FW1.03"
HD1: "I know you! You're exactly like me, you must be my RAID buddy! Let's sync up"
After the upgrade:
HD1: "Hi, I'm HD1 on a DNS-323-FW1.04"
HD2: "Hi, I'm HD2 on a DNS-323-FW1.03"
HD1: "I have no idea what a DNS-323-FW1.03 is, but I know it's nothing like me, so I'll mark you for termination!"
HD2: "Gulp..."
HD1: "And by termination, I mean a reformat..."
HD2: "Gulp..."
Hi
I've also got the pink/amber light issue with one of my drives (the left one when looking at the front of my DNS-323).
However, my two drives and the DNS-323 were still under warranty, so I went back to the store; they tested my 2 SATA drives and gave me back a new DNS-323. Still the same issue.
Moreover, after some testing by the technicians, the 2 drives were declared healthy (no issue).
On my side, with some testing, the same thing: the two drives seem OK.
In normal mode (not RAID 1 or 0, nor JBOD) I never had the issue of one drive failing. Weird.
I'm also on firmware 1.04.
Formatted into RAID 1 after applying firmware 1.04 (on both DNS-323s).
My two drives are 500 GB Seagate ST3500320AS.
Hope to find the problem.
I'm just guessing here - but - this guess is based on fifteen odd years of deploying RAID arrays, mostly hardware RAID (Adaptec & AMI/LSI Logic controllers), a combination of RAID1 & RAID5, mostly Windows environments, including clusters, but also Novell NetWare and SCO Unix.
A failed drive is not the only thing that will cause a degraded-array indication - especially in a RAID1 environment. If the system detects that the data on the disks is not consistent, it will typically trigger some sort of error indicator. I've watched the IBM xSeries server I have here detect the drives as out of sync on power-up and automatically resync them (this is not a frequent occurrence, but more a result of too many short power outages depleting the UPS batteries to the point where they can no longer support the load for the length of time it takes to do an orderly shutdown).
Do we possibly have a scenario here where the DNS-323 is detecting the data as not being consistent and flagging it as such?
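That consistency check is visible in md's own metadata: each member partition carries an "Events" counter in its superblock, and a member whose counter lags its partner is exactly what the kernel logs as "non-fresh" and kicks at assembly time. A sketch with sample counter values (illustrative only; on the NAS you would read the real values with `mdadm --examine /dev/sda2 /dev/sdb2`):

```shell
#!/bin/sh
# Hedged sketch: compare the md superblock "Events" counters of both RAID1
# members. The printf lines stand in for real `mdadm --examine` output;
# the 1042/977 values are made up for illustration.
sda2_events=$(printf 'Events : 1042\n' | awk '/Events/ {print $3}')
sdb2_events=$(printf 'Events : 977\n'  | awk '/Events/ {print $3}')

if [ "$sda2_events" = "$sdb2_events" ]; then
    echo "mirrors consistent"
else
    echo "event counts differ ($sda2_events vs $sdb2_events): expect a degraded array"
fi
```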
I'm more skeptical now about the desynchronized-data theory.
I've made the following test:
Format raid 1
Do nothing (both disk empty)
close computer
Switch off the dns-323
Wait (more than 8 hours)
open computer
Switch on the dns-323
Problem appear anyway
What I've also noticed is that once the formatting is done, the drives (RAID 1) are usable normally. Shutting the DNS-323 down for a few moments (2-3 minutes) and starting it back up does not trigger the issue, regardless of whether there is data on it. I tried it a few times; it didn't trigger the issue. A long delay, however, does trigger the issue.
I do believe I waited long enough (2-3 minutes) between each successful attempt to suppose that both drives in the array had stopped spinning (to discard the hypothesis that the drives were still spinning when I switched it back on).