Next step should be correcting the raidtab files.
Current condition:
/ # cat /sys/mtd1/raidtab
raiddev /dev/md0
    raid-level    raid1
    nr-raid-disks    2
    chunk-size    64
    persistent-superblock    1
    device    /dev/sda2
    raid-disk    0
    device    /dev/sdb2
    raid-disk    1
raiddev null
    raid-level    null
    nr-raid-disks    0
    chunk-size    64
    persistent-superblock    1
    device    null
    raid-disk    null
    device    null
    raid-disk    null
Version 1.3
/ # cat /sys/mtd2/raidtab
raiddev /dev/md0
    raid-level    raid1
    nr-raid-disks    2
    chunk-size    64
    persistent-superblock    1
    device    /dev/sda2
    raid-disk    0
    device    /dev/sdb2
    raid-disk    1
raiddev null
    raid-level    null
    nr-raid-disks    0
    chunk-size    64
    persistent-superblock    1
    device    null
    raid-disk    null
    device    null
    raid-disk    null
Version 1.3
/ # cat /mnt/HD_a4/.systemfile/raidtab
raiddev /dev/md0
    raid-level    raid1
    nr-raid-disks    2
    chunk-size    64
    persistent-superblock    1
    device    /dev/sdb2
    raid-disk    0
    device    /dev/sda2
    raid-disk    1
raiddev /dev/md1
    raid-level    linear
    nr-raid-disks    2
    chunk-size    64
    persistent-superblock    1
    device    /dev/sdb3
    raid-disk    0
    device    /dev/sda3
    raid-disk    1
Version 1.3
/ # cat /mnt/HD_b4/.systemfile/raidtab
raiddev /dev/md0
    raid-level    raid1
    nr-raid-disks    2
    chunk-size    64
    persistent-superblock    1
    device    /dev/sdb2
    raid-disk    0
    device    /dev/sda2
    raid-disk    1
raiddev /dev/md1
    raid-level    linear
    nr-raid-disks    2
    chunk-size    64
    persistent-superblock    1
    device    /dev/sdb3
    raid-disk    0
    device    /dev/sda3
    raid-disk    1
Version 1.3
/ #
The difference is the second section. In the flash it says raiddev null, but on the HDDs it says raiddev /dev/md1. The partition described in raiddev /dev/md1 is the result of leaving trailing space (JBOD) when formatting the RAID1 via the web UI. It really doesn't matter to me whether it survives or not - I just didn't want the RAID spanned over the whole drive, in case I ever buy another "1TB" disk from another manufacturer which might be 10M smaller. That area isn't, and won't be, in use for data. I'm only interested in the big RAID1 area.
So can I just replace it with the raiddev null section in all four tables?
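For what it's worth, here is a sketch of what that substitution could look like, done on a temporary copy first. The file layout and the null-section contents are taken from the pastes above, but treat the whole thing as hypothetical and keep backups of all four raidtab files before touching any of them:

```shell
# Build a work copy; on the box you would start from a copy of one of the
# four real raidtab files. The sample content mirrors the on-disk tables.
cat > /tmp/raidtab.work <<'EOF'
raiddev /dev/md0
    raid-level    raid1
    nr-raid-disks    2
    chunk-size    64
    persistent-superblock    1
    device    /dev/sdb2
    raid-disk    0
    device    /dev/sda2
    raid-disk    1
raiddev /dev/md1
    raid-level    linear
    nr-raid-disks    2
    chunk-size    64
    persistent-superblock    1
    device    /dev/sdb3
    raid-disk    0
    device    /dev/sda3
    raid-disk    1
Version 1.3
EOF

# Swap the /dev/md1 section for the "raiddev null" section the flash copies
# use, leaving the md0 section and the trailing Version line untouched.
awk '
/^raiddev \/dev\/md1/ {
    skip = 1
    print "raiddev null"
    print "    raid-level    null"
    print "    nr-raid-disks    0"
    print "    chunk-size    64"
    print "    persistent-superblock    1"
    print "    device    null"
    print "    raid-disk    null"
    print "    device    null"
    print "    raid-disk    null"
    next
}
/^Version/ { skip = 0 }
skip { next }
{ print }
' /tmp/raidtab.work > /tmp/raidtab.new

cat /tmp/raidtab.new
```

The same awk run against each of the four copies, with the results verified by eye before being written back, would keep the edit identical everywhere.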
Last edited by cdk (2008-10-16 02:31:40)
Offline
Oh what a mess! Back to the hd_magic_num files: I found out that the serial numbers currently in use do not belong to the current drives but to the set of hard disks I used before. But I think re-editing them to the true values won't help, since I don't know in which order they should be listed, and I don't know the matching magic numbers for the current drives either.
Unfortunately the hd_magic_num files that I had on disk when I started posting about this problem showed different contents, so I don't know which one would match my current drives. Just testing all possible combinations would be a 2^8 experiment or something like that, with unknown side effects...
What about deleting both hd_magic_num files in the flash? Will they be recreated after a reboot, or would that be the mortal blow to my NAS?
Offline
I'm experiencing the same "sync time remaining: degraded". Your answers are above my head, and I'm still not sure if the "degraded" issue is a problem that D-Link should be looking after or one that I have to work out myself.
Any help and direction would be appreciated...
thanks
Offline
brybat wrote:
I'm experiencing the same "sync time remaining: degraded". Your answers are above my head, and I'm still not sure if the "degraded" issue is a problem that D-Link should be looking after or one that I have to work out myself.
Any help and direction would be appreciated...
thanks
The "sync time remaining: degraded" message basically indicates that your RAID1 array is degraded and that YOU need to take action to fix it.
I'm not going to say that D-Link should not be looking after it, but I will say that YOU NEED to do something about it, and the sooner the better - what action you take depends on your particular circumstances.
There are three possible status messages displayed there ...
- "sync time remaining: completed" which means your array is synchronized and everything is good
- "sync time remaining: nnn" - where nnn is a number, which means your array is being rebuilt or resynchronized, and nnn is the number of minutes remaining
- "sync time remaining: degraded" which means your array is not synchronized and nothing is being done to synchronize it
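As a cross-check outside the web UI, the kernel's own view of the array lives in /proc/mdstat, and the bracket pattern there tells the same story as those three messages. Below is a standalone sketch against a sample mdstat snippet (the block count is made up for illustration); on the NAS itself you would read /proc/mdstat directly:

```shell
# Sample of what a degraded RAID1 looks like in /proc/mdstat:
# [2/2] [UU] means both mirror halves are in sync ("completed"),
# [2/1] with an underscore means one member is missing ("degraded"),
# and during a rebuild a "recovery = ... finish=NNNmin" line matches
# the minute countdown shown in the web UI.
cat > /tmp/mdstat.sample <<'EOF'
md0 : active raid1 sda2[0]
      972402688 blocks [2/1] [U_]
EOF

# An underscore inside the status brackets marks the missing member.
status=healthy
grep -q '\[U_\]\|\[_U\]' /tmp/mdstat.sample && status=degraded
echo "md0: $status"
```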
What D-Link may need to look at is why the array is not synchronized - there appears to be a problem that causes the DNS-323 to incorrectly detect a failed drive - which may or may not be the cause of your problem.
You may in fact have a failed drive, in which case you need to remove the failed drive and replace it with a new one - the DNS-323 should prompt you to format the new drive after which it will then rebuild the array.
The instructions provided here are more about forcing the DNS-323 to rebuild the array onto the existing disks, on the assumption that there has been an erroneous disk-failure detection. If they are "above your head" (and they are not above mine, but they are in an area where I lack experience), you could simply back up your data, reformat your disks and restore your data.
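For the erroneous-failure-detection case, the usual mdadm fix is to re-add the ejected partition and let the kernel resync the mirror. The sketch below only writes the command into a file instead of running it, because the device names here are assumptions - whether sda2 or sdb2 was the ejected member has to be confirmed in /proc/mdstat first:

```shell
# Assumed: md0 is the RAID1 array and sdb2 is the member that was kicked
# out. Confirm both in /proc/mdstat before doing anything for real.
EJECTED=/dev/sdb2

# Write the planned command out instead of executing it -- drop this
# indirection once the device name is confirmed on the actual box.
echo "mdadm /dev/md0 --add $EJECTED" > /tmp/raid_fix.cmd
cat /tmp/raid_fix.cmd
```

After running the real command, /proc/mdstat should show a recovery line counting down, and the web UI status should switch from "degraded" to a time estimate.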
Offline
I'm just curious:
I had a degraded RAID1 and resynced it with mdadm... no magic about that.
_but_:
before syncing, I logged into the device, clicked 'skip' on the format/create RAID dialog and went to the status page.
the section 'disk info' showed 'degraded' or something, and there was a button labeled 'resync' below it.
has anyone ever used this button?
it might be the hidden 'repair my broken RAID for god's sake!' feature.
(ch3snas 1.04rc5)
Last edited by quattro (2008-11-20 21:13:55)
Offline
quattro wrote:
I'm just curious:
I had a degraded RAID1 and resynced it with mdadm... no magic about that.
_but_:
before syncing, I logged into the device, clicked 'skip' on the format/create RAID dialog and went to the status page.
the section 'disk info' showed 'degraded' or something, and there was a button labeled 'resync' below it.
has anyone ever used this button?
it might be the hidden 'repair my broken RAID for god's sake!' feature.
(ch3snas 1.04rc5)
That button may be there on the CH3SNAS, but I've never seen it on the DNS-323.
Offline
I once tried that button on the CH3SNAS with 1.04RC6, but nothing happened - really nothing. No HDD activity, no new processes on the NAS, no change in RAID status.
Offline
bq041 wrote:
No problem. It looks like I made an error in the script and it copied the config files into a file called .systemfiles instead of a directory called .systemfiles - sorry about that.
Anyway, I hope you learned a little about it and had fun trying something new. Any more questions, feel free to ask.
Did you ever post a corrected script? In any case, there is no script now, but I would be curious to see what is in yours if you still have it.
Thanks.
Offline
I do not know where it is, offhand. If you check out this link: http://dns323.kood.org/forum/t2444-Wiza … -1.05.html there is a script there for upgrading the RAID1 array from the F/W 1.03 format to the 1.04 format. I tried to make it very modular, so that I could take individual subroutines and make new scripts with little work. That is how I made the script you are asking about here. Check it out. I have not worked on it in months, so it is far from perfect, but it does work. My ultimate goal was to remove all the reboots, possibly tying it into fonz' ffp reloaded, but I have not had the time (new baby on the way). Anyway, check it out; I'm sure there are easier ways to do some of the things I did. Let me know.
Offline