Hi,
I wonder if anybody is facing the same problem as me?
I keep getting prompted to select a RAID configuration when logging in through the web interface. But when I SSH into the box, I can see that my RAID1 is set up properly:
/ # mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.01
Creation Time : Thu Jan 24 07:53:58 2008
Raid Level : raid1
Array Size : 486544512 (464.01 GiB 498.22 GB)
Device Size : 486544512 (464.01 GiB 498.22 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Tue Jun 17 23:49:21 2008
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 60fbd487:5d4425ca:f1babd8e:f8b4583c
Events : 0.1674037
Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 8 18 1 active sync /dev/sdb2
Any help is appreciated.
Last edited by bbiggie (2008-06-17 18:50:47)
Offline
What firmware version was the RAID array created with?
There is another thread here suggesting that this message appears when the disk partition structure was created by an older firmware revision and is not recognized by the new firmware.
Offline
If memory serves, I believe it was created with 1.03.
Does this mean that I would have to do a full backup, re-create the array, and then restore?
Offline
You are missing the configuration partitions required for F/W 1.04 and 1.05. I've walked a couple of people through converting. It can be done without losing your data, but it takes a little bit of time. Essentially, these steps would take place:
1) Break array
2) Repartition and format 1 disk
3) Copy data over to fresh disk
4) Repartition and format other disk
5) Recreate the array
It's not too difficult to do, and you will not lose your data (though I always recommend a backup anyway).
Let me know if you are interested, and I can walk you through it.
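Roughly, here is what those five steps translate to at the command line. This is only an illustrative sketch, not the actual script, and it assumes the stock DNS-323 layout (RAID1 /dev/md0 built from /dev/sda2 and /dev/sdb2, data mounted at /mnt/HD_a2); the /mnt/new mount point is invented for the example, and if the firmware's busybox cp has no -a option the copy step needs rsync or the coreutils from ffp instead.
# 1) Break the array: fail and remove one member (here /dev/sdb2)
mdadm /dev/md0 --fail /dev/sdb2 --remove /dev/sdb2
# 2) Repartition /dev/sdb with fdisk to the new 1.04/1.05 layout
#    (including the small 4th config partition), then format the data partition
mke2fs /dev/sdb2
# 3) Copy the data from the degraded array onto the fresh disk
mkdir -p /mnt/new
mount /dev/sdb2 /mnt/new
cp -a /mnt/HD_a2/. /mnt/new/      # the trailing /. also picks up hidden files
umount /mnt/new
# 4) Unmount and stop the old array, then repartition /dev/sda the same way
umount /mnt/HD_a2
mdadm --stop /dev/md0
# 5) Recreate the array degraded on the fresh disk, then add the other member
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2
mdadm /dev/md0 --add /dev/sda2    # the resync then copies everything onto /dev/sda2
mount /dev/md0 /mnt/HD_a2
mdadm will warn that /dev/sdb2 already contains a filesystem when recreating the array; that is expected here. In practice, unmounting /mnt/HD_a2 and stopping /dev/md0 while the firmware is running is the awkward part, which is why doing it in stages with reboots (as the script later in this thread does) is easier.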
Offline
bq041,
Please do.... I'd very much appreciate it.
By the way, is there a way to check which configuration partitions required by F/W 1.04 and 1.05 are missing?
bq041 wrote:
You are missing the configuration partitions required for F/W 1.04 and 1.05. I've walked a couple of people through converting. It can be done without losing your data, but it takes a little bit of time. Essentially, these steps would take place:
1) Break array
2) Repartition and format 1 disk
3) Copy data over to fresh disk
4) Repartition and format other disk
5) Recreate the array
It's not too difficult to do, and you will not lose your data (though I always recommend a backup anyway).
Let me know if you are interested, and I can walk you through it.
Last edited by bbiggie (2008-06-18 11:21:27)
Offline
The missing partitions are partition 4 on each drive; physically they are located at blocks 67-130.
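If you want to check for yourself, the partition tables can be inspected read-only from a telnet session; a quick sketch, assuming the drives show up as /dev/sda and /dev/sdb as on a stock unit:
/ # fdisk -l /dev/sda
/ # fdisk -l /dev/sdb
On a disk formatted by 1.04/1.05 you will see a partition 4 in that block range; on a 1.03-formatted disk it simply is not listed.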
Offline
By the way, I'm writing you a script to make the changes. It is a little too long to write out as instructions unless you are pretty familiar with Linux, mdadm, and fdisk.
Offline
hi bq041,
I have seen several posts where you refer to your array repair (break & recreate) scripts, and one saying you are nearly ready to release them. How near are you to this? I would like to have a look; can you put them up on the wiki? Brief comments in the scripts outlining what they do, and a couple of words explaining each script 'section', would be useful for Linux newbies (like me) too!
thanks
lu
Offline
Here is the thing. I have working ones for F/W 1.04 and F/W 1.05, but they have no safety nets in them yet. I will let people use them (you can download them from some of the posts) on an individual basis as they need them, but I do not want them up on the wiki until I have some safeguards in place. One such safeguard is a check of the F/W level; nothing currently prevents them from running on the wrong F/W. I have written the code for this in the newest version, but have not gotten to put it in the older ones. The wrong F/W, or even disks that were set up under a different F/W, can cause some really bad things to happen.
It comes down to this: until I can make them idiot-proof, I don't want them on the wiki. I certainly do not want to be blamed for someone losing their data.
Also, my scripts are fully commented.
The script I mention in this post is specific to upgrading RAID1 arrays from F/W 1.03 to run on 1.04 and 1.05. It has some stipulations, such as having no JBOD at the end of a smaller array. I will eventually add this feature, but it will take time. I'm hoping to have the basic upgrade script done by tomorrow, but testing usually turns up unexpected bugs.
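For anyone rolling their own scripts in the meantime, the kind of safeguard I mean is nothing fancy; below is a minimal sketch. The /etc/version path is only an assumption for illustration (check where your firmware actually records its version string before relying on it).
# Minimal firmware-level guard (sketch only; /etc/version is an assumed location)
FW=$(cat /etc/version 2>/dev/null)
case "$FW" in
    1.04*|1.05*) echo "Firmware $FW detected, continuing." ;;
    *) echo "Unsupported or unknown firmware: '$FW'. Aborting." ; exit 1 ;;
esac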
Offline
bq041 wrote:
It comes down to this: until I can make them idiot-proof, I don't want them on the wiki. I certainly do not want to be blamed for someone losing their data.
Just a suggestion - when you're through idiot-proofing them (I prefer the term making them "layer 8 compatible"), be sure to include a disclaimer.
Offline
Already has one, thanks.
Once I'm ready, though, I would be interested in having some of the more advanced users test them for me. I'm sure I will have left things out, or there may be easier ways of accomplishing things.
Last edited by bq041 (2008-06-18 22:49:46)
Offline
bq041 wrote:
I will let people use them (you can download them from some of the posts) on an individual basis as they need them, but I do not want them up on the wiki until I have some safeguards in place. ...
It comes down to this: until I can make them idiot-proof, I don't want them on the wiki. I certainly do not want to be blamed for someone losing their data.
Sure, that is fair enough; safeguards will save some users from making a catastrophic mistake, but I agree a disclaimer is a good idea too.
What I am actually looking for is a script to rebuild a RAID1 array (without data loss) after a factory reset. In my case the RAID was created on F/W 1.04, then upgraded to 1.05, then downgraded to 1.04. I haven't reset the 323 yet though!
One problem I find with this (really fantastic) forum is having to search multiple threads on different topics to find the latest version of a script or file - that is why I mentioned putting them up on the wiki. Obviously this is not a 'fault', and it's not something unique to this forum; I just find it much easier to find what I am looking for in the wiki (just a personal opinion).
thanks
lu
Offline
I plan on having them on the wiki, but in the stages they are in now, really bad things can happen.
If you built your array with 1.04 or 1.05, resetting using the button on the back should not break it. I have done it several times, including today, and my RAID1 is just fine.
Offline
Almost there. I'm going through the last stages of testing now. I should post it tomorrow if everything finishes well with the tests. Sorry it took so long, but each test lasts a couple of hours.
Offline
Okay, here it is. * SCRIPT FOR UPGRADING RAID1 ARRAY FROM F/W 1.03 TO 1.04 OR 1.05 WITH LIVE DATA *
(Experienced guys, please check out the script and help me improve it. I plan on adding prompts for array sizes and for breaking and building arrays without the upgrade.)
Make sure you follow the instructions carefully and exactly. It will require a few reboots, and you will run the program 3 times, each time with a different option. The reboots are required because it is difficult to stop both drives at any given time on a running unit.
I have tested it with both F/W 1.04 and 1.05, but only with ffp 0.5. It should work with earlier versions.
This program is designed to upgrade the partitions for RAID 1 only. If you have both RAID 1 and JBOD, the JBOD data and partition WILL be lost.
Lastly, when it copies data, it may or may not copy hidden directories (ones that start with a dot). I have not tested that yet.
Anyway, drop the attached file onto your root share, telnet in, and execute it with the following command:
/ # /mnt/HD_a2/raid_upgrade_1.03_1.04-5.sh
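If the shell refuses to run it with a permissions error (that depends on how the file lands on the share, not on the script itself), the generic fix is to make it executable first, or to invoke it through sh:
/ # chmod +x /mnt/HD_a2/raid_upgrade_1.03_1.04-5.sh
/ # sh /mnt/HD_a2/raid_upgrade_1.03_1.04-5.sh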
Below are my telnet logs of the whole thing:
=~=~=~=~=~=~=~=~=~=~=~= PuTTY log 2008.06.20 11:18:12 =~=~=~=~=~=~=~=~=~=~=~=
/ # /mnt/HD_a2/raid_upgrade_1.03_1.04-5.sh
Upgrade RAID 1 from F/W 1.03 for use on F/W 1.04 - 1.05
What would you like to do?
(1) Step 1, Break array on drive 0 (right bay)
(2) Step 2, Break array on drive 1 (left bay), reconfigure and begin array
(3) Step 3, Complete array
(x) exit
Enter Selection: 1
Killing iTunes, FTP, and UPNP servers...
Removing drive 0 from array...
mdadm: set /dev/sda2 faulty in /dev/md0
mdadm: hot removed /dev/sda2
Mounting flash...
Creating raidtab and raidtab2web files...
Updating flash...
Unmounting flash...
Cleaning up...
Exit telnet, shutdown the unit using the web interface, remove drive 1 (left bay), turn on the unit and run this program selecting option 2.
/ # exit
=~=~=~=~=~=~=~=~=~=~=~= PuTTY log 2008.06.20 11:21:00 =~=~=~=~=~=~=~=~=~=~=~=
/ # /mnt/HD_a2/raid_upgrade_1.03_1.04-5.sh
Upgrade RAID 1 from F/W 1.03 for use on F/W 1.04 - 1.05
What would you like to do?
(1) Step 1, Break array on drive 0 (right bay)
(2) Step 2, Break array on drive 1 (left bay), reconfigure and begin array
(3) Step 3, Complete array
(x) exit
Enter Selection: 2
Insert Drive 1 (left bay), wait for the light to stop flashing, and press enter.
Killing iTunes, FTP, and UPNP servers...
Partitioning drive...
Formatting drive...
mke2fs 1.40.6 (09-Feb-2008)
mke2fs 1.40.6 (09-Feb-2008)
Warning: 256-byte inodes not usable on older systems
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
24354816 inodes, 97416151 blocks
974161 blocks (1.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
2973 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616,
        78675968
Writing inode tables: done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 30 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
Mounting drive 1...
Copying data. This may take several hours depending on quantity of data...
Unmounting drive 1...
Starting RAID1 array on drive 1...
mdadm: /dev/sdb2 appears to contain an ext2fs file system
    size=389664604K  mtime=Fri Jun 20 07:26:38 2008
mdadm: array /dev/md0 started.
Creating hard drive config file...
Mounting flash...
Creating raidtab and raidtab2web files...
Updating flash...
Unmounting flash...
Mounting configuration partition...
Copying config files...
Unmounting config partition...
Cleaning up...
Exit telnet, shutdown the unit using the web interface, remove drive 0 (right bay), turn on the unit and run this program selecting option 3.
/ # exit
=~=~=~=~=~=~=~=~=~=~=~= PuTTY log 2008.06.20 11:32:19 =~=~=~=~=~=~=~=~=~=~=~=
/ # /mnt/HD_a2/raid_upgrade_1.03_1.04-5.sh
Upgrade RAID 1 from F/W 1.03 for use on F/W 1.04 - 1.05
What would you like to do?
(1) Step 1, Break array on drive 0 (right bay)
(2) Step 2, Break array on drive 1 (left bay), reconfigure and begin array
(3) Step 3, Complete array
(x) exit
Enter Selection: 3
Insert Drive 0 (right bay), wait for the light to stop flashing, and press enter.
Killing iTunes, FTP, and UPNP servers...
Partitioning drive...
Formatting drive...
mke2fs 1.40.6 (09-Feb-2008)
mke2fs 1.40.6 (09-Feb-2008)
Warning: 256-byte inodes not usable on older systems
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
24354816 inodes, 97416151 blocks
974161 blocks (1.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
2973 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616,
        78675968
Writing inode tables: done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 33 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
Mounting configuration partition...
Copying config files...
Unmounting config partition...
Adding drive 0 to the RAID array...
mdadm: hot added /dev/sdb2
Cleaning up...
Please press enter to reboot.
/ #
$Shutting down SMB services:
$Shutting down NMB services:
umount: /mnt/HD_a4: not mounted
umount: /mnt/HD_a4: not mounted
Refresh Shared Name Table version v1.04
mdadm: fail to stop array /dev/md0: Device or resource busy
mdadm: stopped /dev/md1
During the last step, one light may turn amber while the unit is waiting for you to hit enter to reboot. This is OK and will correct itself as soon as the unit reboots.
Last edited by bq041 (2008-06-20 20:41:13)
Offline
A strange thing happened: it seems to be working now.
Below is what I did:
1. Shut down my DNS 323.
2. Removed drive 0 (the leftmost drive) and connected it to an external 3.5" enclosure.
3. Started Fedora 9 Live on my notebook with the enclosure from 2. connected to it.
4. Mounted the drive with the command "mount -t ext2 /dev/<path>".
5. I could see the contents of my files on the drive. Copied one file over to a tmp directory and verified that I could see the file.
6. Shut down my notebook.
7. Removed the HDD from the 3.5" enclosure and put it back in its original location in the DNS 323.
8. Turned on my DNS 323.
9. SFTP'd the script raid_upgrade_1.03_1.04-5.sh over to /mnt/HD_a2.
10. Ran the script but chose (x) to exit out of it, because I wanted to kill the BitTorrent process (/mnt/HD_a2/Nas_Prog/BT/bt) myself; the script does not kill this process (a sketch of doing this by hand follows after this list).
11. Logged into the web interface. The wizard no longer prompts to set up a RAID configuration.
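For reference, killing a leftover process like that by hand is along these lines (a sketch; substitute the PID that ps actually reports):
/ # ps | grep Nas_Prog/BT/bt       # note the PID in the first column
/ # kill <PID>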
Below is the status from the web interface (previously the total drive count was 0, but now it shows 2):
HARD DRIVE INFO :
Total Drive(s): 2
Volume Name: Volume_1
Volume Type: RAID 1
Sync Time Remaining: Completed
Total Hard Drive Capacity: 490402 MB
Used Space: 380874 MB
Unused Space: 109527 MB
Running mdadm --detail /dev/md0 gave me almost the same output as in my first post:
/ # mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.01
Creation Time : Thu Jan 24 07:53:58 2008
Raid Level : raid1
Array Size : 486544512 (464.01 GiB 498.22 GB)
Device Size : 486544512 (464.01 GiB 498.22 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Jun 23 01:01:12 2008
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 60fbd487:5d4425ca:f1babd8e:f8b4583c
Events : 0.1763295
Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 8 18 1 active sync /dev/sdb2
Running cat /proc/mdstat also indicated that my RAID1 is up and running properly:
Personalities : [linear] [raid0] [raid1]
md0 : active raid1 sda2[0] sdb2[1]
486544512 blocks [2/2] [UU]
unused devices: <none>
How can I verify that this is indeed working properly? I rebooted a few more times, and from the web interface it all seems OK now.
Offline
The prompt by the web admin is mainly because of mismatched configuration files on the drive and in flash. 1.04 and 1.05 can still see the ones in the old format style, but they seem to only work intermittently. This is most likely because the unit tries to update these on restart, and I don't think it always gets it right. This is why I would ultimately recommend going to the new partition scheme. I had my 1.03 RAID running for quite a while before I got this problem, but I have yet to have it happen with the new partitions.
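As for verifying it: the array itself you can keep an eye on with the usual read-only checks (a sketch below), and whether you have the new config partitions shows up in the mount list; on the old 1.03 layout, /mnt/HD_a4 and /mnt/HD_b4 simply will not be there.
/ # cat /proc/mdstat             # a healthy RAID1 keeps showing [2/2] [UU]
/ # mdadm --detail /dev/md0      # State should stay clean, both members active sync
/ # mount | grep HD_.4           # present on the new layout, absent on the old one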
Offline
bq041, thanks for the awesome upgrade script. Ran it on my DNS323 which I'd upgraded from 1.03 to 1.05 without reformatting. Worked a treat and the web interface is much happier.
Offline
bq041,
Thanks for the script. It worked flawlessly.....
It took me a bit of time, as I needed to get an extra external HDD for a backup, just in case.
Offline
No problem. Sometime when I get time, I will work on a version that does it without the reboots, but it will be a while. I was pressed for time on this one, as I knew you guys wanted to get your machines going.
Offline
bq041 - nice work. Perhaps you can shed some light on my situation and offer some advice.
I was running 1.04. Drives were formatted in 1.04. My configuration was 2 drives as individual disks (i.e. not RAID0/1 or JBOD).
I upgraded to 1.05, and now I'm getting the Wizard at the web login. The only share that is now available via Windows networking is drive 2 (showing up as Volume_1, which was previously Volume_2).
Will your script work for me? Thoughts?
Offline
It may, but it would require some modification, as it was written specifically for RAID 1. What you first need to find out is why the drive is not showing up. Do you have any fun_plug installed? Your drives, what positions are they in? (i.e. the drive that was Volume_2 and is now Volume_1, what bay is it in, left or right?) Second, which wizard are you getting? The reformat wizard? What happens if you boot up the unit with only the drive that does not work in it?
As far as the problem goes, I would guess that your magic numbers and/or the raidtab files have been messed up. They are not too hard to fix. If you can try out the things above, that would be great. The next thing to do after that would be to boot the unit with only the "good" drive in it, with ffp on it, then install the "bad" drive with the power on. Then you can telnet in and see what shape the drive is in.
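Once you have it booted with the "good" drive plus ffp and the "bad" drive hot-inserted, a few read-only commands will tell you a lot. This is just a sketch; the hot-inserted drive is not guaranteed to be /dev/sdb, so check dmesg for the device node it was actually given, and /mnt/check is an invented mount point for the example.
/ # dmesg | tail                    # which device node did the inserted drive get?
/ # fdisk -l /dev/sdb               # is the old partition table still intact?
/ # mkdir -p /mnt/check
/ # mount -r /dev/sdb2 /mnt/check   # read-only mount of its data partition
/ # ls /mnt/check                   # is the data visible?
/ # umount /mnt/check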
Offline
bq041,
This may sound silly, but after running your script I now have 3 mount points and a swap partition; is this correct?
/mnt/HD_a2
/mnt/HD_a4
/mnt/HD_b4
and a swap partition.
Last edited by bbiggie (2008-07-16 08:53:13)
Offline
That is correct. F/W 1.04 and 1.05 require the 4th partition on each drive to store configuration files.
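You can see them on a running unit if you are curious (a sketch; /mnt/HD_b4 only appears when a second disk is installed):
/ # mount | grep HD_                # lists the data and config mount points
/ # df /mnt/HD_a4 /mnt/HD_b4        # shows the size and usage of the config partitions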
Offline
Thanks, bq041. You've been very helpful.
Offline