DSM-G600, DNS-3xx and NSA-220 Hack Forum

Unfortunately no one can be told what fun_plug is - you have to see it for yourself.


#1 2009-03-28 14:06:14

antovint
Member
Registered: 2009-03-28
Posts: 9

Switch from RAID 1 to Standard configuration

Hi all, I'm a new user and this is my first post here.

I have a DNS-323 with fw 1.06 and ffp 0.5, with some packages up and running (ssh, mldonkey, etc.).
After reading a lot of posts about this, I decided to switch from RAID 1 to Standard configuration and synchronize my 2 x 1 TB disks every night.

My question is about the right way to change the configuration.
First option:
1) backup all data
2) without uninstalling FFP and the packages first, change the configuration from RAID to Standard via the NAS web interface (I think this will format the disks)
3) restore the data, including the FFP and package folders; do you think ffp and the packages will still work?

Second option:
1) backup all data (one way to do the backup is sketched below)
2) uninstall FFP and all packages, reset to factory defaults (or not), change the configuration via the NAS web interface
3) restore the data, reinstall FFP and the packages.
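
For the backup step in either option (and for the nightly sync mentioned above), something like the sketch below could work from the ffp shell. This is only a rough sketch under assumptions: it assumes the rsync package is installed under ffp, and /mnt/HD_b2/backup is just a placeholder for wherever the backup disk really is mounted; check with mount first and do a dry run before trusting it.

#!/ffp/bin/sh
# Hypothetical backup/sync sketch -- adjust SRC/DST to your real mount points.
SRC=/mnt/HD_a2/           # the share to copy (Volume_1 on a stock setup)
DST=/mnt/HD_b2/backup/    # placeholder: a second/external disk mounted here
# -a keeps permissions and timestamps, --delete makes DST an exact mirror of SRC.
# Add -n (dry run) on the first attempt to see what would be copied.
rsync -av --delete "$SRC" "$DST"

Once the box is running in Standard mode, the same command pointed at Volume_2 and scheduled to run every night would handle the sync described above.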

I'm linux newbie, so obviously all your other suggetions are welcome!

Thanks
Anto

Offline

 

#2 2009-03-28 15:18:41

luusac
Member
Registered: 2008-04-29
Posts: 360

Re: Switch from RAID 1 to Standard configuration

antovint wrote:

1) backup all data

However you go about it, do the above!

If it were me, and not having many packages as you describe, I would back up the data *and verify that the backup is good* by restoring it somewhere and checking that everything is intact. Many people over the years have made a backup to, say, one or more DVDs or CDs (the media isn't that important) and then haven't been able to restore it at all, or not restore 100% of their data. Then I would pull the drives, connect them to a PC and delete all partitions, in effect restoring the drives to the state they were in when you got them. Then reinsert them, format via the web interface, reinstall ffp and your packages, and finally restore your data.

The 323 does remember your drive serial numbers, though, so a factory restore with the pinhole button at the back or via the web interface (assuming that either will remove the serial numbers from flash memory) wouldn't be a bad idea either. I am sure there will be lots of other suggestions, so wait a while before doing it; somebody may have a better idea.
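
For the "delete all partitions" step on a Linux PC, the rough idea is just this. A sketch only, not a full procedure: /dev/sdX is a placeholder, and picking the wrong device will destroy the wrong disk, so check carefully first.

# Find out which device the NAS disk shows up as (placeholder /dev/sdX below).
fdisk -l
# Either delete the partitions interactively (d = delete, w = write changes) ...
fdisk /dev/sdX
# ... or simply zero the first megabyte, which wipes the partition table outright.
dd if=/dev/zero of=/dev/sdX bs=1M count=1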
HTH

Offline

 

#3 2009-03-28 15:36:14

antovint
Member
Registered: 2009-03-28
Posts: 9

Re: Switch from RAID 1 to Standard configuration

Thanks luu,
I agree with you about making a good data backup. I think your suggestion is the safest for getting a clean situation after the format. The only issue for me is that I have many ffp packages configured and many user accounts set up on the NAS for my friends, so if possible I'd like to reduce the effort of reconfiguring all the settings. The DNS-323 itself has an option to change the configuration from RAID 1 to Standard, so my hope is that, if this feature works fine, problems like the remembered HD serial numbers could be solved automatically.

I hope there is a less labour-intensive way than yours; otherwise I'll follow your suggestions.

TKS
Anto

Offline

 

#4 2009-03-28 17:39:37

bq041
Member
From: USA
Registered: 2008-03-19
Posts: 709

Re: Switch from RAID 1 to Standard configuration

I built a tool for doing this. It is attached. Place this script in the root of your RAID array (the same place as fun_plug), then run it. You may want to look through the code first. Make sure you follow all the on-screen instructions.

This is assuming you are using FW 1.04 or newer with the 1.04-or-newer partition structure. It is NOT for 1.03 or older!!!

As always, back up your data first, but this script is designed to give you two independent drives, each with all your data.
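
Not the attached script, just a sketch of the core idea for anyone curious: on these boxes the RAID 1 data array is normally /dev/md0 built from /dev/sda2 and /dev/sdb2, and "breaking" it boils down to an mdadm sequence along these lines (the script also takes care of the config files and reboots).

# Rough outline only; the real script does more (config updates, reboots, checks).
mdadm --detail /dev/md0              # confirm sda2/sdb2 are the members
mdadm /dev/md0 --fail /dev/sdb2      # mark the left-bay member as failed
mdadm /dev/md0 --remove /dev/sdb2    # drop it from the array
mdadm --zero-superblock /dev/sdb2    # erase its RAID superblock
# With the old-style superblock at the end of the partition, the filesystem on
# /dev/sdb2 should be left intact, which is why both drives end up with the data.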


Attachments:
Attachment Icon break_raid_1.04_1.05.sh, Size: 2,336 bytes, Downloads: 247

DNS-323     F/W: 1.04b84  H/W: A1  ffp: 0.5  Drives: 2X 400 GB Seagate SATA-300
DNS-323     F/W: 1.05b28  H/W: B1  ffp: 0.5  Drives: 2X 1 TB  WD SATA-300
DSM-G600   F/W: 1.02       H/W: B                Drive:  500 GB WD ATA

Offline

 

#5 2009-03-29 20:41:56

antovint
Member
Registered: 2009-03-28
Posts: 9

Re: Switch from RAID 1 to Standard configuration

Tks bq, tomorrow I'll have a new hard disk to back up all my data, so I'll try your script. I have only one doubt: I have fw 1.06 and ffp 0.5, do you think your script could have any problem with fw 1.06? I read that it's made for fw 1.04 and 1.05.
Tks
Anto

Offline

 

#6 2009-03-30 21:48:30

bq041
Member
From: USA
Registered: 2008-03-19
Posts: 709

Re: Switch from RAID 1 to Standard configuration

No, it is based on the partition structure, which was changed at 1.04. I have not heard of any changes in 1.06. The F/W level only matters for updating the config files, because their locations changed. The actual process of stopping the RAID is the same. Eventually I will make this into a package that does not require reboots, but that involves getting ffp running from either a USB source or from memory. Then it would involve shutting down all disk-dependent processes to allow the array to be unmounted. I may use something similar to ffp reloaded, I do not know yet. Time is just not something I have right now.

If you know Linux a little, or just want to see how it works, open up the script and look at it. I tried to comment it so you can see what is going on.
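
If you want to reassure yourself that a 1.06 box really has the 1.04-style layout before running it, a quick look from the telnet/ffp shell is enough. A sketch (fdisk here is the busybox one, if your firmware or ffp provides it; the /proc files work either way):

# List the partitions the firmware created on each disk.
cat /proc/partitions
fdisk -l /dev/sda
# Show the state of the RAID 1 data array before breaking it.
cat /proc/mdstat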


DNS-323     F/W: 1.04b84  H/W: A1  ffp: 0.5  Drives: 2X 400 GB Seagate SATA-300
DNS-323     F/W: 1.05b28  H/W: B1  ffp: 0.5  Drives: 2X 1 TB  WD SATA-300
DSM-G600   F/W: 1.02       H/W: B                Drive:  500 GB WD ATA

Offline

 

#7 2009-03-31 23:49:24

antovint
Member
Registered: 2009-03-28
Posts: 9

Re: Switch from RAID 1 to Standard configuration

Hi bq,
I used your script, and it seems that everything is OK. Now I have Volume_1 and Volume_2, and over ssh I see HD_a2 and HD_b2 with duplicated files; there are also HD_a4 and HD_b4, which I don't know the purpose of.
Two questions for you. The first one is: when you say left bay, do you mean my left when looking at the front of the NAS? I ask because now, when I browse Volume_1, the right bay LED blinks.
The second question is about the messages the NAS gave me after running break2.sh and before it rebooted itself. There are some errors; do you think something went wrong? Here is the output:

root@MyNAS:~# sh /mnt/HD_a2/break2.sh start
mdadm: Couldn't open /dev/sdb2 for write - not zeroing
root@MyNAS:~# kill process
rmmod: lltd: No such file or directory
$Shutting down SMB services:
$Shutting down NMB services:
Refresh Shared Name Table version v1.04
[the line above repeated 13 times in total]
umount: /mnt/HD_c*: not found
Refresh Shared Name Table version v1.04
[the line above repeated 6 times in total]

Offline

 

#8 2009-04-02 19:07:42

bq041
Member
From: USA
Registered: 2008-03-19
Posts: 709

Re: Switch from RAID 1 to Standard configuration

Yes, the left bay is as seen when facing the front. The error you got is because you did not insert the drive back in before running the second part of the script, so you did not actually zero out the superblock on the drive.

Also, do not confuse Volume_1 with being a drive. Volume_1 refers to a share. That is usually the drive in the right bay (drive # 0), but it could also be the one in the left (in the case of swapped drives, or when booting off only one drive in the left bay). Drive # 1 is in the left bay and Drive # 0 is in the right bay -- always.

HD_a4 and HD_b4 are the partitions used to store your config files, which may not be properly up to date if the procedure was done incorrectly.
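
To see for yourself which partition each share maps to, the mounts tell the story. A quick sketch from the ssh shell:

# HD_a2 / HD_b2 are the data partitions, HD_a4 / HD_b4 the config partitions
# described above; the Volume_* shares are just those mount points exported
# over the network.
mount
df -h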


DNS-323     F/W: 1.04b84  H/W: A1  ffp: 0.5  Drives: 2X 400 GB Seagate SATA-300
DNS-323     F/W: 1.05b28  H/W: B1  ffp: 0.5  Drives: 2X 1 TB  WD SATA-300
DSM-G600   F/W: 1.02       H/W: B                Drive:  500 GB WD ATA

Offline

 

#9 2009-04-02 20:06:25

fordem
Member
Registered: 2007-01-26
Posts: 1938

Re: Switch from RAID 1 to Standard configuration

For the sake of completeness: Volume_1 could also be both drives, if you are running your disks in a RAID0, RAID1 or JBOD configuration.
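
A quick way to tell which case applies (a sketch; on these boxes the data array, when one exists, is normally /dev/md0):

# An active array lists its member partitions here; no md0 line at all
# means the disks are separate Standard volumes.
cat /proc/mdstat
# Illustrative RAID 1 output (not from this thread):
#   md0 : active raid1 sdb2[1] sda2[0]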

Offline

 

#10 2009-04-02 23:31:34

antovint
Member
Registered: 2009-03-28
Posts: 9

Re: Switch from RAID 1 to Standard configuration

Thanks bq and fordem.

I did this procedure:
- run the script
- shut down using the web admin tool
- remove drive 1 (left bay)
- reboot
- shut down using the web admin tool
- re-insert the drive
- run /mnt/HD_a2/break2.sh

I'm sure about that.
Anyway, what do you think I have to do to repair the errors? Maybe it's enough to run the second part of your script with both disks inserted?

Thanks in advance
Anto

Offline

 

#11 2009-04-03 00:27:10

antovint
Member
Registered: 2009-03-28
Posts: 9

Re: Switch from RAID 1 to Standard configuration

bq, maybe the trouble is due to a peculiarity of my NAS. I have two new Hitachi hard disks that aren't on the DNS-323 compatibility list, so when I shut down the NAS and then power it on from the front panel button, the hard disks are not recognized and the two HD LEDs keep blinking; I have to do a reboot from telnet or from the web interface before the NAS recognizes the disks. So when I did your procedure, after I re-inserted the disk in the left bay and powered on the NAS, I did another reboot so that the NAS would recognize the disks, and then I ran the second part of your script. Maybe that last reboot caused the procedure to go wrong?

Offline

 

#12 2009-04-03 01:57:49

antovint
Member
Registered: 2009-03-28
Posts: 9

Re: Switch from RAID 1 to Standard configuration

I ran break2.sh with both disks inserted and I get the same message:

root@MyNAS:~# sh /mnt/HD_a2/break2.sh start
mdadm: Couldn't open /dev/sdb2 for write - not zeroing
root@MyNAS:~# kill process
rmmod: lltd: No such file or directory
$Shutting down SMB services:
$Shutting down NMB services:
Refresh Shared Name Table version v1.04
umount: /mnt/HD_c*: not found
Refresh Shared Name Table version v1.04

Here is the break2.sh I used:

#!/bin/sh
# Second stage of the RAID break: only meant to run after the left-bay drive
# has been re-inserted.
grep -w sdb /proc/partitions >/dev/null 2>/dev/null
if [ ! $? -eq 0 ]; then
    # /dev/sdb is not present yet
    echo "Insert drive and run program again..."
else
    # Wipe the old RAID superblock from the re-inserted drive's data partition,
    # then clean up and reboot so the firmware picks it up as a Standard volume.
    mdadm --zero-superblock /dev/sdb2
    rm /mnt/HD_a2/break2.sh
    do_reboot
fi
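
If you want to double-check what state that left the drive in, mdadm can examine the partition directly. A sketch, run from the telnet/ssh shell with both drives inserted:

# Look for a leftover RAID superblock on the left-bay data partition.
mdadm --examine /dev/sdb2
# A "No md superblock detected on /dev/sdb2" style message means it is already
# clean; a full metadata dump means the superblock is still there and
# 'mdadm --zero-superblock /dev/sdb2' can be run again by hand.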

Offline

 

#13 2009-04-03 19:59:48

bq041
Member
From: USA
Registered: 2008-03-19
Posts: 709

Re: Switch from RAID 1 to Standard configuration

You should be okay. There was obviously something wrong with the superblock, but those are only used for RAID, so no problem.


DNS-323     F/W: 1.04b84  H/W: A1  ffp: 0.5  Drives: 2X 400 GB Seagate SATA-300
DNS-323     F/W: 1.05b28  H/W: B1  ffp: 0.5  Drives: 2X 1 TB  WD SATA-300
DSM-G600   F/W: 1.02       H/W: B                Drive:  500 GB WD ATA

Offline

 

#14 2009-04-04 12:52:40

antovint
Member
Registered: 2009-03-28
Posts: 9

Re: Switch from RAID 1 to Standard configuration

Tks bq!

Offline

 
