Haunted by the prospect of an incorrectly created file system on the DNS-323, especially when using the Seagate 1TB 7200.11 hard drives, and the consequent loss of data, I decided to go back to the old trusty Linux PC and a command line to verify what is going on. And indeed I found a discrepancy in the disk partition size.
The two disks from the D-Link enclosure show up as /dev/sda and /dev/sdb on the PC.
The inconsistency shows up in fdisk:
# fdisk /dev/sdb
The number of cylinders for this disk is set to 121601.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1          66      530113+  82  Linux swap / Solaris
/dev/sdb2             131      121404   974133405   83  Linux
/dev/sdb4              67         130      514080   83  Linux
Partition table entries are not in disk order
Note that /dev/sdb2 ends at cylinder 121404 while the disk has 121601 cylinders, so 197 cylinders, or approximately 1.6 GBytes, are wasted.
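As a quick sanity check, the wasted space can be computed in the shell from the cylinder size reported by fdisk above:
# echo $(( (121601 - 121404) * 8225280 ))
1620380160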
Since we have an exact copy of sda on sdb, I am going to delete partition sda2 and create it again, this time using the full disk capacity.
# fdisk /dev/sda
Command (m for help): d
Partition number (1-4): 2
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (131-121601, default 131):
Using default value 131
Last cylinder or +size or +sizeM or +sizeK (131-121601, default 121601):
Using default value 121601
Command (m for help): p
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1          66      530113+  82  Linux swap / Solaris
/dev/sda2             131      121601   975715807+  83  Linux
/dev/sda4              67         130      514080   83  Linux
Partition table entries are not in disk order
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Interestingly, the partition type is remembered between deleting and re-creating the partition.
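If the kernel refuses to re-read the new table because the disk is busy, partprobe (from the parted package, if you have it installed) can force the re-read without a reboot:
# partprobe /dev/sda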
Next I will create two RAID arrays, one with the properly sized disk and one with the old disk, as md0 and md1 respectively.
# mdadm --create --verbose /dev/md0 --level 1 --raid-devices=2 missing /dev/sda2
mdadm: /dev/sda2 appears to contain an ext2fs file system
size=974133312K mtime=Sat Feb 23 00:32:10 2008
mdadm: size set to 975715712K
Continue creating array? y
mdadm: array /dev/md0 started.
# mdadm --create --verbose /dev/md1 --level 1 --raid-devices=2 missing /dev/sdb2
mdadm: /dev/sdb2 appears to contain an ext2fs file system
size=974133312K mtime=Sat Feb 23 00:32:10 2008
mdadm: /dev/sdb2 appears to be part of a raid array:
level=raid1 devices=2 ctime=Sat Jul 12 12:15:45 2008
mdadm: size set to 974133312K
Continue creating array? y
mdadm: array /dev/md1 started.
Note that the difference in size is already showing: 975715712K versus 974133312K.
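You can double-check both array sizes at any time with mdadm (the grep is just for brevity):
# mdadm --detail /dev/md0 | grep 'Array Size'
# mdadm --detail /dev/md1 | grep 'Array Size'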
Then I will do a clean format of /dev/md0, which will become the data transfer target.
# mke2fs -v -m 0 /dev/md0
mke2fs 1.40.4 (31-Dec-2007)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
121978880 inodes, 243928928 blocks
0 blocks (0.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
7445 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
Writing inode tables: done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
Note the parameter "-m 0", which overrides the default 5% of the file system reserved for the root user.
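If you forget it at format time, the reserved space, as well as the periodic checks mentioned above, can be changed afterwards with tune2fs:
# tune2fs -m 0 /dev/md0
# tune2fs -c 0 -i 0 /dev/md0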
I have previously created the directories /mnt/source and /mnt/target, which will be used for copying the data that was stored on the DNS-323.
With a couple of commands I am going to mount both RAID arrays
# mount /dev/md1 /mnt/source
# mount /dev/md0 /mnt/target
and verify that it was done correctly:
# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md1             958846504 756342136 202504368  79% /mnt/source
/dev/md0             960404232     73364 960330868   1% /mnt/target
Finally I can kick-start the data transfer process:
# cp -av /mnt/source/* /mnt/target/
and watch the progress of the copy job:
# watch -n1 df -h
The copy process runs at about 4 GBytes/min.
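Should the copy get interrupted, rsync (if you have it installed) can pick up where cp left off instead of starting over; the trailing slash on the source also catches any hidden files that the shell glob above misses:
# rsync -av /mnt/source/ /mnt/target/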
When finished, we can take down /dev/md1, rebuild the partition table on /dev/sdb, and add the fixed partition to /dev/md0:
# umount /dev/md1
# mdadm --stop /dev/md1
# fdisk /dev/sdb
I am not going to repeat the commands for deleting and creating the second partition; follow the same steps as above for /dev/sda.
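Alternatively, sfdisk should be able to clone the corrected partition table from the fixed disk in one step:
# sfdisk -d /dev/sda | sfdisk /dev/sdb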
# mdadm --add /dev/md0 /dev/sdb2
mdadm: added /dev/sdb2
You can watch the progress of syncing RAID 1 using:
# watch -n1 cat /proc/mdstat
It runs at 90 MBytes/sec or 5.4 GBytes/min. Since it syncs the entire partition (not only the data), it will always take approximately 3 hours for 1 terabyte.
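If the resync runs slower than that on your hardware, it may be capped by the kernel's rebuild speed limit, which can be raised (the value is in KBytes/sec):
# echo 200000 > /proc/sys/dev/raid/speed_limit_max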
Finally, after shutting down the Linux PC, you can insert the disks back into the DNS-323 and turn the D-Link on; after a bit of negotiation, the RAID1 is fully recognized as clean and in sync.
For some reason it can take up to 10 minutes for the DNS-323 to boot up from the PC-created RAID array; all subsequent boots are within 90 seconds, which is the normal startup speed.
- SD
Hi,
I haven't looked in the sticky thread about hdd compatibility yet as I haven't decided on which model to buy, but I was thinking of getting 2 x 1TB Seagate drives for my 323, and then I read your post:
skydreamer wrote:
Haunted by the prospect of an incorrectly created file system on the DNS-323, especially when using the Seagate 1TB hard drives, and the consequent loss of data
I have seen that some other people are using this make/size of drive on this forum... I don't want to hit the 94% problem, or any other problem, if it can be avoided! Is there considered to be a general problem with Seagate 1TB drives on the 323?
thanks.
lu
I have 8 Seagate 7200.11 1TB disks running in various RAID1 arrays and they all perform fine. However, I have noticed that especially the old Sil 3114 PCI controllers choke on the new Seagate disks, so I would conclude that even the DNS-323 could have issues with these hard drives, which may be fixed in future firmware releases.
To make matters worse, I never succeeded in adding these disks, even when zeroed, to the DNS-323 RAID1 array; it always stopped at 94%. I ignored the fact that the format had not finished (see my previous post in the 94% thread), only to find out that after copying 500 GBytes of data the D-Link would not accept more files and e2fsck discovered disk errors - which gave rise to this thread.
Then again, these issues might be confined to RAID1.
skydreamer wrote:
To make matters worse, I never succeeded in adding these disks, even when zeroed, to the DNS-323 RAID1 array; it always stopped at 94%.
Damn, I wanted to use them in RAID1, but now I will make sure to read the sticky carefully and see who is running what and whether they have had problems!
cheers,
lu
Use Internet Explorer to admin the DNS-323. The 94% problem will go away.
Just an update on my own thread :-)
This procedure works only when the new disk pair to be inserted in the DNS-323 is assembled as /dev/md0 (not /dev/md1, etc.). For some reason beyond my comprehension, the DNS-323 declares both disks as failed if they were not assembled/mounted on the Linux PC as /dev/md0.
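In other words, when assembling the finished pair on the PC, give the array the md0 name explicitly, e.g.:
# mdadm --assemble /dev/md0 /dev/sda2 /dev/sdb2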