Here's the setup:
DNS-323 w/ 1.03 firmware.
2xWD 500 GB, 16 MB Cache, 7200 RPM.
The network runs through a DIR-655; all devices on the LAN are gigabit Ethernet.
I ran some benchmarks while the drives were formatted as Standard, i.e. Volume_1, Volume_2.
Then I reformatted them as RAID 0, so I only get Volume_1.
The results are rather disappointing - I'm getting the following:
SiSoftware Sandra
Benchmark Results for RAID 0.
Drive Index : 16 MB/s
Results Interpretation : Higher index values are better.
Random Access Time : 25 ms
Results Interpretation : Lower index values are better.
Performance Test Status
Run ID : HELIUM on January 6, 2008 at 1:46:30 PM
Platform Compliance : Win64 x64
System Timer : 2.4GHz
Operating System Disk Cache Used : No
Use Overlapped I/O : No
Test File Size : 4GB
Block Size : 1MB
Detailed Benchmark Results
Buffered Read : 27 MB/s
Sequential Read : 19 MB/s
Random Read : 13 MB/s
Buffered Write : 22 MB/s
Sequential Write : 17 MB/s
Random Write : 15 MB/s
Random Access Time : 25 ms
Drive
Drive Type : Network
Total Size : 913GB
Free Space : 913GB, 100%
Cluster Size : 1kB
Now, the catch is that the drives in "Standard" configuration come out very close to the above results - on average about 0.5 MB/s lower on reads than the RAID 0 numbers above.
Another issue is the drive access latency of 25 ms. I've checked to make sure that this isn't network latency.
Any comments? Should I just switch back to the "Standard" configuration of drives, just in case one drive "goes"?
Offline
lewekleonek wrote:
Any comments? Should I just switch back to the "Standard" configuration of drives, just in case one drive "goes"?
I haven't actually run throughput tests on RAID0 (maybe it's time I did), but the throughput tests I did run indicated that the bottleneck seemed to lie in the silicon. If that's the case, I would not expect a RAID0 configuration to show any improvement, since the advantage of RAID0 comes from eliminating the bottleneck of the disk mechanics.
Switching back to standard is a decision you need to make for yourself - just be aware that in a RAID0 configuration, failure of either disk pretty much guarantees a total loss of data.
Offline
I don't think RAID0 will give you much of a speed boost, since the bottleneck most likely lies in the processor of the DNS.
You may want to try jumbo frames on the DNS, which is so far the only way I know of to get a noticeable read/write speed gain (about a 30% increase in read speed and 10-15% in write). You need a PC network card and a gigabit switch that both support jumbo frames, of course.
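If you want to verify that jumbo frames actually make it end to end, a quick check from a Linux client looks like this (just a sketch - it assumes iputils ping; 192.168.0.32 is a placeholder for your DNS's address, and 4472 = 4500 minus the 20-byte IP header and 8-byte ICMP header):

    # send a packet sized for a 4500-byte MTU with fragmentation prohibited;
    # if any device in the path can't pass jumbo frames, this will fail
    ping -M do -s 4472 192.168.0.32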
Last edited by dickeywang (2008-01-07 00:51:41)
Offline
Alrighty then
Scenario #1 - single 250 GB Seagate Barracuda 7200.9 drive - WRITE 125866 kb/s, READ 157312 kb/s
Scenario #2 - 2x250 GB Seagate Barracuda 7200.9 drives in a RAID0 array - WRITE 124332 kb/s, READ 158215 kb/s
The other endpoint in these tests was a 250 GB Maxtor drive attached to a Siig SATA card in my IBM xSeries 206; the network is gigabit, and this combination is known to be good for something in the region of 290 mbps throughput. The numbers come from the SNMP counters on the network switch, and the data is a single 2GB test file. To be certain I really should do perhaps ten transfers each way and calculate an average, but I'm not really keen on spending the time, and in any case the results appear as I expected, given previous tests - the silicon appears to be the limiting factor. The throughput numbers are so close to one another that there would be no significant performance advantage to using RAID0.
Edit - Just for the heck of it, I thought I'd round the figures out.
Scenario #3 - 2x250 GB Seagate Barracuda 7200.9 drives in a RAID1 array - WRITE 120682 kb/s, READ 150128 kb/s
Scenario #4 - 2x250 GB Seagate Barracuda 7200.9 drives - two separate volumes - WRITE 149926 kb/s, READ 169958 kb/s
Scenario #5 - 2x250 GB Seagate Barracuda 7200.9 drives - two separate volumes - WRITE/READ 162719 kb/s, WRITE 80752 kb/s, READ 80247 kb/s
For scenario #3, there is a slight performance degradation compared to a single drive. I can understand it on the write, since the data must be written twice, but I have no explanation for why it should occur on the read - especially given the results of scenarios #4 & #5.
For scenario #4, I'm running two simultaneous data streams, one to/from each drive - both drives are either being written at the same time or read at the same time.
For scenario #5, I'm running two simultaneous data streams, where one drive is being written while the other is being read - the WRITE/READ number is the aggregate total, and yes, I know the numbers don't add up; it's probably something to do with the way the SNMP counters are polled.
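If anyone wants to repeat this kind of test without SNMP counters on the switch, a rough sketch from a Linux client would be something like this - assuming the NAS share is already mounted at /mnt/dns323 (a placeholder path) with a couple of GB free:

    #!/bin/sh
    # rough write-throughput check: copy a 2GB test file to the NAS
    # ten times and average the elapsed time
    dd if=/dev/zero of=test2g.bin bs=1M count=2048
    total=0
    for i in 1 2 3 4 5 6 7 8 9 10; do
        start=$(date +%s)
        cp test2g.bin /mnt/dns323/ && sync
        total=$(( total + $(date +%s) - start ))
    done
    echo "average seconds per 2GB copy: $(( total / 10 ))"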
Last edited by fordem (2008-01-07 03:03:22)
Offline
Thank you! Much appreciated!
I'm switching back to "Standard" - that way, if one drive goes, I only lose the data on that drive.
Cheers!
Offline
One interpretation of these data points is simply that disk drives themselves have become awfully good. If you're putting this year's drives into last year's enclosure, then there may be no benefit to using RAID 0, because the drives are so much faster than the silicon - especially in a real-world situation where smart controllers and big caches make a difference.
Offline
ummm - can't speak for anyone else, but at least in my case, these are year before last's (Sept 2006) drives in last year's (Jan 2007) enclosure. You perhaps need to recognize that this is a low cost, consumer grade NAS, and even if it weren't, I don't see a valid reason, in all honesty, to use RAID0 on a NAS box - there's just too much in between the application and the data.
It is, with today's hardware, fairly easy to saturate a 100 mbps LAN connection - I've managed to hit a sustained 98 mbps with a Dell Inspiron 1100, purchased in 2003 and at the time their cheapest laptop - making the network the probable bottleneck. Even on gigabit, unless you select your components with care ($$$), you're going to max out at ~300 mbps or so.
If you need (rather than want) RAID0, you've got to be doing some sort of graphics or video processing - put the RAID0 in the workstation and use the NAS for backup (there's the B word again).
Last edited by fordem (2008-01-07 05:57:42)
Offline
fordem wrote:
ummm - can't speak for anyone else, but at least in my case, these are year before last's (Sept 2006) drives in last year's (Jan 2007) enclosure. You perhaps need to recognize that this is a low cost, consumer grade NAS, and even if it weren't, I don't see a valid reason, in all honesty, to use RAID0 on a NAS box - there's just too much in between the application and the data.
I agree completely - my previous comment "this year's drives" etc. was sort of a flippant way to point out that the drives you can walk into the computer store and buy are very good - not really very far removed from the highest-end drives available. The DNS323 NAS, on the other hand, is a different story - pretty damn good for a US$200 Linux server, but probably not up to the task of saturating a RAID0 array.
Offline
Isn't it true that jumbo frames need to be supported by all devices? As far as I know the DNS-323 does not support jumbo frames.
dickeywang wrote:
I don't think RAID0 will give you much of a speed boost, since the bottleneck most likely lies in the processor of the DNS.
You may want to try jumbo frames on the DNS, which is so far the only way I know of to get a noticeable read/write speed gain (about a 30% increase in read speed and 10-15% in write). You need a PC network card and a gigabit switch that both support jumbo frames, of course.
Offline
xo-vision wrote:
Isn't it true that jumbo frames need to be supported by all devices? As far as I know the DNS-323 does not support jumbo frames.
dickeywang wrote:
I don't think RAID0 will give you much of a speed boost, since the bottleneck most likely lies in the processor of the DNS.
You may want to try jumbo frames on the DNS, which is so far the only way I know of to get a noticeable read/write speed gain (about a 30% increase in read speed and 10-15% in write). You need a PC network card and a gigabit switch that both support jumbo frames, of course.
The DNS's NIC does support jumbo frames. You can't find the jumbo frame settings in the web interface that came with the official 1.03 firmware, but you can still set the frame size over a telnet connection. Look at my previous post for details:
http://dns323.kood.org/forum/t921-Jumbo … 21%21.html
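In short, from a telnet session on the DNS (egiga0 is the gigabit interface name; 4500 is just one of the frame sizes people have tried):

    ifconfig egiga0              # check the current settings, including MTU
    ifconfig egiga0 mtu 4500     # raise the MTU for jumbo frames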
Last edited by dickeywang (2008-01-08 00:44:10)
Offline
I'll try that - how can I make "ifconfig egiga0 mtu 4500" permanent, so it survives a reboot?
Offline
xo-vision wrote:
I'll try that - how can I make "ifconfig egiga0 mtu 4500" permanent, so it survives a reboot?
Script it in fun_plug
Offline
Hi fordem!
I added "ifconfig egiga0 mtu 4500" to the fun_plug file - after a reboot I could not access the shares on the DNS anymore. Can you be more specific how to make jumbo frames permanent?
Thanx,
Chris
Offline
It's been a while since I played with it and I've forgotten the exact syntax - but whatever entries you can make from the telnet CLI prompt can be added to the fun_plug script - so if that entry works from the telnet CLI prompt it should work fine.
Offline
There could be some path or environment information in the CLI that is different from the fun_plug script environment. Try specifying the complete path to the ifconfig executable.
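Putting the two suggestions together, the fun_plug addition might look like this - note the /sbin path is a guess; check where ifconfig actually lives on your unit with "which ifconfig" from the telnet prompt:

    # in the fun_plug script - use the full path to ifconfig,
    # since the script's PATH may differ from the telnet shell's
    /sbin/ifconfig egiga0 mtu 4500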
Offline
just my 2c
I have RAID0 in my DNS-323
and RAID0 on my Dell Dimension 9150 (2.4 GHz duo) on a gigabit LAN
transfer rates seem constant at about 118-120 mbps
I also have a Shuttle AMD 1800, also with gigabit LAN, and 3x320 GB Seagate standard IDE drives (i.e. no RAID)
transferring the same data on the same network gets me about 150-155 mbps.
so from these figures I can say it is the D-Link CPU doing the striping that is the bottleneck
it is a shame they don't use the same hardware RAID as the Lacie
I have a 1TB RAID0 on FireWire 800 off my Dell and I get a fairly constant 400 mbps transfer over that
(a 4.20GB file took 1min 23sec over my Dell's FireWire 800; the same file took 4min 44sec from my Dell to the DNS-323)
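Sanity-checking those times against the rates above:

    4.20 GB / 83 s  = ~51 MB/s = ~405 mbps (FireWire 800)
    4.20 GB / 284 s = ~15 MB/s = ~118 mbps (DNS-323)

which lines up with the 118-120 mbps figure I quoted above.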
I hope that helps somebody
Last edited by buggymonkey (2008-01-09 17:21:56)
Offline
buggymonkey wrote:
it is a shame they don't use the same hardware RAID as the Lacie
Which Lacie device are you referring to?
Offline
A Lacie Big Disk d2 Extreme, triple interface: USB, FireWire 400/800.
Originally 2 Maxtor 250GB drives in RAID0 (hardware), but there was a thermal fault on one, so I replaced them with 2 Seagate 500GB IDE drives and improved cooling by drilling holes top and bottom in the metal case - so I now have a 1TB FireWire 800 drive.
I took the 2 Maxtor 250GB drives to work and found no fewer than 32 dry joints on the main chip of the controller board on one of the drives; I resoldered them, and the drive has been perfect (if a little hot) in a cheap USB caddy ever since.
Offline
http://www.smallnetbuilder.com/content/view/29936/79/
I came across this article, which compares throughput with different speed drives in the DNS-323. It basically shows that disk speed does not make much of a difference to DNS-323 performance - which, by extrapolation, also shows that since the disk is not the bottleneck, RAID0 would not provide much of an enhancement.
It also compares the throughput of another, more powerful NAS product with the same disks - and that I found even more interesting. I used to say that the bottleneck in the DNS-323 was in the silicon, because I had no way to pin down its exact location. One of the charts in the article shows throughput against file size, and it looks like there is a correlation with cache memory - that kind of points to the disk interface as being the slow link.
Interesting reading - if nothing else.
Offline