Hi!
Yesterday I finally received my HDDs for the CH3SNAS. I had ordered a Gbit switch along with them (a D-Link DGS-1005D). When I used it instead of my LevelOne FSW-0508TX 100 Mbit switch, the first thing I noticed was that the NAS was my only device that supports Gigabit LAN - stupid me, I had expected my desktop's NIC to support it too. I still tried to copy a 4 GB folder *from* the NAS to my desktop. While it had taken me about thirteen minutes to copy it *to* the NAS with my old switch, it now took nearly fifteen minutes. Read performance was actually worse than write performance - a result I hadn't expected.
So I switched back to the 100 Mbit Level 1 switch, and guess what? Copying the same folder in the same direction took only 08:28 min. So...
Does anyone have an idea why the transfer rate from the CH3SNAS to my desktop dropped so much with the Gigabit switch - roughly 75% more time (8:28 vs. nearly 15:00)? I'd have expected it to be at least as fast as with the 100 Mbit switch.
I didn't change anything in the network topology, I just exchanged the switch. Same cables & everything.
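Back-of-the-envelope, assuming the 4 GB folder is about 4 GiB and the gigabit run took roughly 15:00:

4 GiB / 508 s (8:28)  ≈ 8.1 MiB/s ≈ 68 Mbit/s  (100 Mbit switch)
4 GiB / 900 s (15:00) ≈ 4.6 MiB/s ≈ 38 Mbit/s  (Gbit switch)

So even the 100 Mbit switch wasn't being saturated, and the Gbit switch nearly halved the rate.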
Greetings!
Ben
I would have expected the gigabit switch to be at least as fast as your 100 Mbit switch as well - however, what you describe is not that unusual. Cables that work fine at 100 Mbps will sometimes cause problems at gigabit speeds, especially cables assembled by people not adhering strictly to the IEEE specifications.
As fordem said, I would look at the cables - I have seen exactly this many times. Also, I don't know where you live, but here I can get decent Gbit NICs for the desktop for around $9-$20. It is worth the investment.
One last note: the DGS-1005D you have has been discontinued by D-Link. I also checked out its specs, and the device has some issues with its autonegotiation of port speed - that may be why it was discontinued. I am using D-Link DGS-2208 switches with no problems.
Last edited by bq041 (2008-09-07 20:31:46)
@fordem: "(...) cables that work fine at 100 mbps will sometimes cause problems at gigabit speeds (...)"
Well, the cables might not be optimal, but they are not homemade. Plus, the network would "only" have to be as fast as with the 100 Mbps switch - that would already make me quite happy. It can't actually get much faster than before, because the CH3SNAS is the only gigabit-capable device, but it should not be significantly slower... :-(
@bq041: I would buy a gigabit NIC for my desktop, but as my notebook can't be upgraded that way, I'd like the network to perform well at 100 Mbps too. Here in Germany the DGS-1005D does not appear to be discontinued; on the German D-Link homepage I could not find any other 5-port Gbit switch. Where did you find the information about issues with its autonegotiation of port speed? That would be interesting to read.
In the FAQ on the USA D-Link site.
...because the CH3SNAS is the only gigabit-capable device, but it should not be significantly slower... :-(
Not necessarily. If the CH3SNAS connects at Gbit but the network cables are substandard for Gbit, it can actually transfer more slowly. Gbit uses all 4 pairs in the cable, where 100 Mbit only uses 2 pairs. For example, if you connect a Gbit device to a Gbit switch with a cable that only has the 2 pairs in it, it will not do anything at all - but that same cable from a Gbit device to a 100 Mbit switch will work fine. I'm not implying that there are only 2 pairs in your cable; my point is that assuming a Gbit network will be at least as fast as a 100 Mbit network using the same cables is only that - an assumption. The quality of the cabling needs to be higher for Gbit.
Last edited by bq041 (2008-09-08 16:18:04)
Just to elaborate on bq041's post.
All four pairs are required for gigabit Ethernet; if only two pairs are present, as may be the case with some 100 Mbps cables, the network should function at 100 Mbps speeds.
If all four pairs are present but miswired, a cable that functions well at 100 Mbps may function poorly or not at all on a gigabit LAN, regardless of the grade (even if it is CAT5e or CAT6).
If all four pairs are present and correctly wired but the cable itself is substandard - i.e. not CAT5e or CAT6 - then, depending on the length of the cable and the environment in which it is installed, a cable that functions well at 100 Mbps may likewise function poorly or not at all on a gigabit LAN.
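A quick way to rule cabling in or out is to check what each link actually negotiated. A sketch for a Linux client; ethtool and mii-tool are not part of the stock NAS firmware, so running them on the NAS itself is an assumption (fun_plug packages may provide them):

ethtool eth0       # look for "Speed: 1000Mb/s" and "Duplex: Full"
mii-tool -v eth0   # shows the negotiated link and what each end advertised
ifconfig eth0      # watch the errors/dropped counters climb during a transfer

A marginal cable can still negotiate 1000 Mbps; the link comes up, but CRC errors and the resulting TCP retransmissions eat the throughput - which would match the symptoms described above.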
fordem wrote:
... if only two pairs are present, as may be the case with some 100 Mbps cables, the network should function at 100 Mbps speeds.
Not necessarily true. My tests have shown it will do nothing at all in this case: the devices will attempt to sync at Gbit, but will fail. I have not personally had two Gbit devices work at 100 Mbit with only 2 pairs. I'm not saying that no devices will do it; I just have not seen them.
Last edited by bq041 (2008-09-08 20:09:47)
Guess I'll fire up the crimper tonight and do some validation....
Results to follow.
Regards,
BSPvette
Well - I just built me a cable - about a foot of CAT5 with two pairs (pins 1-2 and 3-6) connected - pulled the patch cord between the network switch (a Netgear FS728TS) and my desktop (a Dell OptiPlex 270 with a Broadcom NetXtreme 57xx Gigabit controller), inserted the two-pair patch cord, and I'm posting this across the resultant 100 Mbps link.
Ixia's QCheck reports 94.118 Mbps across the 100 Mbps link and 888.890 Mbps across the normal gigabit link (it also returns a warning that the latter test ran too quickly to give reliable results and recommends running it again with a larger data size - unfortunately, that was already with the maximum data size selected).
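For what it's worth, 94.118 Mbps is essentially wire speed for 100BASE-TX: a full-size TCP segment carries 1460 payload bytes out of 1538 bytes on the wire (preamble + Ethernet/IP/TCP headers + FCS + interframe gap), so the theoretical goodput ceiling is about

1460 / 1538 × 100 Mbps ≈ 94.9 Mbps

i.e. the two-pair cable is performing about as well as a 100 Mbps link can.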
Fordem got the results I expected....
<sigh>
Man is he quick!
BV
PS: Fordem, do you have QCheck running on the DNS-323?
Last edited by bspvette86 (2008-09-08 23:18:04)
I know that from my Dell notebook (I believe it has a Broadcom onboard NIC, but I don't have it turned on right now to check) to my D-Link DGS-2208 it does not work. I also didn't get it to work from the DGS-2208 to the DIR-655, but that one does not really count, as the cable had 3 pairs (pins 1-2, 3-6, and 7-8), not 2.
bspvette - no, QCheck was run between the desktop and an IBM server - and feel free to conduct your own tests; the more the merrier.
All of the gigabit switches I have used were Netgear products - GS716T, GS108T, and the gigabit ports on FS726T and FS728TS switches. bq041 has mentioned that he has been unsuccessful with D-Link switches - you might see similar (or different) results depending on the brand of your switch.
Fordem,
Bummer - I thought you had found/compiled a QCheck endpoint for the DNS... I'm still trying to figure out a network-only test to run against the DNS.
I have a mix of cheap gigabit switches - a Netgear GS105, a D-Link DGS-2208, and a DIR-655. They all work and play well together, and they all perform the same. I have run tests with CAT5, 5e, and 6, with similar results, on mostly 5+ year old equipment. Everything within 3 feet of the network gear is on 3-foot CAT6 patch cables. The best-performing box network-wise is a Mac Mini. I have never seen anything over 366 Mbps with QCheck except when running via loopback - 888+ Mbps there, with the warning that the test ran too quickly to give reliable results.
Need to pick up some more modern gear, but haven't had the need. (OK, so maybe I don't, after the results below....)
Cheers!
BSPvette
Last edited by bspvette86 (2008-09-09 06:22:47)
fordem wrote:
bspvette - no, QCheck was run between the desktop and an IBM server - and feel free to conduct your own tests; the more the merrier.
WOOO FRICKIN HOOO. Ixia has an ARM endpoint on their site that works on the DNS-323! It is the statically linked version. Here is the link.
http://www.ixiacom.com/endpoint_library … sl_660.tar
Mac Mini running OSX endpoint --> DGS-2208 switch --> DNS-323 running ARMsl endpoint = 444 Mb/s
Mac Mini running OSX endpoint --> DIR-655 switch --> DNS-323 running ARMsl endpoint = 470 Mb/s
Mac Mini running OSX endpoint --> straight-through CAT6 cable --> DNS-323 running ARMsl endpoint = 444 Mb/s
Mac Mini running OSX endpoint --> Netgear GS105v1 switch --> DNS-323 running ARMsl endpoint = 266 Mb/s
(The GS105v1 does not support jumbo frames; v2 does. I had to turn jumbo frames off for that test to work.)
Mac Mini running OSX endpoint --> DGS-2208 switch --> Asus A7N8X-E Deluxe w/ overclocked XP2500-M = 800 Mb/s
It appears I never ran QCheck with jumbo frames before... the best I had gotten in the past between the Mac and the desktop was 366 Mb/s.
CHEERS!
bspvette86
Last edited by bspvette86 (2008-09-09 16:21:06)
OK - how do I get this installed and running? Presumably I have to call it from fun_plug?
444 Mbps - that certainly eliminates the network side of things as a potential bottleneck.
Last edited by fordem (2008-09-09 05:15:26)
Fordem,
Download it, untar it, and run the ./endpoint executable in the temp directory that gets extracted.
Yes, you need telnet... unless you want to run it from fun_plug....
BTW, use 1000 KB for the data size on the throughput test. That gets rid of the "too fast" messages.
Also, make sure the DNS-323 is the target; it does not seem to work with the DNS-323 as the source for the test.
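In other words, something like this over telnet - a rough sketch; the archive and directory names are assumptions (use whatever the download linked a few posts up actually gives you):

cd /mnt/HD_a2                  # anywhere on the data disk with some free space
wget <URL of the ARMsl endpoint tar linked above>
tar -xf endpoint_armsl.tar     # archive name assumed
cd <extracted temp directory>  # directory name assumed
./endpoint &                   # start the endpoint in the background

Then run QCheck from the other machine with the DNS-323's IP as the target endpoint.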
Regards,
BSPVette
Last edited by bspvette86 (2008-09-09 06:00:52)
fordem wrote:
I didn't know Linksys made the GS105 - did you get around to crimping a two pair cable to see if it would connect?
Tee hee hee. Oops - I fixed that. It is a Netgear....
I passed on the 2-pair test and did a direct connect of the Mac to the DNS-323 via a straight-through CAT6 cable at 1000BASE-T (all four pairs). No need to mess around with a crossover cable if it works with a straight-through one - gigabit ports generally do automatic MDI/MDI-X crossover anyway.
Cheers!
BSPvette
Last edited by bspvette86 (2008-09-09 15:30:05)
OK - I got it working.
285.7 Mbps without jumbo frames and 400.0 Mbps with - I'll use it to fine-tune the jumbo frame settings later today when I have a little more time to fiddle.
This does, however, bring up another issue. There is another thread where we have been discussing the location of the "bottleneck" on this device. Obviously, with network throughput like this, it's not the network subsystem, and the processor would appear to be capable of "calculating checksums" a little faster than one might guess from the file transfer tests done with NASTester. We do see a significant increase in throughput with jumbo frames, though (internet discussions have suggested that jumbo frames provide enhancements on the order of 10%, rather less than the 30% seen here and the up to 55% that I have recorded using NASTester).
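For anyone wanting to reproduce the jumbo frame comparison: the DNS-323 side is set through its web interface, and on a Linux client the usual knobs look something like this - a sketch only, assuming the interface is eth0 and a 9000-byte MTU (your NIC and switch may cap it lower):

ifconfig eth0 mtu 9000       # raise the client MTU; NIC and switch must both support it
ping -M do -s 8972 <NAS IP>  # 8972 = 9000 minus 28 bytes of IP+ICMP headers;
                             # if this fails, something in the path drops jumbo frames

Every device in the path has to support the larger MTU, which is presumably why the GS105v1 test earlier only worked with jumbo frames off.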
If I am correct, your tests on disk throughput suggest that the disk subsystem is not the bottleneck, so we're right back to what's in between the disk and the network - the processor and the I/O buses.
Oh - now that I have reinstalled fun_plug and have telnet access, I'll take a look at the disk I/O tests you had suggested.
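In case it helps others, a crude disk-only test can be run with the stock busybox tools over telnet - a sketch, assuming the volume is mounted at /mnt/HD_a2; the 256 MB file is bigger than the box's 64 MB of RAM, so the read-back can't be served entirely from cache:

time dd if=/dev/zero of=/mnt/HD_a2/ddtest bs=1048576 count=256   # sequential write, 256 MB
time dd if=/mnt/HD_a2/ddtest of=/dev/null bs=1048576             # sequential read of the same file
rm /mnt/HD_a2/ddtest                                             # clean up

Divide 256 MB by the elapsed seconds to get MB/s in each direction.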
Lack of memory and inefficient drivers would be good candidates as well. It would be nice to get someone who has done the memory mod to run some of these tests, too.
BV
fordem wrote:
I would have expected the gigabit switch to be at least as fast as your 100 Mbit switch as well - however, what you describe is not that unusual. Cables that work fine at 100 Mbps will sometimes cause problems at gigabit speeds, especially cables assembled by people not adhering strictly to the IEEE specifications.
I use XBMC on 2 different XBOX 1 units. I replaced my D-Link 10/100 switch with a 10/100/1000 switch - and now I get jitter while playing video. Both XBOX units have a 10/100 switch next to them to connect the multiple boxes etc. I have at those locations. I get 1000 from DNS box to DNS box (I have 2) because they are both on the 10/100/1000 switch - which is expected.
If I swap the 10/100/1000 back for the 10/100, the video jitter goes away. All my cables are factory CAT6e-spec'd cables - I think it has something to do with the Samba buffering from the DNS boxes....
I am not sure tho... but it is a pain in the butt
Myk
I am in the process of upgrading my HDs from 500 GB to 1 TB. I am seeing a load average of 3.32 when all that is running on the box is the format of one drive. The top processes are pdflush, pdflush, and mke2fs; mke2fs is using 33 MB of memory, and the system is using 18 MB for disk buffers. It would be interesting to see what impact reducing the number of pdflush processes would have on performance (see the sketch after the top output below).
Pretty good indication of the bottleneck being:
1) CPU
2) Memory
3) Inefficient drivers
BSPvette
<edit>
PS: and CPU=100%
Mem: 60456K used, 1492K free, 0K shrd, 17252K buff, 1120K cached
Load average: 3.04 2.86 2.02 (Status: S=sleeping R=running, W=waiting)
PID USER STATUS RSS PPID %CPU %MEM COMMAND
50 root SW 0 5 37.8 0.0 pdflush
3014 root R 33700 3013 35.8 54.3 mke2fs
49 root SW 0 5 23.3 0.0 pdflush
51 root DW 0 1 1.9 0.0 kswapd0
2070 root R 164 1897 0.3 0.2 top
1543 root S 1208 1 0.0 1.9 webs
1615 root S 304 1 0.0 0.4 sh
1 root S 296 0 0.0 0.4 init
1603 root S 296 1 0.0 0.4 op_server
2047 root S 288 1 0.0 0.4 crond
3013 root S 280 2299 0.0 0.4 sh
2298 root S 280 1543 0.0 0.4 sh
1688 root S 272 1 0.0 0.4 lpd
1525 root S 260 1 0.0 0.4 chkbutton
1239 root S 248 1 0.0 0.3 atd
1551 root S 208 1 0.0 0.3 fancontrol
2299 root S 160 2298 0.0 0.2 format_ide
1897 root S 128 1891 0.0 0.2 sh
1891 root S 28 1 0.0 0.0 utelnetd
203 root SW 0 1 0.0 0.0 mtdblockd
231 root SW< 0 1 0.0 0.0 loop0
</edit>
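On the pdflush question: the number of pdflush threads is managed by the kernel itself, but the writeback behaviour that drives them is tunable through /proc on 2.6 kernels. A sketch - the paths should exist on this firmware, and the value written is purely illustrative, not a recommendation:

cat /proc/sys/vm/nr_pdflush_threads            # current number of pdflush threads (informational)
cat /proc/sys/vm/dirty_background_ratio        # % of RAM dirty before pdflush starts writing back
cat /proc/sys/vm/dirty_ratio                   # % of RAM dirty before writers get throttled
echo 5 > /proc/sys/vm/dirty_background_ratio   # illustrative: flush sooner, in smaller bursts

With 64 MB of RAM and the two pdflush instances already taking over half the CPU between them, earlier and smaller flushes might smooth the format out - but that's a guess until someone tests it.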
Last edited by bspvette86 (2008-09-10 21:41:11)
fonz wrote:
bspvette86 wrote:
3) Inefficient drivers
On IRC, maligor noted that disk I/O is about 50% faster with a recent vanilla kernel.
Fonz
What would be the easiest way with the least risk to get a vanilla kernel running on the 323?
Regards,
BSPvette
bspvette86 wrote:
What would be the easiest way with the least risk to get a vanilla kernel running on the 323?
Fonz,
Thanks... I'll give it a try and redo my tests....
CHEERS!
BSPvette