Feedback and comments welcome.
After some experiments I found the following settings helpful for increasing (internet) throughput.
You must set this on both your router and the DNS-343:
Jumbo Frames OFF (with jumbo frames, fragmentation happens and it kills throughput, since the CPU is not fast enough to compensate).
echo 0 > /proc/sys/net/ipv4/tcp_timestamps
echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout
echo 30 > /proc/sys/net/ipv4/tcp_keepalive_intvl
echo 5 > /proc/sys/net/ipv4/tcp_keepalive_probes
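These echoes don't survive a reboot. As a rough sketch (the script path is just a guess - put it wherever your fun_plug/ffp startup scripts actually live), something like this reapplies them at boot:
#!/bin/sh
# /ffp/start/tcp_tuning.sh -- hypothetical location, adjust to your setup
# Drop the TCP timestamp option (saves header bytes and a little CPU)
echo 0 > /proc/sys/net/ipv4/tcp_timestamps
# Let finished / idle connections go away sooner
echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout
echo 30 > /proc/sys/net/ipv4/tcp_keepalive_intvl
echo 5 > /proc/sys/net/ipv4/tcp_keepalive_probes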
Good job on this.
Could you explain why the TCP settings need changing, and what metric(s) you used for testing?
Remember that if your upstream provider doesn't support jumbo frames, then they're worthless.
I don't think Comcast here supports jumbo frames.
Neither does my firewall (it maxes out at 1500 bytes/frame, being only 100Base-T).
Jumbo frames are great, but you typically only have control over your local network.
Last, I'm not sure 100Base-T supports jumbo frames at all (although there may be some gear that does).
-Ben
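If you want to see what your own interfaces are actually set to, a quick check (the interface name eth0 is an assumption) is:
# Show the current MTU (1500 = standard frames, ~9000 = jumbo)
ifconfig eth0 | grep -i mtu
# Or, on hosts with iproute2:
ip link show eth0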
Just an anecdotal observation: I haven't tried these optimizations, but it seems the 343 is pretty speedy on its own. I mostly stream data over NFS, but these are simple observations copying between Win7 and the DNS-343 over Samba on a gigabit LAN. My DNS-343 is running RAID-5 with 4x 2 TB 7200 rpm drives.
With no other network or device usage at the same time:
I have seen a sustained max of 78 Mbps (9.5 MB/sec) writing a 6 GB file.
Reading that same 6 GB file off the DNS-343, a sustained max of 204 Mbps (25.5 MB/sec).
Without being able to test right now, I'd expect NFS to be slightly faster. Unfortunately, I cannot for the life of me get NFS working with Win7 Ultimate. Any time I mount my NFS shares, off either a DNS-323 or a DNS-343, the command shell locks up.
This seems like about the best I should realistically expect. I'm not sure what these optimizations could do to improve things... My router and switch do not support jumbo frames, though.
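One way to tell whether numbers like these are limited by the disks or by the network is to benchmark the array locally on the NAS, with no network involved. The mount point below is an assumption (check where your volume actually lives), and if your dd doesn't accept the 1M suffix, use bs=1048576:
# Sequential write of roughly 1 GB straight to the RAID volume
dd if=/dev/zero of=/mnt/HD_a2/ddtest bs=1M count=1024
# Sequential read of the same file back
dd if=/mnt/HD_a2/ddtest of=/dev/null bs=1M
# Clean up the test file
rm /mnt/HD_a2/ddtest
If the local numbers are far above what you see over Samba, the bottleneck is the network or protocol, not the drives.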
I have jumbo frames turned on, with my local hosts set to match, and there's a big difference between NICs as to what speed can be attained.
Oddly enough, I have two Dell GX270s with Intel PRO/1000 MT NICs that read at about 320 Mb/s (that's a little b, people) with P4-HT 3.2 GHz CPUs, whereas my quad-core AMD box (Asus motherboard with a Realtek NIC) maxes out at about 100 Mb/s.
Weird but true.
Linux kicks butt.
-Ben
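If you want to take the disks out of the picture and compare NICs directly, a memory-to-memory test with iperf works well - assuming you can get iperf onto both ends (e.g. as an ffp/Optware package; that availability is an assumption). The IP below is a placeholder:
# On the box acting as the server:
iperf -s
# On the client, run a 10-second TCP test against it:
iperf -c 192.168.1.50 -t 10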
robinson wrote:
Good job on this.
Could you explain why the TCP settings need changing, and what metric(s) you used for testing?
Turning timestamps off shortens the TCP header (the timestamp option adds roughly 12 bytes to every segment) and also uses a bit less CPU.
It's about a 1% improvement with tcp_timestamps off: I am able to max out my subscribed speed, whereas otherwise I hover at about 98-99%.
echo 0 > /proc/sys/net/ipv4/tcp_timestamps
These shorten how long finished or idle connections linger, which lowers the number of open connections (useful if you get lots of connection hits):
echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout
echo 30 > /proc/sys/net/ipv4/tcp_keepalive_intvl
echo 5 > /proc/sys/net/ipv4/tcp_keepalive_probes
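To confirm the changes actually took, and to get a rough before/after feel for how many sockets are lingering, something like this works (how many netstat options your firmware's busybox supports is an assumption):
# Read back the current values
cat /proc/sys/net/ipv4/tcp_timestamps
cat /proc/sys/net/ipv4/tcp_fin_timeout
# Count sockets sitting in the various WAIT states
netstat -tn | grep -c WAIT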
bkamen wrote:
Remember that if your upstream provider doesn't support jumbo, then it's worthless.
Yes... my operator does NOT support jumbo frames,
hence if I turn them on there is a lot of fragmentation, and the CPU is just not fast enough to compensate for the extra load...
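For anyone curious what their provider actually passes, pinging with the don't-fragment option from a Linux host will show whether a jumbo-sized packet survives the path (these are the iputils ping options; busybox ping doesn't have them, and the hostname is a placeholder):
# 8972 bytes of payload + 28 bytes of headers = a 9000-byte packet
ping -M do -s 8972 example.net
# If that fails or complains "message too long", try the standard 1500 MTU size
ping -M do -s 1472 example.net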