Unfortunately no one can be told what fun_plug is - you have to see it for yourself.
I am in the process of upgrading my HD from 500 GB to 1 TB. I am seeing a load average of 3.32 when all that is running on the box is the format of one drive. The top processes are two pdflush instances and mke2fs. mke2fs is using 33 MB of memory and the system is using 18 MB for disk buffers. It would be interesting to see what impact reducing the number of pdflush processes would have on performance.
Pretty good indication of the bottleneck being:
1) CPU
2) Memory
3) Inefficient drivers
BSPvette
PS: and CPU=100%
Last edited by bspvette86 (2008-09-10 17:06:12)
Offline
Now that I have my new drives in and formatted, I did some checking with top as my restore was taking place via FTP. CPU was pegged at 100% while I was getting a 22.5 MB/s transfer rate on a single-threaded restore. pure-ftpd and pdflush together consumed 97.3% of the CPU during the file transfer.
Cheers!
BSPvette
Mem: 60764K used, 1184K free, 0K shrd, 10756K buff, 38220K cached
Load average: 2.05 1.36 0.68 (Status: S=sleeping R=running, W=waiting)
PID USER STATUS RSS PPID %CPU %MEM COMMAND
1916 daddy R 1328 1658 89.3 2.1 pure-ftpd
50 root SW 0 5 8.0 0.0 pdflush
51 root DW 0 1 1.6 0.0 kswapd0
1899 root R 192 1778 0.2 0.3 top
1564 root S 2360 1 0.0 3.8 webs
1597 root S 2084 1 0.0 3.3 smbd
1605 root S 2076 1597 0.0 3.3 smbd
1906 daddy S 1272 1658 0.0 2.0 pure-ftpd
1601 root S 1260 1 0.0 2.0 nmbd
1658 root S 1136 1 0.0 1.8 pure-ftpd
1667 root S 556 1 0.0 0.8 lpd
1212 root S 324 1 0.0 0.5 crond
1618 root S 304 1 0.0 0.4 sh
1 root S 296 0 0.0 0.4 init
1544 root S 296 1 0.0 0.4 chkbutton
1607 root S 296 1 0.0 0.4 op_server
1177 root S 256 1 0.0 0.4 atd
1574 root S 228 1 0.0 0.3 fancontrol
1778 root S 208 1772 0.0 0.3 sh
1772 root S 60 1 0.0 0.0 utelnetd
49 root SW 0 5 0.0 0.0 pdflush
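A quick tally of the listing above (my own arithmetic, not from the post): the 97.3% figure quoted earlier is just the two busiest processes added together.

```python
# Sum the %CPU of the two hot processes from the top listing above.
pure_ftpd = 89.3   # %CPU of the active pure-ftpd worker (PID 1916)
pdflush = 8.0      # %CPU of the busy pdflush thread (PID 50)
total = pure_ftpd + pdflush
print(f"{total:.1f}% of CPU consumed by the transfer path")  # 97.3%
```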
Last edited by bspvette86 (2008-09-10 22:59:01)
Offline
I have decided to revive this thread with some new findings: under the latest firmware v1.06 the disk subsystem is indeed not the bottleneck.
I had to resync a RAID1 built on 1 TB Seagate disks and it ran at a full speed of nearly 100 MB/s, taking 150 minutes to finish, which is exactly the time it would take on a quad-core server.
So my conclusion is that the efficiency of the disk I/O system is fine, and since there is not much difference between NFS and Samba in speed tests, it all points to a CPU-hungry network driver, which could be down to limited NIC hardware features.
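A quick sanity check on those figures (my own arithmetic, assuming decimal units, i.e. 1 TB = 10^12 bytes):

```python
# Not from the original post: a 1 TB RAID1 resync finishing in 150 minutes
# implies an average rate of ~111 MB/s, in line with the "nearly 100 MB/s"
# figure quoted above.
size_bytes = 1e12            # 1 TB, decimal
minutes = 150
rate_mb_s = size_bytes / (minutes * 60) / 1e6
print(f"{rate_mb_s:.1f} MB/s")  # 111.1 MB/s
```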
Offline
Under 1.05 the network interface was capable of 50 MB/s (400 Mbps) transfers; this was determined using Ixia's QCheck, which transfers data from memory without using the disk subsystem.
For this test I was running the DNS-323 as one endpoint, my IBM xSeries server (Intel PRO/1000) as the other endpoint, and a Netgear GS108T linking the two, with jumbo frames set at 9000 bytes.
I'll "rerun" that test later today, along with a few others to see if there were any changes.
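For reference, the MB/s-to-Mbps conversion behind that figure (my own arithmetic, not from the post):

```python
# 50 MB/s on the wire corresponds to 400 Mbps, i.e. 40% of gigabit line rate.
mb_per_s = 50
mbps = mb_per_s * 8
print(mbps, "Mbps")                     # 400 Mbps
print(mbps / 1000 * 100, "% of GigE")   # 40.0 % of GigE
```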
Offline
skydreamer wrote:
I have decided to revive this thread with some new findings: under the latest firmware v1.06 the disk subsystem is indeed not the bottleneck.
I had to resync a RAID1 built on 1 TB Seagate disks and it ran at a full speed of nearly 100 MB/s, taking 150 minutes to finish, which is exactly the time it would take on a quad-core server.
So my conclusion is that the efficiency of the disk I/O system is fine, and since there is not much difference between NFS and Samba in speed tests, it all points to a CPU-hungry network driver, which could be down to limited NIC hardware features.
Not certain how you came by the numbers, but a rebuild of a 250 GB RAID1 array took me 73.5 minutes, which would be roughly half the speed you describe. The disks were Seagate Barracuda 7200.9. Are my disks that much slower than yours?
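Checking the ratio between the two reported resyncs (my own arithmetic, assuming decimal GB/TB):

```python
# Compare the two resync rates reported in this thread.
tb_rate = 1e12 / (150.0 * 60) / 1e6    # 1 TB in 150 min    -> ~111 MB/s
gb_rate = 250e9 / (73.5 * 60) / 1e6    # 250 GB in 73.5 min -> ~57 MB/s
print(f"{tb_rate:.0f} vs {gb_rate:.0f} MB/s, ratio {gb_rate / tb_rate:.2f}")
# 111 vs 57 MB/s, ratio 0.51
```

So "roughly half the speed" checks out.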
Offline
Skydreamer,
It doesn't matter how many times you revisit this thread, which firmware you use, how much memory you free up, or how many alien probes you have in your skull, the DNS-323 is CPU bound PERIOD. I gave up on the DNS-323 after finding the Intel D945GCLF2 motherboard last fall. Dual-core 1.6 GHz Atom 330 with GigE, up to 2 GB RAM, 1 free PCI slot, onboard SATA and PATA. See the following link for full specs:
http://support.intel.com/Products/Deskt … erview.htm
Got the retail board kit for $80 and a 2 GB DIMM for $20. Threw it in an old ATX case with a power supply I had lying around and BAM! It kicked my file transfer rates up to where I thought the 323 should have been out of the box. Then I went out and got a FAKERAID controller for the PCI slot and now have 6 SATA hard drives all running full blast at 65 watts. I am now free to run any OS/services and have one heck of a home file server that blows the under-featured DNS-3## boxes out of the water. Sure, I'm using a bit more power than the DNS, but in the grand scheme of things I am actually SAVING power by not needing another box up for RELIABLE printing, UPnP, etc.
Cheers!
BSPvette86
PS: Anyone need another DNS-323..... mine is just gathering dust...
Offline
bspvette86 wrote:
Skydreamer,
It doesn't matter how many times you revisit this thread, which firmware you use, how much memory you free up, or how many alien probes you have in your skull, the DNS-323 is CPU bound PERIOD. I gave up on the DNS-323 after finding the Intel D945GCLF2 motherboard last fall. Dual-core 1.6 GHz Atom 330 with GigE, up to 2 GB RAM, 1 free PCI slot, onboard SATA and PATA. See the following link for full specs:
http://support.intel.com/Products/Deskt … erview.htm
Got the retail board kit for $80 and a 2 GB DIMM for $20. Threw it in an old ATX case with a power supply I had lying around and BAM! It kicked my file transfer rates up to where I thought the 323 should have been out of the box. Then I went out and got a FAKERAID controller for the PCI slot and now have 6 SATA hard drives all running full blast at 65 watts. I am now free to run any OS/services and have one heck of a home file server that blows the under-featured DNS-3## boxes out of the water. Sure, I'm using a bit more power than the DNS, but in the grand scheme of things I am actually SAVING power by not needing another box up for RELIABLE printing, UPnP, etc.
Cheers!
BSPvette86
PS: Anyone need another DNS-323..... mine is just gathering dust...
bspvette86, you have made a number of good posts, but you are off topic here. This thread is about the DNS-323, not building custom servers or storage solutions.
There are a number of different discussion forums that would cater to your achievements.
Offline
Still, BSPvette86 has a point. I have three DNS-323 units now: running three separate devices is much less power efficient than, say, a single PC with a VIA C7 CPU.
All the UPS challenges, the automatic-restart issue after a power failure, etc. do not apply to a PC. Using XP, system administration can be done via RDP, the newest version of Twonky will work, performance is better, more disks can be used, etc.
I am also seriously thinking about selling my DNS-323 units and replacing them, even though I like them otherwise (I would miss NFSv3!).
Alternatives should be discussed in the DNS-323 forum as well. After all, discussion means exploring alternatives!
Offline
skydreamer wrote:
bspvette86, you have made a number of good posts, but you are off topic here. This thread is about the DNS-323, not building custom servers or storage solutions.
There are a number of different discussion forums that would cater to your achievements.
The point is the device is CPU bound. If you want better performance, you need different hardware.
Regards,
BSPvette86
Offline
NIC:
I went through Marvell's Alaska datasheet and also various discussions on the development forums, and there seems to be a fair bit of tweaking needed before the 88E1111 chip achieves its full potential.
The chip can run over copper and also fiber optic. It has a built-in CRC checker and packet counter, which offloads Layer 2 tasks. Layer 3 is obviously handled in software, but this should not result in heavy CPU load.
Tests:
The tests confirmed that the disk I/O subsystem and the NIC's TCP/IP engine are surprisingly efficient.
The bottleneck is in smbd and unfsd. smbd is currently better: less CPU load and a faster transfer. The user-space NFS daemon has 7% higher CPU overhead, which translates into a 37% drop in transfer speed.
There is not much room for improvement in smbd, I guess, but I would be quite interested in running the same test with a kernel-space NFS daemon.
Results below:
NFS copy to DNS-323:
Mem: 60548K used, 1356K free, 0K shrd, 10592K buff, 31436K cached
CPU: 10% usr 54% sys 0% nice 0% idle 19% io 0% irq 15% softirq
Load average: 1.26 0.81 0.39
PID PPID USER STAT VSZ %MEM %CPU COMMAND
1853 1 root R 6336 10% 76% /ffp/sbin/unfsd -e /ffp/etc/exports
Time to transfer 2.56 GB: 3 minutes 44 sec (11.5 MB/s)
Samba copy to DNS-323:
Mem: 60808K used, 1096K free, 0K shrd, 10564K buff, 32156K cached
CPU: 5% usr 68% sys 0% nice 2% idle 10% io 0% irq 13% softirq
Load average: 1.47 0.64 0.47
PID PPID USER STAT VSZ %MEM %CPU COMMAND
10680 1639 nobody R 5092 8% 69% /usr/sbin/samba/smbd -D
Time to transfer 2.56 GB: 2 minutes 21 sec (18.1 MB/s)
Samba copy to a reference FC8 quad-core server: 60 seconds, 42.6 MB/s, CPU load 33% on one core running at 2.66 GHz
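Recomputing the quoted rates from the reported times (my own arithmetic, assuming 2.56 GB = 2.56 * 10^9 bytes):

```python
# Derive MB/s from the transfer times reported above; the NFS-vs-Samba gap
# matches the ~37% drop mentioned earlier in this post.
size = 2.56e9
nfs = size / (3 * 60 + 44) / 1e6    # ~11.4 MB/s (reported 11.5)
samba = size / (2 * 60 + 21) / 1e6  # ~18.2 MB/s (reported 18.1)
fc8 = size / 60 / 1e6               # ~42.7 MB/s (reported 42.6)
drop = (samba - nfs) / samba * 100
print(f"NFS {nfs:.1f}, Samba {samba:.1f}, FC8 {fc8:.1f} MB/s; NFS is {drop:.0f}% slower")
```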
Offline