Hi all,
Where I work we have bought three of these units to back up four servers: one Windows, two Linux FC, and one FreeBSD. All the data goes to the same unit, and we rotate the units weekly.
We have been running some tests to see which network protocol is the most efficient for mounting the unit from the Linux and FreeBSD servers, and we have arrived at some very strange numbers that I would like to share and would appreciate any comments on.
Tests are presented by server. Each test consists of transferring a 1 GiB file of random data, generated beforehand from /dev/random.
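For reference, such a file can be created with something along these lines (the output path is just an example; /dev/urandom is shown in the sketch because /dev/random blocks once the entropy pool is drained):

    # create a 1 GiB file of pseudo-random data (1024 blocks of 1 MiB each)
    dd if=/dev/urandom of=/tmp/testfile.bin bs=1M count=1024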
== Linux 2.6.14-1.1656_FC4, network interface connects @ 100 Mbps
MOUNT TYPE | DIRECTION            | SPEED
smbfs      | push to the device   | 30.36 Mbps
cifs       | push to the device   | 31.56 Mbps
nfs        | push to the device   | 72.91 Mbps
nfs        | pull from the device | 77.02 Mbps
== Linux 2.6.16-1.2115_FC4smp, network interface connects @ 1 Gbps
MOUNT TYPE | DIRECTION            | SPEED
smbfs      | push to the device   | 41.89 Mbps
nfs        | push to the device   | 78.75 Mbps
nfs        | pull from the device | 73.32 Mbps
nc         | push to the device   | 58.36 Mbps
For whatever reason, we cannot mount the DNS-323 on this server using CIFS.
Also, transferring data over NFS results in weird behaviour: it is the only transfer type whose speed is not constant, swinging up to around 300 Mbps and down to 20 Mbps, which produces the average shown in the table. Note also that pulling from the DNS-323 over this protocol is actually slower than pushing data to it.
== FreeBSD 6.2-RELEASE, network interface connects @ 1 Gbps
MOUNT TYPE | DIRECTION            | SPEED
smbfs      | push to the device   | 111.09 Mbps
nfs        | push to the device   | 32.13 Mbps
nfs        | pull from the device | 56.03 Mbps
nc         | push to the device   | 87.43 Mbps
For whatever reason, this machine does not have mount_cifs, and reinstalling SAMBA has not helped. We'll look more closely into the problem when we have the chance.
==========
Mounting the device through NFS has been possible by fun_plugging it. The device is plugged into the same gigabit switch as the servers, and it has two 500 GB hard disks formatted as RAID 0, although we have done a couple of tests as JBOD with the fastest transfer type and, yes, the speed is the same. The nc test was done by running nc -l ... > ... on the device side and time nc ... < ... on the server side to transfer the file, roughly as sketched below.
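Something like this (the port number, address and file names are just placeholders, and the exact flags depend on the nc variant in use):

    # on the DNS-323 (listening side), write whatever arrives to disk
    nc -l -p 2000 > /mnt/HD_a2/testfile.bin

    # on the server (sending side), push the test file and time it
    time nc 192.168.0.32 2000 < testfile.bin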
So anyway, all in all the transfer speed is not exactly great. Note that the highest speed is achieved by mounting the drive over SMBFS from FreeBSD, while at the same time NFS is particularly slow on that OS. Just in case it had to do with FreeBSD's NFS implementation, we also tested the transfer speed between the FreeBSD server and one of the FC4 servers and achieved 300 Mbps. So it could be a compatibility problem, but it is not just FreeBSD's fault.
We also did some tests with scp and rsync over SSH, but with Dropbear the speed was so slow that we decided not to consider it any further. We have not tried rsyncd because it would not work with our backup system.
Any insights here, or advice on how to make the transfer more efficient, will be greatly appreciated.
NOTE: Mbps have been calculated as:
(((transfer size in bytes) / (time in seconds)) / (1024^2)) * 8
If you divide the result by 8 (i.e. leave out the final * 8), you get MiB/s.
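For example, a hypothetical transfer of the 1 GiB file in 100 seconds would come out as:

    # (1073741824 bytes / 100 s) / 1024^2 * 8  ->  81.92 Mbps
    awk 'BEGIN { printf "%.2f\n", ((1073741824 / 100) / (1024 * 1024)) * 8 }'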
Last edited by c2_alvaro (2007-09-16 16:12:06)
Offline
Thanks for sharing your measurement results. While the NFS speed at 100 Mbit/s looks perfectly sane to me, I agree that the Gbit/s cases fall short of what one might expect, especially with FreeBSD. I assume you have repeated the measurements a few times to make sure the numbers are stable. I also assume you've used the unfs3 server.
Can you provide the mount options you used for the NFS mounts? NFS performance can usually be improved by using the following mount options: 'rsize=16384,wsize=16384,noatime'. In the Gbit/s cases, jumbo frames could also improve performance (up to 30% has been reported).
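Something along these lines, for example (the address, export path and mount point are just placeholders):

    mount -t nfs -o rsize=16384,wsize=16384,noatime 192.168.0.32:/mnt/HD_a2 /mnt/dns323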
In case there's a problem with the NFS server implementation, a comparison with a kernel-based NFS server could provide new insights. I'm currently working on the procedure and on updates to the fun_plug.
Offline
Thanks for your reply, and for your fun_plug!
Yep, we repeated the tests with the strangest results; the numbers are what they are. And yes, I have used unfs3, the one that comes with your fun_plug 0.3.
As for the NFS options, I had tried the default ones. Anyway, FreeBSD does not accept the rsize and wsize parameters, but passing async does improve the transfer speed, of course. On Linux 2.6.16-1.2115_FC4smp, setting rsize and wsize to larger values seems to hurt performance, both with 16384 and with 8192. Passing async as a mount option, however, increases performance up to 92.93 Mbps.
While fine-tuning the options used when mounting can help, I don't think they are the reason for the low performance.
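For the record, the Linux mount that gave the 92.93 Mbps figure looked roughly like this (address and paths are placeholders):

    mount -t nfs -o async 192.168.0.32:/mnt/HD_a2 /mnt/dns323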
I have found that when transferring data from Linux 2.6.16-1.2115_FC4smp to the DNS-323, running top on the device shows the unfsd process consuming almost all available CPU time, around 97%, and this seems to be the bottleneck. On the other hand, when transferring data over smbfs at the highest speed possible, i.e. from FreeBSD, the CPU usage stays around 52%.
The CPU power that unfs3 requires is very high for the DNS-323; the same thing happens with Dropbear.
If all this is right and the bottleneck with NFS is the processing power of the device plus the NFS server implementation, then around 80 Mbps should be the maximum speed when storing data on the device for anyone, independently of the OS at the other end and of the network speed. Can you please confirm this?
After more tests, I have also found that FreeBSD's mount_nfs seems to perform poorly both against the DNS-323 and against Linux 2.6.16-1.2115_FC4smp, with a similar speed of around 30 Mbps in both cases. While I was able to get a good transfer rate of 300 Mbps, that was when mounting a FreeBSD volume from Linux, i.e. using the Linux NFS client against FreeBSD's NFS server implementation.
Offline
I just got a DNS-323 (upgraded to 1.03 firmware first thing). I've got a pair of 500G drives in a RAID 1 setup (a Samsung HD501LJ and a Western Digital WD5000AAKS; they are the exact same size). I was expecting better performance; I connected it to my gigabit network (Netgear 5 port switch) but I don't see anything like the speed I get from my computers.
I copied a Fedora DVD image onto the DNS-323. I then telnetted to the box and timed how long it took to dd it to /dev/null; I got a speed of about 36 MBps. Doing the same thing with the file sitting on a local drive on a Linux box, I got about 49 MBps, so 36 is not too bad (maybe a little slow, but still plenty quick).
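The timing was along these lines (the image path is just an example, and bs=1M is an assumption):

    # read the image off the RAID and throw it away, timing the whole thing
    time dd if=/mnt/HD_a2/fedora-dvd.iso of=/dev/null bs=1M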
When I dd the same file over a CIFS mount to a Linux box, I get about 13 MBps. On a Windows box, I get about 15 MBps. I tried unfs to Linux and got about 11 MBps.
Just netcatting from Linux to the DNS-323 (/dev/zero to /dev/null) I got about 27 MBps. Doing the same thing from the Linux box to the Windows box gave me about 38 MBps.
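Roughly like this (port and address are placeholders; a fixed byte count from /dev/zero keeps the transfer finite, and depending on the nc variant the sending side may not exit on its own when dd finishes):

    # on the DNS-323, discard whatever comes in
    nc -l -p 2000 > /dev/null

    # on the Linux box, send 1 GiB of zeros and time the pipeline
    time dd if=/dev/zero bs=1M count=1024 | nc 192.168.0.32 2000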
I guess the problem is that doing both disk and network I/O seems to slow the DNS-323 down significantly.
Interestingly, I was poking around on the box, and I see the Marvell SATA adapter as a PCI device (I think it is a PCI Express chip actually), but I don't see the Marvell gigabit ethernet device. Does anybody know how it is attached (some custom bus method)?
Offline