Unfortunately no one can be told what fun_plug is - you have to see it for yourself.
I need to transfer a lot of data (about 500GB worth) between the two drives in my DNS-323. Unfortunately I don't have a functional desktop computer that I can just let run all day and copy over the network. I have my laptop which I could use as a last resort but I'm wondering if there's any way to get the device itself to do the copying.
Offline
It has been asked and answered. Search is your friend.
Offline
The short answer is use the 'cp' (copy) command.
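For example, something along these lines (the paths here are just an example; on a stock DNS-323 the two volumes normally show up as /mnt/HD_a2 and /mnt/HD_b2, so check yours first):
cp -af /mnt/HD_a2/. /mnt/HD_b2/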
Offline
@fun
I figured as much, but there doesn't appear to be a verbose mode in busybox's implementation of it, which is kind of weird.
Last edited by skootles (2011-06-20 15:25:32)
Offline
skootles wrote:
@fun
I figured as much, but there doesn't appear to be a verbose mode in busybox's implementation of it, which is kind of weird.
If you want the fastest performance you don't want verbose mode.
Offline
I would rather know what's going on and sacrifice a little time.
I actually remembered rsync which has the --progress argument which lets me see the status of transfers and see if any failed to copy.
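Something like this, for example (paths illustrative; the DNS-323 volumes are typically /mnt/HD_a2 and /mnt/HD_b2):
rsync -a --progress /mnt/HD_a2/ /mnt/HD_b2/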
Offline
rsync is not a good candidate for the first copy: its delta-transfer machinery imposes a big performance hit, even if you use the whole-file option.
Personally, as far as progress goes when copying such a large number of files, I just look in another shell from time to time to see what has been created.
Offline
If you want to have everything run in the background (allowing your laptop to leave the network) you can do the following:
1) ssh/telnet to the DNS-323
2) run:
at now
at runs commands in the background, similar to a cron job.
3) a prompt will appear where you type the commands you want. I suggest:
echo start cp command > out.out 2>&1
cp -af /SRC /DEST >> out.out 2>&1
echo cp command finished, starting rsync command >> out.out 2>&1
rsync -a -vi /SRC/ /DEST/ >> out.out 2>&1
Ctrl-D
The -af option will preserve times and permissions in the cp
The rsync -a -vi will clean up anything that was lost, broken, or screwed up in the cp
The -vi option will tell you what cp screwed up
All this output will be left in out.out, so you can check on things with a simple cat out.out
Ctrl-D sends an end-of-file to the terminal and tells at that you have finished entering commands
4) Optionally you can confirm that at is running by typing:
atq
Furthermore, a simple cat out.out will tell you where it is in the process.
Then as always 'top' and 'ps' will tell you what is going on
5) once you are satisfied everything is running OK, 'exit' from ssh/telnet and go on with your day
....
....
...
6) Once you think everything should be done simply ssh/telnet in and inspect out.out to see what happened....
Take Care
Mike
Offline
I've set up my 323 to allow me to SSH into it. I am using rsync to copy one disk to the other. Though I have drives that support 3 Gb/s transfer speeds, I am getting transfer rates on large files of only 4 MB/s! Even allowing for overhead, that's a tiny fraction of the speed it should operate at. Why is it so slow?
Bill...
Offline
As already pointed out in this thread, rsync is slow. "cp" is significantly faster. They perform similar, but not identical, functions.
Last edited by FunFiler (2011-07-08 00:34:31)
Offline
Yes, I understand "faster" and would not be surprised at that, but magnitudes faster?!
The reason I didn't use cp is that it wasn't clear that cp would preserve all the symlinks, hardlinks, permissions, and ownership properties, so I went back to rsync.
Bill...
Offline
You can specify options to cp to preserve the information that you want.
Try an internet search on "man cp". Note the -a or --preserve= option.
rsync is designed to SYNCronize files between Remote machines. (Note the capitalized R and SYNC). When run locally, rsync will actually start a process for the source files, and a process for the destination files and copy the file data over IPC between the two processes. There is probably verification and maybe compression/decompression going on and that won't help on a weak ARM processor. All this is pure overhead compared to cp, so if you want to do it that way, expect things to be much slower.
rsync excels at what it is designed to do. As mentioned here, you might consider using cp for the initial copy and then rsync (even locally) to keep up with relatively small changes.
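A minimal sketch of that two-step approach (paths are placeholders, adjust to your volumes):
cp -a /mnt/HD_a2/. /mnt/HD_b2/
rsync -a -vi /mnt/HD_a2/ /mnt/HD_b2/
The first command does the bulk copy while preserving attributes; later rsync runs only have to transfer whatever changed since then.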
Offline
Thanks, Karlrado, cp, then rsync... that is a great idea!
Though one of the issues that stopped me was that hard links are not preserved by cp using just -R, so I wasn't sure, in general, how they would be handled if I just used -pr.
I'm using -W on rsync, so the delta-transfer overhead should be circumvented, I think. Do you know if there is an easy way to measure cp's throughput?
Offline
Not my idea - it was already discussed in this thread. I've never done it that way, partly because I was willing to let rsync make the initial copy. There were not that many files in the source to begin with. My machine is a server and it doesn't bother me too much if it is spending some time copying files.
If you are not sure how cp works, even after reading a man page, it is easy enough to test:
root@Toaster:~# cd /tmp
root@Toaster:/tmp# touch a
root@Toaster:/tmp# ln a b
root@Toaster:/tmp# ls -l a b
-rw-r--r-- 2 root root 0 Jul 8 09:12 a <<<---- The 2's imply that the files are linked
-rw-r--r-- 2 root root 0 Jul 8 09:12 b
root@Toaster:/tmp# mkdir aa
root@Toaster:/tmp# cp -a a b aa
root@Toaster:/tmp# ls -l aa
total 0
-rw-r--r-- 2 root root 0 Jul 8 09:12 a <<<---- The links are preserved with -a
-rw-r--r-- 2 root root 0 Jul 8 09:12 b
root@Toaster:/tmp# mkdir bb
root@Toaster:/tmp# cp a b bb
root@Toaster:/tmp# ls -l bb
total 0
-rw-r--r-- 1 root root 0 Jul 8 09:13 a <<<---- The links are not preserved
-rw-r--r-- 1 root root 0 Jul 8 09:13 b
It may not be a bad idea to copy a subset of your file tree over to the backup and then look at the results to make sure you have the cp parms the way you want.
Just for completeness, some people replicate file trees using a tar pipe. Tar is pretty good at preserving file attributes.
See:
http://www.commandlinefu.com/commands/v … e-to-cp-ra
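If you go that route, the pipeline looks something like this (directory names are placeholders; run it as root so ownership and permissions come across):
(cd /mnt/HD_a2 && tar cf - .) | (cd /mnt/HD_b2 && tar xf -)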
For measuring cp's throughput, I would just make a large file that would take a minute or so to copy and then time it:
root@Toaster:~# cd /mnt/HD_a2
root@Toaster:/mnt/HD_a2# dd if=/dev/zero of=test count=1048576 bs=1024
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 43.7581 s, 24.5 MB/s
root@Toaster:/mnt/HD_a2# time cp test /mnt/HD_b2
real 0m48.907s
user 0m0.140s
sys 0m33.790s
or about 22 MB/s
The throughput will probably be a bit slower if you have a lot of small files. So, it will depend a lot on the type of data you have stored. But you should be able to approach the rate for a single large file if you have a large set of media files like music and video.
Offline
thank you!
I never even noticed that column of ls output. I always watched the first column to see what type the file is.
I originally felt more confident about rsync's operation than about spending the time to experiment with cp. But had I known that rsync was THAT much slower, I would have spent more time up front--I never could have imagined that rsync would be so slow, so I thought it was some bottleneck in the device.
I didn't even think about using tar in the way referenced by your link, so that was also very instructive.
I used cp to copy the remaining directories. This worked well since they had simple (non-linked) but very large files. I then ran rsync to sync up the permission flags.
Thanks for the pointers and ideas (for next time!)
Bill...
Offline
I have another problem with copying: the file name characters get corrupted. Maybe it is a codepage problem, but how do I solve this?
Offline