Hi there
I've been hunting around trying to find an answer to this question to no avail.
I've written a basic fun_plug script to load the USB storage driver, format a USB drive, mount it and do a simple recursive file copy from the internal disks to the USB disk. I'd like to be able to use rsync to do this instead, as it will be far more efficient.
Is there a way of using/obtaining rsync as a standalone binary for this purpose? I don't really want to install a full-blown FFP; I'm quite happy with the few files that make this happen automatically at boot time, provided a USB drive is connected and powered on...
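Something along these lines, roughly (the device name, mount point and source path here are just placeholders, not my exact script):

# load the USB storage driver here (as the existing script already does)
mke2fs /dev/sda1                          # format the USB partition - device name assumed
mkdir -p /mnt/usb
mount /dev/sda1 /mnt/usb                  # mount the freshly formatted drive
cp -R /mnt/HD_a2/backup_source /mnt/usb   # plain recursive copy from the internal disk
umount /mnt/usb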
Offline
Just get the package http://www.inreto.de/dns323/fun-plug/0. … html#rsync and its dependencies, extract them, write some code in your fun_plug to use it, and it should work.
Offline
Hi
Thanks for your reply.
I'm still struggling a little. I've worked out from that page that I need:
rsync-3.0.7-1
libiconv-1.12-3
uclibc-0.9.29-7
I've got those files and have extracted them. They contain folders such as ffp, bin, share, start. I have no idea what to do with them. As far as I'm aware they also reference files expected in the ffp directory - but I don't have FFP installed (trying to do this without FFP).
Jamie
Offline
Jamie, not sure I understand your problem.
You are using a USB stick, enabled via fun_plug tools, to do some copies, right? Now you're asking about a more sophisticated way of doing these copies, e.g. via rsync, but you don't want to use fun_plug?
Might it be that you're running fun_plug already without knowing? A look around the wiki might be helpful to get an understanding of the components involved and how they interact.
Offline
I don't have any FFP components installed. I have a single fun_plug script on the root of my DNS-323. The script simply installs a USB storage driver and uses mount, format and cp commands to back up the NAS to a USB drive. What I wanted to do was use rsync instead of cp to do a more efficient copy. What I didn't want to do was install the entire FFP system.
Offline
Kai: ffp = fonz fun_plug, a special version of fun_plug made by fonz, not the generic one Jamie wants to keep using.
Ok, here is how it *should* work.
No pretty code, just instructions; you need to read/write the stuff all by yourself.
First, copy the extracted folders and their content to your USB drive.
Then add something like 'mount /your_USB/ffp /ffp' to your fun_plug, to make the ffp folder on your USB drive accessible via /ffp.
Then add /your_USB/bin and perhaps /your_USB/sbin to your $PATH in your fun_plug, so $PATH gets set every time you boot.
Now you should be able to use rsync; if not, come back with some console output.
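In fun_plug terms, the above boils down to something like this (the /mnt/USB path is only an example - use wherever your USB drive ends up mounted, and if the firmware's mount doesn't support bind mounts, a symlink does the same job):

mkdir -p /ffp
mount -o bind /mnt/USB/ffp /ffp       # or: ln -snf /mnt/USB/ffp /ffp
PATH=/ffp/bin:/ffp/sbin:$PATH         # so rsync can be called without the full path
export PATH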
Honestly, I don't understand why you don't want to use ffp. If you don't install/start *everything*, it doesn't slow down the system in any way or make it unstable.
Offline
Thanks for the reply, but I'm not sure why I would need to extract these files to the USB drive? The USB drive should just end up with a copy of all the files/folders on the NAS. I don't mind if the NAS has a few components on it to help with that. By the looks of those files you pointed me at, I'd have folders with loads of library files in them. I guess it's just not possible to use rsync without all of them.
I don't want to use FFP because, apart from using rsync, I have no reason to complicate my setup by installing it. I don't want to run into any unsupported, non-standard D-Link issues, such as drives not spinning down while FFP is running, or anything else that may cause problems. It's the same reason I avoid installing beta firmware. I need this box to be stable.
Offline
jamieburchell wrote:
Thanks for the reply, but I'm not sure why I would need to extract these files to the USB drive? The USB drive should just end up with a copy of all the files/folders on the NAS. I don't mind if the NAS has a few components on it to help with that.
You don't have to have them on the USB drive. They probably thought that you didn't want the components on the NAS drive for some reason.
Here is how I would go about it.
Untar the three packages into /mnt/HD_a2 (I'm assuming you know how to do this). This will create a /mnt/HD_a2/ffp dir with, theoretically, the minimum package set needed to run rsync. You'll only need to do this once.
In your custom fun_plug, you'll probably have to add a symbolic link from /ffp to /mnt/HD_a2/ffp. I'm guessing that rsync has some built-in library search paths that assume /ffp is there, so that it can find its libs in /ffp/lib.
Then add the rsync command you want in the fun_plug, probably invoking it with /ffp/bin/rsync.
At this point, it should work. I've not tried it though.
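As an example only - I don't know your exact source and destination, so the paths below are placeholders - the rsync step might look something like:

/ffp/bin/rsync -a --delete /mnt/HD_a2/Backup/ /mnt/usb/Backup/

where -a recurses and preserves permissions/timestamps, and --delete removes files from the USB copy that no longer exist on the NAS (leave it out if you don't want that).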
jamieburchell wrote:
By the looks of those files you pointed me at, I'd have folders with loads of library files in them. I guess it's just not possible to use rsync without all of them.
Right. The other two tar files provide the C runtime library (uClibc) and the iconv library, and rsync needs them to run. I'm not sure I'd grumble over getting it down to three packages.
If you really wanted to get around this, you could try building rsync with the required libs linked in statically. You'd end up with a much larger executable, but a simpler install. That's really another topic for another thread, but to sketch the idea:
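(This assumes a cross-compile toolchain for the DNS-323's ARM CPU is already set up; the arm-linux-gcc name is just a placeholder for whatever your toolchain calls its compiler.)

cd rsync-3.0.7
./configure --host=arm-linux CC=arm-linux-gcc LDFLAGS="-static"   # --disable-iconv may also let you drop the libiconv dependency
make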
jamieburchell wrote:
I don't want to use FFP because, apart from using rsync, I have no reason to complicate my setup by installing it. I don't want to run into any unsupported, non-standard D-Link issues, such as drives not spinning down while FFP is running, or anything else that may cause problems. It's the same reason I avoid installing beta firmware. I need this box to be stable.
Understood. As a data point, I installed ffp 0.5 on 1.07 FW and had none of these problems. ffp doesn't take over your machine or run things that you don't want. I didn't have issues until I started up all sorts of tasks and operations in the ffp environment, and they were all part of the process of configuring those things - my mistakes. But you don't have to do any of that. The ffp/start directory controls what gets started, and you can turn everything off there if you want.
You'd save yourself a lot of time by just trying ffp. If you don't like it, just replace fun_plug with your own fun_plug and the DNS won't even know it is there.
A couple of other thoughts:
- You could install a complete ffp and remove the files you don't want. Also check the ffp/start dir to control what runs (see the sketch after this list).
- The ffp/etc/fun_plug.init script "fixes up" a lot of things in the system in order to run telnet and a few other things. If you install ffp and want to minimize the changes from the stock system, you might consider removing some of these fixups. Many of them are not needed for rsync.
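For example, assuming ffp ends up under /mnt/HD_a2/ffp and that, as in ffp 0.5, only the scripts in ffp/start with the execute bit set are run at boot:

chmod -x /mnt/HD_a2/ffp/start/*.sh    # turn everything off
# then chmod +x only the scripts you actually want started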
Installing the full ffp will work and can be done quickly. But I think the three-tar-file approach will work too, with some care and tweaking. Why not just go ahead with it? This is a bit off the beaten path, so you won't find a cookbook on the wiki for this.
Offline
Thanks very much for your reply. I should be able to do that. I think you are right about rsync expecting library files in an ffp directory - if you open rsync in a text editor, there are plain-text references to ffp.
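Something like this would list those embedded paths properly, I guess (assuming the strings utility is available somewhere - it probably isn't in the stock firmware, so a Linux PC is easier):

strings rsync | grep /ffp    # show embedded strings mentioning /ffp, e.g. the library search path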
I don't know much about Linux, so the next step is to see what I need to do with symbolic links. Do you happen to know if, after running rsync, there would be any memory-resident processes left running that wouldn't have been there had rsync not been called?
Thanks again for your help.
Offline
In your custom fun_plug, you'd have to do:
ln -snf /mnt/HD_a2/ffp /ffp
assuming that /mnt/HD_a2/ffp is where you put the three untarred tar files.
Then, invoke rsync:
/ffp/bin/rsync <parms>
You might want something like:
exec >>/mnt/HD_a2/fun_plug.log 2>&1
at the top of the fun_plug or
/ffp/bin/rsync <parms> >/mnt/HD_a2/rsync.log 2>&1
to see if things work. I'd just run "/ffp/bin/rsync --help" first to check that it runs at all.
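Putting it all together, a minimal custom fun_plug along these lines should be enough (the final rsync arguments are placeholders for whatever copy you actually want):

#!/bin/sh
# send everything this script prints, stdout and stderr, to a log file
exec >>/mnt/HD_a2/fun_plug.log 2>&1
# make the extracted ffp tree visible where rsync expects it
ln -snf /mnt/HD_a2/ffp /ffp
# sanity check that the binary runs at all
/ffp/bin/rsync --help
# then the real copy, e.g.:
# /ffp/bin/rsync -a /mnt/HD_a2/source/ /mnt/usb/destination/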
BTW, exactly how did you untar the three tar files onto the NAS? Just curious.
Offline
7-Zip for Windows. It handles most archive formats - iso, zip, rar, tar and other Linux-based formats that confuse me. Odd bz files and archives within archives??
I don't think execute permissions will be an issue as all files copied to the NAS from Windows seem to have 777 permissions anyway.
I'll give it a go when I'm able to.
I'm curious about the redirects you did there with the ampersand and numbers. Also what does the first exec statement achieve? I usually just do command > file to capture output.
Offline
jamieburchell wrote:
7-Zip for Windows. It handles most archive formats - iso, zip, rar, tar and other Linux-based formats that confuse me. Odd bz files and archives within archives??
I don't think execute permissions will be an issue as all files copied to the NAS from Windows seem to have 777 permissions anyway.
OK, thanks. I was wondering because the tar program in the original FW (/bin/tar) cannot untar the newer tar file formats. I needed to use ffp's tar to do that. That makes it tricky to do on the device.
The tar format is a very old unix format that isn't compressed in itself. The tar (originally Tape ARchive) format retains a lot of details about files like ownership and permissions. Today, we ship things around with compressed tar files in several varieties because we want the functionality of tar and the space savings of compression.
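For reference, on a Linux box the extraction would be something like this (the .tgz filenames are assumed from the package names above; point -C at wherever you want the ffp tree to land):

tar xzf rsync-3.0.7-1.tgz -C /path/to/target
tar xzf libiconv-1.12-3.tgz -C /path/to/target
tar xzf uclibc-0.9.29-7.tgz -C /path/to/target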
jamieburchell wrote:
I'll give it a go when I'm able to.
I'm curious about the redirects you did there with the ampersand and numbers. Also what does the first exec statement achieve? I usually just do command > file to capture output.
If you put
exec >>/mnt/HD_a2/fun_plug.log 2>&1
at the top of your script, the script starts directing output to the log file. So, any subsequent echos go to the file instead of the screen. It is handy because you then do not have to do a redirect on every command.
The '>>' part means redirect stdout to the file, appending to what is already there. Actually, '>' might be better, as that will overwrite the file each time, which may be what you want. The '2>&1' means redirect stderr(file descriptor 2) to stdout(file descriptor 1). This is a way to mix together anything written to stderr and stdout. Some applications write error messages to stderr and routine output to stdout.
/ffp/bin/rsync <parms> >/mnt/HD_a2/rsync.log 2>&1
redirects the output (stdout and stderr) of rsync to another log file.
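A tiny illustration of the difference:

echo "copy finished"       # routine output - written to stdout
ls /no/such/directory      # the "No such file or directory" complaint goes to stderr
# with a plain '> log' only the first line is captured; adding 2>&1 captures both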
Offline
I had a quick read of the Wiki entry for TAR and the various compression methods after my last post. Thank you very much for taking the time to reply to my questions, it is much appreciated.
I vaguely remember reading previously about that stdout and stderr thing. It's good to know about that, and about the exec method. That should save me from having to redirect all of my commands to a log file, and it may also help when my log file doesn't contain error info at times - now I know why!
I'm going to mod my fun_plug script when I can to incorporate all of these things. I don't know if constantly reformatting a drive and copying the data to it fresh would wear the drives any more than using rsync, but rsync has got to be quicker and more efficient.
Offline
Having tried the suggestions above, I got an error in my log file along the lines of "rsync Input/Output error" after issuing a "rsync --help" command.
I've decided to stick with the script I have already. It's simple, it works and it only uses 2 files!
I've added the stdout and stderr redirect trick at the top though; that works well.
Thanks very much for your help guys.
Offline