Hi,
after having successfully set up SSH without a passphrase, I installed rsnapshot from the Optware package.
I set up a dedicated rsnapshot.conf named rsnapshot_media.conf and tested the syntax with rsnapshot -c rsnapshot_media.conf configtest. The syntax test passed OK.
Then I ran a manual rsnapshot via rsnapshot -c rsnapshot_media.conf hourly.
It hangs at "receiving incremental file list". Well ... I googled for error 255 but only got "error unknown".
Here are the configs and logs:
root@backup:/mnt/usbstorage/ipkg/opt/etc# rsnapshot -c rsnapshot_media.conf hourly
Setting locale to POSIX "C"
echo 20894 > /opt/var/run/rsnapshot.pid
mv /mnt/HD_a2/backup_media/hourly.3/ /mnt/HD_a2/backup_media/hourly.4/
mv /mnt/HD_a2/backup_media/hourly.2/ /mnt/HD_a2/backup_media/hourly.3/
mv /mnt/HD_a2/backup_media/hourly.1/ /mnt/HD_a2/backup_media/hourly.2/
mv /mnt/HD_a2/backup_media/hourly.0/ /mnt/HD_a2/backup_media/hourly.1/
mkdir -m 0755 -p /mnt/HD_a2/backup_media/hourly.0/
/opt/bin/rsync -av --delete --numeric-ids --relative --delete-excluded \
    --rsh=/ffp/bin/ssh root@daten:/mnt/HD_a2/folder/a-c/folder \
    /mnt/HD_a2/backup_media/hourly.0/localhost/
receiving incremental file list
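To narrow down where the 255 comes from (as far as I understand, 255 usually means the ssh transport failed rather than rsync itself), I will try running the two pieces by hand outside of rsnapshot; /mnt/HD_a2/rsync_test below is just a scratch directory:

# 1) check that the ssh binary rsnapshot uses works non-interactively
/ffp/bin/ssh root@daten 'echo remote shell ok'

# 2) run the same rsync by hand as a dry run into a scratch directory
mkdir -p /mnt/HD_a2/rsync_test
/opt/bin/rsync -av --dry-run --rsh=/ffp/bin/ssh \
    root@daten:/mnt/HD_a2/folder/a-c/folder/ /mnt/HD_a2/rsync_test/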
rsnapshot_media.conf
snapshot_root /mnt/HD_a2/backup_media/
program locations are set to /opt/bin/
except ssh, which I switched to the ssh in /ffp/bin/ssh
For testing I only included one directory to be backed up:
backup root@daten:/mnt/HD_a2/folder/a-c/folder/ localhost/
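For reference, the relevant lines are roughly the following (same paths as above; note that rsnapshot requires tabs, not spaces, between the fields):

# rsnapshot_media.conf (excerpt) - fields are tab-separated
snapshot_root	/mnt/HD_a2/backup_media/
cmd_rsync	/opt/bin/rsync
cmd_ssh	/ffp/bin/ssh
backup	root@daten:/mnt/HD_a2/folder/a-c/folder/	localhost/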
Log entry:
[31/Jan/2011:11:06:13] /opt/bin/rsnapshot -c rsnapshot_media.conf hourly: started
[31/Jan/2011:11:06:13] echo 21213 > /opt/var/run/rsnapshot.pid
[31/Jan/2011:11:06:13] mv /mnt/HD_a2/backup_media/hourly.4/ /mnt/HD_a2/backup_media/hourly.5/
[31/Jan/2011:11:06:13] mv /mnt/HD_a2/backup_media/hourly.3/ /mnt/HD_a2/backup_media/hourly.4/
[31/Jan/2011:11:06:13] mv /mnt/HD_a2/backup_media/hourly.2/ /mnt/HD_a2/backup_media/hourly.3/
[31/Jan/2011:11:06:13] mv /mnt/HD_a2/backup_media/hourly.1/ /mnt/HD_a2/backup_media/hourly.2/
[31/Jan/2011:11:06:13] mv /mnt/HD_a2/backup_media/hourly.0/ /mnt/HD_a2/backup_media/hourly.1/
[31/Jan/2011:11:06:13] mkdir -m 0755 -p /mnt/HD_a2/backup_media/hourly.0/
[31/Jan/2011:11:06:13] /opt/bin/rsync -av --delete --numeric-ids --relative --delete-excluded --rsh=/ffp/bin/ssh root@daten:/mnt/HD_a2/folder/a-c/folder/mnt/HD_a2/backup_media/hourly.0/localhost/
[31/Jan/2011:11:07:38] /opt/bin/rsnapshot -c rsnapshot_media.conf hourly: ERROR: /opt/bin/rsync returned 255 while processing root@daten:/mnt/HD_a2/folder/a-c/folder/
[31/Jan/2011:11:07:38] touch /mnt/HD_a2/backup_media/hourly.0/
[31/Jan/2011:11:07:38] rm -f /opt/var/run/rsnapshot.pid
[31/Jan/2011:11:07:38] /opt/bin/rsnapshot -c rsnapshot_media.conf hourly: ERROR: /opt/bin/rsnapshot -c rsnapshot_media.conf hourly: completed, but with some errors
ERROR TESTING
Test runs with a different config for local backup from
/mnt/usbstorage/opt/
as well as from the local disk
/mnt/HD_a2/
were successful.
SSH-Config:
Logging in without a passphrase on the remote DNS works, but I get the following message before logging in: socket: Address family not supported by protocol
root@daten:/mnt/usbstorage/home/root# which ssh
/opt/bin/ssh
~/.ssh/config is empty
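If that socket message is just ssh trying an IPv6 socket on a kernel without IPv6, it should be harmless; forcing IPv4 is supposed to silence it (an untested guess on my side):

# one-off test: restrict ssh to IPv4
/ffp/bin/ssh -4 root@daten 'echo ok'

# or permanently, in ~/.ssh/config on the backup box:
Host daten
    AddressFamily inet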
Permissions on the remote dir:
drwxrwxrwx 7 nobody 501 4096 Jan 17 11:42 FOLDER
Well ... any hint or help is appreciated.
Cheers,
Volker
Last edited by vschlenk (2011-01-31 18:04:09)
Offline
How many files are you trying to back up when you get the failure?
If there are many files (probably tens of thousands), rsync can run out of memory when generating the list of files that it needs to process.
This will tend to happen only when doing the initial backup or after adding a large number of files to the backup source. Note that it is the number of files and not the size of the files involved.
One way to verify is to watch the size of the rsync process with top or something while it is running. You might be able to see the process size increase quickly before it dies.
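Something along these lines from a second shell would let you watch it (a rough sketch, assuming busybox pidof and a /proc filesystem, which the 323 should have):

# poll the resident memory of the running rsync every 5 seconds
while true; do
    pid=$(pidof rsync | cut -d' ' -f1)      # may return several pids; take the first
    [ -n "$pid" ] && grep VmRSS /proc/$pid/status
    sleep 5
done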
This is a weakness with rsnapshot and rsync on this device, since there is not much memory for application use on the 323. One can argue that rsync is not being very helpful by storing the "files needing processing" in memory. I think that there's been some discussion of this on the nets. And I'm not sure if there is a solution in place. If your file set did not change much since the last rsnapshot, then there is no problem even if you have a lot of files. It is the number of new and changed files that matters.
For my use, this happens pretty rarely, as my file set does not change often. So, I have not worked on getting around the problem.
One way to get past it is to reduce the size of the backup file set by selecting a subset of the files you eventually want to back up. For example, if your "a-c/folder" has folders in it, try backing up those folders one at a time. Once you get all the sub-folders backed up, you can try going back to running it once with the parent folder.
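In the rsnapshot config that just means swapping the single backup line for one line per sub-folder, for example (sub1 and sub2 are made-up names, fields tab-separated):

# add these one at a time, running rsnapshot in between
backup	root@daten:/mnt/HD_a2/folder/a-c/folder/sub1/	localhost/
backup	root@daten:/mnt/HD_a2/folder/a-c/folder/sub2/	localhost/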
Another way is to copy the files that are going to be backed up for the first time from the source straight into the hourly.0 directory by hand. Then, subsequent invocations of rsnapshot should have a much smaller set of files to back up.
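A rough sketch of that manual seeding, assuming the layout rsnapshot produces with --relative (best to compare against an existing hourly.0 before relying on it); piping tar over ssh avoids building rsync's file list at all:

# recreate the path layout rsnapshot's --relative would produce, then copy into it
mkdir -p /mnt/HD_a2/backup_media/hourly.0/localhost/mnt/HD_a2/folder/a-c
cd /mnt/HD_a2/backup_media/hourly.0/localhost/mnt/HD_a2/folder/a-c
/ffp/bin/ssh root@daten 'cd /mnt/HD_a2/folder/a-c && tar cf - folder' | tar xf -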
Of course, working around this sort of problem in this way is only useful if it does not happen often. If you frequently have a large number of files that change between rsnapshot invocations, then we'd have to come up with a better way.
Maybe someone here has some ideas. I'd sure like to fix this as well.
Note that this is not a rsnapshot-specific problem. The same thing can occur with just using rsync or some other "time machine" script that uses rsync.
Offline
Hi Karlrado,
you are right - I read about this rsync memory problem, but I thought it had already been solved [see quote below]. My problem must be connected with my original rsnapshot settings, as I only use a very, very small number of files [10] which are in the test folder. I am inclined to dig deeper into the rsnapshot.conf settings; there has to be some ssh-specific option which I didn't use correctly - or should I perhaps also have a look at the rsync config?
http://samba.anu.edu.au/rsync/FAQ.html wrote:
memory usage
Rsync versions before 3.0.0 always build the entire list of files to be transferred at the beginning and hold it in memory for the entire run. Rsync needs about 100 bytes to store all the relevant information for one file, so (for example) a run with 800,000 files would consume about 80M of memory. -H and --delete increase the memory usage further.
Version 3.0.0 slightly reduced the memory used per file by not storing fields not needed for a particular file. It also introduced an incremental recursion mode that builds the file list in chunks and holds each chunk in memory only as long as it is needed. This mode dramatically reduces memory usage, but it only works provided that both sides are 3.0.0 or newer and certain options that rsync currently can't handle in this mode are not being used.
out of memory
The usual reason for "out of memory" when running rsync is that you are transferring a _very_ large number of files. The size of the files doesn't matter, only the total number of files. If memory is a problem, first try to use the incremental recursion mode: upgrade both sides to rsync 3.0.0 or newer and avoid options that disable incremental recursion (e.g., use --delete-delay instead of --delete-after). If this is not possible, you can break the rsync run into smaller chunks operating on individual subdirectories using --relative and/or exclude rules.
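If the memory issue ever does show up here, I guess the incremental-recursion route means first checking that both ends run rsync >= 3.0.0 (a quick sketch):

# check the rsync version on both ends
/opt/bin/rsync --version | head -1
/ffp/bin/ssh root@daten '/opt/bin/rsync --version' | head -1

If any options then needed changing (e.g. --delete-delay instead of --delete-after, as the FAQ suggests), rsnapshot's rsync_long_args parameter would be the place to adjust them.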
Cheers,
Volker
Offline
Right, memory exhaustion is not your problem if there are only 10 files.
Offline
right - but now a really, really funny thing happens - whilst scrolling in vi the terminal freezes. As I changed nothing other than the rsnapshot.conf file, this seems quite funny behavior. Especially as even a reboot reproduces exactly the same behavior. I will now try out some other activities on the system to verify whether this only happens within vi.
Offline
[31/Jan/2011:11:06:13] /opt/bin/rsync -av --delete --numeric-ids --relative --delete-excluded --rsh=/ffp/bin/ssh root@daten:/mnt/HD_a2/folder/a-c/folder/mnt/HD_a2/backup_media/hourly.0/localhost/
[31/Jan/2011:11:07:38] /opt/bin/rsnapshot -c rsnapshot_media.conf hourly: ERROR: /opt/bin/rsync returned 255 while processing root@daten:/mnt/HD_a2/folder/a-c/folder/
The first line in this log looks a little funny. This part in particular:
root@daten:/mnt/HD_a2/folder/a-c/folder/mnt/HD_a2/backup_media/hourly.0/localhost/
There should really be a space between the second "folder" and "/mnt".
Maybe you lost the space when pasting the log to the forum.
Maybe it would be good if you posted your entire rsnapshot config file, and/or double-checked what you've got for the "backup" line.
I also wonder if the '-' char in the 'a-c' is causing an option parsing problem. You might try enclosing the path to the source dir in double quotes in your rsnapshot config file.
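If you want to double-check the separators and try the quoting, something like this (just a sketch; the config path is taken from your prompt above, and cat -A is GNU cat - a plain busybox cat may not support it):

# show the backup line with separators made visible (a tab shows up as ^I)
grep '^backup' /mnt/usbstorage/ipkg/opt/etc/rsnapshot_media.conf | cat -A

# and the backup line with the source path in double quotes, still tab-separated:
backup	"root@daten:/mnt/HD_a2/folder/a-c/folder/"	localhost/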
Offline
I also wonder if the '-' char in the 'a-c' is causing an option parsing problem. You might try enclosing the path to the source dir in double quotes in your rsnapshot config file
Good idea!
Thx, Volker
PS: but as of now my main problem is that both of my DNS boxes started freezing the terminal when using vi some two hours ago. And it is repeatable from different systems/laptops which access the boxes via PuTTY/SSH. Thus there is no way to access the log or the config file at the moment. As I have not yet done a lot of configuration on the systems, I will probably reinstall Optware & FFP on both USB sticks and fun_plug on the first disks and restart from scratch ... making fewer mistakes than I made during this first try :-)
Offline
BTW, the out of memory condition is usually characterized by a return code of 13. I just hit it :-(.
Offline
Hi Karlrado,
thanks, this helps a lot for error checking. I can continue now, as I solved the problem with the freezing ssh sessions. The router was defective ..... :-(
Cheers,
Volker
PS: Won't be able to advance today, though, as there are some other issues to be solved [concerning the way I make my living :-)]
Offline