I have an rsync job set up to sync drive A to B nightly. Here is the crontab command:
/bin/echo "0 2 * * * /ffp/var/scripts/snapshot-all.sh >> /mnt/HD_a2/logs/snapshot-all.log 2>&1" >> $CRONTXT
which calls:
srcpath='/mnt/HD_a2/Media /mnt/HD_a2/External_Backup'
export PATH=/ffp/sbin:/ffp/bin:$PATH
dstpath=/mnt/HD_b2/Backup
ffppath=/ffp
date=`date +"%m-%d-%Y_%H:%M:%S"`

mkdir $dstpath/$date
$ffppath/bin/rsync -axvvi --progress --stats --link-dest=$dstpath/current $srcpath $dstpath/$date > /mnt/HD_a2/logs/snapshot-all.log 2>&1

var=`ls -lA $dstpath/$date | wc -l`
if [ $var -ne 0 ]
then
    rm $dstpath/current
    ln -s $date $dstpath/current
fi
I've noticed that my drive B (backup) is MUCH larger than the Media + External Backup folders it is backing up. When I do a `du -h -d 1 /mnt/HD_b2/Backup` I get:
root@zbackup:~# du -h -d 1 /hd2/Backup/
355.6G  /hd2/Backup/01-27-2011_10:07:13
16.0k   /hd2/Backup/.AppleDouble
22.9M   /hd2/Backup/01-29-2011_09:23:46
22.9M   /hd2/Backup/01-30-2011_02:00:00
385.3M  /hd2/Backup/01-31-2011_02:00:01
24.4M   /hd2/Backup/02-01-2011_02:00:01
24.7M   /hd2/Backup/02-02-2011_02:00:01
23.0M   /hd2/Backup/02-03-2011_02:00:01
17.6G   /hd2/Backup/02-04-2011_02:00:01
23.1M   /hd2/Backup/02-05-2011_02:00:00
23.1M   /hd2/Backup/02-06-2011_02:00:01
23.5M   /hd2/Backup/02-07-2011_02:00:01
16.8G   /hd2/Backup/02-08-2011_02:00:01
633.7M  /hd2/Backup/02-09-2011_02:00:00
23.9M   /hd2/Backup/02-10-2011_02:00:01
23.9M   /hd2/Backup/02-11-2011_02:00:01
23.9M   /hd2/Backup/02-12-2011_02:00:01
4.0G    /hd2/Backup/02-13-2011_02:00:01
30.5M   /hd2/Backup/02-14-2011_02:00:01
23.9M   /hd2/Backup/02-15-2011_02:00:01
12.5G   /hd2/Backup/02-16-2011_02:00:01
...etc
Is it taking roughly 24MB just to create the links?
I followed this tutorial from one of the wiki backups. If I just do a simple A-to-B copy nightly, will rsync run just as smoothly if I leave the --link-dest option out?
Offline
Directory entries also take space, so if you have many files, it's perfectly possible that the overhead will be over 20M. If you leave out the --link-dest option, then every directory will become a full backup, which takes a lot more space. If you still want to keep incremental backups, I recommend using rdiff-backup (http://www.nongnu.org/rdiff-backup/). It provides convenient high-level features for managing incremental backups, which will save you from doing all of that manually.
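For a taste of it, the basic usage looks like this (paths borrowed from your script above; 'somefile' in the restore example is just a placeholder):

# Back up: a current mirror plus reverse deltas kept under
# rdiff-backup-data/ inside the target directory
rdiff-backup /mnt/HD_a2/Media /mnt/HD_b2/Backup/Media

# Purge increments older than two weeks
rdiff-backup --remove-older-than 2W /mnt/HD_b2/Backup/Media

# Restore a file as it was three days ago ('somefile' is hypothetical)
rdiff-backup -r 3D /mnt/HD_b2/Backup/Media/somefile /tmp/somefile.restored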
Last edited by adambyrtek (2011-03-22 10:15:49)
Offline
If you're concerned about disk space, you may want to automatically delete snapshots that are older than 2 weeks, or whatever value makes sense to you. You can do that as part of the script that performs the snapshot.
You can see an example at the end of the script here: http://dns323.kood.org/forum/viewtopic. … 723#p37723
That's how I did it. You'll probably need to make some changes for it to work in your case.
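In case that link goes away, the gist is something like this (a rough sketch assuming the timestamped directory layout from the snapshot script above; dry-run it with echo before trusting the rm -rf, and note that some busybox builds of find lack -mindepth/-maxdepth):

# Prune snapshot directories older than 14 days; the 'current' symlink
# is skipped because find does not follow symlinks when testing -type d
dstpath=/mnt/HD_b2/Backup
find $dstpath -mindepth 1 -maxdepth 1 -type d -mtime +14 | while read dir; do
    rm -rf "$dir"
done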
Offline
And a lot depends on how often the files change. In the extreme case of the files never changing once they are backed up, you'll have nothing but the small overhead of directory links for each backup level. OTOH, if a bunch of big files change as fast as the backup frequency, most of your space will be taken up by versions of the large files.
It will take some tuning to adjust things to your pattern of file updates. But the advice to delete the older backup dirs is good for when you start to get full.
Just for another example, I use rsnapshot (the "hourlies" run every four hours):
653G /mnt/HD_b2/Snapshots/hourly.0/
159M /mnt/HD_b2/Snapshots/hourly.1/
155M /mnt/HD_b2/Snapshots/hourly.2/
155M /mnt/HD_b2/Snapshots/hourly.3/
155M /mnt/HD_b2/Snapshots/hourly.4/
155M /mnt/HD_b2/Snapshots/hourly.5/
614M /mnt/HD_b2/Snapshots/daily.0/
619M /mnt/HD_b2/Snapshots/daily.1/
618M /mnt/HD_b2/Snapshots/daily.2/
617M /mnt/HD_b2/Snapshots/daily.3/
617M /mnt/HD_b2/Snapshots/daily.4/
618M /mnt/HD_b2/Snapshots/daily.5/
618M /mnt/HD_b2/Snapshots/daily.6/
924M /mnt/HD_b2/Snapshots/weekly.0/
600M /mnt/HD_b2/Snapshots/weekly.1/
593M /mnt/HD_b2/Snapshots/weekly.2/
648M /mnt/HD_b2/Snapshots/weekly.3/
117G /mnt/HD_b2/Snapshots/monthly.0/
777G total
My link overhead is about 155MB.
Each night I pull down a 450MB database from a web site I run, which is why the daily snapshots are about 615MB.
If the disk ever gets close to full, I can clobber monthly.0 to reclaim a quick 117GB.
This disk has mostly media on it, and some backups of other systems. It is a fairly static disk.
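For anyone who wants to reproduce that rotation, the relevant rsnapshot.conf lines look roughly like this (a sketch, not my exact config; the paths are assumptions, and rsnapshot insists on tabs, not spaces, between fields):

# rsnapshot.conf excerpt - keep the counts in sync with the listing above
snapshot_root	/mnt/HD_b2/Snapshots/
interval	hourly	6
interval	daily	7
interval	weekly	4
interval	monthly	1
backup	/mnt/HD_a2/	localhost/

Then cron drives the rotation, e.g. `0 */4 * * * /ffp/bin/rsnapshot hourly` for the four-hourly runs (the rsnapshot path is an assumption).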
Offline
karlrado wrote:
Just for another example, I use rsnapshot (the "hourlies" run every four hours):
I've never used rsnapshot, but it looks pretty similar to rdiff-backup. Do you know how they compare?
Last edited by adambyrtek (2011-03-22 23:14:32)
Offline
adambyrtek wrote:
karlrado wrote:
Just for another example, I use rsnapshot (the "hourlies" run every four hours):
I've never used rsnapshot, but it looks pretty similar to rdiff-backup. Do you know how they compare?
OK, I've already found the difference: rsnapshot uses hardlinks while rdiff-backup maintains incremental deltas.
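You can actually see the hardlinking on disk; for a hypothetical file that hasn't changed between snapshots:

# Identical inode numbers (first column) mean both directory entries
# point at the same single copy on disk ('somefile' is a placeholder)
ls -li /mnt/HD_b2/Snapshots/hourly.0/somefile /mnt/HD_b2/Snapshots/hourly.1/somefile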
Last edited by adambyrtek (2011-03-22 23:16:37)
Offline
I am new to this type of thing. Does anyone have step-by-step instructions on how to set up rsync on the DNS-321?
Offline
From your ffp/packages directory, just type:
funpkg -i rsync-3.0.7-1.tgz
Then type 'rsync --help' and pick the options that you want for your environment.
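If all you want is the simple nightly A-to-B copy discussed above, one line is enough (a sketch; adjust the paths to your own shares):

# One-way mirror of drive A onto drive B; --delete removes files from
# the copy that no longer exist on the source
/ffp/bin/rsync -auv --stats --delete /mnt/HD_a2/ /mnt/HD_b2/backup/ > /mnt/HD_a2/logs/rsync_ab.log 2>&1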
To update your FFP packages, this is what I run in a script:
echo "`date` Rsyncing FFP from inreto.de to /ffp/0.5"
/ffp/bin/rsync -auv --stats --delete inreto.de::dns323/fun-plug/0.5/ /mnt/HD_a2/ffp/0.5/ >/mnt/usb/logs/rsync_inreto.log
if [ -f "/mnt/usb/logs/rsync_inreto.log" ]; then
    echo -n "`date` "
    /ffp/bin/grep "Number of files transferred" /mnt/usb/logs/rsync_inreto.log
fi
I also perform a backup of a few key directories every night to my usb stick:
for _f in "etc" "ffp" "opt"; do
    echo "`date` Rsyncing /${_f} to /mnt/usb/backup/${_f} log:/mnt/usb/logs/rsync_${_f}.log"
    /ffp/bin/rsync -auv --stats --delete /${_f}/ /mnt/usb/backup/${_f}/ >/mnt/usb/logs/rsync_${_f}.log
    if [ -f "/mnt/usb/logs/rsync_${_f}.log" ]; then
        echo -n "`date` "
        /ffp/bin/grep "Number of files:" /mnt/usb/logs/rsync_${_f}.log
        echo -n "`date` "
        /ffp/bin/grep "Number of files transferred:" /mnt/usb/logs/rsync_${_f}.log
    fi
done
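Both scripts run nightly from cron, set up the same way as the crontab line in the first post (the script name here is hypothetical):

/bin/echo "0 1 * * * /ffp/var/scripts/nightly-backup.sh >> /mnt/usb/logs/nightly.log 2>&1" >> $CRONTXT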
Offline
FunFiler, I'm trying to follow your echo lines where you grep something on the following line.
echo -n means don't add a newline after the echo... but after you echo the `date`, it doesn't seem like you ever echo "Number of files:"; you just grep it from rsync_${_f}.log.
I know I'm missing something here because your script I'm sure works as designed, but can you explain that part?
PS. I researched echo -n...and I'm still dumb.
Offline
The output of the script itself is redirected to a log file. The echo -n does indeed print the date without a trailing newline; I use that as a "header" for each line in the log. Remember, when you grep, it prints the entire line that matches the text you are looking for, not just a portion of the line, so grep's output completes the very line the date started. A sample of the output is
Wed Mar 30 00:00:28 EDT 2011 Rsyncing FFP from inreto.de to /ffp/0.5
Wed Mar 30 00:00:30 EDT 2011 Number of files transferred: 0
Wed Mar 30 00:00:30 EDT 2011 Rsyncing /etc to /mnt/usb/backup/etc log:/mnt/usb/logs/rsync_etc.log
Wed Mar 30 00:00:30 EDT 2011 Number of files: 112
Wed Mar 30 00:00:30 EDT 2011 Number of files transferred: 0
Wed Mar 30 00:00:30 EDT 2011 Rsyncing /ffp to /mnt/usb/backup/ffp log:/mnt/usb/logs/rsync_ffp.log
Wed Mar 30 00:00:37 EDT 2011 Number of files: 7915
Wed Mar 30 00:00:37 EDT 2011 Number of files transferred: 2
Wed Mar 30 00:00:37 EDT 2011 Rsyncing /opt to /mnt/usb/backup/opt log:/mnt/usb/logs/rsync_opt.log
Wed Mar 30 00:00:37 EDT 2011 Number of files: 33
Wed Mar 30 00:00:37 EDT 2011 Number of files transferred: 0
There are actually numerous logs: a summary that contains the info above, plus the detailed rsync logs should I wish to go review them. I check the summary log every day to monitor the units. It ultimately contains more than just what I posted above.
Sample rsync_inreto.log
receiving file list ... done
Number of files: 229
Number of files transferred: 0
Total file size: 278651585 bytes
Total transferred file size: 0 bytes
Literal data: 0 bytes
Matched data: 0 bytes
File list size: 8698
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 90
Total bytes received: 8738

sent 90 bytes  received 8738 bytes  3531.20 bytes/sec
total size is 278651585  speedup is 31564.52
Last edited by FunFiler (2011-03-30 07:35:43)
Offline
Ohhhh k, so you're grepping from the detail log and echoing the output into a summary log. Genius.
Thanks for the help
Offline
Yes, the summary gives me a quick and easy way to ensure all the processes have run, and have run correctly, as well as to check the general health of the box. It is particularly useful for the SmartCtl log, as that log is large, yet I pull out just the number of "pass" entries to ensure both drives are in good shape (a count of 2 means both drives passed).
LOG_FILE="/mnt/usb/logs/smartctl.log"
if [ -f "${LOG_FILE}" ]; then
    echo "`date` Clearing old SmartCtl log ${LOG_FILE}"
    rm ${LOG_FILE}
fi
for _f in "sda" "sdb"; do
    echo "`date` Dumping SmartCtl data for /dev/${_f} to ${LOG_FILE}"
    echo "`date` SmartCtl Data for /dev/${_f}" >> ${LOG_FILE}
    /ffp/sbin/smartctl -a -i -d marvell /dev/${_f} >> ${LOG_FILE}
done
if [ -f "${LOG_FILE}" ]; then
    echo -n "`date` SmartCtl log Diagnostic Pass count = "
    /ffp/bin/grep "self-assessment test result: PASSED" ${LOG_FILE} | /ffp/bin/wc -l
else
    echo "`date` WARNING: SmartCtl log ${LOG_FILE} does not exist"
fi
I rarely go and look at any of the detail now, but all logs get archived so I can review right back to day one if I need to.
#!/ffp/bin/sh
echo "`date` Running script $0"
export hostname=`hostname`
export cnd=`date +"%Y%m%d"`
/ffp/bin/tar -cz -f /mnt/usb/archive/${cnd}_${hostname}_Logs.tar.gz /mnt/usb/logs/* >/dev/null 2>&1
echo "`date` End of script $0"
# eof
Similarly with the rsync logs: if I see a high number of files being transferred, I can review the detail log to see what happened.
Last edited by FunFiler (2011-03-31 03:55:27)
Offline