First of all, I just want to applaud the information on this site! I bought a DNS-323 not long ago, and the original firmware was quite limited. However, after installing fun_plug and all, it has opened up all sorts of doors!
I bought this machine initially just to do backups. I didn't want to do mirroring because I'm not running a business that needs 24/7 uptime, and I am likely to accidentally delete a file that I wish to recover. Initially I tried the DNS-323's built-in download page, but it kept stalling and running into issues. When I found the thread on backing up with rsync, I knew that was what I was looking for, so I set it up and everything was great.
But this only gave me a safety net of one day because if I accidentally deleted or corrupted a file, it would get nuked overnight. I wouldn't be able to go back to, say, March 25 and recover that version of a file. What I needed was a backup system that would do incremental backups but allow me to restore to any version of any file on any day.
Then I read this page on doing backups:
http://blog.interlinked.org/tutorials/r … chine.html
I tried it, and it worked very well! In a nutshell, it uses rsync's hard-link functionality (--link-dest) so that every backup looks like a full copy, but any file that already existed unchanged in the previous backup is stored as a hard link, which saves the space. The way hard links work, for those of you who aren't familiar, is like a ref-counting system where multiple file names share the actual data. Until all hard links are deleted, the data will continue to exist.
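If you want to see that ref-counting behaviour for yourself, here's a minimal sketch (hypothetical file names) you can run in any scratch directory:
# create a file, then a second name (hard link) for the same data
echo "hello" > original.txt
ln original.txt copy.txt
# the second column shows a link count of 2, and both names share one inode
ls -li original.txt copy.txt
# deleting one name does not delete the data
rm original.txt
cat copy.txt   # still prints "hello"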
To get this to work, I created a little "snapshot.sh" script in the fun_plug.d/bin directory:
#!/bin/sh
# snapshot the source volume into a timestamped directory on the backup disk,
# hard-linking unchanged files against the previous snapshot ("current")
srcpath=/mnt/HD_a2
dstpath=/mnt/HD_b2
date=`date "+%Y%m%d_%H%M%S"`
mkdir $dstpath/$date
/mnt/HD_a2/fun_plug.d/bin/rsync -aivx --link-dest=$dstpath/current $srcpath $dstpath/$date > $dstpath/$date/rsync.log 2>&1
# if rsync produced a log, repoint the "current" symlink at the new snapshot
if [ -s $dstpath/$date/rsync.log ]
then
    rm $dstpath/current
    ln -s $date $dstpath/current
fi
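If you create the script by hand, remember to make it executable and give it one manual run to sanity-check the result before handing it to cron (paths as above):
chmod a+x /mnt/HD_a2/fun_plug.d/bin/snapshot.sh
/mnt/HD_a2/fun_plug.d/bin/snapshot.sh
ls -l /mnt/HD_b2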
Then in editcron.sh, I just have this line:
/bin/echo "00 2 * * * /mnt/HD_a2/fun_plug.d/bin/snapshot.sh" >> $CRONTXT
That's it! The first copy got everything, which is about 260 GB in my case. The second backup was quick and took about 25 MB extra for the hard links. I'm willing to spend a little HD space on the hard links for the convenience of having full copies of everything in every directory. Sure, it will take up about 10 GB of overhead for a full year's worth of backups, but that's not too bad considering what I get in return.
In the end, with a snapshot a day, /mnt/HD_b2 will look like this:
20080427_020001
20080428_020001
20080429_020001
current -> 20080429_020001
I can delete the backup from any day, or recover any file from any day, without worrying about how it would affect the previous or the next backup. It's great!
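Restoring is then just a copy out of the snapshot directory for the day you want. Note that because srcpath has no trailing slash, rsync places an HD_a2 directory inside each snapshot, so a hypothetical example (made-up file path) looks like:
cp -p /mnt/HD_b2/20080428_020001/HD_a2/documents/report.doc /mnt/HD_a2/documents/report.doc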
Anyway, I just thought some people may find this useful if they're also looking for a Time Machine type of backup with the DNS-323 and aren't experts on rsync or hard links.
Offline
This is also what BackupNetClone (http://backupnetclone.sourceforge.net/) does. Additionally, BNC uses SSH to encrypt transfers in case you want to do the backup over the Internet, it sends you status emails, and it cleans up old stuff as the disk gets full (available in the next version, which I'm working on releasing).
Offline
Hi raid123,
Thanks for your effort on this, it's appreciated.
I have set this up on fun_plug 0.5, so my snapshot.sh had to be modified for 0.5 as follows:
-------------------------snapshot.sh------------------------
#!/bin/sh
srcpath=/mnt/HD_a2/backedup
dstpath=/mnt/HD_b2/backup
date=`date "+%Y%m%d_%H%M%S"`
mkdir $dstpath/$date
/mnt/HD_a2/ffp/bin/rsync -aivx --link-dest=$dstpath/current $srcpath $dstpath/$date > $dstpath/$date/rsync.log 2>&1
if [ -s $dstpath/$date/rsync.log ]
then
rm $dstpath/current
ln -s $date $dstpath/current
fi
-------------------------snapshot.sh------------------------
which works great.
Following is my cronedit.sh, which is located in /mnt/HD_a2/ffp/start with permission to run.
Credit to mig in the following thread: http://dns323.kood.org/forum/t265-littl … ccess.html
-------------------------cronedit.sh-------------------------
#!/bin/sh
CRONTXT=/mnt/HD_a2/crontab.txt
# start with the existing crontab
/bin/crontab -l > $CRONTXT
# add the snapshot command
/bin/echo "00 2 * * 0 /mnt/HD_a2/ffp/bin/snapshot.sh" >> $CRONTXT
# install the new crontab
/bin/crontab $CRONTXT
# clean up
/bin/rm $CRONTXT
-------------------------cronedit.sh-------------------------
Because the script lives in /mnt/HD_a2/ffp/start, it re-installs the 02:00am Sunday cron job on every reboot, and because it reads the existing crontab first, no existing cron tasks are lost.
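After a reboot (or after running cronedit.sh by hand), you can confirm the entry was actually installed with:
/bin/crontab -l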
cheers
Last edited by index monkey (2008-09-02 15:23:06)
Offline
Very good tutorial, great for me too... but is it possible to copy only some folders, and not the full disk?
Like this:
srcpath=/mnt/HD_a2/Videos
srcpath=/mnt/HD_a2/Data
srcpath=/mnt/HD_a2/Pictures
dstpath=/mnt/HD_b2/Pictures
dstpath=/mnt/HD_b2/Data
dstpath=/mnt/HD_b2/Videos
?????
Tks.
Offline
You can back up select directories by using quotes and multiple folders:
srcpath="/mnt/HD_a2/Folder1 /mnt/HD_a2/Folder2 /mnt/HD_a2/Folder3'
But if your intention is to have a little workspace on the hard drive that you don't want to back up, you can do the opposite by specifying the folder you want to exclude. If there's only one folder, you can just add --exclude to the rsync command. For example:
/mnt/HD_a2/fun_plug.d/bin/rsync -aivx --link-dest=$dstpath/current --exclude='workspace/' $srcpath $dstpath/$date > $dstpath/$date/rsync.log 2>&1
Or you can use --exclude-from and put the list in a separate file. Be careful with exclude patterns, though: a pattern such as "downloads" matches a folder with that name anywhere under the source, so if you have other folders named "downloads", they will also be excluded.
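As a minimal sketch of the --exclude-from variant (the file location and folder names are hypothetical), put one pattern per line in a text file:
# /mnt/HD_a2/fun_plug.d/etc/exclude.txt
workspace/
downloads/
lost+found/
and point the rsync line in snapshot.sh at it:
/mnt/HD_a2/fun_plug.d/bin/rsync -aivx --link-dest=$dstpath/current --exclude-from=/mnt/HD_a2/fun_plug.d/etc/exclude.txt $srcpath $dstpath/$date > $dstpath/$date/rsync.log 2>&1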
Last edited by raid123 (2008-05-06 08:28:53)
Offline
By the way, I have modified my script slightly since my last post. The first version works, but here are some of the updates I did:
- The first version stored a copy of the log file with each backup. To save space, this new version keeps just a single copy of the log under fun_plug.d/log/snapshot.log, and instead of checking for the existence of the log file, it checks that the snapshot directory is not empty before updating the "current" link.
- Added an ffppath variable so that the fun_plug directory is more easily configurable. For example, fun_plug 0.5 users may find it in /mnt/HD_a2/ffp, and USB users may find it in /mnt/usb_1/fun_plug.d or /mnt/usb_1/ffp.
Here's the new snapshot.sh:
=======================================
#!/bin/sh
srcpath=/mnt/HD_a2
dstpath=/mnt/HD_b2
ffppath=/mnt/HD_a2/fun_plug.d
date=`date "+%Y%m%d_%H%M%S"`
mkdir $dstpath/$date
$ffppath/bin/rsync -aivx --link-dest=$dstpath/current --exclude='workspace/' $srcpath $dstpath/$date > $ffppath/log/snapshot.log 2>&1
if [ $(ls -1A $dstpath/$date | wc -l) -ne 0 ]
then
rm $dstpath/current
ln -s $date $dstpath/current
fi
========================================
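One thing to watch out for: the log is now written to $ffppath/log/snapshot.log, so that directory has to exist or the redirection will fail. If you don't have it yet, create it once before the first run:
mkdir -p /mnt/HD_a2/fun_plug.d/log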
Offline
Well... I have problems...
My snapshot.sh:
#!/bin/sh
srcpath="/mnt/HD_a2/NETWORK_DISK_01/ANIME /mnt/HD_a2/NETWORK_DISK_01/DATA /mnt/HD_a2/NETWORK_DISK_01/MUSICA /mnt/HD_a2/NETWORK_DISK_01/VIDEOS"
dstpath=/mnt/HD_b2/Backups
date=`date "+%Y%m%d_%H%M%S"`
mkdir $dstpath/$date
/mnt/HD_a2/ffp/bin/rsync -aivx --link-dest=$dstpath/current $srcpath $dstpath/$date > $dstpath/$date/rsync.log 2>&1
if [ -s $dstpath/$date/rsync.log ]
then
rm $dstpath/current
ln -s $date $dstpath/current
fi
The directory DATA has 23956 items / 24.3 GB, and the task always stops in this directory. If I remove this directory from the snapshots, everything works great.
The total backup with all directories is 80 to 90 GB, but the directory DATA has most of the files, many small files (all 23956 of them)...
A memory problem? Is there a solution? I only have Transmission running...
Tks.
Last edited by Nasp (2008-05-08 16:53:23)
Offline
Let's start by making sure there's plenty of disk space in the backup drive, just in case. Then take a look at the rsync.log file in the backup folder to see where it stopped. Try moving that file or directory out of the DATA folder, run snapshot.sh and see if it works. Maybe a single file or directory name is having problems with rsync.
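A couple of quick checks along those lines (the snapshot directory name below is hypothetical; use whichever one the failed run created):
df -h /mnt/HD_b2
tail -n 20 /mnt/HD_b2/Backups/20080508_020000/rsync.log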
Offline
raid123 wrote:
Let's start by making sure there's plenty of disk space in the backup drive, just in case. Then take a look at the rsync.log file in the backup folder to see where it stopped. Try moving that file or directory out of the DATA folder, run snapshot.sh and see if it works. Maybe a single file or directory name is having problems with rsync.
Well, it stops at a file named .pureftp-upload... What file is that? Maybe I have to stop the FTP server or the backups don't work. I'm also using the beta firmware 1.04a on my CH3SNAS... but for now I just removed that folder from DATA; tonight I will test again.
Thanks for the help again, raid123.
Last edited by Nasp (2008-05-09 02:33:54)
Offline
raid123,
Thanks for the post. I however ran into a problem with your latest script.
if [ $(ls -1A $dstpath/$date | wc -l) -ne 0 ]
Error output with set -x enabled:
+ ls -1A /mnt/HD_b2/20080514_215811
+ wc -l
wc: No such file or directory
+ [ -ne 0 ]
[: 0: unknown operand
I'm a newbie when it comes to scripts. I see some places that talk about adding quotes around the wc command:
if [ "$(ls -1A $dstpath/$date | wc -l)" -ne 0 ]
but this does not seem to be right/working either.
Thanks for the help
Newbie2
Last edited by newbie2 (2008-05-15 08:35:59)
Offline
wc should be part of one of fonz's fun_plug packages. Under 0.3, it should be part of fun_plug.d/bin and under 0.5 it should be under /ffp/bin, soft linked to busybox.
You can try replacing it with:
if [ $(ls -1A $dstpath/$date | $ffppath/bin/wc -l) -ne 0 ]
The line is just checking to see if there's something in the directory before moving the current link. You can even replace these lines
if [ $(ls -1A $dstpath/$date | wc -l) -ne 0 ]
then
rm $dstpath/current
ln -s $date $dstpath/current
fi
with
rm $dstpath/current
ln -s $date $dstpath/current
and the script should still work.
Last edited by raid123 (2008-05-15 09:45:07)
Offline
That was it. I thought wc was giving the error, not the system stating it couldn't find wc.
Great post and thanks for the help.
Newbie2
Last edited by newbie2 (2008-05-16 06:48:34)
Offline
This topic is what made me change my config from RAID 1 to the standard config of two single disks.
I did however make loads of backups during testing, and it got messy, so I decided to delete them all, start again and backup only once a week.
I deleted the destination path /mnt/HD_b2/backup; however, it wouldn't work again until I manually recreated the /mnt/HD_b2/backup folder.
Hope this helps if someone else runs into the same issue.
Offline
For those who don't know rsnapshot, have a look here: Rsnapshot for local and remote backup.
It does the same as the script proposed here in this post, but rsnapshot is now quite stable and very well tested.
Just a hint,
marinalink
Offline
fonz wrote:
marinalink wrote:
For those who don't know rsnapshot
There's always more than one way to do things. Personally, I like the simplicity of raid's script.
Absolutely! I fully agree. As I said, just a hint...
Just one comment in general for those who use rsync:
rsync needs approximately 100 bytes of RAM for each file to be synchronized (since it builds up a list of the files to be synchronized). This means that if you have 100,000 files to back up, this results in a memory usage of at least 10 MB.
Maybe someone has already encountered this problem. Tonight I realized that my DNS ran out of RAM and started swapping to disk, which of course slows down the backup process... A solution to this problem is to run rsync on subdirectories, to reduce the number of files that have to be synchronized and kept in RAM at once. For example:
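A minimal sketch of that idea (paths are hypothetical, and the layout differs slightly from raid123's script because each top-level folder gets its own rsync run):
#!/bin/sh
srcpath=/mnt/HD_a2
dstpath=/mnt/HD_b2
date=`date "+%Y%m%d_%H%M%S"`
mkdir $dstpath/$date
# one rsync per top-level folder keeps each file list (and its RAM usage) small
for dir in $srcpath/*/
do
    name=`basename $dir`
    /mnt/HD_a2/ffp/bin/rsync -aivx --link-dest=$dstpath/current/$name $dir $dstpath/$date/$name
done
rm $dstpath/current
ln -s $date $dstpath/current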
marinalink
Offline
marinalink wrote:
Just one comment in general for those who use rsync:
rsync needs approximately 100 bytes of RAM for each file to be synchronized (since it builds up a list of the files to be synchronized). This means that if you have 100,000 files to back up, this results in a memory usage of at least 10 MB.
That's certainly true for rsync version < 3. Memory usage is much smaller with rsync 3. From the NEWS file:
ENHANCEMENTS:
- A new incremental-recursion algorithm is now used when rsync is talking
to another 3.x version. This starts the transfer going more quickly
(before all the files have been found), and requires much less memory.
See the --recursive option in the manpage for some restrictions.
- Lowered memory use in the non-incremental-recursion algorithm for typical
option values (usually saving from 21-29 bytes per file).
Offline
fonz wrote:
That's certainly true for rsync version < 3. Memory usage is much smaller with rsync 3. From the NEWS file:
ENHANCEMENTS:
- A new incremental-recursion algorithm is now used when rsync is talking
to another 3.x version. This starts the transfer going more quickly
(before all the files have been found), and requires much less memory.
See the --recursive option in the manpage for some restrictions.
- Lowered memory use in the non-incremental-recursion algorithm for typical
option values (usually saving from 21-29 bytes per file).
Hm, you are right... strange that rsync needed 117% of my memory for 95 GB and 81,500 files. For 36 GB and 10,000 files it needs 15 MB. It's just an observation, but unfortunately I can't explain it...
Does anybody have an explanation for that?
marinalink
Offline
Wiki does not match this post!
quoted from http://blog.interlinked.org/tutorials/r … chine.html
"-a means Archive and includes a bunch of parameters to recurse directories, copy symlinks as symlinks, preserve permissions, preserve modification times, preserve group, preserve owner, and preserve device files. You usually want that option for all your backups. "
I don't see the -a in the wiki script. Is there a reason this was omitted there and not here?
Looking at this more, it seems the wiki entry on backup omits permissions, owner and group. What's weird is that the wiki links to this forum post and yet they don't seem to match. So I backed up my system following the DNS-323 wiki, tried to restore, and it failed because the permissions were not backed up, and the script also adds read and write for group and other!
EDIT: I have been updating this as I figure it out, sorry for the changes...
Last edited by jrose78 (2008-06-12 05:18:30)
Offline
Feel free to add the -a back and use any other flags that make sense to you. The reason they have -a in that post is that it was written for a full-blown Linux system where user accounts are used to log in. I'm not sure the usage of DNS-323 user accounts is the same. In my case, I did not use the accounts on the DNS-323 other than to provide access, and for extra protection I wanted my backup image to be read-only, so that a virus that deletes all the files on my shares cannot delete my backup image as well. That said, I've updated the wiki to use the -a flag once again.
What command are you using to restore, and why would you run into problems?
Last edited by raid123 (2008-06-15 18:56:30)
Offline
Since the -a is not in the backup, permissions come out wide open except that write is removed, which leaves others set to read. Since SSH uses a key file, everyone ends up with read access to the key, so SSH refuses to start.. and since I had disabled telnet for security, I was not able to log into the system without replacing the ffp folder. I appreciate the wiki post and showing everyone how to do this. Once I got it working, it is just amazing how easy it is to restore a file or do a system restore.
Last edited by jrose78 (2008-06-16 17:27:34)
Offline
I had this working, and then I decided to clear out my snapshots folder and start over because I had restructured the files. However, now it isn't creating the "current" folder. It is giving the following error:
"--link-dest arg does not exist: /mnt/HD_b2/v1_snapshots/current"
Any ideas? Thanks
Offline
Hi guys. This is a great script and I can't thank you enough.
However, I have a little problem. When I run the script (./snapshot.sh) it says:
./snapshot.sh: not found. But if I run each command from the command line, it works perfectly.
Obviously I don't want that.
Any ideas?
Thanks!
PS. Another question: I don't completely understand how this works. When I start to run out of space, what would be the correct way of making space? Just delete any folder that is not the first one or the last one, maybe? (I mean, I want to keep all the files from the first run and the last modified ones; I only want to know which ones NOT to delete.)
Thanks again!
Last edited by blizzard182 (2008-08-22 19:00:13)
Offline