Unfortunately no one can be told what fun_plug is - you have to see it for yourself.
Nevermind, it was a problem in my script. I put /bin/bash instead of /bin/sh
D'oh!
Nice script. Thanks again.
Offline
Hi,
This is exactly what I'm looking for, but I'm completely new when it comes to Linux/Unix.
This may be a stupid question, but how do I automatically run this script each night?
Offline
Balnes wrote:
Hi,
This is exactly what I'm looking for, but I'm completely new when it comes to Linux/Unix.
This may be a stupid question, but how do I automatically run this script each night?
Look at reply 3 on the first page.
There are two files/scripts, snapshot.sh and cronedit.sh; cronedit.sh makes it run automatically, according to how you configure it.
My cronedit.sh only runs once a week; I do not need backups each night.
This really is the best backup solution. Thanks one more time, RAID123.
Offline
Nasp wrote:
Look at reply 3 on the first page.
There are two files/scripts, snapshot.sh and cronedit.sh; cronedit.sh makes it run automatically, according to how you configure it.
My cronedit.sh only runs once a week; I do not need backups each night.
This really is the best backup solution. Thanks one more time, RAID123.
Thank you.
I found I had the cronedit.sh located in /mnt/HD_a2/ffp/bin/ not /mnt/HD_a2/ffp/start.
But as I'm a complete infant when it comes to this, I have a couple more questions:
1. Where do I set the parameters for once a day, twice a day, once a week, etc.?
2. I had the same problem as
blizzard182 wrote:
However, I have a little problem. When I run the script (./snapshot.sh) it says..
./snapshot.sh: not found. But if I run each command from the command line it works perfectly.
Any ideas?
Offline
Question 1 - Answer:
http://en.wikipedia.org/wiki/Cron (I googled it for you, but I can't always be here when you want someone to google for you!). You don't have to read it all; just skip down to the Fields section.
Question 2 - Answer:
./ assumes you are in the directory where the snapshot.sh script is located. So, does the script run if you give the full path, e.g. /mnt/HD_a2/ffp/bin/snapshot.sh?
Also, note what blizzard182 said: "Nevermind, it was a problem in my script. I put /bin/bash instead of /bin/sh."
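To make the cron fields concrete, here are two example crontab lines (the times and the script path are only examples; adjust them to your own setup):
# run the snapshot every night at 02:00
0 2 * * * /mnt/HD_a2/ffp/bin/snapshot.sh
# or only once a week, on Sundays at 02:00
0 2 * * 0 /mnt/HD_a2/ffp/bin/snapshot.sh
The five fields are minute, hour, day of month, month and day of week; a * means "every".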
Last edited by index monkey (2008-09-02 14:44:45)
Offline
I have made a howto in Portuguese (my language) to help native Portuguese speakers use this simple, amazing backup solution on the CH3SNAS/DNS-323. Here is the howto:
http://cria-o-teu-avatar.blogspot.com/2 … na-do.html
Thanks, raid123
Offline
index monkey wrote:
Question 1 - Answer:
http://en.wikipedia.org/wiki/Cron (I googled it for you, but I can't always be here when you want someone to google for you!). You don't have to read it all; just skip down to the Fields section.
Question 2 - Answer:
./ assumes you are in the directory where the snapshot.sh script is located. So, does the script run if you give the full path, e.g. /mnt/HD_a2/ffp/bin/snapshot.sh?
Also, note what blizzard182 said: "Nevermind, it was a problem in my script. I put /bin/bash instead of /bin/sh."
Thank you for the answers and the patience. I'm starting to find my way :)
A1: I should have seen that :)
A2: I still can't run the scripts.
I have this file in /mnt/HD_a2/ffp/start
-------------------------cronedit.sh-------------------------
#!/bin/sh
CRONTXT=/mnt/HD_a2/crontab.txt
# start with the existing crontab
/bin/crontab -l > $CRONTXT
# add the snapshot command
/bin/echo "20 19 * * * /mnt/HD_a2/ffp/bin/snapshot.sh" >> $CRONTXT
# install the new crontab
/bin/crontab $CRONTXT
# clean up
/bin/rm $CRONTXT
-------------------------cronedit.sh-------------------------
and this file in /mnt/HD_a2/ffp/bin
-------------------------snapshot.sh------------------------
#!/bin/sh
srcpath='/mnt/HD_b2/Backups/Hanne /mnt/HD_b2/Backups/Petter'
dstpath=/mnt/HD_b2/Backup
date=`date "+%Y%m%d_%H%M%S"`
mkdir $dstpath/$date
/mnt/HD_a2/ffp/bin/rsync -aivx --link-dest=$dstpath/current $srcpath $dstpath/$date > $dstpath/$date/rsync.log 2>&1
if [ -s $dstpath/$date/rsync.log ]
then
rm $dstpath/current
ln -s $date $dstpath/current
fi
-------------------------snapshot.sh------------------------
When I'm in /mnt/HD_a2/ffp/bin and type ./snapshot.sh, I still get the error. See below:
root@dlink-7FA600:~# cd ffp/bin
-sh: cd: can't cd to ffp/bin
root@dlink-7FA600:~# ./snapshot.sh
-sh: ./snapshot.sh: not found
Any ideas? Are my file paths wrong?
Offline
Balnes wrote:
root@dlink-7FA600:~# cd ffp/bin
-sh: cd: can't cd to ffp/bin
Use absolute paths:
cd /ffp/bin
./snapshot.sh
or
/ffp/bin/snapshot.sh
Also, ensure that the script is executable:
chmod a+x /ffp/bin/snapshot.sh
Offline
Hi,
Thanks for all the help so far.
I still can't run snapshot.sh.
I can make it executable, so the file is there, but when I try to run it, it says not found. See below, from PuTTY.
root@dlink-7FA600:/mnt/HD_a2/ffp/bin# chmod a+x snapshot.sh
root@dlink-7FA600:/mnt/HD_a2/ffp/bin# ./snapshot.sh
-sh: ./snapshot.sh: not found
root@dlink-7FA600:/mnt/HD_a2/ffp/bin#
Same with cronedit.sh
Any clue?
Offline
Balnes wrote:
Hi,
Thanks for all the help so far.
I still can't run snapshot.sh.
I can make it executable, so the file is there, but when I try to run it, it says not found. See below, from PuTTY.
root@dlink-7FA600:/mnt/HD_a2/ffp/bin# chmod a+x snapshot.sh
root@dlink-7FA600:/mnt/HD_a2/ffp/bin# ./snapshot.sh
-sh: ./snapshot.sh: not found
root@dlink-7FA600:/mnt/HD_a2/ffp/bin#
Same with cronedit.sh
Any clue?
Balnes,
Did you create the files ffp/bin/snapshot.sh and ffp/start/cronedit.sh? They do not exist by default. Also, did you create them in Windows? If so, the carriage returns are incorrect. If that is your situation and you want to use Windows to create the text file, you might want to try Notepad++. There is a menu item "Format > Convert to UNIX Format" that will allow you to save the file with the correct carriage returns.
Lastly, note that you can put the snapshot.sh file wherever you want, as long as the cronedit.sh file points to the correct location. The ffp/bin location is a good choice for simplicity and consistency. However, the cronedit.sh file needs to be in ffp/start with its permissions set to execute.
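If a script has already been saved with Windows line endings, you should also be able to strip the carriage returns directly on the NAS instead of re-saving it from Notepad++. A small sketch (the file name is just an example):
cd /mnt/HD_a2/ffp/bin
tr -d '\r' < snapshot.sh > snapshot.tmp && mv snapshot.tmp snapshot.sh
chmod a+x snapshot.sh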
Offline
halfsoul wrote:
Also, did you create them in Windows? If so, the carriage returns are incorrect. If that is your situation and you want to use Windows to create the text file, you might want to try Notepad++. There is a menu item "Format > Convert to UNIX Format" that will allow you to save the file with the correct carriage returns.
Ah, a new world for us Windows addicts :)
Now everything seems to work flawlessly.
Thanks for all the help.
Offline
Hey...I posted this in another post but I think it will be appreciated here too.
Here is a little script that will delete older backups created with Raid's script. Keep in mind it will only work if you use the default folder naming.
--------------------------------------------------------------------------------
#!/bin/sh
find /mnt/HD_b2/ -mtime +15 -type d -name '[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][_][0-9][0-9][0-9][0-9][0-9][0-9]' > toDelete
chmod 777 toDelete
while read EachLine
do
    echo "$EachLine"
    # Remove the '#' before the next line to actually remove the directory. For now it only prints the folder so you can check it.
    # rm -rf "$EachLine"
done < toDelete
rm -f toDelete
-----------------------------------------------------------------------------------
Run it once. It should show you the folders which are older than 15 days. (You can change +15 to any number of days.)
Once you've checked that everything is fine, remove the # as the comment says, and it will then remove the entire folder.
Note: Please test it first. I hope it's useful for someone.
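If you want it to run on a schedule, the same crontab approach as for snapshot.sh should work. For example (assuming you saved the script as /mnt/HD_a2/ffp/bin/cleanup.sh; the name and time are just examples):
# run the cleanup every Sunday at 03:00
0 3 * * 0 /mnt/HD_a2/ffp/bin/cleanup.sh
One thing to watch: the script writes its toDelete work file to the current directory, so when running it from cron you may want to give that file an absolute path.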
Cheers
Edit: Sorry, I posted a previous version. If this is the first time you see this, ignore this comment. If you already copied it, change toDelete.txt to toDelete (without the extension).
Last edited by blizzard182 (2008-10-03 18:18:02)
Offline
I want to make a script that will run every day. After 7 days a daily backup is replaced by a new one, so there will always be one Monday backup, one Tuesday backup, and so on. Once a week I want a weekly backup and once a month a monthly backup. Therefore I adjusted the original snapshot.sh script. I'm not a Linux hero, so now I'm stuck on an error. When I run my script I get this:
/ffp/bin/snapshot.sh: 34: Syntax error: "(" unexpected
This is my current snapshot.sh:
#!/bin/sh
# Set Source Path
SRCPATH='/mnt/HD_a2/Data /mnt/HD_a2/Video'
# Set the Destination Path
DSTPATH=/mnt/HD_b2/Backup_NAS
# Set path to Fun_Plug files
# Fun_Plug 3.0 or 4.0
# ffppath=/mnt/HD_a2/fun_plug.d
# Fun_Plug 5.0
FFPPATH=/ffp
# Set Rsync
RSYNC=$FFPPATH/bin/rsync
# date --help
# %w day of week (0..6); 0 represents Sunday
# %V week number of year with Monday as first day of week (01..53)
BACKUPDATE=`date +'%w %V'`
# Set date variables
WEEKDAY=`echo $BACKUPDATE|cut -f 1 -d ' '`
YESTERDAY=$[ ($WEEKDAY+6) % 7 ]
WEEKNUMBER=`echo $BACKUPDATE|cut -f 2 -d ' '`
WEEKNUMBER=$[10#$WEEKNUMBER]
LASTMONTH=$[ (53+$WEEKNUMBER-4) % 53 ]
# Set directories
BACKUPDIR=/$DSTPATH/daily/$WEEKDAY
LINKDIR=/$DSTPATH/daily/$YESTERDAY
WEEKBACKUP=/$DSTPATH/weekly/$WEEKNUMBER
LASTMONTH=/$DSTPATH/weekly/$LASTMONTH
MONTHBACKUP=/$DSTPATH/monthly/$WEEKNUMBER
# Make directories
mkdir -p /$DSTPATH/daily/
mkdir -p /$DSTPATH/weekly/
mkdir -p /$DSTPATH/monthly/
date >> $BACKUPDIR.list
# Backup source paths
if [ -z `find $BACKUPDIR -maxdepth 0 -ctime -1 ` ]
then
    # first backup of this day
    rm -rf $BACKUPDIR
    $RSYNC -v -x -a --dry-run --delete --link-dest=$LINKDIR $SRCPATH $BACKUPDIR/ \
        >> $BACKUPDIR.list
else
    # add data to existing backup on the same day
    $RSYNC -v -x -a --dry-run --delete --link-dest=$LINKDIR $SRCPATH $BACKUPDIR/ >> $BACKUPDIR.list
fi
date >> $BACKUPDIR.list
touch $BACKUPDIR
# Copy data for weekly and monthly purposes
if [ $WEEKDAY -eq 5 ]
then
    rm -rf $LASTMONTH
    rm -rf $LASTMONTH.list
    cp -al $BACKUPDIR $WEEKBACKUP
    cp -af $BACKUPDIR.list $WEEKBACKUP.list
    if [ $[ $WEEKNUMBER % 4 ] -eq 0 ]
    then
        rm -rf $MONTHBACKUP
        rm -rf $MONTHBACKUP.list
        cp -al $BACKUPDIR $MONTHBACKUP
        cp -af $BACKUPDIR.list $MONTHBACKUP.list
    fi
fi
The error is on this line:
YESTERDAY=$[ ($WEEKDAY+6) % 7 ]
But also on this line:
LASTMONTH=$[ (53+$WEEKNUMBER-4) % 53 ]
Does this maybe have something to do with the FFP 0.5 shell? Who can help me solve this issue?
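For what it's worth, the $[ ... ] arithmetic syntax is a bash extension, and as far as I can tell the /bin/sh used by ffp 0.5 does not understand it, which is exactly what produces the "(" unexpected error. POSIX arithmetic expansion with $(( ... )) is one portable alternative; a minimal sketch using the same variable names (untested on the DNS-323):
WEEKDAY=`date +'%w'`
# $(( ... )) instead of the bash-only $[ ... ]
YESTERDAY=$(( (WEEKDAY + 6) % 7 ))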
Edit Oct 9: With the help of Ardezo (the maker of the original script, http://www.xs4all.nl/~ardezo/backup_scr … ckup_linux) I've solved the problems. The changed code:
# Set date variables
WEEKDAY=`date +'%w'`
YESTERDAY=`expr \( $WEEKDAY + 6 \) % 7`
WEEKNUMBER=`date +'%V'`
# Convert the string to a number
WEEKNUMBER=`expr \( $WEEKNUMBER + 0 \)`
LASTMONTH=`expr \( 53 + $WEEKNUMBER - 4 \) % 53`

if [ -z `find $BACKUPDIR -maxdepth 0 -mtime -1 ` ]
Last edited by Unit106 (2008-10-09 11:49:34)
Offline
I have tried the scripts and they run fine. For some reason, though, they always back up all files instead of only the changed ones. Correct me if I am wrong, but I thought this was a method for incremental backup.
E.g. set srcpath=/mnt/HD_a2/doc
Day 1: I have doc->a->1.txt (only one file) and it is backed up to $dstpath (no problem).
Day 2: I have doc->a->1.txt (unchanged) and 2.txt (two files), and both files are backed up to $dstpath in a new $date folder (I expected only 2.txt to be backed up).
Thanks
Offline
That's the magic of this time machine script. You see full backups every day (you will see all files in the backup directory, so it's a full backup from a user's perspective), but an unchanged file is stored only once on disk (so it's an incremental backup from the system's perspective, as far as disk space goes). This is done by using hard links.
What you can do:
- You can burn a backup-day folder to DVD or copy the backup to another disk --> you get your files (not the links) automatically.
- You can delete any backup-day folder; the file will remain on disk as long as it is used in another backup-day folder.
- If you delete the last backup-day folder using a file, the file is removed from disk.
You can check this out:
Run ls -al in the day2 folder (-al shows a lot of information about each file; one item is the link count). 1.txt will show a link count of 2 (because it is used in the day1 and day2 folders), but 2.txt will show a link count of 1 (used only in day2). Run ls -al in the day1 folder: 1.txt will show a link count of 2 (because it's the same 1.txt used in the day2 folder).
If you delete the day1 folder, 1.txt will stay in the day2 folder. Its link count will then be 1 (because it is only used in day2 now).
So it IS some kind of magic
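You can see this for yourself with ls; adding the -i flag also prints the inode number, so the shared file shows up with the same inode in both day folders (folder names here are only illustrative):
# link counts and inode numbers are printed by ls -ali
ls -ali backup/day1/ backup/day2/
# expected: 1.txt has the same inode in both folders and a link count of 2,
# while 2.txt appears only in day2 with a link count of 1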
EDIT: Typo
Last edited by Loose Gravel (2008-10-15 14:13:38)
Offline
Loose Gravel wrote:
That's the magic of this time machine script. You see full backups every day (you will see all files in the backup directory, so it's a full backup from a user's perspective), but an unchanged file is stored only once on disk (so it's an incremental backup from the system's perspective, as far as disk space goes). This is done by using hard links.
What you can do:
- You can burn a backup-day folder to DVD or copy the backup to another disk --> you get your files (not the links) automatically.
- You can delete any backup-day folder; the file will remain on disk as long as it is used in another backup-day folder.
- If you delete the last backup-day folder using a file, the file is removed from disk.
You can check this out:
Run ls -al in the day2 folder (-al shows a lot of information about each file; one item is the link count). 1.txt will show a link count of 2 (because it is used in the day1 and day2 folders), but 2.txt will show a link count of 1 (used only in day2). Run ls -al in the day1 folder: 1.txt will show a link count of 2 (because it's the same 1.txt used in the day2 folder).
If you delete the day1 folder, 1.txt will stay in the day2 folder. Its link count will then be 1 (because it is only used in day2 now).
So it IS some kind of magic
EDIT: Typo
I understand now. Thanks so much for your explanation. And thanks for this magical backup!
Offline
The script works, but I feel that calling this a "snapshot" is a little misleading. Taking a snapshot implies that you'll be able to restore the files to how they were at that time, but that is not the case.
Take this scenario for example:
- I run the "snapshot.sh" file every day to backup my files
- I create a file "myFile.txt" on the 1st of the month and add some text to it, "Hello World!"
- At 2am on the 2nd of the month, "snapshot.sh" runs and "myFile.txt" is backed up
- On the 5th, I edit the file to say "Hello World, I'm here!", but when I save the file it gets irrecoverably corrupted, and I don't notice that it's corrupted.
- At 2am on the 6th, "snapshot.sh" runs and updates "myFile.txt". Because the file is hard-linked, it updates "myFile.txt" on ALL of the backups (i.e. on the 2nd, 3rd, 4th, 5th and 6th)
- Later on the 6th I realise that the file on my computer is corrupted, but it's too late because that file has already been overwritten on all of the backups on the backup drive, and there is no way to go back to the version that was backed up on the 2nd. All I have is the corrupted file on my computer AND my backup.
Hard link backups are great for minimizing space, but not good for taking proper snapshots. I use rsync and hard-links to back up, and then create a daily or weekly (depending on the backup) snapshot that gets compressed.
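As an illustration of that last step (paths and names are only examples), packing one backup-day folder into a dated, compressed archive could look like this:
# compress the 20081005_020000 backup folder into a tarball on another share
tar -czf /mnt/HD_b2/archives/20081005_020000.tar.gz -C /mnt/HD_b2/Backup 20081005_020000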
Offline
nothsa wrote:
- At 2am on the 6th, "snapshot.sh" runs and updates "myFile.txt". Because the file is hard-linked, it updates "myFile.txt" on ALL of the backups (i.e. on the 2nd, 3rd, 4th, 5th and 6th)
No, this is not the case, at least on my DNS. As far as I understand, rsync will detect the difference between the current "myFile.txt" and the last backup (in the current folder) and NOT hard-link the file in the new backup set.
Example:
a) you create ~/testfile.h
b) you run snapshot.sh --> File is new, so it is copied to the backup dir (not hardlinked). So you have ~/testfile.h AND backup/day1/testfile.h
c) you do NOT CHANGE ~/testfile.h
d) you run snapshot.sh --> ~/testfile.h and backup/day1/testfile.h are identical. So rsync creates a hardlink backup/day2/testfile.h --> backup/day1/testfile.h
Note: There is no hardlink to the original file ~/testfile.h
e) you corrupt / change ~/testfile.h
f) you run snapshot.sh --> ~/testfile.h and backup/day2/testfile.h are NOT identical. So rsync will copy ~/testfile.h to backup/day3/testfile.h
g) you do NOT CHANGE the corrupted ~/testfile.h
h) you run snapshot.sh --> ~/testfile.h and backup/day3/testfile.h are identical. So rsync creates a hardlink backup/day4/testfile.h --> backup/day3/testfile.h
So you end up with 3 files
1. ~/testfile.h ... your original (now corrupted)
2. backup/day4/testfile.h linked to backup/day3/testfile.h ... backup of corrupted file
3. backup/day2/testfile.h linked to backup/day1/testfile.h ... backup of original file
So in this scenario you can go back in time (day4, day3, day2) until you find an uncorrupted backup. At least as far as I understand. Can someone please confirm or comment on this?
Offline
nothsa, have you tested the scenario you describe? I've been using rsnapshot (a hard-link backup) for years and have not encountered the problem you describe. I believe that when you change the contents of "myFile.txt" it gets a new inode, so the next snapshot will have "myFile.txt" using the new inode while all the previous snapshots keep "myFile.txt" on the old inode.
Last edited by mig (2008-10-16 02:32:59)
Offline
I confirm this snapshot works GREAT. It keeps all the versions you need.
Loose Gravel wrote:
nothsa wrote:
- At 2am on the 6th, "snapshot.sh" runs and updates "myFile.txt". Because the file is hard-linked, it updates "myFile.txt" on ALL of the backups (i.e. on the 2nd, 3rd, 4th, 5th and 6th)
No, this is not the case, at least on my DNS. As far as I understand, rsync will detect the difference between the current "myFile.txt" and the last backup (in the current folder) and NOT hard-link the file in the new backup set.
Example:
a) you create ~/testfile.h
b) you run snapshot.sh --> File is new, so it is copied to the backup dir (not hardlinked). So you have ~/testfile.h AND backup/day1/testfile.h
c) you do NOT CHANGE ~/testfile.h
d) you run snapshot.sh --> ~/testfile.h and backup/day1/testfile.h are identical. So rsync creates a hardlink backup/day2/testfile.h --> backup/day1/testfile.h
Note: There is no hardlink to the original file ~/testfile.h
e) you corrupt / change ~/testfile.h
f) you run snapshot.sh --> ~/testfile.h and backup/day2/testfile.h are NOT identical. So rsync will copy ~/testfile.h to backup/day3/testfile.h
g) you do NOT CHANGE the corrupted ~/testfile.h
h) you run snapshot.sh --> ~/testfile.h and backup/day3/testfile.h are identical. So rsync creates a hardlink backup/day4/testfile.h --> backup/day3/testfile.h
So you end up with 3 files
1. ~/testfile.h ... your original (now corrupted)
2. backup/day4/testfile.h linked to backup/day3/testfile.h ... backup of corrupted file
3. backup/day2/testfile.h linked to backup/day1/testfile.h ... backup of original file
So in this scenario you can go back in time (day4, day3, day2) until you find an uncorrupted backup. At least as far as I understand. Can someone please confirm or comment on this?
Offline
mig wrote:
nothsa, have you tested the scenario you describe? I've been using rsnapshot (a hard-link backup) for years and have not encountered the problem you describe. I believe that when you change the contents of "myFile.txt" it gets a new inode, so the next snapshot will have "myFile.txt" using the new inode while all the previous snapshots keep "myFile.txt" on the old inode.
Yes, I tested the scenario and it worked as I described, which is why I started doing tar backups as well. It was overwriting the same inode.
I'm starting to think that I did something wrong though, because I've had 3 people tell me that I'm wrong. I'll run another test later today to see if I get the same results, and report back.
Offline
The behavior you are seeing may be a function of what program (or editor) is changing the file. I know that if you append text to a file with the unix '>>' the inode does not change, but the time and date stamp does. I'm not sure how Rsync Time Machine would handle this condition: same inode but new timestamp.
% touch newfile.txt
% ls -ali newfile.txt
9098634 -rw-rw-r-- 1 mig games  0 Oct 16 11:36 newfile.txt
% echo "hello world" >> newfile.txt
% ls -ali newfile.txt
9098634 -rw-rw-r-- 1 mig games 12 Oct 16 11:36 newfile.txt
Do you know what process is changing the files where you see this problem?
Last edited by mig (2008-10-16 21:40:32)
Offline
mig wrote:
Do you know what process is changing the files where you see this problem?
My guess is that I had incorrect rsync settings when I performed my initial tests, because I have it working now! It creates new inodes on my DNS-323 backups for any changes I make to the files on my servers.
Once I confirmed that it was working, I downloaded rsnapshot and configured it for all of my backups, and they're working beautifully. I had to add "-H" to "rsync_short_args" to get it to preserve my servers' hard-links, and I had to remove "--relative" from "rsync_long_args" to get it to back anything up, but other than that it went pretty smoothly.
Thanks for the heads-up, guys! This is going to save me SO much HD space!
P.S. mig: I also tested your append scenario using ">>" and it works as it should, i.e. inode on the server stays the same, and on the DNS-323 a different inode from the one in "daily.1" is created for the "daily.0" backup.
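For anyone else trying the same thing, the relevant rsnapshot.conf lines would look roughly like the following (the values are illustrative, based only on the changes described above, and rsnapshot expects tabs between the parameter and its value):
rsync_short_args	-aH
rsync_long_args	--delete --numeric-ids --delete-excluded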
Offline
Hi folks!
I'm considering this approach to backup my DNS-323 to a remote DNS-343.
However, I'd prefer to have files backed up as soon as they change, e.g. by using inotify to start rsync. The idea is that with an "instant backup" I might reduce the risk that backing up files takes too long. I happen to generate quite a lot of large picture files, which might take longer to upload than the interval between two backups, and I believe that a "real-time" approach might help solve this issue, in a cheaper way than upgrading my upload speed...
Of course, this requires inotify on the DNS-323, which as far as I understand is not included in version 0.5 of the fun_plug. Is it at all possible to run inotify on the DNS-323?
In order to prevent a zillion backup folders from being created on the far end (every time rsync kicks off), I'd like for all backups on a certain day (i.e. files that have changed throughout that day) to be stored in the same backup folder, e.g. "backup\2008-10-21".
Does anyone know whether this is a feasible solution - i.e. to have inotify execute rsync for the changed file(s) instead of crontab?
In addition: This approach renders the "snapshot" approach somewhat tricky - any thoughts about that?
Note: I'm a newbie to Linux (but have telnet and fun_plug running nicely on my DNS-323).
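For what it's worth, if an inotify userspace tool such as inotifywait were available on the box (it is not part of ffp 0.5, as noted above), the basic idea could be sketched roughly like this; all paths, the remote destination and the per-day folder naming are only examples, and this is untested on a DNS-323:
#!/bin/sh
# watch the source tree and rsync it whenever a file is written or moved in
SRC=/mnt/HD_a2/Pictures
DST=remote-nas:/mnt/HD_a2/Backup
while inotifywait -r -e close_write -e moved_to "$SRC"
do
    DAY=`date +%Y-%m-%d`
    rsync -a "$SRC" "$DST/$DAY/"
done
Note that this simple form does not give you the hard-linked "snapshot" history; combining it with --link-dest is exactly the tricky part mentioned above.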
Offline