DSM-G600, DNS-3xx and NSA-220 Hack Forum

Unfortunately no one can be told what fun_plug is - you have to see it for yourself.


#51 2008-10-24 22:29:26

afloden
New member
Registered: 2008-10-21
Posts: 4

Re: DNS-323 Rsync Time Machine!

fonz wrote:

http://dns323.kood.org/forum/p21857-Yesterday-16%3A47%3A33.html#p21857

Thanks!  :-)  Have started setting up time-based rsnapshot to remote NAS.  :-)

Offline

 

#52 2008-11-20 03:17:26

ozziegt
Member
Registered: 2008-07-10
Posts: 18

Re: DNS-323 Rsync Time Machine!

My script isn't updating the symlink when it is run. If I run it manually, the symlink updates but when the cron runs it, the symlink is not changed. Any ideas what could cause this?

Offline

 

#53 2008-11-20 15:49:06

Loose Gravel
Member
Registered: 2008-10-14
Posts: 50

Re: DNS-323 Rsync Time Machine!

ozziegt wrote:

My script isn't updating the symlink when it is run. If I run it manually, the symlink updates but when the cron runs it, the symlink is not changed. Any ideas what could cause this?

In the "if" statement wc (wordcount) is called. There are (at least) two wc applications on your NAS:

/ffp/bin/wc (This one works)
/bin/wc (This one doesn't)

When you run the script manually, wc is used from your ffp environment. This one works --> the "if" is true --> the link is created.
When crond runs the script, wc is used from the cron environment. This one doesn't --> the "if" fails --> no link.

The solution is to add
   export PATH=/ffp/sbin:/ffp/bin:$PATH
as the second line of your script, so it sets up the ffp environment.
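So the top of snapshot.sh would look like this (only the export line is new; the rest of the script stays unchanged):

Code:

#!/bin/sh
export PATH=/ffp/sbin:/ffp/bin:$PATH   # find the ffp tools (wc, rsync, ...) before the firmware ones
# ... rest of the script unchanged ...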

Offline

 

#54 2008-12-05 03:08:09

ozziegt
Member
Registered: 2008-07-10
Posts: 18

Re: DNS-323 Rsync Time Machine!

Loose Gravel wrote:

ozziegt wrote:

My script isn't updating the symlink when it is run. If I run it manually, the symlink updates but when the cron runs it, the symlink is not changed. Any ideas what could cause this?

In the "if" statement wc (wordcount) is called. There are (at least) two wc applications on your NAS:

/ffp/bin/wc (This one works)
/bin/wc (This one doesn't)

When you run the script manually, wc is used from your ffp environment. This one works --> the "if" is true --> the link is created.
When crond runs the script, wc is used from the cron environment. This one doesn't --> the "if" fails --> no link.

The solution is to add
   export PATH=/ffp/sbin:/ffp/bin:$PATH
as the second line of your script, so it sets up the ffp environment.

Wow...thanks, I'll give it a shot. I'm surprised nobody else is running into this problem.

Offline

 

#55 2008-12-16 21:25:18

shepherd wong
Member
Registered: 2008-12-08
Posts: 7

Re: DNS-323 Rsync Time Machine!

Hi all.  I'm a Linux infant so please be patient with me...

I set up my DNS-323 as per the wiki's how-to guide (http://dns323.kood.org/howto:backup), based on raid123's thread here.  It's doing a daily backup, but it doesn't appear to be linking files; what it's doing is a full backup each time.  I believe this is what's happening because a) each backup is taking about 12 hrs (200+ GB of data), b) free space on my backup drive is decreasing by about 200 GB after each backup, and c) when I do an ls -ln on the backup drive, I see all the date_time directories but no "current ->" entry.

Can someone please offer some suggestions as to what might be wrong?  How can I get it to start making virtual copies of the unchanged files?  I've compared my snapshot.sh file to the one on the wiki many, many times and don't see any differences.  Same with my editcron file.  And I've restarted my DNS-323.

Offline

 

#56 2008-12-16 22:22:23

Loose Gravel
Member
Registered: 2008-10-14
Posts: 50

Re: DNS-323 Rsync Time Machine!

shepherd wong wrote:

Can someone please offer some suggestions as to what might  be wrong?  How can I get it to start making virtual copies of the unchanged files?  I compared my snapshot.sh file to the one on the wiki many, many times and don't see any differences.  Same thing with my editcron file.  And I've restarted my DSN323.

ls -l should output something like
current -> /mnt/HD_b2/Backup/20081216_030010

If there isn't a current entry, maybe you have the same problem as ozziegt three posts above. Try adding the path statement as shown two posts above, then start snapshot.sh manually. Files will still be copied in full the first time, but this should give you a current entry pointing to the newest directory. Run snapshot.sh a second time: this should be much faster, files should be hard-linked, and current should be updated as well.
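In practice, something like this on the NAS shell (backup path as in the example above - adjust to yours):

Code:

sh snapshot.sh              # 1st run: still copies everything, creates 'current'
ls -l /mnt/HD_b2/Backup     # should show: current -> <newest snapshot directory>
sh snapshot.sh              # 2nd run: much faster, unchanged files get hard-linked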

If this works, add the cronjob (and wait till tomorrow). Check current again.

loose gravel

btw: you are using firmware 1.5, ffp 0.5, aren't you?


EDIT: As your backup job runs 12 hours, it may be wise to disable the cronjob while running snapshot.sh manually (or run the cronjob only and wait two nights...)

Last edited by Loose Gravel (2008-12-16 22:28:46)

Offline

 

#57 2008-12-17 00:08:58

shepherd wong
Member
Registered: 2008-12-08
Posts: 7

Re: DNS-323 Rsync Time Machine!

Loose Gravel wrote:

btw: you are using firmware 1.5, ffp 0.5, aren't you?

Yep, firmware 1.5 and ffp 0.5.

EDIT: As your backup job runs 12 hours, it may be wise to disable the cronjob while running snapshot.sh manually (or run the cronjob only and wait two nights...)

Yeah, good idea. 

Thx for the suggestions.  I'll give 'em a shot when I get home.

Offline

 

#58 2008-12-18 18:46:22

shepherd wong
Member
Registered: 2008-12-08
Posts: 7

Re: DNS-323 Rsync Time Machine!

Loose Gravel, thanks very, very much for your help.  I added the path statement for wc and that did the trick.  I ran it manually and confirmed that it worked, then re-enabled the cron job and rebooted the box.  This morning it ran snapshot.sh and it worked fine: much, much faster, and the "current" link is now there.

That path statement really should be added to the wiki, I think.

Offline

 

#59 2008-12-19 01:45:50

Loose Gravel
Member
Registered: 2008-10-14
Posts: 50

Re: DNS-323 Rsync Time Machine!

shepherd wong wrote:

Loose Gravel, thanks very, very much for your help.

Glad I could help.

shepherd wong wrote:

That path statement really should be added to the wiki, I think.

Good idea. I've added a troubleshooting section.  I didn't modify the script itself, because it could be used with an older ffp version too, and I do not know the correct path for e.g. ffp 0.4.

Offline

 

#60 2008-12-19 02:09:44

shepherd wong
Member
Registered: 2008-12-08
Posts: 7

Re: DNS-323 Rsync Time Machine!

Loose Gravel wrote:

shepherd wong wrote:

Loose Gravel, thanks very, very much for your help.

Glad I could help.

shepherd wong wrote:

That path statement really should be added to the wiki, I think.

Good idea. I've added a troubleshooting section.  I didn't modify the script itself, because it could be used with an older ffp version too, and I do not know the correct path for e.g. ffp 0.4.

I just had a look at the section you added.  On behalf of all the other newbs, THANKS!  Um, one suggestion though... maybe for the absolutely clueless folks out there (i.e. ME), you could paste the full contents of the snapshot.sh file into your example code, so it's clear a) that you're talking about snapshot.sh and b) exactly where that path statement goes.  I put mine after the "ffppath=/ffp" line in snapshot.sh and it seems to work fine.
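For the other newbs, here are the first lines of my snapshot.sh (variable names as in raid123's script in this thread - adjust the paths to yours; the rest is unchanged):

Code:

#!/bin/sh

srcpath='/mnt/HD_a2'
dstpath=/mnt/HD_b2
ffppath=/ffp
export PATH=/ffp/sbin:/ffp/bin:$PATH   # <-- the path statement from the troubleshooting section
# ... rest of the script unchanged ...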

Again, thanks very much for your help.  My DNS-323 is now doing everything I hoped it would: incremental backups, torrents, auto power-down when my UPS loses power, and easy data access from multiple computers.  Wouldn't have been possible w/o this fantastic Linux community.

Offline

 

#61 2009-01-05 03:30:16

talo
New member
Registered: 2008-03-02
Posts: 2

Re: DNS-323 Rsync Time Machine!

ozziegt wrote:

I had this working, and then I decided to clear out my snapshots folder and start over because I had restructured the files. However, now it isn't creating the "current" folder. It is giving the following error:

"--link-dest arg does not exist: /mnt/HD_b2/v1_snapshots/current"

Any ideas? Thanks

Just experienced the same thing, though I did not see anyone respond to this. Anyone?
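My only guess (untested): clearing out the snapshots folder also deleted the 'current' link, so rsync's --link-dest points at nothing. Once the next run has created a new dated directory, recreating the link by hand might get things going again:

Code:

# untested guess: point 'current' at the newest snapshot by hand
cd /mnt/HD_b2/v1_snapshots
ln -s 20090104_030000 current    # hypothetical name - use your newest directory

Can anybody confirm?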

Offline

 

#62 2009-01-15 18:32:15

hape
Member
From: Germany
Registered: 2008-12-22
Posts: 12

Re: DNS-323 Rsync Time Machine!

hi

everything described here is running very well - so thanks to all of you. But I have a new idea for the rsync time machine:

based on the script below (delete directories older than ...)

----------------------------------------------------------
#!/bin/sh

# list all snapshot directories (YYYYMMDD_HHMMSS) older than 15 days
find /mnt/HD_b2/ -mtime +15 -type d -name '[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][_][0-9][0-9][0-9][0-9][0-9][0-9]' > toDelete

chmod 777 toDelete

while read EachLine
do
   echo "$EachLine"
#  Remove the '#' before the next line to actually remove the directory. For now it just shows the folders, so you can check them.
#   rm -rf "$EachLine"
done < toDelete

rm -f toDelete
-----------------------------------------------------------

My idea is to delete in two stages:
The second stage deletes everything, just like the script above.
But the first stage, run a few days earlier, takes the directory which the above script will soon delete and pulls out the files with a link count of 1. These files would be physically deleted together with the directory, so they are compressed into a "_comp" directory for that archive day, and the whole directory is deleted a few days later.
Sorry for my English, so I hope I can explain it with an example:

The rsync time machine runs every day.
20090110_030008
20090111_030008
20090112_030008
20090113_030008
are the archives.
20090114 is the current day. If the above script runs with +5, the directory 20090110_030008 is deleted.
Now a script with the same search runs with +4 and detects 20090111_030008. But this script searches for the files with a link count of 1, and copies and compresses those files into 20090111_030008_comp.
Afterwards it deletes 20090111_030008.
So you get 3 stages of backup: 1. full, 2. compressed, 3. deleted.

Why didn't I write the code for this myself? Because I'm a basic user of Linux (with some programming background on other OSs), and I have no idea how to search for files with a link count of 1, or which tar options to use for compressing and copying, etc.
So I hope that someone can write such a script, which (if I understand the above script right) seems not too complicated to me. I will test it for sure on my machine and report the results back.
Or if you think this idea doesn't sound so good, please tell me the problems you see.
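Maybe something like this? Just a rough, untested guess - I don't know if the ffp tar supports these options:

Code:

#!/bin/sh
# untested sketch: archive the files that exist only in this snapshot
# (link count 1) before the directory itself gets deleted
snap=/mnt/HD_b2/20090111_030008                     # hypothetical snapshot directory
find "$snap" -type f -links 1 > /tmp/unique.lst
tar -czf "${snap}_comp.tar.gz" -T /tmp/unique.lst   # GNU tar: -T reads the file list
rm -f /tmp/unique.lst
# after checking the archive: rm -rf "$snap"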
thx a lot to all of you

hape

Offline

 

#63 2009-08-22 02:23:52

lagreat
Member
Registered: 2009-08-22
Posts: 7

Re: DNS-323 Rsync Time Machine!

Loose Gravel wrote:

That's the magic of this time machine script. You see full backups every day (you will see all files in the backup directory - so it's a full backup from a user's perspective), but an unchanged file is stored only once on disk (using disk space only once - so it's an incremental backup from a system's perspective). This is done by using hardlinks.

What you can do:
- You can burn a backup-day-folder to DVD or copy the backup to another disk --> you get your files (not the links) automatically.
- You can delete any backup-day-folder - a file will remain on disk as long as it is used in another backup-day-folder.
- If you delete the last backup-day-folder using a file, the file is removed from disk.

You can check this out:

ls -al in the day2 folder (-al shows a lot of information about each file; one is the link count): 1.txt will show a link count of 2 (because it is used in the day1 and the day2 folder), but 2.txt will show a link count of 1 (used only in day2). ls -al in the day1 folder: 1.txt will show a link count of 2 (because it's the same 1.txt used in the day2 folder).
If you delete the day1 folder, 1.txt will stay in the day2 folder. Its link count will be 1 (because it is only used in day2 now).

So it IS some kind of magic :-)

EDIT: Typo

Not sure if this thread is still active; however, I believe I am mistaken in my understanding of this - that is because I am a Windows guy with some knowledge of Linux. Due to my limited knowledge I am not sure if this is right or wrong.

If I run du -d 1 I get this

785616  ./20090819_020011
785616  ./20090818_020010
786172  ./20090820_020010
786172  ./20090821_020010
787640  ./20090821_173527
3931220 .

See, each directory is 785 MB or more, adding up to the 3.7 GB shown above. How is this possible? This means the script is creating full backups every day. Am I missing something - I must be. I would appreciate it if someone could shed some light, please.

I have a DNS-323 with FW 1.07 and fun_plug version 0.5, and raid123's script works very well. Thanks raid123 for such simplicity.

Last edited by lagreat (2009-08-22 15:31:25)

Offline

 

#64 2009-08-24 16:16:53

index monkey
Member
From: UK
Registered: 2007-06-14
Posts: 112

Re: DNS-323 Rsync Time Machine!

Each directory appears to be a full backup, but it's actually an incremental backup: it uses hard links to the files.
Several directory entries (hard links) can point to the same inode - the actual file data - a bit like shortcuts in Windows, and only after the last link is deleted is the actual file deleted.
This saves on disk space, backup time and having to manage where the physical files are.
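You can see the link counts yourself with a quick test in a scratch directory:

Code:

mkdir day1 day2
echo test > day1/1.txt
ln day1/1.txt day2/1.txt    # hard link: both names point at the same inode
ls -l day2/1.txt            # the link count (2nd column) shows 2
rm -rf day1
ls -l day2/1.txt            # link count is 1 again - the file data is still there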

Last edited by index monkey (2009-08-24 16:18:40)


DNS-323, HW B1, 2 x 2TB WD green, fw 1.08, fun_plug 0.5, transmission, automatic, nzbget newsreader & rsync time machine backup.

Offline

 

#65 2009-08-24 21:39:44

Loose Gravel
Member
Registered: 2008-10-14
Posts: 50

Re: DNS-323 Rsync Time Machine!

There is an easy test: Do a

Code:

df

(diskfree) before a backup and afterwards. Look at "used" on your backup device.  It should increase only slightly. (Remember: linking files uses disk space too, but only some bytes per file.) If you are backing up, let's say, 10 GB of unchanged files and the used disk space increases by 10 GB, then something is going wrong. In that case: post your script.

Gravel

Offline

 

#66 2009-08-25 08:32:38

raid123
Member
Registered: 2008-04-30
Posts: 22

Re: DNS-323 Rsync Time Machine!

lagreat wrote:

Not sure if this thread is still active; however, I believe I am mistaken in my understanding of this - that is because I am a Windows guy with some knowledge of Linux. Due to my limited knowledge I am not sure if this is right or wrong.

If I run du -d 1 I get this

785616  ./20090819_020011
785616  ./20090818_020010
786172  ./20090820_020010
786172  ./20090821_020010
787640  ./20090821_173527
3931220 .

See, each directory is 785 MB or more, adding up to the 3.7 GB shown above. How is this possible? This means the script is creating full backups every day. Am I missing something - I must be. I would appreciate it if someone could shed some light, please.

If you run du on each directory separately, it will report the full size with all the files.  However, if you run du on multiple directories together, the second directory should appear much smaller, because each inode gets counted only once.  I just tried it on my last two days of backup:

/mnt/HD_b2# du -s -h -d 1 20090824_020001 20090823_020001
...
677.5G  20090824_020001
...
45.9M   20090823_020001

It does look like you listed the directories together, so there may indeed be a problem with your setup.  What do you see when you do "ls -l current" in the directory?  If it's set up correctly, it should point to the latest directory:

/mnt/HD_b2# ls -l current
lrwxrwxrwx    1 root     root           15 Aug 24 02:02 current -> 20090824_020001

If that's set up correctly, then make sure that you have the script set up correctly.  Have you pointed "dstpath=/mnt/HD_b2" to the right directory?  That folder should contain the "current" link.  If that's correct, then make sure that your command for running rsync contains "--link-dest=$dstpath/current", which is the key to letting rsync know how to create the hard links.

Offline

 

#67 2009-08-25 09:41:48

raid123
Member
Registered: 2008-04-30
Posts: 22

Re: DNS-323 Rsync Time Machine!

I haven't checked back on this forum for a while, and it looks like some people have had problems with newer versions of ffp and wc.  I think at one point I made the suggestion of removing the if check, which is what my current version does:

#!/bin/sh

srcpath='/mnt/HD_a2'
dstpath=/mnt/HD_b2
ffppath=/ffp
date=`date "+%Y%m%d_%H%M%S"`
mkdir $dstpath/$date
# hard-link unchanged files against the previous snapshot
$ffppath/bin/rsync -aivx --link-dest=$dstpath/current $srcpath $dstpath/$date > $ffppath/log/snapshot.log 2>&1
# point 'current' at the snapshot we just made
rm -f $dstpath/current
ln -s $date $dstpath/current

Others have mentioned the desire to clean up the backups.  I have a script that I wrote, but it's very specific to my own needs.  My backup scheme involves keeping daily backups for the 2 latest months, and then keeping monthly backups for 6 months prior to that.  If this meets your needs, feel free to take a look at my script.

** DISCLAIMER: THE SCRIPT HAS NOT BEEN EXTENSIVELY TESTED, SO USE AT YOUR OWN RISK **

That said, I've been using it for quite a while with the standard snapshot.sh script and it works for me.  It does use wildcards, so if you keep other files on the backup drive, you should understand what the script does before deciding whether or not to use it.  You may also want to back up your backup first.

Here's the script I use:

#!/bin/sh

path=/mnt/HD_b2/
months=6

# step $year/$month back by one month
decrement_month()
{
    month=$((month-1))
    if [ $month -eq 0 ]
    then
        month=12
        year=$((year-1))
    fi
}

year=$(date "+%Y")
month=$(date "+%m")
month=${month#0}    # strip a leading zero so e.g. "09" is not treated as octal
filetail="_*"
filetaildate="??_*"

# skip the previous month too, so 2 months of daily backups are kept
decrement_month

while [ $months -gt 0 ]
do
    decrement_month
    yearmonth=$((year*100+month))               # e.g. 200910
    yearmonthfirst=$((year*10000+month*100+1))  # e.g. 20091001
    if [ -d $path$yearmonth ]
    then
        echo Skipping $path$yearmonth
    else
        echo Processing $path$yearmonth
        # keep the first backup of the month as the monthly backup...
        mv $path$yearmonthfirst$filetail $path$yearmonth
        # ...and delete that month's remaining daily backups
        rm -rf $path$yearmonth$filetaildate
    fi
    months=$((months-1))
done

Obviously I haven't put in the time to make it customizable, mainly because it takes time to test that the customizations indeed work.  The first decrement_month, for example, is there to make it keep 2 months of daily backups.  That way, on the 1st of every month, there are still at least 28 days of daily backups available.  This script also assumes that rm -rf will complete.  If the program crashes or the device resets, you might end up with some orphan directories that need to be cleaned up manually.  I haven't had the time to make it more robust...  sorry!  :-)
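To make the effect concrete (hypothetical dates): run on 2009-12-05 with months=6, the loop walks from October 2009 back to May 2009, so November and December keep their daily backups.  For an October that has not been collapsed yet, it effectively does:

Code:

mv /mnt/HD_b2/20091001_* /mnt/HD_b2/200910   # the first backup of the month becomes the monthly backup
rm -rf /mnt/HD_b2/200910??_*                 # the remaining daily backups of October are deleted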

Note that I'm on an older version of ffp, so some modifications may be necessary to use this on the latest version.

Offline

 

#68 2009-08-29 05:11:04

lagreat
Member
Registered: 2009-08-22
Posts: 7

Re: DNS-323 Rsync Time Machine!

Thanks to raid123, loose gravel and index monkey. I appreciate your responses. I will try to provide all the information needed for further troubleshooting, so the script starts working for me the way it works for all of you.

Here's the output of ls -l current after I ran a manual backup this evening. Previously it was pointing to 20090828_020011.

root@home-nas:/mnt/HD_b2/Backup# ls -l current
lrwxrwxrwx    1 root     root           15 Aug 28 21:46 current -> 20090828_204908


- I have written down sizes for each day below:

root@home-nas:/mnt/HD_b2/Backup# du -s -h -d 1 20090825_020010/
16.6G   20090825_020010/homedrives
16.6G   20090825_020010
root@home-nas:/mnt/HD_b2/Backup# du -s -h -d 1 20090826_020010/
16.6G   20090826_020010/homedrives
16.6G   20090826_020010
root@home-nas:/mnt/HD_b2/Backup# du -s -h -d 1 20090827_020011/
16.6G   20090827_020011/homedrives
16.6G   20090827_020011
root@home-nas:/mnt/HD_b2/Backup# du -s -h -d 1 20090828_020011/
16.6G   20090828_020011/homedrives
16.6G   20090828_020011


- So if I understand this correctly, each directory is on its own at 16.6 GB!

I ran the script with the 'sh snapshot.sh start' command, and here's the size of the folder:

root@home-nas:/mnt/HD_b2/Backup# du -s -h -d 1 20090828_204908
16.6G   20090828_204908/homedrives
16.6G   20090828_204908

And ran the same command for the 'current' folder, giving me this result:

root@home-nas:/mnt/HD_b2/Backup# du -s -h -d 1 current/
16.6G   current/homedrives
16.6G   current


- Again the size is 16.6GB!

I made the changes indicated by raid123 in his last post, removing the if and fi and leaving the rm and ln lines; the results are shown below:

root@home-nas:/mnt/HD_b2/Backup# du -s -h -d 1 20090828_220424
16.6G   20090828_220424/homedrives
16.6G   20090828_220424
- Once again 16.6 GB; however, this time it didn't take over an hour to complete like the previous one, so I'm not sure what it did!


Here's my script called snapshot.sh
==================================
#!/bin/sh

srcpath=/mnt/HD_a2/homedrives
dstpath=/mnt/HD_b2/Backup
ffppath=/mnt/HD_a2/ffp
date=`date "+%Y%m%d_%H%M%S"`
mkdir $dstpath/$date
$ffppath/bin/rsync -aivx --link-dest=$dstpath/current --exclude='workspace/' $srcpath $dstpath/$date > $ffppath/log/snapshot-$date.log 2>&1
if [ $(ls -1A $dstpath/$date | wc -l) -ne 0 ]
then
    rm $dstpath/current
    ln -s $date $dstpath/current
fi
===================================


Any suggestions to conserve disk space will be appreciated.

I will copy raid123's cleanup script and give that a try. I was thinking along the lines of keeping each end-of-month backup as a compressed folder, eventually cleaning up after 12 months of compressed folders.

Offline

 

#69 2009-08-29 23:36:48

lagreat
Member
Registered: 2009-08-22
Posts: 7

Re: DNS-323 Rsync Time Machine!

I believe I have fixed the issue after reading this thread all over again. Boy, that was some 8+ hours of work, but if I have indeed fixed it for my case, I am happy to have put in that time. Now to what I did.

- I read a reply from loose gravel suggesting to add 'export PATH=/ffp/sbin:/ffp/bin:$PATH', further suggesting that there are two wc commands. However, on searching my disks using find -name 'wc*' at '/' and on /mnt/HD_a2, I found just one, /ffp/bin/wc; /bin/wc was not found. I still added the line and ran ./snapshot.sh, and it gave me another 16.6 GB folder.

- I commented out that 2nd line, 'export PATH=/ffp/sbin:/ffp/bin:$PATH', and ran it again - this created a 16.6 GB folder again. du and df proved that the sizes are indeed real.

- I read in raid123's last comment that the 'if then fi' combination isn't required, so I dropped it and kept just the rm and ln lines.

- I chose a smaller folder with 600+ MB as srcpath and ran it on a schedule every 5 minutes. It started to work: each new folder increased the usage by just a few KB instead of 600+ MB. df and du agreed too.

Present script looks like this:
=========================
#!/bin/sh

# export PATH=/ffp/sbin:/ffp/bin:$PATH
srcpath=/mnt/HD_a2/homedrives
dstpath=/mnt/HD_b2/daily-backup
ffppath=/mnt/HD_a2/ffp
date=`date "+%Y%m%d_%H%M%S"`
mkdir $dstpath/$date
$ffppath/bin/rsync -aivx --link-dest=$dstpath/current --exclude='workspace/' $srcpath $dstpath/$date > $ffppath/log/snapshot-$date.log 2>&1
# if [ $(ls -1A $dstpath/$date | wc -l) -ne 0 ]
# then
    rm $dstpath/current
    ln -s $date $dstpath/current
# fi
==========================

Removed the 5-minute schedule and manually backed up the original 16.6 GB. I will confirm how it went after the backup finishes at 2 AM tonight and post it here. However, if any of you believe that I have made a mistake, please correct me. I am M$ trained, so it is easy for me to overlook things in Linux. Your guidance has helped me, and I will appreciate your comments for my own edification.

Once this is settled, I will work with raid123's cleanup script; however, I'm not sure how I'll test it, as I won't have folders for the last 60 days.

Thanks to all.

Offline

 

#70 2009-09-01 01:57:43

lagreat
Member
Registered: 2009-08-22
Posts: 7

Re: DNS-323 Rsync Time Machine!

It seems to be working as it should. Here are the results, as promised - I hope the gurus will get time to scrutinize this and let me know whether I am on track or not.

root@home-nas:/mnt/HD_b2/daily-backup# du -s -d 1 -h
17.2G   ./20090829_160608
4.7M    ./20090830_020001
9.7M    ./20090831_020010
17.2G   .

Thanks

Offline

 

#71 2009-09-17 15:30:06

Iggy
New member
Registered: 2009-09-17
Posts: 1

Re: DNS-323 Rsync Time Machine!

Hi,

I've been reading this forum and have found a lot of useful stuff.

I have a problem: the snapshot.sh script works, but I don't know how to get files from a Windows server which is in a domain.
I'm a newbie in Linux and don't know if it's only a problem with the path (srcpath), or whether you must somehow first log on to the domain and then execute the script.
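My own guess (untested) is that the Windows share has to be mounted first, so srcpath can point at it - something like:

Code:

# hypothetical names; assumes the box supports CIFS mounts
mount -t cifs //winserver/share /mnt/HD_a2/winbackup -o username=myuser,password=secret,domain=MYDOMAIN
# then in snapshot.sh: srcpath=/mnt/HD_a2/winbackup

but I don't know if that's the right way on this box.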

Thanks in advance

Offline

 

#72 2009-12-14 16:47:48

NeilN
New member
Registered: 2009-11-02
Posts: 2

Re: DNS-323 Rsync Time Machine!

Long story short: my primary drive had filesystem issues and is getting re-formatted (I've pulled the secondary drive). I've been using the Time Machine script to back up to the secondary.  After the format is complete, what's the best way to restore the files to the primary? Just re-install fun_plug, insert the secondary drive, and copy the files over?

Offline

 

#73 2010-02-02 12:00:25

Rival
Member
From: Budapest
Registered: 2008-03-13
Posts: 53
Website

Re: DNS-323 Rsync Time Machine!

Just a small thought of mine about purging old backups.

I'm using weekly incremental backups. rsync has no feature for deleting old backups before the disk becomes full, so I use these two additional lines in my script before starting rsync:

---

dateold=`date -d '-5 week' +%Y%m%d`
rm -r /mnt/HD_b2/backup/back-${dateold}*

---

I keep the current backup and the last 4 weeks' backups. Of course, if you do daily backups you have to use '-x days'. In case you have a more complicated backup scheme, this method won't work.

I know this is not a "clean" way: it doesn't verify whether the -5 weeks date exists or not, and rm will print a "not found" error message, but it does what I want... :-)
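By the way, redirecting rm's error output would at least hide that message (cosmetic only):

Code:

rm -r /mnt/HD_b2/backup/back-${dateold}* 2> /dev/null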

I hope I have helped.

Last edited by Rival (2010-02-02 12:19:13)

Offline

 

#74 2010-04-06 20:10:24

aldo.corleone
Member
Registered: 2008-10-23
Posts: 12

Re: DNS-323 Rsync Time Machine!

Hello everyone,

First off, I love the time machine backup.  It works superbly.  However, I have 2 DNS-323 devices, and I would like to back up to the second DNS-323.

How can I do this? rsyncd? Is there an easier way? Any help is greatly appreciated.
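From the rsync docs, my guess (completely untested) is that the destination in snapshot.sh could point at an rsync daemon on the second box, something like:

Code:

# untested guess: 'backup' would be a module defined in rsyncd.conf on the second DNS-323;
# a relative --link-dest is resolved on the receiving side, so ../current is inside the module
$ffppath/bin/rsync -aivx --link-dest=../current $srcpath rsync://second-nas/backup/$date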

Thanks.

Offline

 

#75 2010-08-12 13:24:30

scaramanga
Member
Registered: 2010-08-04
Posts: 251

Re: DNS-323 Rsync Time Machine!

I'm posting my version of the editcron.sh script here. It has some additional "features":
1. Can be used like other /ffp/start scripts to start, stop, restart or check the status of the cron job.
2. Safe to call start multiple times - the job will only be scheduled once.
3. Separated the "data" (the schedule, the cron job) from the code, for easy editing.
4. Made all command paths super-easy to modify, to accommodate different fun-plug versions.

Your input is always welcome.

P.S.: I thought of turning this into a whole system that supports multiple cron jobs. If anyone finds that interesting please let me know.

Code:

#!/bin/sh

# PROVIDE: Editcron

#
# Schedule job
# Important: Always run "./editcron.sh stop" BEFORE making changes to job.
#            Otherwise, the old job will remain scheduled until you restart.
#
schedule="0 2 * * *"
job="/ffp/bin/snapshot.sh"

#
# Commands
#
grepcmd="/bin/grep"
crontabcmd="/bin/crontab"
rmcmd="/bin/rm"
echocmd="/bin/echo"

#
# Temporary files
#
TMPDIR="/tmp"
TMPCRONTXT="${TMPDIR}/crontab.txt"
TMPOTHERCRONTXT="${TMPDIR}/othercrontab.txt"

#
# FFP Start Functions
#
. /ffp/etc/ffp.subr
start_cmd="editcron_start"
stop_cmd="editcron_stop"
status_cmd="editcron_status"

editcron_start()
{
    # grab existing crontab
    ${crontabcmd} -l > ${TMPCRONTXT}

    # check if already scheduled
    cronjobs=$(${grepcmd} "${job}" ${TMPCRONTXT})
    if test -n "${cronjobs}"; then
        ${echocmd} "${job} already scheduled:"
        ${echocmd} "${cronjobs}"
    else
        # add the cron job and install the new one
        ${echocmd} "Scheduling ${schedule} ${job}"
        ${echocmd} "${schedule} ${job}" >> ${TMPCRONTXT}
        ${crontabcmd} ${TMPCRONTXT}
    fi

    # clean up
    ${rmcmd} ${TMPCRONTXT}
}

editcron_stop()
{
    # grab existing crontab
    ${crontabcmd} -l > ${TMPCRONTXT}

    # check if already scheduled
    cronjobs=$(${grepcmd} "${job}" ${TMPCRONTXT})
    if test -z "${cronjobs}"; then
        ${echocmd} "${job} not scheduled"
    else
        # remove the matching cron job(s) and install the remaining crontab
        ${grepcmd} -v "${job}" ${TMPCRONTXT} > ${TMPOTHERCRONTXT}
        ${crontabcmd} ${TMPOTHERCRONTXT}
        ${echocmd} "${job} no longer scheduled"

        # clean up
        ${rmcmd} ${TMPOTHERCRONTXT}
    fi

    # clean up
    ${rmcmd} ${TMPCRONTXT}
}

editcron_status()
{
    # grab existing crontab
    ${crontabcmd} -l > ${TMPCRONTXT}
    cronjobs=$(${grepcmd} "${job}" ${TMPCRONTXT})

    # check if already scheduled
    if test -n "${cronjobs}"; then
        ${echocmd} "${job} is scheduled:"
        ${echocmd} "${cronjobs}"
    else
        ${echocmd} "${job} not scheduled"
    fi

    # clean up
    ${rmcmd} ${TMPCRONTXT}
}

run_rc_command "$1"
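Usage is like any other /ffp/start script:

Code:

/ffp/start/editcron.sh start    # schedule the snapshot job (once)
/ffp/start/editcron.sh status   # show the matching crontab line(s)
/ffp/start/editcron.sh stop     # remove the job from the crontab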

EDIT: Only tested with FW 1.08 and fun-plug 0.5

Last edited by scaramanga (2010-08-12 13:58:39)


DNS-323 HW Rev. C1 FW 1.10 fun-plug 0.5
2 x WD10EARS-00Y5B1 in Standard mode (LCC set to 5 min; Aligned to 4K)
Transmission with Transmission Remote GUI

Offline

 
