Unfortunately no one can be told what fun_plug is - you have to see it for yourself.
aldo.corleone wrote:
Hello everyone,
First off, I love the time machine backup. It works superbly. However, I have 2xDNS-323 devices, and would like to backup to the second DNS-323.
I'm also doing this, and eventually the second DNS-323 will be remote. I've exchanged 323's with a friend and we'll be using each other's DSL from 2am - 6am to do some off-site backups.
Any suggestions on how to get this to work well? Do I need to make sure the usernames or UID numbers match so file ownership stays the same? Is there an easy way for my script to check whether the previous night's run is, OH NO, still running?
Any other obvious issues or concerns you've run into? Let us know!
Thanks for the original code and all the great advice and troubleshooting. My snapshots have saved my ass twice now!
eastpole
Offline
I tried to set this up and snapshot.sh runs properly when I run it manually, but I can't get the editcron part to work. I set it up as in the instructions, but when I type crontab -l, rsync isn't listed. I used chmod a+x /mnt/HD_a2/ffp/start/editcron.sh, but it doesn't start when I reboot my 323. Any ideas what I'm doing wrong?
Thanks.
Offline
ahughes wrote:
I tried to set this up and snapshot.sh runs properly when I run it manually, but I can't get the editcron part to work. I set it up as in the instructions, but when I type crontab -l, rsync isn't listed. I used chmod a+x /mnt/HD_a2/ffp/start/editcron.sh, but it doesn't start when I reboot my 323. Any ideas what I'm doing wrong?
Thanks.
Try running editcron.sh manually. If it completes successfully and crontab -l then lists the rsync job, the script itself is fine and the problem is probably the environment at startup. The PATH environment variable tells the shell process that runs the script where to look for the executables the script calls, and PATH is set differently at startup than when you run the script yourself from the command line.
If that's the problem, you've got 2 options:
1. Modify the script to use absolute paths. That explicitly tells the shell where each command lives, so the PATH variable no longer matters.
2. Add a command at the beginning of the script that extends the PATH variable with the correct location of the binaries.
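For option 2, a sketch of what the top of editcron.sh might look like (the /ffp paths are the usual ffp locations; adjust for your install):

```sh
# Prepend ffp's binary directories so the startup shell finds the same
# tools an interactive login shell would (adjust if ffp lives elsewhere).
PATH=/ffp/bin:/ffp/sbin:$PATH
export PATH
```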
I can't help more than that, since you didn't say which ffp version you use or where it's installed.
See my post here for my version of the editcron.sh script.
Last edited by scaramanga (2010-09-10 15:14:14)
Offline
Manually running editcron.sh did add the job, and it was listed by crontab -l.
Here's my editcron.sh:
#!/bin/sh
CRONTXT=/mnt/HD_a2/crontab.txt
# start with the existing crontab
/bin/crontab -l > $CRONTXT
# add the snapshot command
/bin/echo "0 0 * * * /mnt/HD_a2/ffp/bin/snapshot.sh" >> $CRONTXT
# install the new crontab
/bin/crontab $CRONTXT
# clean up
/bin/rm $CRONTXT
I'm using ffp 0.5 and it is installed at /mnt/HD_a2/ffp - what should I change in my script?
Thanks.
Offline
Hmm, it seems all good to me. Please reboot the unit and check a couple of things:
1. The ffp log at /mnt/HD_a2/ffp.log - if there were errors when running the script, they "should" be there.
2. Whether /mnt/HD_a2/crontab.txt exists - if it does, the script failed after creating the file but before deleting it.
Please let us know which firmware version you're running. I'm a rather new user and am using FW v1.08.
Last edited by scaramanga (2010-09-11 10:12:17)
Offline
Thanks for your help scaramanga. I'm on FW 1.09.
Here's my ffp log after rebooting:
**** fun_plug script for DNS-323 (2008-08-11 tp@fonz.de) ****
Sat Sep 11 08:39:27 GMT 2010
ln -snf /mnt/HD_a2/ffp /ffp
* Running /ffp/etc/fun_plug.init ...
* Running /ffp/etc/rc ...
* /ffp/start/syslogd.sh inactive
* /ffp/start/SERVERS.sh inactive
* /ffp/start/portmap.sh inactive
* /ffp/start/nfsd.sh inactive
* /ffp/start/ntpd.sh inactive
* /ffp/start/LOGIN.sh inactive
* /ffp/start/vsftpd.sh ...
Starting /ffp/sbin/vsftpd /ffp/etc/vsftpd.conf
* /ffp/start/unfsd.sh inactive
* /ffp/start/transmission.sh ...
Starting transmission-daemon
* /ffp/start/telnetd.sh inactive
* /ffp/start/kickwebs.sh ...
Kicking webs ...
* /ffp/start/startweb.sh ...
Starting /ffp/sbin/lighttpd -f /mnt/HD_a2/newsbin/conf/lighttpd.conf
* /ffp/start/startnzbget.sh ...
Starting /mnt/HD_a2/newsbin/bin/nzbget -D -c /mnt/HD_a2/newsbin/conf/nzbget.conf
* /ffp/start/sshd.sh ...
Starting /ffp/sbin/sshd
* /ffp/start/rsyncd.sh inactive
* /ffp/start/mediatomb.sh ...
Starting /mnt/HD_a2/mediatomb12/usr/bin/mediatomb -m /mnt/HD_a2/mediatomb12 -f config -d --add /mnt/HD_a2/Media/Video
2010-09-11 08:39:34 INFO: Loading configuration from: /mnt/HD_a2/mediatomb12/config/config.xml
2010-09-11 08:39:34 INFO: Checking configuration...
2010-09-11 08:39:34 INFO: Setting filesystem import charset to ASCII
2010-09-11 08:39:34 INFO: Setting metadata import charset to ASCII
2010-09-11 08:39:34 INFO: Setting playlist charset to ASCII
2010-09-11 08:39:35 INFO: Configuration check succeeded.
* /ffp/start/lighttpd.sh ...
WARNING: lighttpd: Already running
* /ffp/start/inetd.sh inactive
* /ffp/start/editcron.sh ...
* OK
The last two lines look good from what I can tell. But rsync is still not listed unless I manually run crontab.
There's no crontab.txt file though.
Offline
There don't seem to be any errors. crontab.txt is not there because the last line in the script deletes it.
Can you try to comment it out by replacing this line:
/bin/rm $CRONTXT
with this:
#/bin/rm $CRONTXT
(use a *nix compatible editor)
That way the file won't be deleted. Reboot, double-check that the job still isn't installed, and then try running
/bin/crontab /mnt/HD_a2/crontab.txt
from the command line. Please post the content of the file here.
It should look similar to this:
/ # crontab -l
0 0 * * 5 email -m 256 &
59 1 * * * /usr/sbin/daylight&
30 2 * * * /usr/sbin/stime&
*/60 * * * * /usr/sbin/getdhcp&
32 2 * * * /usr/sbin/rtc -s
30 2 2 * * /usr/sbin/rtc -c
30 2 * * * /ffp/bin/snapshot.sh
/ #
Last edited by scaramanga (2010-09-12 00:52:28)
Offline
crontab.txt says:
59 1 * * * /usr/sbin/daylight&
30 2 * * * /usr/sbin/stime&
32 2 * * * /usr/sbin/rtc -s
30 2 2 * * /usr/sbin/rtc -c
0 0 * * * /mnt/HD_a2/ffp/bin/snapshot.sh
but when I just crontab -l I get:
root@DNS:~# crontab -l
59 1 * * * /usr/sbin/daylight&
30 2 * * * /usr/sbin/stime&
32 2 * * * /usr/sbin/rtc -s
30 2 2 * * /usr/sbin/rtc -c
When I manually run crontab, snapshot.sh shows up there.
Offline
It seems like everything goes right, and then this command fails:
/bin/crontab $CRONTXT
You can try to replace it with the following, maybe it'll give you a hint what's going on:
/bin/crontab $CRONTXT > /tmp/crontab_log.txt 2>&1
echo RC $? >> /tmp/crontab_log.txt
Restart, and take a look at /tmp/crontab_log.txt
It's a bit of a long shot, but maybe the problem is that /mnt/HD_a2 is inaccessible when the command is run. You can try to change:
CRONTXT=/mnt/HD_a2/crontab.txt
to:
CRONTXT=/tmp/crontab.txt
(/tmp is in-memory)
Offline
I was working on something completely unrelated and found I had to let the DNS323 "settle" and perform all its operations before modifying the crontab list. By adding a 3 minute sleep command to the script prior to updating the crontab entries all my problems went away. This may work for you too in this scenario.
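Applied to the editcron.sh posted above, FunFiler's settle delay might look like this (a sketch - the 180-second value is an assumption, tune it for your unit):

```sh
#!/bin/sh
# editcron.sh with a settle delay added before touching the crontab
sleep 180   # let the stock firmware finish writing its own crontab first
CRONTXT=/tmp/crontab.txt
# start with the existing crontab
/bin/crontab -l > $CRONTXT
# add the snapshot command
/bin/echo "0 0 * * * /mnt/HD_a2/ffp/bin/snapshot.sh" >> $CRONTXT
# install the new crontab
/bin/crontab $CRONTXT
# clean up
/bin/rm $CRONTXT
```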
Last edited by FunFiler (2010-09-12 12:31:48)
Offline
I've been using this script for several months now, but I noticed my 'backup' drive filled up recently - won't deleting the 'oldest' backup directories get rid of the original files that haven't changed since that first backup, forcing them to be re-copied? Or perhaps I'm not understanding how rsync works.
i.e. if I delete 20100101 today, can I still browse and restore from 20100201?
Offline
scaramanga wrote:
It's a bit of a long shot, but maybe the problem is that /mnt/HD_a2 is inaccessible when the command is run. You can try to change:
Code:
CRONTXT=/mnt/HD_a2/crontab.txt
to:
Code:
CRONTXT=/tmp/crontab.txt
(/tmp is in-memory)
That seems to have done it! Thanks!
crontab -l gives:
59 1 * * * /usr/sbin/daylight&
30 2 * * * /usr/sbin/stime&
32 2 * * * /usr/sbin/rtc -s
30 2 2 * * /usr/sbin/rtc -c
0 0 * * * /mnt/HD_a2/ffp/bin/snapshot.sh
I appreciate your help scaramanga. I'm not sure what was wrong, but it's working now.
Last edited by ahughes (2010-09-12 20:54:44)
Offline
Glad I could help.
I have little experience with the DNS-323, so maybe another member of this community can shed more light about this.
Last edited by scaramanga (2010-09-16 10:31:12)
Offline
plutoz wrote:
I've been using this script for several months now, but I noticed my 'backup' drive filled up recently - won't deleting the 'oldest' backup directories get rid of the original files that haven't changed since that first backup, forcing them to be re-copied? Or perhaps I'm not understanding how rsync works.
i.e. if I delete 20100101 today, can I still browse and restore from 20100201?
"this script" - which script? rsync can be used in many different ways.
Offline
If you delete 20100101, you can still browse and restore 20100201. Rsync uses hard links here, which means there is only one copy of any given file across all backups, and it stays on disk as long as at least one snapshot directory still references it. When the last reference to the file is deleted, the file itself is deleted.
As an example, if you had these files
Directory of 20100101
A 10 MB
B 20 MB
C 40 MB
D 80 MB
And between 20100101 and 20100201 you deleted C:
Directory of 20100201
A 10 MB
B 20 MB
D 80 MB
The hard drive will take up around 150 MB (+directory overhead) because while it may seem like you have multiple copies of A, B, C and D, there's actually only one copy of each of those files internally between the two directories. When you delete 20100101, A B and D remain on the disk because 20100201 still has a reference to them, but C will be deleted because 20100101 has the last reference to it. So the hard drive will take up around 110 MB after the deletion.
So if the contents of your 20100101 and 20100201 are identical, then the only thing you get by deleting 20100101 is the directory overhead, which won't be very much compared to the data itself. Hope that makes sense.
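A toy demonstration of this hard-link behaviour (directory names are illustrative; the demo runs in a temp dir):

```sh
#!/bin/sh
# Two "snapshots" sharing one file via a hard link, as rsync's
# --link-dest does for unchanged files.
WORK=$(mktemp -d)
cd "$WORK" || exit 1
mkdir 20100101 20100201
echo "contents of A" > 20100101/A
ln 20100101/A 20100201/A   # second name, same inode: no extra space used
stat -c %h 20100101/A      # prints 2: the file now has two references
rm -rf 20100101            # delete the older snapshot...
cat 20100201/A             # ...the data survives via the remaining reference
```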
Offline
Hi!
How can I keep only the newest 4 folders (including "current") and have the older ones deleted automatically?
Thanks.
Offline
gattor wrote:
Hi!
How can I keep only the newest 4 folders (including "current") and have the older ones deleted automatically?
Thanks.
You can do that by adding a few lines of code to the rsync script. I'm using my own version of the snapshot script, based on the one from the wiki, so I don't want to post code here without testing it and knowing it works. You can find my scripts here:
/ffp/start/editcron.sh
/ffp/bin/snapshot.sh
In the /ffp/bin/snapshot.sh script, change the value of maxSnapshotCount to whichever value you like to control how many snapshots it should keep.
Edit: If you're using the script from the wiki, current is *not* a backup, it's just a link/shortcut to the last backup. If you'd like to keep just 3 folders (and the current link), set maxSnapshotCount in my script to 3.
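Since scaramanga's script isn't reproduced in this thread, here is a generic sketch of the pruning idea only (directory layout and names are assumptions; the demo runs against a temp dir rather than a real snapshot tree):

```sh
#!/bin/sh
# Keep only the newest $maxSnapshotCount snapshot directories.
maxSnapshotCount=3
SNAPDIR=$(mktemp -d)   # stand-in for the real snapshot directory
for d in 20100101 20100201 20100301 20100401 20100501; do
    mkdir "$SNAPDIR/$d"
done
cd "$SNAPDIR" || exit 1
# Snapshot names are date-based, so a plain sort is oldest-first.
count=$(ls -d [0-9]* | wc -l)
excess=$((count - maxSnapshotCount))
if [ "$excess" -gt 0 ]; then
    ls -d [0-9]* | sort | head -n "$excess" | while read old; do
        rm -rf "$old"
    done
fi
ls   # the three newest snapshots remain
```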
Last edited by scaramanga (2011-02-13 21:12:16)
Offline
scaramanga, thanks! :-)
Offline
My Rsync Time Machine has been running flawlessly for the past 2.5 years, due in large part to the help I got from this forum in setting it up.
In fact, it's been running so well that I haven't gone into the command line on my DNS323 in all that time, and have forgotten how to do any of this!
But now I have a question: is there any easy way to delete all backups and differential backups of a particular file or folder?
Thanks!
Offline
shepherd wong wrote:
My Rsync Time Machine has been running flawlessly for the past 2.5 years, due in large part to the help I got from this forum in setting it up.
In fact, it's been running so well that I haven't gone into the command line on my DNS323 in all that time, and have forgotten how to do any of this!
But now I have a question: is there any easy way to delete all backups and differential backups of a particular file or folder?
Thanks!
I actually started writing you a procedure, but since it involves deleting files, you could lose data. Instead, I'll point you in the "right" direction and offer help with specific questions if you have any. I'm sorry, but I'm not comfortable helping with operations like that when I don't know how comfortable you are with the command line.
What I suggest you do, is read about the find command, which finds (duh) files. It also has an option to execute a command (-exec) on those files. In your case, that command would be to delete them (rm):
find: http://en.wikipedia.org/wiki/Find
I suggest you first just use this command to list the files. Then, use the -exec to just list the files again, like so: find ... -exec echo going to delete {} \;
That way you'll get a good sense of what you're doing.
rm: http://en.wikipedia.org/wiki/Rm_%28Unix%29
specifically, I suggest using rm -i, like so: find ... -exec rm -i {} \;
Don't start by deleting masses of files or directories. Start with a very limited single file from multiple snapshots.
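A worked example of the steps above, in order: dry run, then delete (the file names and snapshot tree are invented for the demo; on the NAS the root would be your backup directory):

```sh
#!/bin/sh
# Purge one file from every snapshot, the careful way.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/20100101" "$ROOT/20100201"
touch "$ROOT/20100101/secret.txt" "$ROOT/20100201/secret.txt" \
      "$ROOT/20100201/keep.txt"
# 1. dry run first - see exactly what would go away
find "$ROOT" -name secret.txt -exec echo going to delete {} \;
# 2. then delete (on a real system prefer rm -i to confirm each file)
find "$ROOT" -name secret.txt -exec rm {} \;
```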
Offline
Rival wrote:
dateold=`date -d '-5 week' +%Y%m%d`
rm -r /mnt/HD_b2/backup/back-${dateold}*
I really like this one ^^
Rival wrote:
I know this is not a "clean" way: it doesn't verify whether a directory from 5 weeks back actually exists, so rm will print a "not found" error, but it does what I want...
Hmm, maybe:
[ -d /mnt/HD_b2/backup/back-${dateold}* ] && rm -r /mnt/HD_b2/backup/back-${dateold}*
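One caveat with that one-liner: [ -d glob ] breaks when the glob expands to more than one directory. A loop tests each match individually (paths and dates here are illustrative; the demo uses a temp dir):

```sh
#!/bin/sh
# Delete old backups matched by a glob, silencing the "not found" case.
BACKUPS=$(mktemp -d)   # stand-in for /mnt/HD_b2/backup
mkdir "$BACKUPS/back-20100801" "$BACKUPS/back-20100802" \
      "$BACKUPS/back-20100901"
# On the NAS: dateold=$(date -d '-5 week' +%Y%m%d). Truncated here so
# the glob matches two directories at once.
dateold=201008
for d in "$BACKUPS"/back-${dateold}*; do
    [ -d "$d" ] && rm -r "$d"
done
ls "$BACKUPS"   # only back-20100901 remains
```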
@All : thanks a lot for this very interesting thread !
Offline
Hi, I have a question about df & du on the DNS-323 (it's related to this script, since it concerns the hard links it creates).
I use a USB disk, mounted with usb-storage.ko, to back up parts of my RAID 1.
Firstly about "df" :
if I use /bin/df, I see my disk :
# /bin/df -h
Filesystem Size Used Available Use% Mounted on
/dev/ram0 9.7M 7.5M 1.6M 82% /
/dev/loop0 5.6M 5.6M 0 100% /sys/crfs
/dev/md0 1.3T 615.9G 757.2G 45% /mnt/HD_a2
/dev/sda4 486.2M 10.3M 475.9M 2% /mnt/HD_a4
/dev/sdb4 486.2M 10.3M 475.9M 2% /mnt/HD_b4
/dev/sdc1 458.5G 167.8G 267.4G 39% /mnt/usb
#
But /ffp/bin/df does not show it :
# /ffp/bin/df -h
Filesystem Size Used Avail Use% Mounted on
%root% 9.7M 7.6M 1.7M 83% /
/dev/ram0 9.7M 7.6M 1.7M 83% /
/image.cfs 5.7M 5.7M 0 100% /sys/crfs
/dev/md0 1.4T 616G 758G 45% /mnt/HD_a2
/dev/sda4 487M 11M 476M 3% /mnt/HD_a4
/dev/sdb4 487M 11M 476M 3% /mnt/HD_b4
#
Any explanation ?
Then about "du"
According to the man pages, hard links are counted only once, but on my system /bin/du counts them once per snapshot, while /ffp/bin/du reports them correctly (though it doesn't support -d, which confused me when reading this thread, since people here are using the -d option):
I have 2 snapshots :
/mnt/usb/archives/snapshots# ls -la
total 16
drwxr-xr-x 4 root root 4096 Sep 13 10:01 .
drwxrwxrwx 5 root root 4096 Sep 13 07:30 ..
drwxr-xr-x 4 root root 4096 Sep 12 21:50 20110912_215031
drwxr-xr-x 4 root root 4096 Sep 13 09:59 20110913_095903
lrwxrwxrwx 1 root root 15 Sep 13 10:01 current -> 20110913_095903
With /bin/du :
/mnt/usb/archives/snapshots# /bin/du -s -h -d 1 2011091*
9.0G 20110912_215031/aaaa
39.0G 20110912_215031/bbbb
48.0G 20110912_215031
9.0G 20110913_095903/aaaa
39.0G 20110913_095903/bbbb
48.0G 20110913_095903
With /ffp/bin/du (-s conflicts with --max-depth):
/mnt/usb/archives/snapshots# du --max-depth=1 -h 2011091*
9.1G 20110912_215031/aaaa
39G 20110912_215031/bbbb
48G 20110912_215031
9.2M 20110913_095903/aaaa
35M 20110913_095903/bbbb
44M 20110913_095903
And I'm pretty sure the snapshotting is working well, because df returns a sensible "Used" value (167.8 GB vs 215 GB) and rsync was very fast on the second pass.
I'm a little confused between my results and the various tests shown in this thread. Obviously I should use /ffp/bin/du and not /bin/du, but again, how can you get correct results with du -d when I can't?
Thanks.
Offline
sioban wrote:
Hi I have a question about df & du [...] if I use /bin/df, I see my disk (/dev/sdc1 on /mnt/usb), but /ffp/bin/df does not show it. Any explanation? [...]
Not sure this will help, but my sdc1 USB shows up for both /ffp/bin and /bin:
~# /ffp/bin/df -h
Filesystem Size Used Available Use% Mounted on
rootfs 9.7M 6.4M 2.8M 70% /
/dev/root 9.7M 6.4M 2.8M 70% /
/dev/loop0 4.5M 4.5M 0 100% /sys/crfs
/dev/sda2 1.3T 593.8G 780.6G 43% /mnt/HD_a2
/dev/sdb2 1.3T 595.2G 779.2G 43% /mnt/HD_b2
/dev/sda4 486.2M 2.3M 483.9M 0% /mnt/HD_a4
/dev/sdb4 486.2M 2.3M 483.9M 0% /mnt/HD_b4
/dev/sdc1 963.6M 427.3M 487.3M 47% /mnt/USB
~# /bin/df -h
Filesystem Size Used Available Use% Mounted on
/dev/ram0 9.7M 6.4M 2.8M 70% /
/dev/loop0 4.5M 4.5M 0 100% /sys/crfs
/dev/sda2 1.3T 593.8G 780.6G 43% /mnt/HD_a2
/dev/sdb2 1.3T 595.2G 779.2G 43% /mnt/HD_b2
/dev/sda4 486.2M 2.3M 483.9M 0% /mnt/HD_a4
/dev/sdb4 486.2M 2.3M 483.9M 0% /mnt/HD_b4
/dev/sdc1 963.6M 427.3M 487.3M 47% /mnt/USB
I'm on ffp0.5 from June 2010, with fw 1.09. Also, ffp is running from the USB, not HD_a2 in case that matters.
Offline
A few people were asking about remote backups. Just a few comments on this...
1. The snapshot script needs to be more resilient to failures if you want to use it remotely. My original script didn't handle this because errors are much rarer with a local backup, but once remote connections that can drop at any time are involved, the script needs to handle a failed rsync by cleaning up the failed backup.
2. If you want to send your data across the wire, make sure the channel is encrypted and that the connection between the machines is secured.
3. If your connection is slow, backing up remotely could take a very long time. You probably want to do the initial backup locally, on the same physical network, before moving the backup device off-site.
Now one way to do this would be to set up rsync over ssh using certificate authentication. That way you're less prone to hacking... provided that during set up you don't give the certificate away. You can also restrict the connections by IP in ssh so that only your backup machine can connect.
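As a sketch only (host names, paths, and key locations below are my assumptions, and this is untested on the DNS-323), the rsync-over-ssh idea could look like:

```sh
# On the source NAS: push a snapshot over ssh using a dedicated key.
# The public half of /ffp/etc/backup_key would be installed in
# authorized_keys on the remote 323.
rsync -a --delete \
    --link-dest=../current \
    -e "ssh -i /ffp/etc/backup_key" \
    /mnt/HD_a2/data/ \
    backup@remote-323:/mnt/HD_a2/snapshots/20100914/

# On the remote side, the authorized_keys line can be locked down so
# the key only works from one IP and cannot open an interactive shell:
#   from="203.0.113.7",no-pty,no-port-forwarding ssh-rsa AAAA... backup-key
```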
That's a high level description of a set up that works. Unfortunately I haven't had enough time to bake this, so it's not ready to be shared here yet.
James
Offline
raid123 wrote:
A few people were asking about remote backups. Just a few comments on this...
1. The snapshot script needs to be more resilient to failures if you want to use it remotely.
2. If you want to send your data across the wire, make sure the channel is encrypted and that the connection between the machines is secured.
(4.) Now one way to do this would be to set up rsync over ssh using certificate authentication.
(5.) You can also restrict the connections by IP in ssh so that only your backup machine can connect.
3. If your connection is slow, backing up remotely could take a very long time.
Thanks James! Great suggestions as to what needs to be taken care of.
Along with (retrying or notifying) on link failures, as in #1, it could also be more tolerant of (or recover more gracefully from) a disk full condition.
Personally I'm just having mine log the number of bytes transferred each night, automatically graph the data from the logs with gnuplot, and then glance at the graphs occasionally to make sure nothing has gone weird.
It's not 100%. On the other hand, it never telephones me in the middle of the night.
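The log-and-glance approach above could be sketched like this (the log file name, format, and the "double the running average" threshold are my assumptions, not eastpole's actual setup):

```sh
#!/bin/sh
# Nightly-bytes log check: flag any night that moved much more data
# than usual. Demo data stands in for rsync's "sent N bytes" summary.
LOG=$(mktemp)
printf '20100910 1048576\n20100911 2097152\n20100912 1572864\n' > "$LOG"
# Print any night whose transfer exceeds double the running average.
awk '{ sum += $2; n++; if ($2 > 2 * sum / n) print $1, "looks unusual" }' "$LOG"
# For the graph itself, something along the lines of:
#   gnuplot -e "set terminal dumb; plot '$LOG' using 2 with lines"
```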
Thanks again to James/Raid123 and also to Andrew Tridgell and Paul Mackerras who wrote rsync in the first place.
eastpole
Offline