Recently I purchased my second DNS-323 for testing/learning and as a backup for the first unit. While I will never be through learning, I would like to start backing up the primary to the second unit. Initially I thought the Schedule Download application would do the job, but I ruled that out when file date attributes were lost when files were moved.
Both units have B1 hardware, Standard disk configurations, ffp 0.5 and optware installed. The primary unit is running F/W 1.09 and the secondary unit is 1.10b7. The primary unit contains backup files of various types and sizes, some larger than 4 GB.
My initial thoughts were to utilize cron and ftp on the primary unit to control the transfer. Since the transfer is staying within my local network security was not a high priority, but simplicity and decent documentation would be nice.
After finding 16 optware packages with 'ftp' in the name and doing Google searches on some of the names, I figured it was time to ask the users who have gone down this path before me.
Thank you for any suggestions.
Offline
I use rsync to copy between units.
Offline
I, personally, use rsync as the backbone of my backup process, both local (drive-to-drive) and over the LAN.
These should get you on the right track:
http://dns323.kood.org/howto:backup
http://dns323.kood.org/howto:backup_-_pc
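Something along these lines, as a sketch only: the paths and the host name "backup-nas" are placeholders, and it assumes the ffp/optware rsync package is installed on both units.

```shell
# Drive-to-drive: mirror volume 1 onto volume 2 on the same unit.
# -a keeps permissions, ownership and timestamps (the attributes the
# Schedule Download application lost); --delete makes a true mirror.
LOCAL_CMD="rsync -a --delete /mnt/HD_a2/ /mnt/HD_b2/"

# Over the LAN: push to an rsync daemon on the second unit.
# Share name and host are placeholders for whatever you configure.
REMOTE_CMD="rsync -av --delete /mnt/HD_a2/backups/ rsync://backup-nas/Volume_1/backups/"

echo "$LOCAL_CMD"
echo "$REMOTE_CMD"
# Uncomment to actually run, once the paths match your setup:
# $LOCAL_CMD && $REMOTE_CMD
```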
Offline
Thank you both for your input. I had not considered rsync because my OS X backups were tar copies of my $HOME directory, and by naming them based on the day of the week I could keep seven copies with no extra work.
I am going to have to give this some thought.
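For reference, the day-of-week scheme can be sketched in a few lines; each run overwrites last week's copy of the same weekday, so seven generations exist with no cleanup. The source and destination paths below are placeholders.

```shell
DAY=$(date +%A)                 # e.g. "Monday"
SRC=/tmp/demo-home              # stand-in for $HOME
DEST=/tmp/backups               # stand-in for the backup share
mkdir -p "$SRC" "$DEST"
echo "sample file" > "$SRC/notes.txt"
# One archive per weekday; next Monday's run replaces this Monday's.
tar -czf "$DEST/home-$DAY.tar.gz" -C "$SRC" .
ls "$DEST"
```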
Offline
I use rsync as well.
I built the backup scripts with the second DNS-323 local here, but once I had a current backup I took it to a friend's house elsewhere in the city. In return I have his spare DNS-323 on my network here, and now we serve as each other's offsite backup.
Nightly, a cron job pushes updates to the remote. The nice thing is, you can buy the slowest 3.5" disks you can find and they'll still be way too fast. (My upstream ADSL is limited to 1 megabit/s or so.)
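With a slow uplink like that, the nightly push can also be compressed and throttled; a sketch, where the host and paths are hypothetical and --bwlimit is in KBytes per second (100 is roughly 0.8 Mbit/s):

```shell
# -z compresses in transit, useful on a ~1 Mbit/s ADSL uplink;
# --bwlimit caps throughput so the line isn't saturated all night.
NIGHTLY="rsync -az --delete --bwlimit=100 /mnt/HD_a2/ friend-nas:/mnt/HD_a2/"
echo "$NIGHTLY"    # run it from cron once verified by hand
```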
Restore would be limited by my friend's upload rate, but I don't intend to restore from this box unless my house burns down or is quite thoroughly burglarized.
Cheers.
PS: My question for others is, how identical do you want your pair of DNS-323 units? Would you run the same firmware? Would you want the same hard drives in them? If not, why not?
Last edited by eastpole (2011-05-18 20:30:23)
Offline
Running the same firmware is always a good idea. In particular, run the most up to date FW available, that's my motto.
Matching drives is not really important these days. In a Raid setup, it might be a good idea though.
I specifically went with different manufacturers, putting one WD drive and one Seagate drive into each NAS. This was due to all the issues the manufacturers are having these days with failing drives. No issues so far though.
I always pick up the largest drives for the best price at the time. I also take into consideration that at some point I'm likely to put the drive into a PC or eSATA enclosure. Speed may not be important in the NAS, but it may be important later on. Other than size, price and performance, sound and temperature are my next considerations.
Last edited by FunFiler (2011-05-18 22:44:56)
Offline
(We're hijacking the OP's thread.) But using different-brand drives makes a lot of sense to me. Buying a pair of drives of the same make, model and manufacturing batch and using them in your SOHO NAS in RAID1 seems to me like asking for trouble. Those HDDs are too correlated, which means once one fails, the other may malfunction sooner than you think, which is just the thing you wouldn't want to happen.
Offline
I am trying to create a user on my test system to start playing with rsync and have two questions because I am trying to limit my activities as root.
Should the rsync process be run by root, one of the user accounts that were originally added to the DNS-323 or by a newly created ffp user?
When creating a new user under ffp using useradd, passwd, store-passwd.sh and other commands, I tried to make it resemble the root account that was created during ffp installation. Login for this user worked fine until I rebooted the system. After restart the $HOME directory changed from /ffp/home/john to /home/ftp. Under ffp, where should the home directories be placed?
Offline
jhtopping wrote:
I am trying to create a user on my test system to start playing with rsync and have two questions because I am trying to limit my activities as root.
Should the rsync process be run by root, one of the user accounts that were originally added to the DNS-323 or by a newly created ffp user?
Depends on the scope of your backup. If you are backing up the entire disk, best to do it as root. If you want user xyz to be able to backup only files they can see, then rsync can be run as user xyz.
You'll have to consider authentication as well for the destination rsync machine. There are several ways to do that - see rsync docs online.
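For example, one common approach (a sketch only; the user "backupuser", the host "backup-nas" and the key path are all hypothetical) is rsync over ssh with a passwordless key, so a cron job can authenticate unattended:

```shell
KEY=/ffp/home/root/.ssh/id_rsa
# Generate the key once, then append id_rsa.pub to ~/.ssh/authorized_keys
# for the destination account:
#   ssh-keygen -t rsa -N "" -f "$KEY"
# -e tells rsync which remote shell (and key) to tunnel through.
PUSH="rsync -av -e \"ssh -i $KEY\" /mnt/HD_a2/ backupuser@backup-nas:/mnt/HD_a2/"
echo "$PUSH"
```

The alternative is running rsync as a daemon on the destination with its own secrets file; the rsync docs cover both.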
jhtopping wrote:
When creating a new user under ffp using useradd, passwd, store-passwd.sh and other commands, I tried to make it resemble the root account that was created during ffp installation. Login for this user worked fine until I rebooted the system. After restart the $HOME directory changed from /ffp/home/john to /home/ftp. Under ffp, where should the home directories be placed?
I've always had some trouble using user account related commands to do this. Might consider editing /etc/passwd to set the intended home dir before running store-passwd.sh. I believe that the home dir is the next-to-last field on each line, where the fields are delimited by ':'.
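For example, a sketch of that edit on a scratch copy of the file (the uid/gid and shell shown here are made up); the home dir is field 6 of 7, delimited by ':':

```shell
PASSWD=/tmp/passwd.test
printf 'john:x:502:502::/home/ftp:/ffp/bin/sh\n' > "$PASSWD"
# Rewrite only john's home-dir field, leaving other users untouched;
# on the real box you would edit /etc/passwd and then run store-passwd.sh.
awk -F: -v OFS=: '$1 == "john" { $6 = "/ffp/home/john" } { print }' \
    "$PASSWD" > "$PASSWD.new"
cat "$PASSWD.new"
```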
Offline
karlradp, thank you for your reply.
At the current time the scope of my backup is to get something to work and have it built on a sound foundation. Using crontab is important, and from my reading while installing ffp, it appears that the crontab file needs to be rebuilt after each system startup. I am populating the file as part of fun_plug.local, therefore it looks like I will need to schedule and run the rsync backups as root and test very carefully.
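A sketch of what that fun_plug.local fragment might look like, assuming the firmware rebuilds root's crontab at every boot; the script path /ffp/bin/backup.sh is a placeholder:

```shell
# Re-add the backup entry on top of whatever the firmware installed.
ENTRY="30 2 * * * /ffp/bin/backup.sh"
CRONTXT=/tmp/crontab.txt
if command -v crontab >/dev/null 2>&1; then
    crontab -l 2>/dev/null > "$CRONTXT"   # keep the firmware's entries
    echo "$ENTRY" >> "$CRONTXT"
    crontab "$CRONTXT" || true            # reinstall the combined table
else
    echo "$ENTRY" > "$CRONTXT"            # no crontab here; just show it
fi
```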
My attempt to create a new user came from wanting to avoid the problems caused by running jobs as root, in the hope that a non-root setup would be possible. Editing /etc/passwd will probably work, but it is a shortcut. Again, I try to avoid shortcuts because when I use them I usually find I have created another problem for myself when I least expect it.
Again, thank you for your input.
Offline
Hi, avoiding root usage is smart; mistakes made as root may cause big damage.
However, if you run the backup as non-root, file ownership cannot be kept; everything will end up owned by the backup user. To preserve ownership, files must be stored in an archive format like tar.
You can change a user's home dir with the usermod command.
Being root (as cron is), you can always run scripts as a less powerful user with su, e.g. "su backupuser -c dobackup.sh".
I think /ffp/home/john is a good place for home dirs. Strange that it didn't survive a reboot; store-passwd.sh should make the change persistent.
Last edited by bjby (2011-05-21 09:48:36)
Offline