Several questions repeatedly come up related to BackupNetClone, so I've compiled this list of frequently asked questions:
Another way of asking this is how you can have more than one backup client with just one backup server (the machine running BackupNetClone).
BackupNetClone was designed with this feature in mind. In the BNC directory where all of the *.sh files are stored you'll find at least one file named backup-config.*.sh (such as the backup-config.example.sh file that is included in the .tgz distribution). BackupNetClone will search for any files that start with backup-config. and end with .sh.
The trickiest issue you might run into when backing up multiple clients is SSH authentication. For passwordless (easily automated) SSH, any IP address that the backup server (running BackupNetClone) connects to must present a consistent host key. Often this won't be an issue, but if your backup clients are behind a firewall, BackupNetClone has to be configured to reach them through different ports on the firewall's single external IP address. This means that BackupNetClone (really, SSH) sees the different clients (configured as different port forwards on the firewall) as the same server, so they all need to share the same public/private host keys. Specifics on how to copy keys from one machine to another are available on the Installation page.
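As a sketch of that firewall scenario (the host name and port numbers below are made-up examples, not anything from the BNC documentation), the backup server's ~/.ssh/config might describe the two clients like this:

```
# One firewall, one external address, two forwarded ports
Host client-a
    HostName office.example.com
    Port 2201

Host client-b
    HostName office.example.com
    Port 2202
```

Because both entries resolve to the same external address, SSH treats the two clients as one machine, which is why the answer above has them share one set of host keys.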
In many backup situations you want backups to happen during the night, when usage and traffic are low and systems aren't busy. So how do you make sure BackupNetClone stops its activities in the morning so that it doesn't tie up resources during the day, even if it still has data left to transfer?
Included in the BackupNetClone .tgz package is a file called interrupt_transfer.sh. Open this file and modify the top several lines that have directory paths in them. These lines tell the script where BackupNetClone is installed so it can properly end an in-progress transfer. After modifying the file, you can add it as a cron job when you want BNC to stop its activities (if it's still active).
I recommend having interrupt_transfer.sh run a few times in a row around the time you want activity to stop. My crontab starts BNC at 11:03pm every night and interrupts it around 6:00am if it's still running. Here's what those two lines in my crontab look like:
03 23 * * * /bin/sh "/mnt/HD_a2/fun_plug.d/bin/BackupNetClone/start_here.sh" &
05,10,15,20 06 * * * /bin/sh "/mnt/HD_a2/fun_plug.d/bin/BackupNetClone/interrupt_transfer.sh" &
To upgrade from version 1.0.0 to version 1.0.2 follow these steps, keeping in mind that all of your settings are stored in the backup-config.*.sh and system_config.sh files:
mkdir /mnt/HD_a2/fun_plug.d/bin/BackupNetClone.old
cp -a /mnt/HD_a2/fun_plug.d/bin/BackupNetClone/* /mnt/HD_a2/fun_plug.d/bin/BackupNetClone.old/
mv /mnt/HD_a2/fun_plug.d/bin/BackupNetClone/system_config.sh /mnt/HD_a2/fun_plug.d/bin/BackupNetClone/system_config.old.sh
tar xzf /mnt/HD_a2/fun_plug.d/bin/BackupNetClone/BackupNetClone.1.0.2.tgz -C /mnt/HD_a2/fun_plug.d/bin/BackupNetClone
So you don't have a DNS-323 or a Linux box lying around, and you still want to use BackupNetClone?
I'm sorry to say, but BackupNetClone won't run on a Windows PC as the backup server. (Remember, this is not the same as having a Windows PC be a backup client, which is very possible and is outlined in the Installation instructions.) Basically, BackupNetClone relies on a feature of most Linux file systems (multiple hard links to one data block) that doesn't properly exist in the MS Windows operating systems. Yes, I know Windows has junction points, but they're not in mainstream use and are generally not recommended.
Incidentally, Mac OS X is Unix-based and should be able to run BackupNetClone as a backup server. I have no experience with this, so you're on your own for this one.
When your backup clients are Windows PCs, you might run into a situation where you want to back up a file that is constantly in use. For example, I leave MS Outlook open on my computer during the night but still want to back up my .pst file nightly. When BackupNetClone attempts to copy the file from the backup client, it will get an error from Windows saying that the file is in use and cannot be copied:
rsync: read errors mapping "/Documents/Emily's Outlook E-mail.pst" (in ddocs): Permission denied (13)
ERROR: Documents/Emily's Outlook E-mail.pst failed verification -- update discarded.
In order to back up a file that is in use, Windows provides a mechanism called shadow copying. You can create a copy of an entire drive by making a shadow drive. Or, if you only want to copy a single file, you can do so by using the following files from a Microsoft programmer's blog:
BackupNetClone uses rsync over SSH to perform the backups. This often means slow transfers when data is very large (more than a few GB). When first installing BackupNetClone, all of your data will be copied into the first snapshot. This initial snapshot, then, is as large as all of your data, which is usually more than a few GB. Here's a tip (from a forum topic) on speeding up that first snapshot so you don't have to let BackupNetClone run for several days just for the first full backup:
OK, I admit it: the email-sending code in BackupNetClone is hard-coded for one particular style of email server. So how do you get it to work with your email system, such as Yahoo, Gmail, Hotmail, etc.?
Unfortunately I have bad news on this front. BackupNetClone is only capable of supporting simple SMTP servers that either take no authentication or simple username/password authentication. That means that Yahoo, Gmail, Hotmail, and most other mainstream email systems that use fancy SSL authentication won't work with the current version of BackupNetClone. What it boils down to is that no one has made an SSL library for the DNS-323, and I'm not interested in changing the code to work on a broader range of Linux distributions that do support SSL.
So if you still want to tweak the email sending portion of BackupNetClone, be sure to check the RESPONSEMATRIX environment variable in complete_email.sh. If you need more complexity, you can modify complete_email.sh and sendmail.sh to your heart's content. If you get something working, I'd appreciate hearing from you so that others can benefit from your hard work...
A problem that many of you will run into while setting up BackupNetClone to do remote backups over the Internet is that the backup clients will be using an Internet connection that changes their external IP address once in a while. This is a common occurrence with most home broadband connections (DSL, cable modem, etc). Since the backup server needs to reliably contact these clients, this changing address presents a problem. There is a way around it called Dynamic DNS...
You can Google Dynamic DNS for lots of detailed information, but here are the basics:
Be sure to read your dynamic DNS host's help pages for specifics on how to make your setup work. For example, I had to tell dnsExit (my domain name registrar) what the DNS servers were at FreeDNS (my dynamic DNS host) in order for the world to see the DNS entry changes for my domain name.
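As one hedged illustration of keeping the record fresh (the update URL, host name, and token below are placeholders; real update endpoints vary by provider, so check your dynamic DNS host's documentation), the backup client could refresh its DNS record from cron every 15 minutes:

```
*/15 * * * * /usr/bin/curl -s "https://dynupdate.example.com/update?host=myclient.example.com&token=SECRET" >/dev/null 2>&1
```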
Sometimes you may have trouble removing a file while cleaning up a BackupNetClone snapshot if you are doing the cleaning from a Windows PC connected through a mapped network drive to the BackupNetClone server (usually a DNS-323). This can happen when the file sits within a series of folders that combine into a long path. Depending on the version, the Windows file-sharing mechanism (provided by Samba on the server) can have a limit of 255 characters for the total path.
Here's an example filepath that exceeds the 255 character Samba limit:
\\192.168.1.200\Volume_1\Auto-Backup Snapshots\My Business Laptop Documents Backups.2008-01-01.11h01m03s\Full Copies of Technical Invoices\Johnathan Rolando Christiansen Mortgage Holding Companies\All Invoices from 2000 to 2005\Year 2002\11 (November)\Week 3\loan transfers-2002-11-16.pdf
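If you have shell access to the backup server, a quick way to check whether a path is getting close to the limit is to count its characters (a sketch; the path below is just an example, and the exact threshold depends on your Samba version):

```shell
# Count the characters in a suspect path (Samba's limit is around 255)
path="/mnt/HD_a2/Auto-Backup Snapshots/some/deeply/nested/folders/file.pdf"
printf '%s' "$path" | wc -c
```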
Two possible work-arounds are available for manipulating files that are located within a folder structure that combines into a long series of names:
So let's say you have a very large file that is being transferred over a dial-up connection through BackupNetClone, and the transfer is interrupted mid-way through the file. What will happen to the partial file, and what will BackupNetClone do during the next session?
In version 1.0.0, BNC removed partially transferred files, so a large file had to be re-transferred from the beginning during each BNC session.
Starting with version 1.0.2, BNC is able to resume a file transfer where it left off previously. This ensures that eventually every file will be transferred, no matter what size it is and what kind of connection you have between the backup client and the backup server. To enable this feature for a backup client, set the KEEPPARTIALFILE= setting to "yes" in your backup-config.*.sh file. BNC will also create an additional "flag file" alongside the partial file so that you know the transfer was interrupted, in case you need to use files from that BNC snapshot. The flag file has the same name as the interrupted file, with "is only a partial file.txt" appended to the filename.
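For reference, the setting described above would look like this in a backup-config.*.sh file (the setting name comes from this FAQ; check the comments in backup-config.example.sh for its exact documented form):

```shell
# Keep partially transferred files so the next BNC session can resume them
KEEPPARTIALFILE="yes"
```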
rsnapshot seems to have all of the same features as BackupNetClone, so why wouldn't I just use that instead?
My main motivation for writing BackupNetClone after finding rsnapshot was that I wanted something that would run on my newly purchased D-Link DNS-323. rsnapshot depends on Perl, for which there were no pre-compiled binaries available on the DNS-323. Yes, I could have run a chroot Debian environment on my DNS-323 and then run Perl (and rsnapshot) within that, but I wanted something that runs natively without the overhead of chrooting.
In addition I could not find any mention of an email status feature in rsnapshot. This is an important feature in BackupNetClone for me, because it provides a heartbeat to let me know my backups are successful.
So now you have snapshot backups of your data that are all dated. What would happen if you changed a file within one of those snapshots?
The simple answer to this is that YOU SHOULD AVOID MODIFYING SNAPSHOT FILES (unless you are simply deleting them to free up space). Since BackupNetClone relies on hard links to store snapshots, you might end up modifying more than you bargained for. So you should essentially treat the snapshot data as untouchable backup data that is read-only. Of course, you are allowed (and encouraged) to remove old snapshot data at any time. Read this FAQ entry for more information.
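Here is a minimal sketch (using throwaway files, not real BNC snapshots) of why editing a snapshot file is risky: two snapshot entries for an unchanged file are really hard links to one data block, so overwriting one "copy" silently changes the other as well.

```shell
# Simulate the same unchanged file appearing in two snapshots via a hard link
echo "original contents" > snapshotA_file.txt
ln snapshotA_file.txt snapshotB_file.txt   # same data block, two names

# "Editing" the file in snapshot B truncates the shared file in place...
echo "edited contents" > snapshotB_file.txt

# ...so snapshot A's "copy" now shows the edited contents too
cat snapshotA_file.txt
```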
Since BackupNetClone can be used to backup data off-site (over the Internet), should I be worried about the security of the data in-transit? Can someone read the files being transferred between the BackupNetClone backup server and the backup clients?
You can rest assured that your BackupNetClone data is very secure while in-transit. All transfers (except the email status) are sent through an SSH tunnel which uses public-key cryptography. Of course there are other potential ways your data could be compromised. For instance if your backup server is on a network that has a wireless LAN, then the wireless security (WEP, WPA, etc) would be the most vulnerable point of access to your important data.
When encountering large files (relative to the available transfer bandwidth and disk size), what does BackupNetClone do to remain efficient yet effective?
Assuming a version of the large file exists in the most recent snapshot, then BackupNetClone will use that version as the basis for the current transfer. This means only the differences between the previous version and the new version are actually transferred between the backup client and backup server. Unfortunately, the file is stored in its entirety for each snapshot where the file changes. For example, if a 100MB file was backed up yesterday and you added data to make it 101MB today, then only 1MB will be transferred during tonight's BackupNetClone session, but the snapshots will each have their own full copy, resulting in 100MB plus 101MB of data from the two days' worth of snapshots.
This storage feature is by design. The only possible alternative I could think of would be to store the big file once, then save only file differences in subsequent snapshots. The disadvantages of this differential scheme include (1) implementation would be more difficult, though I realize there are binary diff and patch tools available that might help (or maybe even Subversion could be used somehow); and (2) cleaning up the snapshots would be greatly complicated, because removing any particular snapshot might affect other snapshots that depend on the deleted snapshot's differential file information. (As it is, removing snapshot data is straightforward; see this FAQ entry for more information.)
Snapshots are stored as subdirectories within the snapshot directory on your backup server. By default the snapshot directory is named Auto-Backup Snapshots\, which means you can usually find the snapshot data on your DNS-323 at:
The snapshots within that directory are all named according to the backup client name (TGTDSC) followed by the date and time the snapshot was created. So to restore a file from a snapshot, simply open the directory of the desired snapshot and find the file organized the same way it was when it was copied from the backup client.
Since the BackupNetClone snapshots store data with hard links, standard file utilities (such as Windows Explorer) will usually report a larger amount of disk usage than what is actually stored. So how does one get the actual size of the data being used by all snapshots on the backup server?
The only way to see the actual disk usage is with the Linux/UNIX df command. This check is automatically done and summarized in the status emails sent by BackupNetClone.
With the backup data being stored as snapshots that use hard links, are there any special considerations I should take when trying to clean up old backup/snapshot files?
Cleaning up BackupNetClone snapshot data is easy! All you have to do is delete files from the snapshots. The data is stored such that when anything is deleted, everything is automatically accounted for if there are other links to the same data that you deleted. In other words, the storage system will automatically keep raw files that are referenced in other snapshots if you delete them from one snapshot. If this is confusing, don't worry--just remember that you can delete anything you want, and all the fancy linking will take care of itself.
Of course there are a couple of things to keep in mind: (1) You should avoid deleting anything from the most recent snapshot, since this snapshot will be used to speed up the next backup session when files haven't changed much. (If you do delete the most recent snapshot it's ok--the next BackupNetClone session will just do a full transfer of all files in the backup.) (2) Sometimes deleting a file won't give you back any of your disk space since other snapshots refer to that same exact file. In order to recover that disk space, you'll have to delete all references to that version of the file.
Starting with BNC version 1.0.2, you can have BNC automatically clean up old snapshots for you. Enabling this feature will trigger BNC to remove the oldest snapshots when there isn't enough room for a new snapshot. Be sure to check the CLEANWHENFULL= setting in your backup-config.*.sh file(s) and the MAXDELETEDSNAPSHOTS= setting in your system_config.sh file for more information.
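In shell terms, the two settings might look like this (the setting names come from this FAQ; the values shown are assumptions, so see the comments in the shipped config files for the real documented values):

```shell
# In backup-config.*.sh: allow BNC to delete old snapshots when space runs out
CLEANWHENFULL="yes"

# In system_config.sh: cap how many old snapshots may be removed (assumed meaning)
MAXDELETEDSNAPSHOTS=3
```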
There are several places to look if you want more information about the device for which BackupNetClone was originally designed:
There are several other ways of using the DNS-323 to keep backups of data. Here are links to discussions or Wiki entries that describe them in more detail:
Can BackupNetClone be downloaded by just anyone, even if that person wants to use it for educational, government, commercial, etc. use?
YES! BackupNetClone is completely free. In fact, I've placed the whole thing in the Public Domain. That means you may go so far as taking BackupNetClone and selling it as your own without giving me any credit and without any guilt or nagging feelings whatsoever. There are no restrictions on its use.
Of course it would be nice if you gave me credit or even made a small donation if you feel so inclined. But rest assured that I am giving you full permission to simply take BackupNetClone and use it in whatever manner suits you best, all without charge.
I do realize the commercial value of a solution like this, so some day I may stop development of BackupNetClone and make a derivative solution for sale. At that point this website probably won't be available, but BackupNetClone will still be free and in the Public Domain if you can find it. (Perhaps someone else will host the website to keep the free version available.) If you disagree with this viewpoint, I'd like to hear from you.
Standard legalese... I take no responsibility for how you use or misuse BackupNetClone and I make no promises as to whether it will keep your data safe or completely destroy it. By using BackupNetClone you release me from all liability. Thank you and have a good day!
Benjamin L. Brown, released to the Public Domain.