Intro
This guide will assist you in setting up BackupPC using the CentOS RPMs in the CentOS testing repository. It will not go into detailed explanations of all the possible BackupPC configurations, and it assumes that you're setting up BackupPC to do backups over rsync. BackupPC is capable of archive, tar, smb, and rsyncd backups, but this guide will concentrate only on rsync to other Linux hosts. BackupPC is heavily documented when it comes to configuration options, and that documentation is available in the web interface. Also, BackupPC should reside on its own server, because Apache must be run as the backuppc user created on the system, which could interfere with a regular webserver setup.
This page was previously maintained by Max Hetrick at http://www.maxsworld.org/index.php/how-tos/backuppc-on-centos [Max's site is now 404], but he was gracious enough to pass the baton over to me so that I could continue maintaining this howto. My contact info is sorin dot srbu at gmail dot com.
System
Originally written for CentOS 5.x.
With some modifications, this guide should also work with later CentOS versions.
System Setup
Setting up repos
The first thing to do is to install the CentOS testing repo, along with the yum-priorities plugin.
# cd /etc/yum.repos.d
# wget http://dev.centos.org/centos/5/CentOS-Testing.repo
# yum install yum-priorities
Follow the guide on the wiki for setting up yum-priorities. If you have RPMForge's repo installed, you'll want to make sure all the CentOS Base items are priority 1, RPMForge's items priority 2, and the CentOS testing repo priority 3. If you don't have RPMForge's repo installed, all the Perl packages will be downloaded from the testing repo; otherwise they will come from RPMForge. To set up the RPMForge repo, follow the guide on the wiki for the RPMForge repo.
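Just as an illustration (the repo file and section names on your system may differ slightly), the priority lines end up looking something like this:

## /etc/yum.repos.d/CentOS-Base.repo, under [base], [updates] etc.
priority=1

## /etc/yum.repos.d/rpmforge.repo, under [rpmforge]
priority=2

## /etc/yum.repos.d/CentOS-Testing.repo, under [c5-testing]
priority=3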
Install BackupPC
Next, install the BackupPC RPM, Apache, and mod_perl. All the Perl dependencies will be collected automatically.
# yum --enablerepo=c5-testing install backuppc httpd mod_perl
Configure Apache
As mentioned, the backuppc user created on the system when installing the RPM has to run Apache in order for everything to work properly with the CGIs and mod_perl. Go ahead and set the appropriate values in httpd.conf.
# vim /etc/httpd/conf/httpd.conf

## Change User apache to User backuppc
User backuppc
ServerName your_server:80
Save and quit the file, then change the backuppc.conf file that was created under the conf.d directory.
# vim /etc/httpd/conf.d/backuppc.conf

## Change Allow from 127.0.0.1 to all
Allow from all
Save and quit this file, and then create the user and password that you are going to allow access to the web interface.
# htpasswd -c /var/lib/backuppc/passwd/htpasswd your_user
New password: your_password
Re-type new password: your_password
Adding password for user your_user
Last, start and configure Apache to start up at boot time. Then browse to your machine and make sure Apache is dishing out a test page.
# service httpd start
# chkconfig httpd on
Browse to http://your_server and make sure Apache is working ok.
BackupPC Server Configuration
BackupPC Main Config
The initial configuration needs to be edited on the command line with a few parameters, and then later on the rest can be either done from the command line, or from the web interface. To start, open up the main BackupPC configuration file, and set the following parameters. The TopDir path is where the actual backup data resides. The default is /var/lib/backuppc. I have an encrypted partition used to store backups, so my path is /srv/backuppc. Change to suit your needs.
# vim /etc/BackupPC/config.pl

## Default transfer method BackupPC uses.
$Conf{XferMethod} = 'rsync';

## Path to where actual backup data is stored.
$Conf{TopDir} = '/var/lib/backuppc';

## Path to init.d which is used to start the server.
$Conf{ServerInitdPath} = '/etc/init.d/backuppc';
$Conf{ServerInitdStartCmd} = '$sshPath -q -x -l root $serverHost$serverInitdPath start';

## Allowed user that you created using htpasswd.
$Conf{CgiAdminUsers} = 'your_user';
BackupPC Sudo Setup
The backuppc user needs to have sudo access to run the gtar and tar commands. Otherwise, BackupPC won't run correctly. Sudo should already be installed on your system, so you can change what you need using the visudo command.
# visudo

## Comment out Defaults requiretty
## Add the following two lines.
Defaults !lecture
backuppc ALL=NOPASSWD:/bin/gtar,/bin/tar
Save and quit the file, then restart Apache, start BackupPC, and configure it to come up at boot time.
# service httpd restart
# service backuppc start
# chkconfig backuppc on
Open up a web browser and go to the BackupPC web interface at http://your_server/backuppc. You'll need to sign on using the username and password you created earlier, and you should then see a general server information page about BackupPC. If not, retrace your steps and make sure Apache and BackupPC are configured correctly.
BackupPC SSH Keys
Since this guide concentrates on rsync backups, you'll want to create passwordless SSH keys that the backuppc process uses to connect to the hosts being backed up. As root, create the hidden SSH directory under /var/lib/backuppc and change the permissions accordingly.
# cd /var/lib/backuppc
# mkdir .ssh
# chown backuppc.backuppc .ssh
# chmod 700 .ssh
Next, drop in as the backuppc user. You'll have to specify a shell because by default the backuppc user has no shell assigned to it. Then create the passwordless SSH keys using ssh-keygen.
# su -s /bin/bash backuppc
bash-3.2$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/var/lib/backuppc/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/backuppc/.ssh/id_dsa.
Your public key has been saved in /var/lib/backuppc/.ssh/id_dsa.pub.
The key fingerprint is:
xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx backuppc@your_server
Server Key to Client
For each client you're going to configure backups for, you'll need to copy the key you created from the server over to the client. To do so, continue from the last step, and run the ssh-copy-id command while still logged in as the backuppc user on the server.
bash-3.2$ ssh-copy-id -i .ssh/id_dsa.pub root@host_to_backup
In case you're running SSH on a different port, you should be able to pass the -p option using single quotes.
bash-3.2$ ssh-copy-id -i .ssh/id_dsa.pub '-p 12345 root@host_to_backup'
It should have copied the key over to the host, and then also logged you into the host with SSH.
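If you want to double-check that the passwordless login actually works before moving on, something like this (still as the backuppc user) should drop you straight in without a password prompt:

bash-3.2$ ssh -l root host_to_backup whoami
root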
Client Setup
BackupPC Basics
Before you start using the web interface, let's cover a few basics about the configurations and options used with hosts. Most other guides I read left me bewildered on this point. The main configuration file you edited earlier, located at /etc/BackupPC/config.pl, is where all the defaults for BackupPC reside. This configuration file can either be edited on the command line or through the GUI, where documentation is linked in to the options. When you log in to the web interface, you should see an Edit Config tab on the left-hand side. When you click this, you're actually editing the /etc/BackupPC/config.pl file. Until you read the documentation, leave all the default options alone or you'll find yourself with a non-working backup system pretty quickly.
While the main configuration file sets all the defaults, you can override these settings on a per-host basis. After you add a host to the web interface, any configuration you add to the host overrides the defaults. This creates a separate per-host file, /etc/BackupPC/pc/host_name.pl, which contains only the settings that are changed from, or different to, the defaults in /etc/BackupPC/config.pl. So for instance, this is where the directories to back up, or the directories to exclude, for that particular host would show up.
Unless you are backing up identical directories on all hosts, you'll want to use this override feature to set up the directories after you create the host. This is done on the Xfer tab with the RsyncShareName setting. Right below it is BackupFilesExclude, where directories like /var/cache can be excluded from being backed up.
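To give you an idea of what ends up in such a per-host file, here's a sketch; the paths are made-up examples, and your file will contain whatever you set in the web interface:

## /etc/BackupPC/pc/host_name.pl
$Conf{RsyncShareName} = ['/etc', '/home', '/var/www'];
$Conf{BackupFilesExclude} = {
  '*' => ['/var/cache', '/tmp'],
};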
Under the main BackupSettings tab, you'll find the DumpPreUserCmd. This lets you specify any kind of script to be run before the actual rsync of directories. For instance, if you have databases that need to be dumped and included in the backups, they are specified here. There already exists an AutoMySQLBackup script on SourceForge, which is explained in the next section. The DumpPostUserCmd allows you to run a command after the backup occurs, and the DumpPreShareCmd and DumpPostShareCmd allow you to run scripts before and after each share of a dump. NOTE: A lot of people seem confused about parameters in these settings. You cannot treat the fields as command prompts or shells. Shell syntax is not passed here, so if you have multiple commands or scripts to run, put them in one script on the client side where they can be executed one after the other.
Last, the Schedule tab is where you set up how full and incremental backups happen. You can either set this up at the default level in Edit Config, or again override settings on a per-host basis. The default is to keep only one full backup and 6 incrementals, which gives you a week's worth of backups. FullPeriod is set to 6.97, which means a full backup dump occurs every 7 days. IncrPeriod is set to 0.97, which means an incremental dump occurs every day. IncrKeepCnt is set to 6, giving you 6 days of incrementals plus the one day with a full backup. You can also set up BlackoutPeriods, i.e. times you don't want backups to occur. Every setting in here is explained by the documentation, so read up.
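For reference, the corresponding defaults in /etc/BackupPC/config.pl look like this (as always, double-check against your own file):

$Conf{FullPeriod}  = 6.97;
$Conf{FullKeepCnt} = 1;
$Conf{IncrPeriod}  = 0.97;
$Conf{IncrKeepCnt} = 6;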
Add Client in Web Interface
Next, add the client and all its configuration from within the web interface. Click on the Edit Hosts tab and then the Add button. Fill in the name of the host you want to back up, and then under the User section put backuppc as the user. This needs to be backuppc, because the backup is launched as this user; it won't work otherwise. When you're done, click the Save button at the top. Click on the Host Summary link on the left, and you should see your new host. To start configuring the host settings, click on the host name, and a new section opens at the top. Click on Edit Config at the very top left-hand side under the host's name.
From here, everything you set up will override the default settings and be stored in the /etc/BackupPC/pc/host_name.pl file. Click on the Xfer button, and start adding directories under RsyncShareName. Below this setting, in the Include/Exclude area, enter anything in BackupFilesExclude that you want excluded. When finished, make sure you click Save at the top. This is pretty much it for adding a host. If you select the host, you can then manually start and stop backups to see if they work.
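For reference, adding a host through Edit Hosts ends up as a line in /etc/BackupPC/hosts, roughly like this (the columns are host, dhcp, user and moreUsers):

host_to_backup    0    backuppc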
By default, BackupPC wakes up every hour around the clock to see if hosts need to be queued up for a backup. To change this behavior, look under the Server configuration tab under Edit Config and change WakeupSchedule to suit your needs.
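For example, a sketch of a night-only schedule could look like this; note that the first entry in the list is also when BackupPC runs its nightly housekeeping:

$Conf{WakeupSchedule} = [1, 2, 3, 4, 5, 22, 23];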
Dumping Databases
To include a database dump in BackupPC on the host being backed up, you have two choices. First, write your own script that is run on the host prior to the dump. Second, you can use the AutoMySQLBackup script found on SourceForge to dump your MySQL databases; with some basic editing, you can easily make it work for PostgreSQL databases too. To get started, download the script from SourceForge. Copy the script to somewhere like /usr/local/bin on the host you want to back up. I generally rename the script to reflect the host name for which the dump is going to take place. This step isn't necessary, and is only done for logic's sake. You do have to make sure that the script is executable, however.
# scp automysqlbackup.sh.2.5 host_name:/usr/local/bin
# ssh host_name
# cd /usr/local/bin
# mv automysqlbackup.sh.2.5 mysql_hostname_dump
# chmod +x mysql_hostname_dump
The script is heavily documented for what all the options do. Make sure to go through and read it once so you understand what the script is doing. AutoMySQLBackup connects to and dumps all the databases you specify to /backups, e-mails you if you set it up, and also does rotational steps to make sure you have a set of database dumps.
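Assuming version 2.5 of the script, the variables near the top that you'll care about look roughly like this (the values here are just placeholders):

USERNAME=backupuser
PASSWORD=your_password
DBHOST=localhost
DBNAMES="all"
BACKUPDIR="/backups"
MAILADDR="you@your_domain.com"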
Make sure to fill in USERNAME, PASSWORD, and DBNAMES. If you want mailing set up, configure it as the directions in the script state. It's pretty basic to configure, so once you have the options set up correctly, go ahead and run the script once manually on the host being backed up. You can then see if the /backups directory was created, and whether or not the dump occurred.
# cd /usr/local/bin
# ./mysql_hostname_dump
....tons of output...or errors...fix accordingly.
# cd /backups
# ll
You should see daily, weekly, and monthly directories, each containing a directory for every database you dump. You can traverse these to locate the actual .sql.gz files, which are your database dumps.
Add DumpPreUserCmd to BackupPC Host
Now that the dump script is set up physically on the host, configure the DumpPreUserCmd in BackupPC's web interface with the command to SSH to the host being backed up and execute AutoMySQLBackup. Select a host in the web interface, and then choose the EditConfig tab at the top left-hand side. Choose the BackupSettings tab and go down under the User Commands section. The very first line is DumpPreUserCmd. Add the following line in this section.
$sshPath -q -x -l root $host /usr/local/bin/mysql_hostname_dump
Choose Save at the top, and then head over to the Xfer tab for the host. Make sure you add the /backups directory to RsyncShareName, otherwise the dumps will never be copied off the host. That's it; now your host's MySQL databases are being dumped prior to BackupPC's rsync of directories. As mentioned, you can easily edit AutoMySQLBackup into an AutoPostgreSQLBackup script. Just replace all the MySQL syntax with PostgreSQL commands and options.
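As a rough sketch of that edit (the real script needs a few more changes than this), the core dump command changes from something like:

mysqldump --user=$USERNAME --password=$PASSWORD $DB > $OUTFILE

to something like:

pg_dump -U $USERNAME $DB > $OUTFILE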
BackupPC's Other Capabilities
As mentioned, BackupPC can also back up SMB shares, use tar for backups, and archive backups to other media. The appropriate BackupPC documentation explains each.
$Conf{XferMethod} = 'rsync';

The valid values are:
- 'smb': backup and restore via smbclient and the SMB protocol. Easiest choice for WinXX.
- 'rsync': backup and restore via rsync (via rsh or ssh). Best choice for linux/unix. Good choice also for WinXX.
- 'rsyncd': backup and restore via rsync daemon on the client. Best choice for linux/unix if you have rsyncd running on the client. Good choice also for WinXX.
- 'tar': backup and restore via tar, tar over ssh, rsh or nfs. Good choice for linux/unix.
- 'archive': host is a special archive host. Backups are not done. An archive host is used to archive other host's backups to permanent media, such as tape, CDR or DVD.
If you're looking to back up Windows hosts, you'll want to either use SMB, creating shares on the Windows machines, or search the Internet for how to use rsync and Cygwin on Windows. That is beyond the scope of this article. Otherwise, have fun backing up machines.
Backing up Windows Hosts with autofs
An alternative way to back up Windows hosts is to use autofs, a tool for automatically mounting and unmounting filesystems. If you're looking to do remote backups, this isn't the way for you, since it is best used on a local trusted network: passwords are stored in files on the BackupPC server. If you have Windows PCs and servers internally on your network, though, this method works great since it doesn't require installing software or clients on the Windows machines. Instead it uses Samba shares on the Windows machines to automatically mount filesystems when asked.
Configure autofs
The autofs package should already be installed on your Linux server. The first thing to do is add a separate file to handle the Windows hosts, along with the options you want passed along when automounting them. Open up /etc/auto.master and add this line.
# vim /etc/auto.master
/windows /etc/auto.windows --timeout=30 --ghost
These options tell autofs to use /windows as the base mount point, get all Windows autofs hosts from /etc/auto.windows, use an inactivity timeout of 30 seconds, and create ghost (empty) directories for the mount points. This means that after 30 seconds of inactivity on a mount point the share is unmounted, and since the empty directories remain in place, the mount points don't disappear when nothing is mounted.
Next, add a host to the auto.windows file with the appropriate options you need.
# vim /etc/auto.windows
machinename -fstype=cifs,ro,credentials=/etc/.autofs.smbpasswd ://machinename/C\$
This line means: mount "machinename" using cifs with read-only access. Generally you don't want anything other than read-only access, since you're only backing up files. The credentials file stores the Samba username and password allowed to make the mount. Last, the C drive is used as the share to mount. If you have Windows installed on another drive, or data on another drive letter, just use that drive letter instead.
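For instance, a hypothetical second entry for a D: data drive could look like this (the mount-point name machinename-d is just made up):

machinename-d -fstype=cifs,ro,credentials=/etc/.autofs.smbpasswd ://machinename/D\$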
NOTE: Lately trouble has been occurring when using a credentials file, where it had previously worked fine. If you have issues using the credentials file in the section ahead, try doing without one. Instead, pass the username and password directly in the autofs configuration file, as shown here:
machinename -fstype=cifs,ro,username=$username,password=$password ://machinename/C\$
Add the key file with the proper credentials needed to mount samba shares. You probably don't have to have the domain name added in, as it should still work without it.
# vim /etc/.autofs.smbpasswd
username=DOMAIN/username
password=password

# chmod 600 /etc/.autofs.smbpasswd
That's pretty much it for configuration. Restart the autofs daemon and try mounting your share. When you navigate to the directory, autofs will automatically mount the Samba share from your Windows machine. As long as you're in the mounted directory it stays mounted; as soon as you leave the directory, the 30-second timeout starts, and then the share unmounts itself.
# service autofs restart
# cd /windows/machinename
# mount
//machinename/C$ on /windows/machinename type cifs (ro,mand)
Configure BackupPC to use autofs host
Go ahead and add a host the way you already have been doing within BackupPC's web interface. There aren't many differences between adding normal Linux hosts and the autofs Windows hosts. Basically, the big difference is that you don't need to use SSH to access the host, since you're mounting the Windows filesystem locally on BackupPC.
After you add the host, navigate to Edit Config -> Xfer. Change the following rsync commands:
RsyncClientCmd = $rsyncPath $argList
RsyncClientRestoreCmd = $rsyncPath $argList
Now, under the share names to back up, you can use the same format as with normal Linux servers. If you want to back up a directory with spaces in its name, type it as you see it. Removing the + at the end of $argList+ deals with escaping better, since this is a local backup and not a backup across SSH, at least that's my understanding.
RsyncShareName = /windows/machinename/backup
RsyncShareName = /windows/machinename/Documents and Settings
That should be it. You should start to see your Windows directories being backed up now.
Appendix: Personal notes and gotcha's
- When installing BackupPC (BPC) using the C5-Testing repo, the default installation location is /var/lib/backuppc. This is also where the backup files are stored. It is likely you won't have the space to store the backup data on the main system hard drive, or you may feel this is not appropriate. So, if you, like me, have a separate raid array for storing the backup data, the files and folders from /var/lib/backuppc need to be moved to this new location. It's easiest to take care of this before you start the actual backups (less data to move around).
- Assuming the raid array is mounted on /bak:
- Move the contents of /var/lib/backuppc to a suitable place.
- Remove the backuppc folder and symlink it to /bak:
# rmdir /var/lib/backuppc
# cd /var/lib
# ln -s /bak backuppc
- Move the contents to /bak.
- With CentOS v5.x the documentation-links in the web-GUI aren't in the expected location. Solve this by following the procedure outlined below.
# cd /usr
# ln -s /usr/share/backuppc/html/doc doc
- Courtesy of Martin Jeppesen: On 64bit CentOS, the admin has to change
use lib "/usr/lib";
to
use lib "/usr/bin/BackupPC";
in /usr/bin/BackupPC.
Note from Martin: "Note that you will need to change *all* BackupPC scripts (unless the others have the correct value ...), i.e. BackupPC, BackupPC_archive, BackupPC_archiveHost, BackupPC_archiveStart ... let's hope they're all still named as they should be ;-)."
Source: http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg11285.html
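If you'd rather not edit each script by hand, a sed one-liner along these lines should do it, assuming the scripts all live in /usr/bin and are named BackupPC*; the .bak suffix keeps backups of the originals:

# sed -i.bak 's|use lib "/usr/lib";|use lib "/usr/bin/BackupPC";|' /usr/bin/BackupPC*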
- If BackupPC is installed on CentOS 5.7 x64 using the BPC package from the c5-testing repo, you may get an error about not being able to find Lib.pm when starting the BackupPC daemon. The solution is to create a symlink in /usr/lib like so:
# cd /usr/lib
# ln -s /usr/lib64/BackupPC BackupPC
When starting the BackupPC daemon now, with e.g. "service backuppc start", the daemon starts up properly. Source: http://bugs.centos.org/view.php?id=3175
- Problem in short: BPC wouldn't mail any alerts. Solution below as an excerpt from the BackupPC-Discussion mailing list.
"I looked at the /var/log/maillog and found a problem with the domain name, see excerpt below. I'm not quite sure where it picked up localhost.localdomain from and where to change it to the proper name. Does it look in /etc/hosts or /etc/BackupPC/config.pl? And FWIW, I can't find an /etc/sendmail.cf for some reason. Has this been removed and set somewhere else maybe?
Further, I discovered a problem with the installdir-variable on this particular machine. It was set to /usr. Checking the working email BPC-server I discovered that it should be /usr/share/BackupPC and changed it accordingly and then restarted the BPC daemon. In fact, this errant variable might be why restores have failed as it can't find the right scripts to run.
..., relay=backuppc@localhost
Mar 12 14:16:53 machinename sendmail[27054]: q2CDGrJQ027054: tcpwrappers (localhost.localdomain, 127.0.0.1) rejection
Mar 12 14:16:53 machinename sendmail[27045]: q2CDGrM9027045: to=..., ctladdr=... (150/150), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30195, relay=[127.0.0.1] [127.0.0.1], dsn=5.0.0, stat=Service unavailable
Mar 12 14:16:53 machinename sendmail[27045]: q2CDGrM9027045: q2CDGrMA027045: DSN: Service unavailable
Mar 12 14:16:53 machinename sendmail[27045]: q2CDGrMA027045: to=..., delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=31219, relay=[127.0.0.1], dsn=5.0.0, stat=Service unavailable
Mar 12 14:16:53 machinename sendmail[27045]: q2CDGrMA027045: q2CDGrMB027045: return to sender: Service unavailable
Mar 12 14:16:53 machinename sendmail[27045]: q2CDGrMB027045: to=postmaster, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=32243, relay=[127.0.0.1], dsn=5.0.0, stat=Service unavailable
Mar 12 14:16:53 machinename sendmail[27045]: q2CDGrMA027045: Losing ./qfq2CDGrMA027045: savemail panic
Mar 12 14:16:53 machinename sendmail[27045]: q2CDGrMA027045: SYSERR(backuppc): savemail: cannot save rejected email anywhere
Anyway, as Google is my very good friend, I found this helpful little bit of info: http://www.adamsdesk.com/be/archives/2005/09/09/sendmail-dsn-service-unavailable/.
The specific info I used from this page was:
1. Removed any reference to localhost.domain and/or localhost in my /etc/hosts file.
2. Added "ALL: 127.0.0.1" in /etc/hosts.allow.
3. And for good measure (not necessarily required) added "sendmail: localhost" in /etc/hosts.allow.
I then tested the mail function using the commands from the BPC documentation:
# su -s /bin/bash - backuppc
# /usr/bin/BackupPC_sendEmail -u
And voila, I got a mail from the BPC Genie in my mailbox. So, solved. 8-D Thanks for the hint Les, it set me right on track very fast. You and this list rocks!"
- The only way to back up a Windows client that worked for me was to use the DeltaCopy package. DeltaCopy is based, as are many other similar packages, on Cygwin, and is able to present a share suitable for use with rsyncd in BPC. I only use the server part of the DeltaCopy package, which installs a service on Windows. You can then point DeltaCopy to a Windows share and have BPC back that up. DeltaCopy can be downloaded from http://www.aboutmyip.com/AboutMyXApp/DeltaCopy.jsp.
- Note to self, don't effing forget this! Can't log in to the web GUI after install. Assuming a BPC install on CentOS 6.5 x64 from EPEL: touch and edit /etc/BackupPC/apache.users; add a line with "root". File permissions should be set to owner and group "apache", rwxr-x. Restart the backuppc daemon and try to log in again. Should work now.
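In command form, the steps above would look roughly like this (750 being my reading of "rwxr-x"):

# touch /etc/BackupPC/apache.users
# echo "root" >> /etc/BackupPC/apache.users
# chown apache:apache /etc/BackupPC/apache.users
# chmod 750 /etc/BackupPC/apache.users
# service backuppc restart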
- Excerpt from the BackupPC-users mailing list. You might want to keep this comment in mind.
>> So have we, but Deltacopy still can't handle files that are in use, it skips
>> them and adds an error to the log.
For Windows 7 workstations and above, that'd be correct. You can update the DeltaCopy DLLs to support VSS under XP, but our XP workstations are dwindling fast.
- If you can't get the ssh-copy-id routine working from the BPC host to the client for whatever reason, just do a cat on .ssh/id_rsa.pub on the BPC host, copy the output and paste it in root's .ssh/authorized_keys file on the client. If all else is correctly set up, this will work.
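One way to do that in a single step, assuming root can still log in with a password on the client, is to pipe the key over SSH:

bash-3.2$ cat ~/.ssh/id_rsa.pub | ssh root@host_to_backup 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'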
- If you get an error such as "rsync error: protocol incompatibility (code 2) at compat.c(178)", check that the root user doesn't have anything in .bashrc that outputs any kind of data. In my case I had added neofetch at the end of the .bashrc file, thus effing up the rsync routine completely. Commenting out the neofetch line restored the backup routine as expected. I found the solution at https://serverfault.com/a/304126.
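A quick way to check for this kind of problem is to make sure a non-interactive SSH command produces no output at all; anything other than zero bytes here will trip up rsync:

bash-3.2$ ssh -l root host_to_backup /bin/true > /tmp/ssh-test.out
bash-3.2$ wc -c /tmp/ssh-test.out
0 /tmp/ssh-test.out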
Updates
2024-04-29 - Added a note to the appendix regarding an rsync error.
2023-01-04 - Added a note to self in the appendix, regarding a recurring problem with ssh-copy-id not always working as expected.
2018-09-03 - Moved this guide from old 1996 site to Projects category on new site. Also did some cleaning up wrt headers, typos etc.
2014-08-26 - Added a note re Deltacopy, VSS and Win7 to the appendix (thanks to Doug Lytle).
2014-08-13 - Added a note to self in the appendix, regarding not able to login to web GUI.
2012-05-21 - Added a note on Windows and DeltaCopy.
2012-03-12 - Added another personal note re BPC not mailing status reports.
2012-01-31 - Added another personal note.
2010-06-28 - Fixed embarrassing typo (thanks to Martin Jeppesen for bringing it to my attention!) Also added notes for running BPC on 64-bit CentOS (thanks again to Martin!).
2010-05-19 - Added a new paragraph; "Personal notes and gotcha's".
2010-05-18 - Migrated howto to new site. Adapted standard ISO-type date coding; YYYY-MM-DD. 8-P
2010-03-15 - Added note about autofs credentials file not working.
2010-01-21 - Added note on passing -p option for ssh-copy-id.
2008-12-15 - Created.
- Written by: Sorin Srbu