Compare commits


10 Commits

Author SHA1 Message Date
2833f12b1a make log file destination configurable 2025-01-12 12:01:45 +01:00
a75604c097 fix expansion 2023-12-13 16:53:15 +01:00
2c643c73c6 delete temporary directories before starting host backup 2023-12-13 16:47:52 +01:00
8faea3fef1 Run two-stages backups (#1)
1. Download to a local directory, archive/encrypt there
1. Push encrypted archive to remote folder

This helps when the backup destination is e.g. a NFS drive.

Also, introduce harder checks and fix some flaws.

Reviewed-on: #1
Co-authored-by: Max Mehl <mail@mehl.mx>
Co-committed-by: Max Mehl <mail@mehl.mx>
2023-12-13 12:22:54 +01:00
6da9f1fabc allow to backup a specific host 2021-08-30 19:45:32 +02:00
46903a4038 support non-standard SSH ports, fix some shellchecks 2021-08-30 19:27:47 +02:00
4c8127c388 fix behaviour with UTF-8 file names; make mysql backup source depend on uberspace version 2020-11-06 15:13:37 +01:00
31105d3875 add REUSE badge 2019-09-04 12:01:45 +02:00
96be32af8b remove unused CIs 2019-08-07 10:57:12 +02:00
c046b4d0ca SPDX-Copyright -> SPDX-FileCopyrightText 2019-08-07 10:56:34 +02:00
9 changed files with 212 additions and 105 deletions


@@ -1,4 +1,4 @@
-# SPDX-Copyright: 2019 Free Software Foundation Europe e.V.
+# SPDX-FileCopyrightText: 2019 Free Software Foundation Europe e.V.
 # SPDX-License-Identifier: CC0-1.0

 pipeline:

.gitignore

@@ -1,4 +1,4 @@
-# SPDX-Copyright: 2019 Max Mehl <mail [at] mehl [dot] mx>
+# SPDX-FileCopyrightText: 2019 Max Mehl <mail [at] mehl [dot] mx>
 # SPDX-License-Identifier: CC0-1.0

 config.cfg


@@ -1,7 +0,0 @@
-# SPDX-Copyright: 2019 Free Software Foundation Europe e.V.
-# SPDX-License-Identifier: CC0-1.0
-
-reuse:
-  image: fsfe/reuse:latest
-  script:
-    - reuse lint


@@ -1,11 +0,0 @@
-# SPDX-Copyright: 2019 Free Software Foundation Europe e.V.
-# SPDX-License-Identifier: CC0-1.0
-
-language: minimal
-
-services:
-  - docker
-
-before_install:
-  - docker pull fsfe/reuse:latest
-  - docker run --name reuse -v ${TRAVIS_BUILD_DIR}:/repo fsfe/reuse /bin/sh -c "cd /repo; reuse lint"


@@ -1,21 +1,33 @@
 <!--
-SPDX-Copyright: 2019 Max Mehl <mail [at] mehl [dot] mx>
+SPDX-FileCopyrightText: 2019 Max Mehl <mail [at] mehl [dot] mx>
 SPDX-License-Identifier: GPL-3.0-or-later
 -->

 # Uberspace Backup

-This Bash script is able to backup directories from Uberspace users (and also other SSH resources). For Uberspace hosts it can also backup MySQL databases by copying the backups Uberspace cleverly created for you.
-It is designed to work automatically on another server with enough harddisk space.
+[![REUSE compliant](https://api.reuse.software/badge/src.mehl.mx/mxmehl/uberspace-backup)](https://api.reuse.software/info/src.mehl.mx/mxmehl/uberspace-backup)
+
+This Bash script is able to backup directories from Uberspace users (and also
+other SSH resources). For Uberspace hosts it can also backup MySQL databases by
+copying the backups Uberspace cleverly created for you.
+It is designed to work automatically on another server with enough harddisk
+space.

 ## Features

 - Transfers files securely via rsync over SSH.
-- Encrypts backups with GnuPG, using a public key only. Once a backup is encrypted it can only be decrypted with a private key. Make sure to delete the private key from the backupping server (after saving it on a more secure space of course) to keep your backups even safer.
-- If desired, it can delete older backups and only retain a configurable amount of backups.
+- Encrypts backups with GnuPG, using a public key only. Once a backup is
+  encrypted it can only be decrypted with a private key. Make sure to delete the
+  private key from the backupping server (after saving it on a more secure space
+  of course) to keep your backups even safer.
+- If desired, it can delete older backups and only retain a configurable amount
+  of backups.
 - Rather verbose logs will be written to backup.log.
-- With the helper script `ssh-checker.sh` one can automatically test whether the hosts provided in the hosts file can be accessed. If not, the little helper is trying to put your public SSH key to the remote hosts' authorized_keys files by letting you type in the password manually once.
+- With the helper script `ssh-checker.sh` one can automatically test whether the
+  hosts provided in the hosts file can be accessed. If not, the little helper is
+  trying to put your public SSH key to the remote hosts' authorized_keys files
+  by letting you type in the password manually once.

 ## Configuration
@@ -23,24 +35,60 @@ Configuration happens in two files: config.cfg and hosts.csv.

 ### config.cfg

-Everything should be self-explanatory with the comments. Make sure to use the correct GPG fingerprint, and make sure to have its public key imported by the user executing the script. No private key has to be installed on the backupping system (but on the decrypting one of course).
+Everything should be self-explanatory with the comments. Make sure to use the
+correct GPG fingerprint, and make sure to have its public key imported by the
+user executing the script. No private key has to be installed on the backupping
+system (but on the decrypting one of course).

 ### hosts.csv

-This file contains the hosts and its directories that shall be saved. It consists of two rows separated by `;`. The first one contains a `username@hostname` combination that will be used to sync files via SSH, and also as the backup destination directory name.
+This file contains the hosts and its directories that shall be saved. It
+consists of two rows separated by `;`. The first one contains a
+`username@hostname` combination that will be used to sync files via SSH, and
+also as the backup destination directory name.

-The latter one contains all source directories that shall be transferred. This can be absolute file paths, or if it's a Uberspace host some special shortcuts:
+The latter one contains all source directories that shall be transferred. This
+can be absolute file paths, or if it's a Uberspace host some special
+shortcuts:

-- `%virtual` backups the virtual folder of your uberspace host (`/var/www/virtual/username/`) where for example the `html` folder is located in.
-- `%mysql` downloads the latest backup of your MySQL databases that have been created by Uberspace themselves (their backup system is quite sophisticated).
-- `%mails` downloads the directory `users` in the home directory which contains all email files of virtual mail users.
+- `%virtual` backups the virtual folder of your uberspace host
+  (`/var/www/virtual/username/`) where for example the `html` folder is located
+  in.
+- `%mysql` downloads the latest backup of your MySQL databases that have been
+  created by Uberspace themselves (their backup system is quite sophisticated).
+- `%mails` downloads the directory `users` in the home directory which contains
+  all email files of virtual mail users.
 - `%home` simply downloads the whole user's home directory.

-You can give multiple locations that shall be backed up. Just separate them by `|` characters. See the example file for more.
+You can give multiple locations that shall be backed up. Just separate them by
+`|` characters. See the example file for more.
+
+## Process
+
+The script runs the following most important steps:
+
+1. For each host in `hosts.csv`
+   1. Check SSH connection
+   1. Compose SSH host settings
+   1. For each backup source
+      1. Resolve special backup sources
+      1. Create backup destination
+      1. rsync source to destination
+      1. tar the destination
+      1. gpg-encrypt the destination
+   1. Delete older backups
+1. Output completion info
+
+## Manual run
+
+You can run `ssh-checker.sh` and `uberspace-backup.sh` manually. Without any
+arguments given, both will check/backup all hosts.
+
+You can provide an argument to check/backup a specific host. This argument has
+to fully match a server's `user@hostname[:port]` declaration as on `hosts.csv`.

 ## Automatic runs

-In order to let the script run regularily, simply put the script's absolute path in a cron file. For example, run `crontab -e` and insert at the bottom:
+In order to let the script run regularily, simply put the script's absolute path
+in a cron file. For example, run `crontab -e` and insert at the bottom:

 ```
 10 3 * * * /home/archiver/uberspace-backup/uberspace-backup.sh
 ```
@@ -50,5 +98,8 @@ This will run the backups every night at 3:10.
 ## Known limitations

-- Please note that paths like `~` or `$HOME` haven't been tested yet. Use absolute paths instead.
-- At the moment, the backups don't follow symbolic links. That's why for example error logs aren't downloaded when using `%virtual`. Make sure to regularily check your backups to make sure all important files are saved.
+- Please note that paths like `~` or `$HOME` haven't been tested yet. Use
+  absolute paths instead.
+- At the moment, the backups don't follow symbolic links. That's why for example
+  error logs aren't downloaded when using `%virtual`. Make sure to regularly
+  check your backups to make sure all important files are saved.
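The hosts.csv format described in this README can be parsed with plain `cut` on the `;` and `|` separators, which is essentially what the script does. A minimal standalone sketch, assuming GNU sed and a made-up example line (not taken from a real hosts file):

```shell
# Sketch: split one hosts.csv line into host, backup sources, and version.
# The line below is a made-up example for illustration only.
line='user@host.uberspace.de; %virtual | %mysql | /home/user/service; 7'

trim() { sed -r -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//'; }

RHOST=$(echo "$line" | cut -d';' -f1 | trim)      # the SSH target
ALLRDIR=$(echo "$line" | cut -d';' -f2 | trim)    # '|'-separated sources
US_VERSION=$(echo "$line" | cut -d';' -f3 | trim) # optional version field

# Number of sources = number of '|' separators plus one
NORDIR=$(($(echo "$ALLRDIR" | grep -o '|' | wc -l) + 1))

# Print each backup source on its own line, trimmed
for ((i = 1; i <= NORDIR; i++)); do
  echo "$ALLRDIR" | cut -d'|' -f"$i" | trim
done
```

Each source is then resolved (expanding the `%` shortcuts) and backed up in turn.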


@@ -1,13 +1,16 @@
-# SPDX-Copyright: 2019 Max Mehl <mail [at] mehl [dot] mx>
+# SPDX-FileCopyrightText: 2019 Max Mehl <mail [at] mehl [dot] mx>
 # SPDX-License-Identifier: CC0-1.0

 # File with hosts and their backup source paths
 HOSTS="$CURDIR"/hosts.csv

+# Temporary download destination for backups
+TEMPDIR=/tmp/uberspace-backup
+
 # root dir where backups shall be saved to
-BACKUPDIR=/var/backups/uberspace
+BACKUPDIR=/mnt/remotesrv/uberspace

 # GPG fingerprint of key used for encryption
 GPG=6775E8DDD8CEABCC83E38CEHE6334BCA29DF8192

 # Maximum number of backups that shall be retained (0 to disable automatic deletion)
@@ -15,3 +18,6 @@ MAXBAK=3

 # SSH key
 #SSH_KEY="~/.ssh/mykey_rsa"
+
+# Logfile. Default: $CURDIR/backup.log
+# LOG_FILE=/var/log/uberspace-backup.log


@@ -1,6 +1,6 @@
-# SPDX-Copyright: 2019 Max Mehl <mail [at] mehl [dot] mx>
+# SPDX-FileCopyrightText: 2019 Max Mehl <mail [at] mehl [dot] mx>
 # SPDX-License-Identifier: CC0-1.0

-# Username@Hostname; Path1 | Path2 | Path3
-root@server; /home
+# Username@Hostname[:Port]; Path1 | Path2 | Path3; Uberspace version (default = 7)
 user@host.uberspace.de; %virtual | %mysql | /home/user/service
+root@server:2222; /home
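The new third hosts.csv field (the Uberspace version) is what selects the MySQL backup source path in uberspace-backup.sh. A standalone sketch of that branch, with made-up usernames:

```shell
# Sketch of the version-dependent MySQL source selection.
# Usernames and version values are made-up examples.
mysql_source() {
  local RUSER=$1 US_VERSION=$2
  if [[ $US_VERSION == 6 ]]; then
    # Uberspace 6 layout
    echo "/mysqlbackup/latest/${RUSER}"
  else
    # Uberspace 7 (the default) keeps MySQL backups elsewhere
    echo "/mysql_backup/current/${RUSER}"
  fi
}

mysql_source alice 6   # /mysqlbackup/latest/alice
mysql_source bob 7     # /mysql_backup/current/bob
mysql_source carol ""  # an empty field falls back to the Uberspace 7 layout
```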


@@ -1,12 +1,12 @@
 #!/usr/bin/env bash
-# SPDX-Copyright: 2019 Max Mehl <mail [at] mehl [dot] mx>
+# SPDX-FileCopyrightText: 2019 Max Mehl <mail [at] mehl [dot] mx>
 # SPDX-License-Identifier: GPL-3.0-or-later

 ########################################################################
 #
 # Reads hosts file and checks SSH access. If not possible with public
 # key, this script tries to place the system's public key on the host
 # via a normal (password-based) SSH access attempt.
 #
 ########################################################################

 CURDIR=$(dirname "$(readlink -f "$0")")
@@ -15,7 +15,7 @@ source "$CURDIR"/config.cfg
 if [ ! -e "${HOSTS}" ]; then echo "Missing hosts file. Please set a correct value of HOSTS= in your config file. Current value: ${HOSTS}"; exit 1; fi

-if [ ! -z "${SSH_KEY}" ]; then
+if [ -n "${SSH_KEY}" ]; then
   SSH_KEY_ARG="-i ${SSH_KEY}"
 else
   # defaults
@@ -29,19 +29,31 @@ function trim {
   sed -r -e 's/^\s*//g' -e 's/\s*$//g'
 }

-while read line; do
+while read -r line; do
   # if line is a comment, go to next line
-  if $(echo "$line" | grep -qE "^\s*#"); then continue; fi
+  if echo "$line" | grep -qE "^\s*#"; then continue; fi

   RHOST=$(echo "$line" | cut -d";" -f1 | trim)

+  # Jump to next line if this line's host does not match host of ARG1 (if given)
   if [[ "${ARG1}" != "" ]] && [[ "${ARG1}" != "${RHOST}" ]]; then
     continue
   fi

+  # Get SSH port if needed
+  if echo "$RHOST" | grep -q ":"; then
+    RPORT=$(echo "$RHOST" | cut -d":" -f2)
+    RHOST=$(echo "$RHOST" | cut -d":" -f1)
+    RPORT_ARG="-p ${RPORT}"
+  else
+    # defaults
+    RPORT=""
+    RPORT_ARG=""
+  fi
+
   echo "[INFO] Trying ${RHOST}"
-  STATUS=$(ssh -n -o StrictHostKeyChecking=no -o BatchMode=yes -o ConnectTimeout=5 ${SSH_KEY_ARG} ${RHOST} "echo -n"; echo $?)
+  STATUS=$(ssh -n -o StrictHostKeyChecking=no -o BatchMode=yes -o ConnectTimeout=5 ${RPORT_ARG} ${SSH_KEY_ARG} "${RHOST}" "echo -n"; echo $?)

   if [ $STATUS != 0 ]; then
     echo -n "[ERROR] No SSH login possible for ${RHOST}. "
@@ -50,12 +62,10 @@ while read line; do
       exit 1
     else
       echo "Adding public key with password: "
-      cat "${SSH_KEY}".pub | ssh ${RHOST} 'cat >> ~/.ssh/authorized_keys'
+      cat "${SSH_KEY}".pub | ssh -o StrictHostKeyChecking=no ${RPORT_ARG} ${SSH_KEY_ARG} "${RHOST}" 'cat >> ~/.ssh/authorized_keys'
     fi
   else
     echo "[SUCCESS] SSH login possible for ${RHOST}."
   fi
-
-  echo
 done < "$HOSTS"
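The port handling added to ssh-checker.sh splits an optional `:port` suffix off the `user@host` field and turns it into an `ssh -p` argument. A standalone sketch of that logic, with made-up host names:

```shell
# Sketch of the optional-port parsing added in this diff.
# Host names below are made-up examples.
parse_host() {
  local RHOST=$1 RPORT="" RPORT_ARG=""
  if echo "$RHOST" | grep -q ":"; then
    RPORT=$(echo "$RHOST" | cut -d":" -f2)
    RHOST=$(echo "$RHOST" | cut -d":" -f1)
    RPORT_ARG="-p ${RPORT}"
  fi
  echo "host=$RHOST port=$RPORT args=$RPORT_ARG"
}

parse_host "root@server:2222"        # host=root@server port=2222 args=-p 2222
parse_host "user@host.uberspace.de"  # no port given, defaults stay empty
```

Both scripts repeat this snippet, so the same `RPORT_ARG` ends up in the `ssh` and `rsync -e "ssh …"` invocations.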


@@ -1,21 +1,30 @@
 #!/usr/bin/env bash
-# SPDX-Copyright: 2019 Max Mehl <mail [at] mehl [dot] mx>
+# SPDX-FileCopyrightText: 2019 Max Mehl <mail [at] mehl [dot] mx>
 # SPDX-License-Identifier: GPL-3.0-or-later

 ########################################################################
 #
 # Saves specific files and directories from a remote server via SSH.
 # Provides easy shortcuts for Uberspace.de hosts.
 # README.md provides more details.
 #
 ########################################################################

+# Fail fast on errors
+set -Eeuo pipefail
+
+# Set correct UTF-8 encoding (for FreeBSD jail)
+export LC_ALL=en_US.UTF-8
+
+# Initialise variables
+LOG_FILE=
+
 CURDIR=$(dirname "$(readlink -f "$0")")

 if [ ! -e "$CURDIR"/config.cfg ]; then echo "Missing config.cfg file. Edit and rename config.cfg.sample"; exit 1; fi
 source "$CURDIR"/config.cfg
 if [ ! -e "${HOSTS}" ]; then echo "Missing hosts file. Please set a correct value of HOSTS= in your config file. Current value: ${HOSTS}"; exit 1; fi

-if [ ! -z "${SSH_KEY}" ]; then
+if [ -n "${SSH_KEY}" ]; then
   SSH_KEY_ARG="-i ${SSH_KEY}"
 else
   # defaults
@@ -23,9 +32,13 @@ else
   SSH_KEY=~/.ssh/id_rsa
 fi

+if [ -z "${LOG_FILE}" ]; then
+  # defaults
+  LOG_FILE="$CURDIR"/backup.log
+fi
+
 # Get current date
 DATE=$(date +"%Y-%m-%d_%H-%M")
-LOG="$CURDIR"/backup.log

 function trim {
   sed -r -e 's/^[[:space:]]*//g' -e 's/[[:space:]]*$//g'
@@ -36,75 +49,120 @@ function pdate {
 }

 function logecho {
   # Echo string and copy it to log while attaching the current date
-  echo "$(pdate) $@"
-  echo "$(pdate) $@" >> "$LOG"
+  echo "$(pdate) $*"
+  echo "$(pdate) $*" >> "$LOG_FILE"
 }

-while read line; do
-  # if line is a comment, go to next line
-  if $(echo "$line" | grep -qE "^\s*#"); then continue; fi
+# Loop over all hosts
+while read -r line; do
+  # if line is a comment or blank, go to next line
+  if echo "$line" | grep -qE "^\s*(#|$)"; then continue; fi

   RHOST=$(echo "$line" | cut -d";" -f1 | trim)
-  RUSER=$(echo "$RHOST" | cut -d"@" -f1)
-  ALLRDIR=$(echo "$line" | cut -d";" -f2 | trim)

-  logecho "${RHOST}: Starting backups"
+  # Jump to next line if this line's host does not match host of first argument (if given)
+  if [[ "${1-}" != "" ]] && [[ "${1-}" != "${RHOST}" ]]; then
+    continue
+  fi

+  # Task ssh-checker.sh to check this host
   if ! "${CURDIR}"/ssh-checker.sh "${RHOST}"; then
     logecho "${RHOST}: ERROR when connecting via SSH. Please run ssh-checker.sh to debug."
     logecho "${RHOST}: Aborting backup after an error."
     continue
   fi

-  NORDIR=$(echo $ALLRDIR | grep -o "|" | wc -l)
-  NORDIR=$[$NORDIR + 1]
+  RUSER=$(echo "$RHOST" | cut -d"@" -f1)
+  ALLRDIR=$(echo "$line" | cut -d";" -f2 | trim)
+  US_VERSION=$(echo "$line" | cut -d";" -f3 | trim)
+
+  # Get SSH port if needed
+  if echo "$RHOST" | grep -q ":"; then
+    RPORT=$(echo "$RHOST" | cut -d":" -f2)
+    RHOST=$(echo "$RHOST" | cut -d":" -f1)
+    RPORT_ARG="-p ${RPORT}"
+  else
+    # defaults
+    RPORT=""
+    RPORT_ARG=""
+  fi
+
+  logecho "${RHOST}: Starting backups"
+
+  logecho "${RHOST}: Deleting host's temporary directories in ${TEMPDIR}"
+  rm -rf "${TEMPDIR:?}/${RHOST:?}/"*
+
+  NORDIR=$(echo "$ALLRDIR" | grep -o "|" | wc -l || true)
+  NORDIR=$(($NORDIR + 1))

+  # Loop through all backup sources
   for ((i = 1; i <= $NORDIR; i++)); do
     RDIR=$(echo "$ALLRDIR" | cut -d"|" -f${i} | trim)

+    # Set a relative destination directory
     if [ "${RDIR}" == "%virtual" ]; then
       RDIR=/var/www/virtual/${RUSER}
-      DEST="$BACKUPDIR/$RHOST/$DATE/virtual"
+      DEST_REL="$RHOST/$DATE/virtual"
     elif [ "${RDIR}" == "%mysql" ]; then
       RDIR=mysql
-      DEST="$BACKUPDIR/$RHOST/$DATE/$(basename "${RDIR}")"
+      DEST_REL="$RHOST/$DATE/$(basename "${RDIR}")"
     elif [ "${RDIR}" == "%mails" ]; then
       RDIR=/home/${RUSER}/users
-      DEST="$BACKUPDIR/$RHOST/$DATE/mails"
+      DEST_REL="$RHOST/$DATE/mails"
     elif [ "${RDIR}" == "%home" ]; then
       RDIR=/home/${RUSER}
-      DEST="$BACKUPDIR/$RHOST/$DATE/home"
+      DEST_REL="$RHOST/$DATE/home"
     else
-      DEST="$BACKUPDIR/$RHOST/$DATE/$(basename "${RDIR}")"
+      DEST_REL="$RHOST/$DATE/$(basename "${RDIR}")"
     fi

+    # Define absolute temporary and final backup destination paths
+    # Example:
+    #   DEST=/tmp/uberspace-backup/user@example.com/2019-01-01/virtual
+    #   DEST_FINAL=/media/Uberspace/user@example.com/2019-01-01/
+    DEST="${TEMPDIR}/${DEST_REL}"
+    DEST_FINAL="$(dirname "${BACKUPDIR}/${DEST_REL}")"
+
     # Set Source directory, and make exception for %mysql
     SOURCE="${RDIR}"
-    if [ "${RDIR}" == "mysql" ]; then SOURCE=/mysqlbackup/latest/${RUSER}; fi
+    if [ "${RDIR}" == "mysql" ]; then
+      if [[ $US_VERSION == 6 ]]; then
+        SOURCE=/mysqlbackup/latest/${RUSER}
+      else
+        SOURCE=/mysql_backup/current/${RUSER}
+      fi
+    fi

-    # Create backup destination if necessary
+    # Create temporary and final backup destination if necessary
     if [ ! -e "${DEST}" ]; then mkdir -p "${DEST}"; fi
+    if [ ! -e "${DEST_FINAL}" ]; then mkdir -p "${DEST_FINAL}"; fi

     # RSYNC
     logecho "${RHOST}: Downloading ${SOURCE} to ${DEST}"
-    rsync -a -e "ssh -q -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o Compression=no -T -x ${SSH_KEY_ARG}" ${RHOST}:${SOURCE}/ "${DEST}"/
+    rsync -a -e "ssh -q -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o Compression=no -T -x ${RPORT_ARG} ${SSH_KEY_ARG}" "${RHOST}:${SOURCE}"/ "${DEST}"/

     # Pack backup directory, and delete uncompressed one
-    logecho "${RHOST}: Archiving $(basename ${DEST})"
-    tar cf ${DEST}.tar -C $(echo ${DEST} | sed "s|$(basename ${DEST})$||") $(basename ${DEST}) # TODO: avoid absolute paths
-    rm -rf ${DEST}
+    logecho "${RHOST}: Archiving $(basename "${DEST}")"
+    tar cf "${DEST}".tar -C $(echo ${DEST} | sed "s|$(basename ${DEST})$||") $(basename ${DEST}) # TODO: avoid absolute paths
+    rm -rf "${DEST}"

     # Encrypt archive with GPG (it compresses at the same time)
-    logecho "${RHOST}: Encrypting and compressing $(basename ${DEST})"
-    gpg --output ${DEST}.tar.gpg --encrypt --recipient ${GPG} ${DEST}.tar
-    rm ${DEST}.tar
+    logecho "${RHOST}: Encrypting and compressing $(basename "${DEST}")"
+    gpg --output "${DEST}".tar.gpg --encrypt --recipient ${GPG} "${DEST}".tar
+    rm "${DEST}".tar

-    # Delete all old directories except the $MAXBAK most recent
-    if [ $(ls -tp "${BACKUPDIR}"/"${RHOST}"/ | grep '/$' | wc -l | tr -d ' ') -gt $MAXBAK ]; then
-      logecho "${RHOST}: Removing older backups of $(basename ${DEST})"
-      ls -tpd "${BACKUPDIR}"/"${RHOST}"/* | grep '/$' | tail -n +$[$MAXBAK + 1] | xargs -0 | xargs rm -r --
-    fi
+    # Push encrypted backup to final backup destination
+    logecho "${RHOST}: Moving $(basename "${DEST}") to ${DEST_FINAL}"
+    cp "${DEST}".tar.gpg "${DEST_FINAL}/"
+    rm "${DEST}".tar.gpg

-  done
+  done # End of loop through all backup sources
+
+  # Delete all old directories except the $MAXBAK most recent
+  if [ $(ls -tp "${BACKUPDIR}"/"${RHOST}"/ | grep '/$' | wc -l | tr -d ' ') -gt $MAXBAK ]; then
+    oldbackups=$(ls -tp "${BACKUPDIR}"/"${RHOST}"/ | grep '/$' | tail -n +$(($MAXBAK + 1)))
+    logecho "${RHOST}: Removing older backup directories: ${oldbackups}"
+    ls -tpd "${BACKUPDIR}"/"${RHOST}"/* | grep '/$' | tail -n +$(($MAXBAK + 1)) | xargs -0 | xargs rm -r --
+  fi

 done < "$HOSTS"
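The per-host retention step (keep only the `$MAXBAK` newest dated directories) can be sketched standalone. This sketch assumes the `%Y-%m-%d_%H-%M` directory names sort chronologically, which they do by construction, and uses a throwaway temporary tree with made-up dates instead of `ls -t` on a real backup root:

```shell
# Sketch: keep only the $MAXBAK newest backup directories for one host.
# Directory names are made-up examples in a temporary tree.
MAXBAK=3
BACKUPROOT=$(mktemp -d)
for d in 2019-01-01_03-10 2019-01-02_03-10 2019-01-03_03-10 \
         2019-01-04_03-10 2019-01-05_03-10; do
  mkdir -p "$BACKUPROOT/$d"
done

# Sort names newest-first and delete everything after the first $MAXBAK
ls -1 "$BACKUPROOT" | sort -r | tail -n +$((MAXBAK + 1)) | while read -r old; do
  rm -r "${BACKUPROOT:?}/${old}"
done

# The three newest dated directories survive
ls -1 "$BACKUPROOT" | sort
```

Sorting the dated names lexicographically avoids relying on filesystem mtimes, which `ls -t` does; the script's own `ls -tp … | grep '/$'` pipeline is equivalent as long as the directories' mtimes match their creation order.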