Rdiff-image-get
NAME
rdiff-image-get - restore an rdiff-image backup
SYNOPSIS
rdiff-image-get
[options]
[image-dir]
DESCRIPTION
rdiff-image-get
downloads backups created by
rdiff-image-cron(1)
to a cache directory,
unpacks (ie restores) backups from the cache directory
and cleans up the cache.
The cache directory stores backups downloaded in previous runs,
in particular the latest full backup.
Since the full backups are produced infrequently,
this usually means only the much smaller difference file
needs to be downloaded on a second run, saving a lot of time.
rdiff-image-get
always does a full restore of a backup.
It doesn't support partial restores - eg restoring one file.
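For example, a typical run downloads the latest backup from S3 and unpacks it
into the directory image (the bucket name, credentials and directory name here
are placeholders):
rdiff-image-get \
--s3=bucket:key:secret \
image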
OPTIONS
- -b backup-name, --backup=backup-name
-
Instead of downloading and unpacking the latest backups found, use backup
backup-name.
backup-name
has the format yyyymmdd-hhmmss, which is the date-time the backup was made.
The last digits may be omitted if the resulting name is unambiguous.
- -c cache-dir, --cache=cache-dir
-
Use cache directory
cache-dir
instead of the current directory.
- -d, --dry-run
-
Go through the motions, but don't alter anything.
Usually combined with
--echo
or
--progress.
- -e, --echo
-
Print shell commands describing each action.
The S3 commands assume you have
s3-bash
installed.
- -i, --index
-
Instead of downloading images, print an index of images available.
- -l, --cleanup
-
Remove all backups that were not used from the cache.
In other words, remove all bar the latest backups if
--backup
wasn't used, or all bar the backup specified by
--backup
if it was supplied.
- -n, --no-secret
-
Don't download, unpack or clean up secret backups.
- -o, --overwrite
-
When unpacking, overwrite
image-dir
if it exists.
If this option isn't given the program will refuse to unpack
over an existing directory.
- -p, --progress
-
Print a description of each step as it is about to be done.
- -r root-cmd, --root-cmd=root-cmd
-
Use the command
root-cmd
to unpack the archive as root.
Root is usually needed to unpack archives as they contain device files.
sudo(8)
and
fakeroot(1)
are suitable
root-cmd's.
- -s bucket[:key:secret], --s3=bucket[:key:secret]
-
Download the backups from the Amazon S3 bucket whose name is
bucket.
Key
and
secret
are the credentials needed to log into S3.
They can alternatively be supplied using the RDIFF_IMAGE_S3_CREDENTIALS
environment variable.
An alternative to using this option
is to download the backups to the cache directory
using, say, Firefox's s3fox plugin.
- -t tarfile, --tar=tarfile
-
Write a copy of the reconstructed backup to
tarfile.
Any occurrence of the string
{{backup}}
in
tarfile
is replaced with the date-time (yyyymmdd-hhmmss) the backup was made,
and any occurrence of the string
{{kind}}
with "base" or "secret".
- -u status-url[,image-url], --url=status-url[,image-url]
-
Download the images using HTTP.
Status-url
is an index page containing a list of the currently available backup files.
Those files must be located in the directory
image-url.
If
image-url
isn't given the backup files must lie in the same directory as
status-url.
An alternative to using this option
is to download the backups and their .sha1 files
to the cache directory using say
wget(1),
and to check the result manually with
sha1sum(1).
ARGUMENTS
- image-dir
-
If supplied, a backup will be unpacked to this directory.
If it contains the string
{{backup}}
all occurrences will be replaced with the date-time the backup was made.
The date-time has the format yyyymmdd-hhmmss.
ENVIRONMENT VARIABLES
- RDIFF_IMAGE_S3_CREDENTIALS
-
If the S3 credentials aren't supplied to the
--s3
option they can be supplied in this environment variable.
The format is
key:secret.
- http_proxy
-
If present, the
--url
option will do its HTTP downloads via this proxy.
The format of the variable is: http://host.name:port
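For example, to supply the S3 credentials and send the HTTP downloads through
a proxy (the key, secret, proxy host and port shown are placeholders):
RDIFF_IMAGE_S3_CREDENTIALS=key:secret; export RDIFF_IMAGE_S3_CREDENTIALS
http_proxy=http://proxy.example.com:3128; export http_proxy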
EXAMPLES
To list the backups available on S3:
-
rdiff-image-get \
--index \
--s3=bucket:key:secret
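Similarly, to list the backups available on an HTTP server
(the host name and path are placeholders for your own server):
-
rdiff-image-get \
--index \
--url=http://host.name/backups/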
Download the latest backup via HTTP,
unpack it to a directory named after the backup,
and once that is done remove all bar the backups just unpacked from the cache:
-
rdiff-image-get \
--url=http://host.name/backups/ \
--cleanup backup-{{backup}}
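To restore a particular backup from S3 as root and keep a tar copy of it as
well (the backup date-time, bucket, credentials, tar file name and image
directory are all placeholders; this only illustrates how the options combine):
-
rdiff-image-get \
--backup=20090101-013000 \
--root-cmd=sudo \
--tar=/var/tmp/{{kind}}-{{backup}}.tar \
--s3=bucket:key:secret \
image-{{backup}}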
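To preview what such a run would do without altering anything
(again the S3 details are placeholders):
-
rdiff-image-get \
--dry-run \
--echo \
--s3=bucket:key:secret \
image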
AUTHOR
Russell Stuart <russell-rdiffimage@stuart.id.au>
SEE ALSO
You may wish to use
rdiff-image-boot(1)
to "boot" your newly restored image.
rdiff-image.conf(5),
rdiff-image-backup(1),
rdiff-image-cron(1),
rdiff-image-get(1),
rdiff-image-s3(1),
rdiff-image-tarutil(1).
The
README.txt
that comes with the rdiff-image package.
The
s3-fox
(http://www.s3fox.net)
extension for Mozilla's Firefox browser provides a convenient
way of looking at the backups on S3.
If you want to run the shell commands
--echo
prints, you will need
s3-bash:
http://developer.amazonwebservices.com/connect/entry.jspa?externalID=943.