Work with ADBM CLI
Overview
Starting with 2.0.4, Arenadata DB Backup Manager supports a command-line tool for backup management — ADBM CLI. ADBM CLI is installed automatically in the Enterprise Edition of ADB along with the ADBM service. To use ADBM CLI, connect to the ADB master host under the gpadmin user and run adbm:
$ sudo su - gpadmin
$ adbm <command_name> <command_options>
where:

- <command_name> — a name of the command. All supported commands are described in detail below.
- <command_options> — options of the selected command.
To check the current version of ADBM CLI, run adbm --version (or its short form adbm -v):
$ adbm -v
Result:
adbm-cli-2.0.4
backup repo-config create
The adbm backup repo-config create command creates a new repository configuration based on the passed options. Repository configurations are required to create backup configurations via the backup config create command.
Syntax:
$ adbm backup repo-config create \
--backup-format | -f <backupFormat> \
--repo-type <repoType> \
[--repo-path <repoPath>] \
[--file <file>] \
[--log-level <logLevel>] \
[--log-path <logPath>] \
[--help | -h]
| Option | Description | Mandatory | Default |
|---|---|---|---|
| --backup-format \| -f | Determines whether the command operates with binary or logical backups. Possible values: | Yes | — |
| --repo-type | A type of the data repository. Possible values: | Yes | — |
| --repo-path | The directory where all necessary backup files (metadata files and data files) are stored. Cannot be specified if | Yes if | — |
| --file | A path to the JSON configuration file with the repository configuration parameters. When this option is used, the configuration parameters are taken from the specified file. See Parameters of the input JSON file below | Yes if | — |
| --log-level | A logging level for the command | No | info |
| --log-path | A path where the command log files are stored | No | /home/gpadmin/gpAdminLogs |
| --help \| -h | Displays a helper with a list of options supported by the command | No | — |
Parameters of the input JSON file (--file) depend on the repository type (--repo-type).

Template (s3 repository):
{
"bucket": "string",
"endpoint": "string",
"region": "string",
"keyType": "string(enum)",
"key": "string",
"keySecret": "string",
"caFile": "string",
"uriType": "string(enum)",
"path": "string"
}
| Field | Description | Mandatory |
|---|---|---|
| bucket | A bucket name in the S3 storage | Yes |
| endpoint | An endpoint of the S3 storage | Yes |
| region | A region of the S3 storage | Yes |
| keyType | A key type of the S3 storage: | Yes |
| key | A key to access the S3 storage | Yes |
| keySecret | A secret key to access the S3 storage | Yes |
| caFile | A Certificate Authority (CA) PEM file to access the S3 storage | Yes |
| uriType | An endpoint type (see | Yes |
| path | A repository where backups and archived WAL segments are stored | Yes |
Template (dds repository):
{
"hostname": "string",
"username": "string",
"password": "string",
"storageUnit": "string",
"directory": "string",
"writeBufferSize": integer,
"readBufferSize": integer,
"executablePath": "string"
}
| Field | Description | Mandatory |
|---|---|---|
| hostname | An IP address or name of the host that provides operations with DDBoost | Yes |
| username | A name of the user who is granted permissions to work with DDBoost. The user is configured on the DDBoost side. This is neither an operating system nor an ADB user | Yes |
| password | A password of the user who is granted permissions to work with DDBoost | Yes |
| storageUnit | A name of the storage unit that is configured on the DDBoost side | Yes |
| directory | A name of the directory in the DDBoost file system. That directory is used to store all backup files created when the gpbackup utility is run with the DDBoost plugin. In the /<storageUnit>/<directory> folder, the plugin automatically creates all subdirectories corresponding to the creation date and time of each backup: /<storageUnit>/<directory>/YYYYMMDD/YYYYMMDDHHmmSS/ | Yes |
| writeBufferSize | A buffer size for writing data to DDBoost (in bytes). Values from the following range are allowed: 64 <= writeBufferSize <= 1048576 | Yes |
| readBufferSize | A buffer size for reading data from DDBoost (in bytes). Values from the following range are allowed: 64 <= readBufferSize <= 1048576 | Yes |
| executablePath | An absolute path to the plugin in the file system of ADB hosts (along with the plugin executable file name). The plugin should be installed on all cluster hosts in the same directory. The default path to the plugin executable file is $GPHOME/bin/adb_ddp_plugin | No |
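For reference, a filled-in dds configuration file might look like the following. All values here are illustrative placeholders (host, user, and storage unit names are not defaults shipped with the product); the buffer sizes simply fall within the allowed 64–1048576 range:

```json
{
"hostname": "10.92.40.5",
"username": "ddboost-user",
"password": "********",
"storageUnit": "adb-backups",
"directory": "adb",
"writeBufferSize": 65536,
"readBufferSize": 65536
}
```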
In the command output, you can see an identifier of the ADBM action in the following format:
Action id = <actionId>
The successful result also contains an identifier of the created repository configuration:
Backup repository configuration <repoConfigId> created
$ adbm backup repo-config create --backup-format logical --repo-path /dasha --repo-type posix
Result:
Create backup repository configuration in progress. Action id = afd1f77f-6060-4f28-b3bb-94ee3515a48f. Performing "Start 'Create logical backup repository configuration' action".done Backup repository configuration 1 created
- Create a JSON file with configuration parameters:
$ vi s3.json
{
  "bucket": "test",
  "endpoint": "storage.yandexcloud.net",
  "region": "ru-central1",
  "keyType": "shared",
  "key": "XXXXXXXXXXXXXXXXXXXX",
  "keySecret": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
  "caFile": "test.pem",
  "uriType": "host",
  "path": "/test"
}
- Run the following command to create a repository configuration:
$ adbm backup repo-config create --backup-format logical --repo-type s3 --file s3.json
Result:
Create backup repository configuration in progress. Action id = 8dc913ca-4595-452f-8c37-2a483c7032a6. Performing "Start 'Create logical backup repository configuration' action".done Backup repository configuration 9 created
backup repo-config ls
The adbm backup repo-config ls command lists existing repository configurations (previously created via the backup repo-config create command).
Syntax:
$ adbm backup repo-config ls \
--backup-format | -f <backupFormat> \
[--repo-config-id | -rc <repoConfigId>] \
[--log-level <logLevel>] \
[--log-path <logPath>] \
[--help | -h]
| Option | Description | Mandatory | Default |
|---|---|---|---|
| --backup-format \| -f | Determines whether the command operates with binary or logical backups. Possible values: | Yes | — |
| --repo-config-id \| -rc | An identifier of the repository configuration. If set, the command returns information only for the specified configuration. It is returned by the backup repo-config create command and can be obtained with the backup repo-config ls command | No | — |
| --log-level | A logging level for the command | No | info |
| --log-path | A path where the command log files are stored | No | /home/gpadmin/gpAdminLogs |
| --help \| -h | Displays a helper with a list of options supported by the command | No | — |
The command returns JSON with an array of existing repository configurations, whose fields depend on the repository type.
Template:
[
{
"id" : integer,
"repoType" : "cifs",
"cifs" : {
"id" : integer,
"path" : "string"}
},
{
"id" : integer,
"repoType" : "s3",
"s3" : {
"id" : integer,
"path" : "string",
"bucket": "string",
"endpoint": "string",
"region": "string",
"keyType": "string(enum)",
"key": "string",
"keySecret": "string",
"caFile": "string",
"uriType": "string(enum)",
"path": "string"}
},
{
"id" : integer,
"repoType" : "posix",
"posix" : {
"id" : integer,
"path" : "string"}
},
{
"id" : integer,
"repoType" : "dds",
"dds": {
"id" : integer,
"path" : "string",
"hostname": "string",
"username": "string",
"password": "string",
"storageUnit": "string",
"directory": "string",
"writeBufferSize": integer,
"readBufferSize": integer,
"executablePath": "string"}
}
]
JSON fields

cifs repository:

| Field | Description |
|---|---|
| id | An identifier of the repository configuration. It is returned by the backup repo-config create command and can be obtained with the backup repo-config ls command |
| repoType | A type of the data repository — |
| cifs.id | An identifier of the cifs repository configuration |
| cifs.path | The directory where all necessary backup files (metadata files and data files) are stored |
s3 repository:

| Field | Description |
|---|---|
| id | An identifier of the repository configuration. It is returned by the backup repo-config create command and can be obtained with the backup repo-config ls command |
| repoType | A type of the data repository — |
| s3.id | An identifier of the s3 repository configuration |
| s3.path | A repository where backups and archived WAL segments are stored |
| s3.bucket | A bucket name in the S3 storage |
| s3.endpoint | An endpoint of the S3 storage |
| s3.region | A region of the S3 storage |
| s3.keyType | A key type of the S3 storage: |
| s3.key | A key to access the S3 storage |
| s3.keySecret | A secret key to access the S3 storage |
| s3.caFile | A Certificate Authority (CA) PEM file to access the S3 storage |
| s3.uriType | An endpoint type (see |
posix repository:

| Field | Description |
|---|---|
| id | An identifier of the repository configuration. It is returned by the backup repo-config create command and can be obtained with the backup repo-config ls command |
| repoType | A type of the data repository — |
| posix.id | An identifier of the posix repository configuration |
| posix.path | The directory where all necessary backup files (metadata files and data files) are stored |
dds repository:

| Field | Description |
|---|---|
| id | An identifier of the repository configuration. It is returned by the backup repo-config create command and can be obtained with the backup repo-config ls command |
| repoType | A type of the data repository — |
| dds.id | An identifier of the Data Domain Storage repository configuration |
| dds.path | The directory where all necessary backup files (metadata files and data files) are stored |
| dds.hostname | An IP address or name of the host that provides operations with DDBoost |
| dds.username | A name of the user who is granted permissions to work with DDBoost. The user is configured on the DDBoost side. This is neither an operating system nor an ADB user |
| dds.password | A password of the user who is granted permissions to work with DDBoost |
| dds.storageUnit | A name of the storage unit that is configured on the DDBoost side |
| dds.directory | A name of the directory in the DDBoost file system. That directory is used to store all backup files created when the gpbackup utility is run with the DDBoost plugin. In the /<storageUnit>/<directory> folder, the plugin automatically creates all subdirectories corresponding to the creation date and time of each backup: /<storageUnit>/<directory>/YYYYMMDD/YYYYMMDDHHmmSS/ |
| dds.writeBufferSize | A buffer size for writing data to DDBoost (in bytes) |
| dds.readBufferSize | A buffer size for reading data from DDBoost (in bytes) |
| dds.executablePath | An absolute path to the plugin in the file system of ADB hosts (along with the plugin executable file name). The plugin should be installed on all cluster hosts in the same directory |
$ adbm backup repo-config ls --backup-format logical
Result:
[ {
"id" : 1,
"repoType" : "posix",
"posix" : {
"id" : 1,
"path" : "/dasha"
}
}, {
"id" : 9,
"repoType" : "s3",
"s3" : {
"id" : 1,
"path" : "/test",
"bucket" : "test",
"endpoint" : "storage.yandexcloud.net",
"region" : "ru-central1",
"keyType" : "shared",
"key" : "XXXXXXXXXXXXXXXXXXXX",
"keySecret" : "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
"caFile" : "test.pem",
"uriType" : "host"
}
} ]
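When scripting against ADBM CLI, output like the JSON array above can be post-processed with standard tools. The sketch below extracts the top-level configuration identifiers; the sample output is embedded in a variable so the pipeline can be verified without a live cluster (in practice you would pipe the adbm backup repo-config ls output directly, ideally through a JSON-aware tool such as jq):

```shell
# Trimmed sample of the JSON printed by `adbm backup repo-config ls`.
repo_ls_output='[ {
  "id" : 1,
  "repoType" : "posix"
}, {
  "id" : 9,
  "repoType" : "s3"
} ]'

# Print the repository configuration identifiers, one per line.
# This grep-based approach assumes the nested per-type "id" fields have been
# trimmed away, as in the sample above; for full output, prefer jq.
printf '%s\n' "$repo_ls_output" | grep -o '"id" : [0-9]*' | awk '{print $3}'
```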
backup repo-config delete
The adbm backup repo-config delete command deletes the repository configuration with the specified identifier (previously created via the backup repo-config create command).
Syntax:
$ adbm backup repo-config delete \
--backup-format | -f <backupFormat> \
--repo-config-id | -rc <repoConfigId> \
[-a] \
[--log-level <logLevel>] \
[--log-path <logPath>] \
[--help | -h]
| Option | Description | Mandatory | Default |
|---|---|---|---|
| --backup-format \| -f | Determines whether the command operates with binary or logical backups. Possible values: | Yes | — |
| --repo-config-id \| -rc | An identifier of the repository configuration that should be deleted. It is returned by the backup repo-config create command and can be obtained with the backup repo-config ls command | Yes | — |
| -a | Defines whether to launch the command automatically (without user confirmation) | No | false |
| --log-level | A logging level for the command | No | info |
| --log-path | A path where the command log files are stored | No | /home/gpadmin/gpAdminLogs |
| --help \| -h | Displays a helper with a list of options supported by the command | No | — |
In the command output, you can see an identifier of the ADBM action in the following format:
Action id = <actionId>
The successful result also contains an identifier of the removed repository configuration:
Deletion of <repoConfigId> completed
$ adbm backup repo-config delete --backup-format logical --repo-config-id 3
Result:
Are you sure you want to delete logical backup repository configuration? (y/n):y Delete backup repository configuration in progress. Action id = 36e2b9d2-e8a3-4e5f-a0e3-ab1fad401f52. Performing "Start 'Delete logical backup repository configuration' action".done Deletion of 3 completed
backup config create
The adbm backup config create command creates a new backup configuration based on the passed options. You can use backup configurations to create backups via the backup create command.
Syntax:
$ adbm backup config create \
--backup-format | -f <backupFormat> \
--repo-config-id | -rc <repoConfigId> \
[--file <file>] \
[--log-level <logLevel>] \
[--log-path <logPath>] \
[--help | -h]
IMPORTANT
If the --file option is omitted, the backup configuration is created with default values for all parameters.
| Option | Description | Mandatory | Default |
|---|---|---|---|
| --backup-format \| -f | Determines whether the command operates with binary or logical backups. Possible values: | Yes | — |
| --repo-config-id \| -rc | An identifier of the repository configuration that should be used to create a backup configuration. It is returned by the backup repo-config create command and can be obtained with the backup repo-config ls command | Yes | — |
| --file | A path to the JSON configuration file with the backup configuration parameters. When this option is used, the configuration parameters are taken from the specified file. See Input JSON file parameters below | No | — |
| --log-level | A logging level for the command | No | info |
| --log-path | A path where the command log files are stored | No | /home/gpadmin/gpAdminLogs |
| --help \| -h | Displays a helper with a list of options supported by the command | No | — |
Template:
{
"compressionLevel": integer,
"compressionType": "string(enum)",
"copyQueueSize": integer,
"jobs": integer,
"pluginConfig": "string",
"excludeSchema": [
"string"
],
"excludeTable": [
"string"
],
"excludeSchemaFile": "string",
"excludeTableFile": "string",
"includeSchema": [
"string"
],
"includeTable": [
"string"
],
"includeSchemaFile": "string",
"includeTableFile": "string",
"isDataOnly": boolean,
"isDebug": boolean,
"isIncremental": boolean,
"isWithoutGlobals": boolean,
"isLeafPartitionData": boolean,
"isMetadataOnly": boolean,
"isNoCompression": boolean,
"isNoInherits": boolean,
"isNoHistory": boolean,
"isSingleDataFile": boolean,
"isWithStats": boolean,
"isMultiFormatBackups": boolean
}
| Field | Description | Default |
|---|---|---|
| compressionLevel | Specifies the compression level used to compress data files. Allowed values: If the option value is not set, gpbackup uses the | 1 |
| compressionType | Specifies the compression type used to compress data files. Possible values: If the option value is not set, gpbackup uses the | gzip |
| copyQueueSize | Specifies the number of Requires | — |
| jobs | Specifies the number of jobs to run in parallel when backing up tables. By default, one connection is used. Increasing this number can improve the speed of backing up data. You cannot use this option in combination with the | 1 |
| pluginConfig | The location of the gpbackup plugin configuration file (YAML-formatted text file). If you specify the Cannot be combined with | — |
| excludeSchema | An array of schemas to exclude from the backup. You cannot combine this option with other schema or table filtering options | {} |
| excludeTable | An array of tables to exclude from the backup. Each table should be in the format If you specify a leaf partition name, gpbackup ignores it. The leaf partition is not excluded. You cannot combine this option with other schema or table filtering options | {} |
| excludeSchemaFile | Specifies a text file that contains a list of schemas to exclude from the backup. Each line in the text file should define a single schema. The file should not include trailing lines. If a schema name uses any character other than a lowercase letter, number, or an underscore character, include that name in double quotes. You cannot combine this option with other schema or table filtering options | — |
| excludeTableFile | Specifies a text file that contains a list of tables to exclude from the backup. Each line in the text file should define a single table using the format If you specify a leaf partition name, gpbackup ignores it. The leaf partition is not excluded. You cannot combine this option with other schema or table filtering options | — |
| includeSchema | An array of schemas to include in the backup. Other schemas are omitted from the backup. You cannot combine this option with other schema or table filtering options | {} |
| includeTable | An array of tables to include in the backup. Other tables are omitted from the backup. Each table should be in the format You can also specify the qualified name of a sequence, a view, or a materialized view. In that case, you should also explicitly specify the dependent objects that are required. Optionally, if you use the You cannot combine this option with other schema or table filtering options | {} |
| includeSchemaFile | Specifies a text file that contains a list of schemas to include in the backup. Other schemas are omitted from the backup. Each line in the text file should define a single schema. The file should not include trailing lines. If a schema name uses any character other than a lowercase letter, number, or an underscore character, include that name in double quotes. You cannot combine this option with other schema or table filtering options | — |
| includeTableFile | Specifies a text file that contains a list of tables to include in the backup. Other tables are omitted from the backup. Each line in the text file should define a single table using the format You can also specify the qualified name of a sequence, a view, or a materialized view. In that case, you should also explicitly specify the dependent objects that are required. Optionally, if you use the You cannot combine this option with other schema or table filtering options | — |
| isDataOnly | Defines whether to back up only the table data into CSV files — without metadata files needed to recreate the tables and other database objects | false |
| isDebug | Defines whether to display verbose debug messages during the backup | false |
| isIncremental | Defines whether to make an incremental backup | false |
| isWithoutGlobals | Defines whether to omit the global system objects during the backup | false |
| isLeafPartitionData | For partitioned tables, defines whether to create one data file per leaf partition instead of one data file for the entire table | false |
| isMetadataOnly | Defines whether to create only the metadata files (DDL) needed to recreate the database objects without backing up the actual table data | false |
| isNoCompression | Defines whether to skip compression of the CSV files with table data | false |
| isNoInherits | When invoked, only the metadata of the table itself is backed up, ignoring any inheritance relationships with other tables that would normally cause those tables to also be included in the backup set. Only works when invoked with either the | false |
| isNoHistory | When invoked, gpbackup does not write backup run metadata to the history database | false |
| isSingleDataFile | Defines whether to create a single data file on each segment host for all tables backed up on that segment. By default, gpbackup creates one CSV file for each table that is backed up on the segment | false |
| isWithStats | Defines whether to include query plan statistics in the backup set | false |
| isMultiFormatBackups | Defines whether to launch binary and logical backups simultaneously | false |
NOTE
All fields in the JSON file are optional.
In the command output, you can see an identifier of the ADBM action in the following format:
Action id = <actionId>
The successful result also contains an identifier of the created backup configuration:
Backup configuration <backupConfigId> created
- Run the following command to create a backup configuration:
$ adbm backup config create --repo-config-id 1 --backup-format logical
Result:
Create backup configuration in progress. Action id = 745a2a51-1593-4511-a98f-69260fc22227. Performing "Start 'Create logical backup configuration' action".done Backup configuration 2 created
- Check that the default options were applied to the backup configuration, since you did not pass the --file option. To do this, run the backup config ls command:
$ adbm backup config ls --backup-format logical --config-id 2
Result:
{
  "id" : 2,
  "repoConfigId" : 1,
  "configParams" : {
    "excludeSchema" : [ ],
    "excludeTable" : [ ],
    "includeSchema" : [ ],
    "includeTable" : [ ],
    "isDataOnly" : false,
    "isDebug" : false,
    "isIncremental" : false,
    "isWithoutGlobals" : false,
    "isLeafPartitionData" : false,
    "isMetadataOnly" : false,
    "isNoCompression" : false,
    "isNoInherits" : false,
    "isNoHistory" : false,
    "isSingleDataFile" : false,
    "isWithStats" : false,
    "isMultiFormatBackups" : false
  }
}
- Create a JSON file with configuration parameters:
$ vi backup-conf.json
{
  "compressionLevel": 1,
  "compressionType": "gzip",
  "includeSchema": [ "public" ]
}
- Run the following command to create a backup configuration:
$ adbm backup config create --repo-config-id 1 --backup-format logical --file backup-conf.json
Result:
Create backup configuration in progress. Action id = 073fdb77-4a11-43fb-b643-905b56152643. Performing "Start 'Create logical backup configuration' action".done Backup configuration 3 created
- Check that the options you passed were applied to the backup configuration — via the backup config ls command:
$ adbm backup config ls --backup-format logical --config-id 3
Result:
{
  "id" : 3,
  "repoConfigId" : 1,
  "configParams" : {
    "compressionLevel" : 1,
    "compressionType" : "gzip",
    "excludeSchema" : [ ],
    "excludeTable" : [ ],
    "includeSchema" : [ "public" ],
    "includeTable" : [ ],
    "isDataOnly" : false,
    "isDebug" : false,
    "isIncremental" : false,
    "isWithoutGlobals" : false,
    "isLeafPartitionData" : false,
    "isMetadataOnly" : false,
    "isNoCompression" : false,
    "isNoInherits" : false,
    "isNoHistory" : false,
    "isSingleDataFile" : false,
    "isWithStats" : false,
    "isMultiFormatBackups" : false
  }
}
backup config ls
The adbm backup config ls command lists existing backup configurations (previously created via the backup config create command).
Syntax:
$ adbm backup config ls \
--backup-format | -f <backupFormat> \
[--config-id | -c <backupConfigId>] \
[--log-level <logLevel>] \
[--log-path <logPath>] \
[--help | -h]
| Option | Description | Mandatory | Default |
|---|---|---|---|
| --backup-format \| -f | Determines whether the command operates with binary or logical backups. Possible values: | Yes | — |
| --config-id \| -c | An identifier of the backup configuration. If set, the command returns information only for the specified configuration. It is returned by the backup config create command and can be obtained with the backup config ls command | No | — |
| --log-level | A logging level for the command | No | info |
| --log-path | A path where the command log files are stored | No | /home/gpadmin/gpAdminLogs |
| --help \| -h | Displays a helper with a list of options supported by the command | No | — |
The command returns JSON with an array of existing backup configurations.
Template:
[ {
"id" : integer,
"repoConfigId" : integer,
"configParams" : {
"compressionLevel": integer,
"compressionType": "string(enum)",
"copyQueueSize": integer,
"jobs": integer,
"pluginConfig": "string",
"excludeSchema": [
"string"
],
"excludeTable": [
"string"
],
"excludeSchemaFile": "string",
"excludeTableFile": "string",
"includeSchema": [
"string"
],
"includeTable": [
"string"
],
"includeSchemaFile": "string",
"includeTableFile": "string",
"isDataOnly": boolean,
"isDebug": boolean,
"isIncremental": boolean,
"isWithoutGlobals": boolean,
"isLeafPartitionData": boolean,
"isMetadataOnly": boolean,
"isNoCompression": boolean,
"isNoInherits": boolean,
"isNoHistory": boolean,
"isSingleDataFile": boolean,
"isWithStats": boolean,
"isMultiFormatBackups": boolean
}
} ]
| Field | Description |
|---|---|
| id | An identifier of the backup configuration. It is returned by the backup config create command and can be obtained with the backup config ls command |
| repoConfigId | An identifier of the repository configuration. It is returned by the backup repo-config create command and can be obtained with the backup repo-config ls command |
| configParams | Options that were used to create the backup configuration. For information on each available option, see Input JSON file parameters for the All parameters are displayed when the command is used for the specific backup configuration ( |
$ adbm backup config ls --backup-format logical
Result:
[ {
"id" : 2,
"repoConfigId" : 1,
"configParams" : {
"excludeSchema" : [ ],
"excludeTable" : [ ],
"includeSchema" : [ ],
"includeTable" : [ ]
}
}, {
"id" : 3,
"repoConfigId" : 1,
"configParams" : {
"compressionLevel" : 1,
"compressionType" : "gzip",
"excludeSchema" : [ ],
"excludeTable" : [ ],
"includeSchema" : [ "public" ],
"includeTable" : [ ]
}
} ]
$ adbm backup config ls --backup-format logical --config-id 3
Result:
{
"id" : 3,
"repoConfigId" : 1,
"configParams" : {
"compressionLevel" : 1,
"compressionType" : "gzip",
"excludeSchema" : [ ],
"excludeTable" : [ ],
"includeSchema" : [ "public" ],
"includeTable" : [ ],
"isDataOnly" : false,
"isDebug" : false,
"isIncremental" : false,
"isWithoutGlobals" : false,
"isLeafPartitionData" : false,
"isMetadataOnly" : false,
"isNoCompression" : false,
"isNoInherits" : false,
"isNoHistory" : false,
"isSingleDataFile" : false,
"isWithStats" : false,
"isMultiFormatBackups" : false
}
}
backup config delete
The adbm backup config delete command deletes the backup configuration with the specified identifier (previously created via the backup config create command).
Syntax:
$ adbm backup config delete \
--backup-format | -f <backupFormat> \
--config-id | -c <backupConfigId> \
[-a] \
[--log-level <logLevel>] \
[--log-path <logPath>] \
[--help | -h]
IMPORTANT
You cannot delete the most recently created backup configuration. When attempting to do so, you will get the following error: The last logical backup config cannot be deleted
| Option | Description | Mandatory | Default |
|---|---|---|---|
| --backup-format \| -f | Determines whether the command operates with binary or logical backups. Possible values: | Yes | — |
| --config-id \| -c | An identifier of the backup configuration that should be deleted. It is returned by the backup config create command and can be obtained with the backup config ls command | Yes | — |
| -a | Defines whether to launch the command automatically (without user confirmation) | No | false |
| --log-level | A logging level for the command | No | info |
| --log-path | A path where the command log files are stored | No | /home/gpadmin/gpAdminLogs |
| --help \| -h | Displays a helper with a list of options supported by the command | No | — |
In the command output, you can see an identifier of the ADBM action in the following format:
Action id = <actionId>
The successful result also contains an identifier of the removed backup configuration:
Deletion of <backupConfigId> completed
$ adbm backup config delete --backup-format logical --config-id 1
Result:
Are you sure you want to delete logical backup configuration? (y/n):y Delete backup configuration in progress. Action id = 89034934-3500-4d1c-93b8-1d91b4ac1c17. Performing "Start 'Delete logical backup configuration' action".done Deletion of 1 completed
backup create
The adbm backup create command creates a database backup based on the passed options. For logical backups, the gpbackup utility is used.
Syntax:
$ adbm backup create \
--backup-format | -f <backupFormat> \
--dbname | --database | -d <dbName> \
[--config-id | -c <backupConfigId>] \
[--file <file>] \
[--log-level <logLevel>] \
[--log-path <logPath>] \
[--help | -h]
| Option | Description | Mandatory | Default |
|---|---|---|---|
| --backup-format \| -f | Determines whether the command operates with binary or logical backups. Possible values: | Yes | — |
| --dbname \| --database \| -d | A name of the ADB database that should be backed up | Yes | — |
| --config-id \| -c | An identifier of the backup configuration that should be used for the current backup. It is returned by the backup config create command and can be obtained with the backup config ls command | No | — |
| --file | A path to the JSON configuration file with the backup configuration parameters. See Input JSON file parameters below | No | — |
| --log-level | A logging level for the command | No | info |
| --log-path | A path where the command log files are stored | No | /home/gpadmin/gpAdminLogs |
| --help \| -h | Displays a helper with a list of options supported by the command | No | — |
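Putting the options above together, a typical logical backup run might look as follows. The database name adb and configuration identifier 2 are illustrative placeholders, not values produced by the examples in this section; substitute your own:

```shell
$ adbm backup create --backup-format logical --dbname adb --config-id 2
```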
Template:
{
"compressionLevel": integer,
"compressionType": "string(enum)",
"copyQueueSize": integer,
"jobs": integer,
"pluginConfig": "string",
"excludeSchema": [
"string"
],
"excludeTable": [
"string"
],
"excludeSchemaFile": "string",
"excludeTableFile": "string",
"includeSchema": [
"string"
],
"includeTable": [
"string"
],
"includeSchemaFile": "string",
"includeTableFile": "string",
"isDataOnly": boolean,
"isDebug": boolean,
"isIncremental": boolean,
"isWithoutGlobals": boolean,
"isLeafPartitionData": boolean,
"isMetadataOnly": boolean,
"isNoCompression": boolean,
"isNoInherits": boolean,
"isNoHistory": boolean,
"isSingleDataFile": boolean,
"isWithStats": boolean,
"isMultiFormatBackups": boolean
}
| Field | Description | Default |
|---|---|---|
| compressionLevel | Specifies the compression level used to compress data files. Allowed values: If the option value is not set, gpbackup uses the | 1 |
| compressionType | Specifies the compression type used to compress data files. Possible values: If the option value is not set, gpbackup uses the | gzip |
| copyQueueSize | Specifies the number of Requires | — |
| jobs | Specifies the number of jobs to run in parallel when backing up tables. By default, one connection is used. Increasing this number can improve the speed of backing up data. You cannot use this option in combination with the | 1 |
| pluginConfig | The location of the gpbackup plugin configuration file (YAML-formatted text file). If you specify the Cannot be combined with | — |
| excludeSchema | An array of schemas to exclude from the backup. You cannot combine this option with other schema or table filtering options | {} |
| excludeTable | An array of tables to exclude from the backup. Each table should be in the format If you specify a leaf partition name, gpbackup ignores it. The leaf partition is not excluded. You cannot combine this option with other schema or table filtering options | {} |
| excludeSchemaFile | Specifies a text file that contains a list of schemas to exclude from the backup. Each line in the text file should define a single schema. The file should not include trailing lines. If a schema name uses any character other than a lowercase letter, number, or an underscore character, include that name in double quotes. You cannot combine this option with other schema or table filtering options | — |
| excludeTableFile | Specifies a text file that contains a list of tables to exclude from the backup. Each line in the text file should define a single table using the format If you specify a leaf partition name, gpbackup ignores it. The leaf partition is not excluded. You cannot combine this option with other schema or table filtering options | — |
| includeSchema | An array of schemas to include in the backup. Other schemas are omitted from the backup. You cannot combine this option with other schema or table filtering options | {} |
| includeTable | An array of tables to include in the backup. Other tables are omitted from the backup. Each table should be in the format You can also specify the qualified name of a sequence, a view, or a materialized view. In that case, you should also explicitly specify the dependent objects that are required. Optionally, if you use the You cannot combine this option with other schema or table filtering options | {} |
| includeSchemaFile | Specifies a text file that contains a list of schemas to include in the backup. Other schemas are omitted from the backup. Each line in the text file should define a single schema. The file should not include trailing lines. If a schema name uses any character other than a lowercase letter, number, or an underscore character, include that name in double quotes. You cannot combine this option with other schema or table filtering options | — |
| includeTableFile | Specifies a text file that contains a list of tables to include in the backup. Other tables are omitted from the backup. Each line in the text file should define a single table using the format You can also specify the qualified name of a sequence, a view, or a materialized view. In that case, you should also explicitly specify the dependent objects that are required. Optionally, if you use the You cannot combine this option with other schema or table filtering options | — |
| isDataOnly | Defines whether to back up only the table data into CSV files — without metadata files needed to recreate the tables and other database objects | false |
| isDebug | Defines whether to display verbose debug messages during the backup | false |
| isIncremental | Defines whether to make an incremental backup | false |
| isWithoutGlobals | Defines whether to omit the global system objects during the backup | false |
isLeafPartitionData |
For partitioned tables, defines whether to create one data file per leaf partition instead of one data file for the entire table |
false |
isMetadataOnly |
Defines whether to create only the metadata files (DDL) needed to recreate the database objects without backing up the actual table data |
false |
isNoCompression |
Defines whether not to compress the CSV files with table data |
false |
isNoInherits |
When invoked, only the metadata of the table itself is backed up, ignoring any inheritance relationships with other tables that would normally cause those tables to also be included in the backup set. Only works when invoked with either the |
false |
isNoHistory |
When invoked, gpbackup does not write backup run metadata to the history database |
false |
isSingleDataFile |
Defines whether to create a single data file on each segment host for all tables backed up on that segment. By default, gpbackup creates one CSV file for each table that is backed up on the segment |
false |
isWithStats |
Defines whether to include query plan statistics in the backup set |
false |
isMultiFormatBackups |
Defines whether to launch binary and logical backups simultaneously |
false |
NOTE
All fields in the JSON file are optional.
|
In the command output, you can see an identifier of the ADBM action in the following format:
Action id = <actionId>
The successful result also contains the name of the created backup:
Logical backup <backupName> created
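In scripts, the action id can be captured from the command output for later use with the backup status command. A minimal sketch, using a sample output line in place of the real `adbm` output (the `result` and `action_id` variable names are illustrative):

```shell
# Extract the action id from the "Action id = <actionId>" fragment of
# the command output (a sample line stands in for real adbm output).
result='Create backup in progress. Action id = 5b9ffc21-1578-4e9d-a152-066cf2c02102. ...done Logical backup 20240726105135 created'
action_id=$(printf '%s\n' "$result" | sed -n 's/.*Action id = \([0-9a-f-]*\)\..*/\1/p')
echo "$action_id"
```

The captured value can then be passed to `adbm backup status --action-id`.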
- Run the following command to create a backup:
$ adbm backup create --dbname test --backup-format logical --config-id 2
Result:
Create backup in progress. Action id = 5b9ffc21-1578-4e9d-a152-066cf2c02102. Performing "Start 'Create logical backup' action".done Logical backup 20240726105135 created
- Check that options from the specified backup configuration (--config-id=2) were applied to the backup, since the --file option was not used. To do this, run the backup ls command. In the example below, all options have default values, since --file was not used when creating the backup configuration as well:
$ adbm backup ls --backup-format logical --backup-name 20240726105135
Result:
{
  "id" : 9,
  "name" : "20240726105135",
  "configId" : 2,
  "clusterId" : 1,
  "startTime" : 1721991095000,
  "updateTime" : 1721991110827,
  "status" : "done",
  "type" : "full",
  "version" : "1.30.3",
  "database" : "test",
  "configParams" : {
    "excludeSchema" : [ ],
    "excludeTable" : [ ],
    "includeSchema" : [ ],
    "includeTable" : [ ],
    "isDataOnly" : false,
    "isDebug" : false,
    "isIncremental" : false,
    "isWithoutGlobals" : false,
    "isLeafPartitionData" : false,
    "isMetadataOnly" : false,
    "isNoCompression" : false,
    "isNoInherits" : false,
    "isNoHistory" : false,
    "isSingleDataFile" : false,
    "isWithStats" : false,
    "isMultiFormatBackups" : false
  }
}
- Create a JSON file with configuration parameters:
$ vi backup.json
{ "compressionLevel": 2, "compressionType": "zstd", "isMetadataOnly": true }
- Run the following command to create a backup:
$ adbm backup create --dbname test --backup-format logical --config-id 3 --file backup.json
Result:
Create backup in progress. Action id = 9e95c4c2-f9f9-46ea-a0b5-cad35d8961f8. Performing "Start 'Create logical backup' action".done Logical backup 20240726095356 created
- Check that the options you passed were applied to the backup — via the backup ls command:
$ adbm backup ls --backup-format logical --backup-name 20240726095356
Result:
{
  "id" : 3,
  "name" : "20240726095356",
  "configId" : 3,
  "clusterId" : 1,
  "startTime" : 1721987636000,
  "updateTime" : 1721987651035,
  "status" : "done",
  "type" : "full",
  "version" : "1.30.3",
  "database" : "test",
  "configParams" : {
    "compressionLevel" : 2,
    "compressionType" : "zstd",
    "excludeSchema" : [ ],
    "excludeTable" : [ ],
    "includeSchema" : [ "public" ],
    "includeTable" : [ ],
    "isDataOnly" : false,
    "isDebug" : false,
    "isIncremental" : false,
    "isWithoutGlobals" : false,
    "isLeafPartitionData" : false,
    "isMetadataOnly" : true,
    "isNoCompression" : false,
    "isNoInherits" : false,
    "isNoHistory" : false,
    "isSingleDataFile" : false,
    "isWithStats" : false,
    "isMultiFormatBackups" : false
  }
}
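Before passing a parameter file via --file, it can be worth validating its JSON syntax. A minimal sketch using the standard python3 JSON tool, recreating the backup.json from the example above (the `check` variable name is illustrative):

```shell
# Recreate the parameter file from the example above, then verify that
# it parses as valid JSON before using it with `adbm ... --file`.
cat > backup.json <<'EOF'
{ "compressionLevel": 2, "compressionType": "zstd", "isMetadataOnly": true }
EOF
check=$(python3 -m json.tool backup.json > /dev/null && echo "valid")
echo "backup.json: ${check}"
```

A syntax error in the file would make `python3 -m json.tool` exit with an error before the backup command is ever run.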
backup restore
The adbm backup restore
command restores data from the specified backup (previously created via the backup create command) based on the passed options. For logical backups, the gprestore utility is used.
Syntax:
$ adbm backup restore \
--backup-format | -f <backupFormat> \
--backup-name <backupName> \
[--file <file>] \
[--log-level <logLevel>] \
[--log-path <logPath>] \
[--help | -h]
Option | Description | Mandatory | Default |
---|---|---|---|
--backup-format | -f |
Determines whether the command operates with binary or logical backups. Possible values:
|
Yes |
— |
--backup-name |
A name of the backup to restore. It is returned by the backup create command and can be obtained with the backup ls command |
Yes |
— |
--file |
A path to the JSON configuration file with the restore operation parameters. When this option is used, the configuration parameters are taken from the specified file. See Input JSON file parameters below |
No |
— |
--log-level |
A logging level for the command |
No |
info |
--log-path |
A path where the command log files are stored |
No |
/home/gpadmin/gpAdminLogs |
--help | -h |
Displays a helper with a list of options supported by the command |
No |
— |
Template:
{
"copyQueueSize": integer,
"jobs": integer,
"pluginConfig": "string",
"redirectSchema": "string",
"redirectDb": "string",
"excludeSchema": [
"string"
],
"excludeTable": [
"string"
],
"excludeSchemaFile": "string",
"excludeTableFile": "string",
"includeSchema": [
"string"
],
"includeTable": [
"string"
],
"includeSchemaFile": "string",
"includeTableFile": "string",
"isDebug": boolean,
"isDataOnly": boolean,
"isIncremental": boolean,
"isMetadataOnly": boolean,
"isWithStats": boolean,
"isCreateDb": boolean,
"isTruncateTable": boolean,
"isResizeCluster": boolean,
"isOnErrorContinue": boolean,
"isWithGlobals": boolean,
"isRunAnalyze": boolean
}
Field | Description | Default |
---|---|---|
copyQueueSize |
Specifies the number of Can be used only if the backup was taken with the |
— |
jobs |
Specifies the number of parallel connections to use when restoring table data and metadata. By default, one connection is used. Increasing this number can improve the speed of restoring data. If the backup was created with the |
1 |
pluginConfig |
The location of the gpbackup plugin configuration file (YAML-formatted text file). If you specify the Cannot be combined with |
— |
redirectSchema |
Defines whether to restore data in the specified schema instead of the original schemas. The selected schema should exist. This option should be used with any option that includes tables or schemas: You cannot use this option with an option that excludes schemas or tables such as You can use this option with the |
— |
redirectDb |
Defines whether to restore data in the specified database instead of the database that was backed up |
— |
excludeSchema |
Specifies an array of database schemas to exclude from the restore operation. You cannot combine this option with other schema or table filtering options |
{} |
excludeTable |
Specifies an array of tables to exclude from the restore operation. Each table should be in the format You cannot combine this option with other schema or table filtering options |
{} |
excludeSchemaFile |
Specifies a text file that contains a list of database schemas to exclude from the restore operation. Each line in the text file should define a single schema. The file should not include trailing lines. If a schema name uses any character other than a lowercase letter, number, or an underscore character, include that name in double quotes. You cannot combine this option with other schema or table filtering options |
— |
excludeTableFile |
Specifies a text file that contains a list of tables to exclude from the restore operation. Each line in the text file should define a single table using the format You cannot specify a leaf partition of a partitioned table. You cannot combine this option with other schema or table filtering options |
— |
includeSchema |
Specifies an array of database schemas to restore. Other schemas are omitted from the restore operation. You cannot combine this option with other schema or table filtering options |
{} |
includeTable |
Specifies an array of tables to restore. Other tables are omitted from the restore operation. Each table should be in the format You can also specify the qualified name of a sequence, a view, or a materialized view. In that case, you should also explicitly specify the dependent objects that are required. You cannot combine this option with other schema or table filtering options |
{} |
includeSchemaFile |
Specifies a text file that contains a list of database schemas to restore. Other schemas are omitted from the restore operation. Each line in the text file should define a single schema. The file should not include trailing lines. If a schema name uses any character other than a lowercase letter, number, or an underscore character, include that name in double quotes. You cannot combine this option with other schema or table filtering options |
— |
includeTableFile |
Specifies a text file that contains a list of tables to restore. Other tables are omitted from the restore operation. Each line in the text file should define a single table using the format You cannot specify a leaf partition of a partitioned table. You can also specify the qualified name of a sequence, a view, or a materialized view. In that case, you should also explicitly specify the dependent objects that are required. You cannot combine this option with other schema or table filtering options |
— |
isDataOnly |
Allows you to restore table data from a backup without creating the database tables and other objects. This option assumes the database objects exist in the target database |
false |
isDebug |
Defines whether to display verbose and debug log messages during a restore operation |
false |
isIncremental |
Allows you to restore only the table data in the incremental backup specified by the Requires the This is a beta feature and is not recommended for production environments |
false |
isMetadataOnly |
Allows you to create database tables and other objects from a backup without restoring the table data. This option assumes the database objects do not exist in the target database |
false |
isWithStats |
Defines whether to restore query plan statistics from the backup set. If the selected backup was not created with the Cannot be combined with |
false |
isCreateDb |
Defines whether to create the database before restoring the database object metadata. The database is created by cloning the empty standard system database |
false |
isTruncateTable |
Defines whether to truncate data from a set of tables before restoring the table data from a backup. This option lets you replace table data with data from a backup. Otherwise, table data might be duplicated. You should specify the set of tables with either the option Can be combined with the |
false |
isResizeCluster |
Enables restoring data to a cluster that has a different number of segments than the cluster from which the data was backed up |
false |
isOnErrorContinue |
Defines whether to continue the restore operation if an SQL error occurs when creating database metadata (such as tables, roles, or functions) or restoring data. If another type of error occurs, the utility exits. The default is to exit on the first error. If the option is specified, the Additionally, when the option is set, the gprestore utility writes text files to the backup directory that contain a list of tables that generated SQL errors |
false |
isWithGlobals |
Defines whether to restore ADB system objects in the backup set, in addition to database objects |
false |
isRunAnalyze |
Defines whether to run If Cannot be combined with Depending on the tables being restored, running |
false |
NOTE
All fields in the JSON file are optional.
|
In the command output, you can see an identifier of the ADBM action in the following format:
Action id = <actionId>
The successful result also contains the name of the backup whose data was restored:
Restore of <backupName> completed
$ adbm backup restore --backup-name 20240715151947 --backup-format logical
Result:
Create backup restore in progress. Action id = 9d9230ec-758e-4f3f-821e-74aa54ce9f80. Performing "Start 'Create logical restore' action".done Restore of 20240715151947 completed
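In scripts, the completion message documented above can serve as a simple success check. A sketch against a sample output line (the `output` and `result` variable names are illustrative):

```shell
# Check the restore output for the "Restore of <backupName> completed"
# message (a sample line stands in for real adbm output).
output='Create backup restore in progress. Action id = 9d9230ec-758e-4f3f-821e-74aa54ce9f80. Performing "Start '\''Create logical restore'\'' action".done Restore of 20240715151947 completed'
case "$output" in
  *'Restore of '*' completed') result="restore succeeded" ;;
  *)                           result="restore failed" ;;
esac
echo "$result"
```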
- Create a JSON file with configuration parameters:
$ vi restore.json
{ "redirectDb": "test2", "isCreateDb": true, "isRunAnalyze": true }
- Run the following command to restore data from the specified backup:
$ adbm backup restore --backup-name 20240726121436 --backup-format logical --file restore.json
Result:
Create backup restore in progress. Action id = 618badbe-1b9d-4d90-8812-7112c99517b8. Performing "Start 'Create logical restore' action".done Restore of 20240726121436 completed
backup ls
The adbm backup ls
command lists existing backups (previously created via the backup create command).
Syntax:
$ adbm backup ls \
--backup-format | -f <backupFormat> \
[--database | -d <database>] \
[--backup-type <backupType>] \
[--backup-status <backupStatus>] \
[--backup-name <backupName>] \
[--log-level <logLevel>] \
[--log-path <logPath>] \
[--help | -h]
Option | Description | Mandatory | Default |
---|---|---|---|
--backup-format | -f |
Determines whether the command operates with binary or logical backups. Possible values:
|
Yes |
— |
--database | -d |
A database name |
No |
— |
--backup-type |
A backup type. Possible values for logical backups:
|
No |
— |
--backup-status |
A backup status. Possible values for logical backups:
|
No |
— |
--backup-name |
A backup name. If set, the command displays information only for the specified backup. It is returned by the backup create command and can be obtained with the backup ls command |
No |
— |
--log-level |
A logging level for the command |
No |
info |
--log-path |
A path where the command log files are stored |
No |
/home/gpadmin/gpAdminLogs |
--help | -h |
Displays a helper with a list of options supported by the command |
No |
— |
The command returns a JSON array of existing backups.
Template:
[ {
"id" : integer,
"name" : "string",
"configId" : integer,
"clusterId" : integer,
"startTime" : timestamp,
"updateTime" : timestamp,
"status" : "string(enum)",
"type" : "string(enum)",
"version" : "string",
"database" : "string",
"configParams" : {
"compressionLevel": integer,
"compressionType": "string(enum)",
"copyQueueSize": integer,
"jobs": integer,
"pluginConfig": "string",
"excludeSchema": [
"string"
],
"excludeTable": [
"string"
],
"excludeSchemaFile": "string",
"excludeTableFile": "string",
"includeSchema": [
"string"
],
"includeTable": [
"string"
],
"includeSchemaFile": "string",
"includeTableFile": "string",
"isDataOnly": boolean,
"isDebug": boolean,
"isIncremental": boolean,
"isWithoutGlobals": boolean,
"isLeafPartitionData": boolean,
"isMetadataOnly": boolean,
"isNoCompression": boolean,
"isNoInherits": boolean,
"isNoHistory": boolean,
"isSingleDataFile": boolean,
"isWithStats": boolean,
"isMultiFormatBackups": boolean
}
} ]
Field | Description |
---|---|
id |
A backup identifier |
name |
A backup name. It is returned by the backup create command and can be obtained with the backup ls command |
configId |
A backup configuration identifier. It is returned by the backup config create command and can be obtained with the backup config ls command |
clusterId |
A cluster identifier |
startTime |
Record creation time (timestamp of the backup start) |
updateTime |
Record modification time (timestamp of the backup status update) |
status |
A backup status. Possible values for logical backups:
|
type |
A backup type. Possible values for logical backups:
|
version |
The version of the gpbackup utility that was used to create a backup |
database |
A database name |
configParams |
Configuration options that were used for the backup. For information on each available option, see Parameters of the input JSON file for the backup create command. All parameters are displayed when the command is used for the specific backup (when the --backup-name option is set) |
$ adbm backup ls --backup-format logical
Result:
[ {
"id" : 1,
"name" : "20240715151947",
"configId" : 2,
"clusterId" : 1,
"startTime" : 1721056787000,
"updateTime" : 1721056792351,
"status" : "done",
"type" : "full",
"version" : "1.30.3",
"database" : "test",
"configParams" : {
"excludeSchema" : [ ],
"excludeTable" : [ ],
"includeSchema" : [ ],
"includeTable" : [ ]
}
}, {
"id" : 2,
"name" : "20240726095046",
"configId" : 3,
"clusterId" : 1,
"startTime" : 1721987446000,
"updateTime" : 1721987462071,
"status" : "done",
"type" : "full",
"version" : "1.30.3",
"database" : "test",
"configParams" : {
"compressionLevel" : 1,
"compressionType" : "gzip",
"excludeSchema" : [ ],
"excludeTable" : [ ],
"includeSchema" : [ "public" ],
"includeTable" : [ ]
}
}, {
"id" : 3,
"name" : "20240726095356",
"configId" : 3,
"clusterId" : 1,
"startTime" : 1721987636000,
"updateTime" : 1721987651035,
"status" : "done",
"type" : "full",
"version" : "1.30.3",
"database" : "test",
"configParams" : {
"compressionLevel" : 2,
"compressionType" : "zstd",
"excludeSchema" : [ ],
"excludeTable" : [ ],
"includeSchema" : [ "public" ],
"includeTable" : [ ]
}
} ]
$ adbm backup ls --backup-format logical --backup-name 20240726095356
Result:
{
"id" : 3,
"name" : "20240726095356",
"configId" : 3,
"clusterId" : 1,
"startTime" : 1721987636000,
"updateTime" : 1721987651035,
"status" : "done",
"type" : "full",
"version" : "1.30.3",
"database" : "test",
"configParams" : {
"compressionLevel" : 2,
"compressionType" : "zstd",
"excludeSchema" : [ ],
"excludeTable" : [ ],
"includeSchema" : [ "public" ],
"includeTable" : [ ],
"isDataOnly" : false,
"isDebug" : false,
"isIncremental" : false,
"isWithoutGlobals" : false,
"isLeafPartitionData" : false,
"isMetadataOnly" : true,
"isNoCompression" : false,
"isNoInherits" : false,
"isNoHistory" : false,
"isSingleDataFile" : false,
"isWithStats" : false,
"isMultiFormatBackups" : false
}
}
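The startTime and updateTime fields in the output above are timestamps in milliseconds, so the backup duration can be derived with simple shell arithmetic (values taken from the example above; the variable names are illustrative):

```shell
# startTime and updateTime are millisecond timestamps from backup ls;
# their difference gives the time the backup took.
start=1721987636000
update=1721987651035
duration_s=$(( (update - start) / 1000 ))
echo "backup took ${duration_s} s"
```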
backup report
The adbm backup report
command displays a report for the specified backup (previously created via the backup create command).
Syntax:
$ adbm backup report \
--backup-format | -f <backupFormat> \
--backup-name <backupName> \
[--output <outputFormat>] \
[--log-level <logLevel>] \
[--log-path <logPath>] \
[--help | -h]
Option | Description | Mandatory | Default |
---|---|---|---|
--backup-format | -f |
Determines whether the command operates with binary or logical backups. Possible values:
|
Yes |
— |
--backup-name |
A name of the backup for which the report is displayed. It is returned by the backup create command and can be obtained with the backup ls command |
Yes |
— |
--output |
The command output format. Possible values:
|
No |
text |
--log-level |
A logging level for the command |
No |
info |
--log-path |
A path where the command log files are stored |
No |
/home/gpadmin/gpAdminLogs |
--help | -h |
Displays a helper with a list of options supported by the command |
No |
— |
The command returns a detailed report for the specified backup (in text or JSON form, depending on the --output
option value). The report contains the ADB and gpbackup versions, the command options used to create the selected backup, the backup start and end timestamps, and other useful information.
$ adbm backup report --backup-name 20241206091238 --backup-format logical --output text
Result:
Greenplum Database Backup Report

timestamp key: 20241206091238
gpdb version: 6.27.1_arenadata59 build 3956+gitc4500d63cbe
gpbackup version: 1.30.5

database name: test
command line: gpbackup --dbname test --backup-dir /dasha --compression-level 2 --compression-type zstd --verbose
compression: zstd
plugin executable: None
backup section: All Sections
object filtering: None
includes statistics: No
data file format: Multiple Data Files Per Segment
incremental: False

start time: Fri Dec 06 2024 09:12:38
end time: Fri Dec 06 2024 09:12:40
duration: 0:00:02

backup status: Success

database size: 167 MB
segment count: 8

count of database objects in backup:
aggregates 0
casts 0
collations 0
constraints 4
conversions 0
default privileges 0
database gucs 0
event triggers 0
extensions 1
foreign data wrappers 0
foreign servers 0
functions 0
indexes 2
operator classes 0
operator families 0
operators 0
procedural languages 0
protocols 0
resource groups 2
resource queues 1
roles 3
rules 0
schemas 2
sequences 2
tables 6
tablespaces 0
text search configurations 0
text search dictionaries 0
text search parsers 0
text search templates 0
triggers 0
types 0
user mappings 0
views 0
$ adbm backup report --backup-name 20241206091238 --backup-format logical --output json
Result:
{
"report" : "Greenplum Database Backup Report\n\ntimestamp key: 20241206091238\ngpdb version: 6.27.1_arenadata59 build 3956+gitc4500d63cbe\ngpbackup version: 1.30.5\n\ndatabase name: test\ncommand line: gpbackup --dbname test --backup-dir /dasha --compression-level 2 --compression-type zstd --verbose\ncompression: zstd\nplugin executable: None\nbackup section: All Sections\nobject filtering: None\nincludes statistics: No\ndata file format: Multiple Data Files Per Segment\nincremental: False\n\nstart time: Fri Dec 06 2024 09:12:38\nend time: Fri Dec 06 2024 09:12:40\nduration: 0:00:02\n\nbackup status: Success\n\ndatabase size: 167 MB\nsegment count: 8\n\ncount of database objects in backup:\naggregates 0\ncasts 0\ncollations 0\nconstraints 4\nconversions 0\ndefault privileges 0\ndatabase gucs 0\nevent triggers 0\nextensions 1\nforeign data wrappers 0\nforeign servers 0\nfunctions 0\nindexes 2\noperator classes 0\noperator families 0\noperators 0\nprocedural languages 0\nprotocols 0\nresource groups 2\nresource queues 1\nroles 3\nrules 0\nschemas 2\nsequences 2\ntables 6\ntablespaces 0\ntext search configurations 0\ntext search dictionaries 0\ntext search parsers 0\ntext search templates 0\ntriggers 0\ntypes 0\nuser mappings 0\nviews 0\n"
}
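Individual fields can be pulled out of the text report with standard text tools. A minimal sketch that extracts the backup status (a trimmed sample of the report above is inlined; the `report` and `status` variable names are illustrative):

```shell
# Extract a single "key: value" field from the text report, here the
# backup status (a trimmed sample of the report stands in for the
# real adbm output).
report='backup status: Success
database size: 167 MB'
status=$(printf '%s\n' "$report" | sed -n 's/^backup status: //p')
echo "$status"
```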
backup delete
The adbm backup delete
command allows you to delete a backup with the specified name (previously created via the backup create command).
Syntax:
$ adbm backup delete \
--backup-format | -f <backupFormat> \
--backup-name <backupName> \
[--before] \
[-a] \
[--log-level <logLevel>] \
[--log-path <logPath>] \
[--help | -h]
Option | Description | Mandatory | Default |
---|---|---|---|
--backup-format | -f |
Determines whether the command operates with binary or logical backups. Possible values:
|
Yes |
— |
--backup-name |
A name of the backup to delete. It is returned by the backup create command and can be obtained with the backup ls command |
Yes |
— |
--before |
Indicates whether to delete all backups older than the given one |
No |
— |
-a |
Defines whether to launch the command automatically (without user confirmation) |
No |
false |
--log-level |
A logging level for the command |
No |
info |
--log-path |
A path where the command log files are stored |
No |
/home/gpadmin/gpAdminLogs |
--help | -h |
Displays a helper with a list of options supported by the command |
No |
— |
In the command output, you can see an identifier of the ADBM action in the following format:
Action id = <actionId>
The successful result also contains the name of the removed backup:
Deletion of <backupName> completed
$ adbm backup delete --backup-format logical --backup-name 20240715151947
Result:
Are you sure you want to delete logical backup? (y/n):y Delete backup in progress. Action id = 682cc090-e64a-4332-b084-8b1728dcf71a. Performing "Start 'Delete logical backup' action".done Deletion of 20240715151947 completed
backup status
The adbm backup status
command returns information about the specified action that was launched by one of the commands described above, such as backup create, backup restore, or backup delete.
Syntax:
$ adbm backup status \
--backup-format | -f <backupFormat> \
--action-id <actionId> \
[--all] \
[--log-level <logLevel>] \
[--log-path <logPath>] \
[--help | -h]
Option | Description | Mandatory | Default |
---|---|---|---|
--backup-format | -f |
Determines whether the command operates with binary or logical backups. Possible values:
|
Yes |
— |
--action-id |
A unique identifier of the action. It is returned by the backup create, backup restore, and backup delete commands |
Yes |
— |
--all |
Indicates whether to include information about child actions in the command output (in the childActions array) |
No |
— |
--log-level |
A logging level for the command |
No |
info |
--log-path |
A path where the command log files are stored |
No |
/home/gpadmin/gpAdminLogs |
--help | -h |
Displays a helper with a list of options supported by the command |
No |
— |
The command returns a JSON document with information on the specified action.
Template:
{
"clusterName" : "string",
"startTime" : timestamp,
"endTime" : timestamp,
"result": string,
"actionData" : {
"id" : "string",
"name" : "string",
"status" : "string(enum)",
"type" : "string(enum)",
"username" : "string"
},
"childActions" : [
{
"clusterName" : "string",
"startTime" : timestamp,
"endTime" : timestamp,
"result": string,
"actionData" : {
"id" : "string",
"name" : "string",
"status" : "string(enum)",
"type" : "string(enum)",
"username" : "string"
},
"childActions" : [],
"level" : 1
},
...
],
"level" : 0
}
Field | Description |
---|---|
clusterName |
A cluster name |
startTime |
An action start timestamp |
endTime |
An action end timestamp |
result |
An action result (if any) |
actionData.id |
An action identifier |
actionData.name |
An action name |
actionData.status |
An action status. Possible values:
|
actionData.type |
An action type. Possible values:
|
actionData.username |
The user who performed the selected action |
childActions |
An array of the child actions (if any). Filled in if the --all option is specified |
level |
An action level. The root action has level 0, and its child actions have level 1 |
$ adbm backup status --action-id 682cc090-e64a-4332-b084-8b1728dcf71a --backup-format logical
Result:
{
"clusterName" : "Test ADB cluster",
"startTime" : 1721146147611,
"endTime" : 1721146155632,
"actionData" : {
"id" : "682cc090-e64a-4332-b084-8b1728dcf71a",
"name" : "Start 'Delete logical backup' action",
"status" : "done",
"type" : "api",
"username" : "adcc"
},
"childActions" : [ ],
"level" : 0
}
$ adbm backup status --action-id 682cc090-e64a-4332-b084-8b1728dcf71a --backup-format logical --all
Result:
{
"clusterName" : "Test ADB cluster",
"startTime" : 1721146147611,
"endTime" : 1721146155632,
"actionData" : {
"id" : "682cc090-e64a-4332-b084-8b1728dcf71a",
"name" : "Start 'Delete logical backup' action",
"status" : "done",
"type" : "api",
"username" : "adcc"
},
"childActions" : [ {
"clusterName" : "Test ADB cluster",
"startTime" : 1721146147709,
"endTime" : 1721146147745,
"actionData" : {
"id" : "e59a4f9b-fbc2-45a2-ab5a-9014c3ee3dca",
"name" : "Get cluster topology",
"status" : "done",
"type" : "internal",
"username" : "adcc"
},
"childActions" : [ ],
"level" : 1
}, {
"clusterName" : "Test ADB cluster",
"startTime" : 1721146147751,
"endTime" : 1721146147778,
"actionData" : {
"id" : "0c43d0e1-0122-49e1-8409-540608c9e571",
"name" : "Save topology metadata",
"status" : "done",
"type" : "internal",
"username" : "adcc"
},
"childActions" : [ ],
"level" : 1
}, {
"clusterName" : "Test ADB cluster",
"startTime" : 1721146147789,
"endTime" : 1721146147808,
"actionData" : {
"id" : "6623b6f4-61df-4428-8443-0b0b86813d10",
"name" : "Get logical backup for deletion",
"status" : "done",
"type" : "internal",
"username" : "adcc"
},
"childActions" : [ ],
"level" : 1
}, {
"clusterName" : "Test ADB cluster",
"startTime" : 1721146147818,
"endTime" : 1721146155591,
"result" : "[{\"target\":\"10.92.43.169:6571\",\"requestId\":\"d7dfc140-efc6-4cff-a6de-b7cc68802455\",\"actionId\":\"100f4faf-e130-4ce7-b6c2-19b5ffe052b8\",\"clusterId\":1,\"success\":true,\"jsonBody\":null,\"errors\":[],\"segmentIds\":[-1],\"commands\":[{\"command\":\"delete logical backups from gpbackup_history.db\",\"segmentId\":-1}]},{\"target\":\"10.92.40.79:6571\",\"requestId\":\"38395ef8-0fea-4098-82f2-e78c74c1c959\",\"actionId\":\"100f4faf-e130-4ce7-b6c2-19b5ffe052b8\",\"clusterId\":1,\"success\":true,\"jsonBody\":null,\"errors\":[],\"segmentIds\":[4,5,6,7],\"commands\":[{\"command\":\"delete logical backups from gpbackup_history.db\",\"segmentId\":-1}]},{\"target\":\"10.92.40.109:6571\",\"requestId\":\"93a8b36b-4e5f-4d98-9625-eb28868e2a2a\",\"actionId\":\"100f4faf-e130-4ce7-b6c2-19b5ffe052b8\",\"clusterId\":1,\"success\":true,\"jsonBody\":null,\"errors\":[],\"segmentIds\":[0,1,2,3],\"commands\":[{\"command\":\"delete logical backups from gpbackup_history.db\",\"segmentId\":-1}]}]",
"actionData" : {
"id" : "100f4faf-e130-4ce7-b6c2-19b5ffe052b8",
"name" : "Physical removal of logical backups",
"status" : "done",
"type" : "internal",
"username" : "adcc"
},
"childActions" : [ ],
"level" : 1
}, {
"clusterName" : "Test ADB cluster",
"startTime" : 1721146155626,
"endTime" : 1721146155630,
"actionData" : {
"id" : "edc27555-fb92-4f37-81e5-961bae3f7347",
"name" : "Finish 'Delete logical backup' action",
"status" : "done",
"type" : "internal",
"username" : "adcc"
},
"childActions" : [ ],
"level" : 1
} ],
"level" : 0
}
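When the --all output is long, the child actions can be summarized with a short script. A sketch assuming the status JSON has been captured; a trimmed sample modeled on the output above is inlined, and the `children` variable name is illustrative:

```shell
# Summarize the child actions of a captured `adbm backup status --all`
# result as "name: status" lines (a trimmed JSON sample is inlined).
children=$(python3 - <<'PYEOF'
import json

data = json.loads("""
{
  "actionData": {"name": "Start 'Delete logical backup' action", "status": "done"},
  "childActions": [
    {"actionData": {"name": "Get cluster topology", "status": "done"},
     "childActions": [], "level": 1}
  ],
  "level": 0
}
""")
for child in data["childActions"]:
    action = child["actionData"]
    print(action["name"] + ": " + action["status"])
PYEOF
)
echo "$children"
```

The same loop applied to a real capture makes it easy to spot a failed child action without reading the full JSON tree.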