Configure job policies
The Configuration → Job policy page in the ADB Control web interface allows you to configure schedules for different system jobs. This page includes two tabs, each of which is described in detail below.
Data cleanup
On the Configuration → Job policy → Data cleanup tab, you can configure a schedule for removing old metrics from ADB Control.

To configure the job, follow these steps:

- Activate the Enable switcher.
- Fill in the following parameters in the Main parameters section:
  - Batch size — the maximum number of rows in a data batch. Default value: 500.
  - Expire duration — the maximum time period for which metrics are stored in ADB Control. When this period expires, the metrics are removed during the next job launch. Use the following format — <value><unit> (without spaces), where <unit> can take the following values:
    - ms — milliseconds;
    - sec — seconds;
    - min — minutes;
    - hr — hours;
    - d — days;
    - w — weeks.

    Examples: 5min, 1hr, 1d. The minimum value is 100ms; the maximum is 1w. Default value: 5d.
  - Data cleanup schedule — a schedule for automatic data cleanup. Default value: 0 0 23 * * ? *.
- Click Apply. The Revert button undoes the changes that have not yet been saved by clicking Apply.

If all steps are completed successfully, the next launch time of the Data cleanup job is updated on the Jobs → ADCC → Schedule page.
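The Expire duration format described above can be validated programmatically. Below is a minimal sketch of a parser for the <value><unit> notation; the bounds (100ms and 1w) and the unit list come from the text above, while the function name and error handling are illustrative:

```python
import re

# Unit multipliers in milliseconds, matching the units listed above
UNITS_MS = {"ms": 1, "sec": 1_000, "min": 60_000, "hr": 3_600_000,
            "d": 86_400_000, "w": 604_800_000}

MIN_MS = 100            # minimum allowed value: 100ms
MAX_MS = UNITS_MS["w"]  # maximum allowed value: 1w

def parse_expire_duration(text: str) -> int:
    """Parse an expire duration like '5min' into milliseconds."""
    match = re.fullmatch(r"(\d+)(ms|sec|min|hr|d|w)", text)
    if not match:
        raise ValueError(f"invalid duration: {text!r}")
    value, unit = int(match.group(1)), match.group(2)
    ms = value * UNITS_MS[unit]
    if not MIN_MS <= ms <= MAX_MS:
        raise ValueError(f"duration out of range: {text!r}")
    return ms

print(parse_expire_duration("5d"))  # the default value, in milliseconds
```

A value such as 50ms or 2w would be rejected by the range check, mirroring the documented minimum and maximum.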
Metrics offload
On the Configuration → Job policy → Metrics offload tab, you can configure a schedule for uploading ADB Control metrics and events to an external database. To configure the job, follow these steps:

- On the target host, ensure that a database exists to store the uploaded metrics. Its name will be used when configuring the JDBC connection later.
- Grant ADB Control access to the selected external database. For example, PostgreSQL requires the following record in the pg_hba.conf file:

  host <database_name> <user_name> <adbc_address> trust

  where:
  - <database_name> — a database name in PostgreSQL. For example, adbc_metrics.
  - <user_name> — a user name in PostgreSQL. For example, postgres.
  - <adbc_address> — an IP address of the host where ADB Control is deployed, in CIDR notation (with a subnet mask length). For example, 10.92.18.1/32.
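The pg_hba.conf record above can be generated and sanity-checked with a few lines of code. A minimal sketch, assuming the example values from the text; the function name is illustrative:

```python
import ipaddress

def pg_hba_record(database: str, user: str, adbc_address: str) -> str:
    """Build a pg_hba.conf host record granting ADB Control access.

    adbc_address must be in CIDR notation, e.g. 10.92.18.1/32.
    """
    # Raises ValueError if the address is not valid CIDR notation.
    ipaddress.ip_network(adbc_address, strict=False)
    return f"host {database} {user} {adbc_address} trust"

# Example values from the text above
print(pg_hba_record("adbc_metrics", "postgres", "10.92.18.1/32"))
# host adbc_metrics postgres 10.92.18.1/32 trust
```

After editing pg_hba.conf, remember that PostgreSQL must reload its configuration for the new record to take effect.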
- Go to the Configuration → Job policy → Metrics offload tab in the ADB Control interface.
- Activate the Service enabled switcher.
- Click Edit connection in the Analytic database connection section.
- In the window that opens, fill in the following fields:
  - JDBC URL — a URL of the JDBC connection to the host where the target database is located. For PostgreSQL, use the following format:

    jdbc:postgresql://<external_ip>:5432/<db_name>

    where:
    - <external_ip> — an IP address of the host where the target database is located. For example, 10.92.6.225.
    - <db_name> — a target database name. For example, adbc_metrics.
  - User — a user name. In the following example, the default user postgres is used.
  - Password — a user password.
- Click Save.
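The JDBC URL format above can be expressed as a small helper. A minimal sketch; the host and database names are the examples from the text, and 5432 is PostgreSQL's default port:

```python
def jdbc_url(external_ip: str, db_name: str, port: int = 5432) -> str:
    """Build a PostgreSQL JDBC URL in the format shown above."""
    return f"jdbc:postgresql://{external_ip}:{port}/{db_name}"

print(jdbc_url("10.92.6.225", "adbc_metrics"))
# jdbc:postgresql://10.92.6.225:5432/adbc_metrics
```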
- Fill in the following fields in the Main parameters section of the same form:
  - Batch size — the maximum number of rows in a data batch sent to the external database. Default value: 1000.
  - Offload metrics job schedule — a schedule for the automatic data export to the external database. Default value: 0 0 * * * ? *.
- Click Apply. The Revert button undoes the changes that have not yet been saved by clicking Apply.
- Make sure that a message about the successful result is displayed.
- As a result, the adcc schema should be created in the external database. The tables in this schema will store metrics and events collected from ADB Control. To check whether all necessary objects are created, you can run the following query in PostgreSQL:

  SELECT table_name FROM information_schema.tables WHERE table_schema = 'adcc' ORDER BY table_name;

  Result:

  table_name
  -----------------------------
  audit_auth
  audit_auth_adb
  audit_operation
  cluster
  cluster_query_metric
  query
  query_error
  resgroup_config_audit
  resgroup_status
  resgroup_status_per_segment
  transaction
  (11 rows)
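The check above can also be scripted. A minimal sketch that compares a fetched table list against the expected one; the expected names are taken from the query output above, while the function name is illustrative (fetching the rows from PostgreSQL itself is left out):

```python
# Expected adcc tables, as listed in the query output above
EXPECTED_TABLES = {
    "audit_auth", "audit_auth_adb", "audit_operation", "cluster",
    "cluster_query_metric", "query", "query_error",
    "resgroup_config_audit", "resgroup_status",
    "resgroup_status_per_segment", "transaction",
}

def missing_tables(fetched: list[str]) -> set[str]:
    """Return the expected adcc tables absent from the fetched list."""
    return EXPECTED_TABLES - set(fetched)

# Feed this function the rows returned by the information_schema query;
# an empty result means the schema is complete.
```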
If all steps are completed successfully, the new scheduled job Export job should be added to the Jobs → ADCC → Schedule page.

IMPORTANT
In addition to the job schedule, you should specify the clusters and databases whose metrics should be exported to the external database. For more information, see Cluster management.
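The default schedules shown above (0 0 23 * * ? * and 0 0 * * * ? *) look like seven-field, Quartz-style cron expressions; that interpretation is an assumption, not something this page states. Under that assumption, the fields can be labeled as follows:

```python
# Field names in the seven-field (Quartz-style) cron format that the
# schedules above appear to use. This labeling is an assumption, not
# confirmed by this page.
FIELDS = ["seconds", "minutes", "hours", "day of month",
          "month", "day of week", "year"]

def describe_schedule(expr: str) -> dict[str, str]:
    """Map each field of a seven-field cron expression to its name."""
    parts = expr.split()
    if len(parts) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} fields, got {len(parts)}")
    return dict(zip(FIELDS, parts))

# Under this labeling, the default Data cleanup schedule fires daily
# at 23:00:00, and the default offload schedule fires at the start of
# every hour.
print(describe_schedule("0 0 23 * * ? *"))
```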