| For queries that have a slice executing on the coordinator, spill file entries in gp_toolkit.gp_workfile_entries and gp_toolkit.gp_workfile_usage_per_query might be duplicated (with different PIDs), resulting in an incorrect total size calculation |  —  |  —  | 
Use SELECT DISTINCT ON (segid, sess_id, command_cnt, prefix) in queries against the mentioned views to remove duplicate entries. For example:
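A minimal sketch (the DISTINCT ON key repeats the columns listed above; select whichever columns you need): 
SELECT DISTINCT ON (segid, sess_id, command_cnt, prefix) * FROM gp_toolkit.gp_workfile_entries;
 | 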
| 
There is an error in the description of the CPUSET option of the ALTER RESOURCE GROUP command in the VMWare documentation. 
Actually, the segment_cores value comes first, followed by the master_cores value |  —  |  —  | Use the correct description provided in the first table column.
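A sketch (rg_sample is a hypothetical resource group; following the corrected order above, the cores for segments come before the cores for the master): 
ALTER RESOURCE GROUP rg_sample SET CPUSET '1;0';
 | 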
| For the timestamp with time zone type, PXF does not always perform push-down | 6.27.1.57 |  —  | To ensure the correct handling of the timestamp with time zone type, an external table definition must include the date_wide_range=true parameter.
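A sketch of such a definition (the table, columns, profile, and server name are made up for illustration): 
CREATE EXTERNAL TABLE ext_events (id INT, created_at TIMESTAMP WITH TIME ZONE)
LOCATION ('pxf://public.events?PROFILE=jdbc&SERVER=default&date_wide_range=true')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
 | 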
| Some ulimit parameters (e.g. the nofile limit) may not be set when a cluster is started via the gpstart console utility under the gpadmin user (occurs in Astra Linux) | 6.25.2.52.b1 |  —  | 
When switching to gpadmin, do not use sudo -iu gpadmin. Use sudo su instead:
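One possible invocation (a sketch): 
$ sudo su gpadmin
 | 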
| 
If you made a backup via gpbackup for an ADB cluster of a version preceding 6.25.2.52.b1 and try to restore a database from that backup in ADB 6.25.2.52.b1 via gprestore, you can get the following error: 
[CRITICAL]:-ERROR: language "plpythonu" already exists (SQLSTATE 42710) | 6.25.2.52.b1 |  —  | Drop the PROCEDURAL LANGUAGE plpythonu in the target database before running gprestore.
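A sketch (note that the drop fails if objects still depend on the language; handle such dependencies first): 
DROP PROCEDURAL LANGUAGE plpythonu;
 | 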
| 
If you made a backup via gpbackup for an ADB cluster of a version preceding 6.25.2.52.b1 and try to restore a database from that backup in ADB 6.25.2.52.b1 via gprestore, you can get the following error: 
[CRITICAL]:-ERROR: function public.dblink_connect_no_auth(text) does not exist (SQLSTATE 42883) | 6.25.2.52.b1 |  —  | 
Run gprestore with the --metadata-only and --on-error-continue options.
Check that there are no critical errors in the error log (if errors are present, add the necessary relations manually).
Run gprestore with the --data-only option to restore data, as shown below.
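A possible sequence (assuming <backup_timestamp> identifies the backup being restored): 
$ gprestore --timestamp <backup_timestamp> --metadata-only --on-error-continue
$ gprestore --timestamp <backup_timestamp> --data-only
 | 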
| 
If you made a backup via gpbackup for an ADB cluster of a version preceding 6.25.2.52.b1 and try to restore a database from that backup in ADB 6.25.2.52.b1 via gprestore, you can get the following errors: 
[CRITICAL]:-ERROR: incompatible library "/usr/lib/gpdb/lib/postgresql/gptkh.so": version mismatch (dfmgr.c:367) (SQLSTATE XX000) 
[CRITICAL]:-ERROR: required extension "pxf_fdw" is not installed (SQLSTATE 42704) | 6.25.2.52.b1 |  —  | Update the extension packages gptkh and pxf_fdw in each database that uses these extensions and then update the extensions. To restore a backup, you only need to update the packages.
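A sketch of the extension update step (run in each database that uses the extensions, after the packages have been updated on the hosts): 
ALTER EXTENSION gptkh UPDATE;
ALTER EXTENSION pxf_fdw UPDATE;
 | 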
| 
The SELECT query to arenadata_toolkit.__db_files_current fails with the following error: 
psql:/tmp/tmpLWiv6c:1: ERROR:  illegal rescan of motion node: invalid plan (nodeMotion.c:1712)  (seg105 slice5 10.183.104.153:10003 pid=22864) (nodeMotion.c:1712)
HINT:  Likely caused by bad NL-join, try setting enable_nestloop to off | 6.25.1.49 |  —  | 
Via psql, connect to the database where you gather statistics (e.g. adb) under a superuser: 
$ psql adb -U <superuser>
Run the following command to alter the function: 
ALTER FUNCTION arenadata_toolkit.adb_get_relfilenodes(tablespace_oid OID) ROWS 30000000;
 | 
| The ADB bundle update action can fail with the following error: Failed: invalid syntax, for details look pg_hba.conf on master host | 6.24.3.48 |  —  | 
Back up the logs from the pg_log directory.
Clear or compress the log files.
Restart ADB, as shown below.
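A possible sequence (the backup location /data/pg_log_backup is hypothetical; the pg_log path follows the $MASTER_DATA_DIR layout used elsewhere on this page): 
$ cp -r $MASTER_DATA_DIR/gpseg-1/pg_log /data/pg_log_backup
$ gzip $MASTER_DATA_DIR/gpseg-1/pg_log/*.csv
$ gpstop -ar
 | 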
| 
After updating ADB to version 6.24.3.48.b1 or higher, you can get the following error when using gpbackup/gprestore: 
[CRITICAL]:-table backups has no column named segment_count 
The error occurs because gpbackup of version 1.28.1 added the segment_count field to the backups table of the service SQLite database gpbackup_history.db. 
This SQLite database is used to: 
Find a previous completed backup with the same options to create an incremental backup if the --from-timestamp parameter is omitted.
During the restore process, find a plugin version (e.g. gpbackup_s3_plugin) that was used to create the specified backup. 
Another side effect of the column absence is an incorrect value for the number of segments in backups that were already made. It leads to an error during a resize restore operation | 6.24.3.48.b1 |  —  | 
Use one of the following options (each one comes with certain limitations): 
Option 1. Use the --from-timestamp and --no-history gpbackup parameters together for incremental backups, as shown below. This option is not helpful when a plugin is used.
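A sketch of such an incremental backup (the database name and timestamp are made up for illustration): 
$ gpbackup --dbname adb --incremental --from-timestamp 20240101120000 --no-history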
Option 2. Drop the old history database file gpbackup_history.db after upgrading to gpbackup of version 1.28.1 or higher: 
$ rm $MASTER_DATA_DIR/gpseg-1/gpbackup_history.db
 
After dropping the file, you have to proceed with full backups: they are required to restore data from incremental backups and to keep the backup plugin working.
Option 3. Alter the backups table schema in the gpbackup_history.db SQLite database: 
$ sqlite3 $MASTER_DATA_DIR/gpseg-1/gpbackup_history.db
 
ALTER TABLE backups ADD COLUMN segment_count INT;
pragma table_info('backups'); -- check that the segment_count column has been added
 
You have to provide a valid number of segments if a resize restore operation is requested.
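A sketch (run in the same sqlite3 session; 8 is a hypothetical segment count of the cluster the backups were taken on): 
UPDATE backups SET segment_count = 8 WHERE segment_count IS NULL;
 | 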
| 
When data is selected from ClickHouse via ADB ClickHouse Connector, the following error occurs if a password is defined for the default user: 
DB::Exception: default: Authentication failed: password is incorrect, or there is no user with such name. | 6.24.3.47 |  —  | Do not use a password for the default user | 
| Due to technical limitations in the ADB DDBoost plugin, the restore_subset plugin option is automatically set to off when the on-error-continue option is specified | 6.22.0.38 |  —  | Do not use the on-error-continue option during data restore | 
| Expanding a cluster may fail if fully qualified domain names (FQDN) are used for the cluster hosts and the Check array flag is set | 6.16.1.20 |  —  | Unset the Check array flag during the cluster expansion process if FQDNs are used in the cluster | 
| 
When resource management based on resource groups is active, the memory allotted to a segment host is equally shared by active primary segments. Memory is assigned to a primary segment when the segment takes the primary role. The initial memory allotment to a primary segment is calculated on starting a cluster (with gpstart) and does not change, even when the number of primary segments increases in a failover situation. This may result in a segment host utilizing more memory than the gp_resource_group_memory_limit setting permits. | 6.12.1.11 |  —  | Restart your cluster to recalculate the memory allotment in case of an increased number of primary segments on hosts (e.g. after failover) | 
| There is an error in the COPY command description in the VMWare documentation. The FILL MISSING FIELDS option should be replaced with FILL_MISSING_FIELDS | 6.12.1.11 |  —  | Use the correct option provided in the first table column.
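A hypothetical invocation (the table and file names are made up; the option-list form of COPY is assumed): 
COPY sales FROM '/tmp/sales.csv' WITH (FORMAT csv, FILL_MISSING_FIELDS true);
 | 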
| 
ADB Monitoring may report an issue with metrics storage if RAID is used, with the following error in /var/log/diamond/diamond.log: 
error [MainThread] ATTRIBUTE_NAME not found in second column of smartctl output between lines 4 and 9 | 6.12.1.11 |  —  | Set the enabled = False parameter in /etc/diamond/collectors/SmartCollector.conf.
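A sketch of the collector config (the plain key = value format of Diamond collector configs is assumed): 
enabled = False
 | 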
| 
diskquota may lose table statistics when being paused during an ADB cluster restart. As a result, quotas may be violated.
 
diskquota calculates table sizes and periodically stores that information in the diskquota.table_size table, with a pause equal to diskquota.naptime (2 seconds by default). If you restart the ADB cluster during this interval, diskquota will lose all changes that have occurred since the last save to the diskquota.table_size table. For example, you could make a large insert into the tables, restart the cluster, and diskquota would not know about such a large insert. This happens because the diskquota.table_size table does not store information about the insert. Until these tables are active again, diskquota will not update their size. It can be useful to reinitialize diskquota by calling the diskquota.init_table_size_table function, but on large clusters this can take a significant amount of time.
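The reinitialization is a single call (diskquota.init_table_size_table is the function named above): 
SELECT diskquota.init_table_size_table();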
 | 6.12.1.11 |  —  | No workaround. There is no way to reliably persist in-memory changes from the coordinator and segments when processes get a termination signal | 
| The diskquota.pause() function description in the VMWare documentation is not correct. The right one is: diskquota.pause() makes bgworkers skip refreshing quotas entirely to avoid wasting computation resources. Table sizes are updated correctly after resume | 6.12.1.11 |  —  | Use the correct description provided in the first table column | 
| 
The formula for specifying the maximum number of table segments from the VMWare documentation is not correct. The right one is: 
diskquota.max_table_segments = <maximum_number_of_tables> * ((<number_of_segments> mod 100) + 1) * 100 | 6.12.1.11 |  —  | Use the correct formula provided in the first table column.
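A worked example that applies the formula literally: for 10000 tables and 64 segments, diskquota.max_table_segments = 10000 * ((64 mod 100) + 1) * 100 = 10000 * 65 * 100 = 65000000.
 | 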
| 
When trying to insert a row into a leaf partition of a partitioned table, you get the following error: 
SQL Error [23514]: ERROR: trying to insert row into wrong partition 
The issue reveals itself when some columns are dropped from a partitioned table and then partitions are added via ADD PARTITION or EXCHANGE PARTITION | 6.12.1.11 |  —  | Recreate the table or, if possible, do not drop columns from a partitioned table if you are going to add new partitions after that | 
| In CentOS and Astra Linux, you cannot install or update ADB to 6.27.1.58 without manual installation of Java 17 | 6.27.1.58 | 6.27.1.58 | Before ADB installation/update, install Java 17 on cluster hosts and fill in the JAVA_HOME parameter on the cluster configuration page | 
| ADB to ADB Connector installation fails due to the lack of extension files on segments | 6.25.1.51.b1 | 6.25.2.52.b1 | Install the adb-fdw package on all ADB segment nodes manually, e.g. install it en masse by running gpssh from the master host or run the installation on each node.
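A sketch of the en masse variant (seg_hosts is a hypothetical file listing the segment hosts; yum is assumed as the package manager): 
$ gpssh -f seg_hosts -e 'sudo yum install -y adb-fdw'
 | 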
| 
During installation of the ADB cluster (version < 6.23.3.44) with external ADB Control (version from 6.23.3.44 to 6.25.1.49), you get an inconsistency between agents and the server endpoint, and the bundle task fails: 
AssertionError: Action Install finished execution with unexpected result - 'failed'. Expected - 'success'
  TASK [adcc_client : Add ADB connection to ADCC] | 6.23.3.44 | 6.25.1.49 | No workaround. Only an upgrade to the fixed version resolves the issue | 
| In ADB 6 starting with 6.22.1.41, Madlib uses a new path for its libraries: /usr/local/madlib-adb6. After upgrading from previous ADB versions, Madlib stops working | 6.22.1.41 | N/A | 
Back up the madlib schema if necessary (see the sketch below).
Drop the madlib schema in cascade mode.
Run the Install Madlib action for the ADB service via ADCM.
Restore schema objects from the backup.
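A sketch of the first two steps (adb is a hypothetical database name; the dump path is made up): 
$ pg_dump -n madlib adb > /tmp/madlib_schema_backup.sql
$ psql adb -c 'DROP SCHEMA madlib CASCADE;'
 | 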
| The shared library diskquota-2.0.so does not appear in shared_preload_libraries during upgrade, with the following pg_log error on master: FATAL: could not access file "diskquota": no such file or directory | 6.22.1.41 | 6.24.3.48 | 
Ensure that the dry run of the sed utility works correctly: 
$ sed 's/diskquota/diskquota-2.0/g' /data1/*/gpseg*/postgresql.conf
Run the sed utility with the -i parameter to apply the changes: 
$ sed -i 's/diskquota/diskquota-2.0/g' /data1/*/gpseg*/postgresql.conf
 | 
| During installation of the encryption module, you get the following error: ERROR: Encryption library was not found in /var/lib/pxf/lib/ directory | 6.22.1.40 | 6.22.1.41 | 
Copy the necessary library from /usr/lib/pxf/lib/: 
$ cp /usr/lib/pxf/lib/encryption-1.0.0-exec.jar /var/lib/pxf/lib/encryption-1.0.0-exec.jar
 | 
| If you upgrade your cluster to 6.22.1.40, remember that during this process the arenadata_toolkit.db_files_history table is recreated: partitions, compression options, and all data are loaded into the new table. This process may take a long time for a huge table | 6.22.1.40 | 6.22.1.41 | 
If you do not want to include the arenadata_toolkit.db_files_history table migration into the upgrade process, rename this table before the upgrade and restore the original name afterwards: 
Run the query: 
ALTER TABLE arenadata_toolkit.db_files_history RENAME TO db_files_history_skip;
Run the Upgrade cluster action via ADCM and wait for a successful upgrade.
Run the query: 
ALTER TABLE arenadata_toolkit.db_files_history_skip RENAME TO db_files_history;
 | 
| If you upgrade your cluster to 6.22.1.40 with ADCC installed and the UI LDAP authentication parameter switched on, the newer ADB server is not registered in ADCC, and its version on the Information page in ADCC remains the same as for the older ADB server. It does not affect any functionality | 6.22.1.40 | 6.22.1.41 | 
Uninstall ADCC before upgrading the ADB cluster to 6.22.1.40.
Run Upgrade.
Reinstall ADCC after upgrade. | 
| [6X issue 13067] Gradual memory leak on mirror segments | 6.19.1.31 | 6.19.3.32 | Increase monitoring of memory consumption in the cluster and restart the cluster during maintenance if possible. This issue will be patched in the next release | 
| madlib on Power8 (ppc64le arch) breaks a gpdb build
 | 6.17.2.26 | 6.17.5.26 | No workaround. madlib is not included in the Power8 (ppc64le arch) build | 
| kafka-fdw of versions 0.11-0.12 (they were ported in 6.15.0.17) does not work correctly when a batch size exceeds 40000 - <msg_count> * 40 bytes (where <msg_count> is the number of messages). In this case, the SELECT operation from the Kafka to ADB external table causes SEGFAULT. The fix for the issue is expected in the next ADB release | 6.16.3.24 | 6.17.1.25 | Do not use a batch size larger than 40000 - <msg_count> * 40 bytes if you read more than one message. There are no limits for reading a single message.
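A worked example of the limit: for <msg_count> = 100 messages, the maximum safe batch size is 40000 - 100 * 40 = 36000 bytes.
 | 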
| The gpbackup utility of versions 1.20.1 through 1.20.4 does not work correctly when foreign tables are present in a database. Backup fails with the SQLSTATE 42809 error. The fix for the issue is expected in the next ADB release | 6.16.1.20 | 6.16.2.21 | Replace the current gpbackup with version 1.20.0, which is included in ADB 6.13.0.12 | 
| 
An attempt to upgrade a cluster with diskquota enabled and gp_default_storage_options='appendoptimized=true' results in the following error: 
ERROR: "append-only tables do not support unique indexes" | 6.12.1.11 | 6.25.1.49 | 
Set gp_default_storage_options='appendoptimized=false' before the upgrade, as shown below.
Upgrade the cluster.
Revert to gp_default_storage_options='appendoptimized=true'.
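A sketch of the first step (gpconfig writes the value to postgresql.conf; gpstop -u reloads the configuration): 
$ gpconfig -c gp_default_storage_options -v 'appendoptimized=false'
$ gpstop -u
 | 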
| The Upgrade action fails if appendoptimized is set to true in the gp_default_storage_options parameter for any database | 6.12.1.11 | 6.24.3.48.b1 | Revert the gp_default_storage_options value to appendoptimized=false before the upgrade and set it to true after | 
| gpperfmon can flood the $MASTER_DATA_DIR/gpseg-1/gpperfmon/ directory with q*.txt logs even if it is not in use
 | 6.12.1.11 | 6.23.5.45.b1 | 
Switch off the gp_enable_gpperfmon GUC: 
$ gpconfig -c gp_enable_gpperfmon -v off
Purge the $MASTER_DATA_DIR/gpseg-1/gpperfmon/ directory. | 
| When you expand ADB by running the Expand cluster action, you can get errors if cluster hosts were registered with fully qualified domain names (FQDNs). In that case, the hostname and address columns of the gp_segment_configuration table contain different values, which may cause errors during the Expand action | 6.12.1.11 | 6.23.3.44 | 
Before you expand an ADB cluster: 
Stop ADB, for example, via the following command:
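A possible command (a sketch; adjust the stop mode to your environment): 
$ gpstop -a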
Start the master segment in the utility mode:
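A possible sequence (gpstart -m starts the master only; the PGOPTIONS prefix opens a utility-mode psql session): 
$ gpstart -am
$ PGOPTIONS='-c gp_session_role=utility' psql postgres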
Set the allow_system_table_mods GUC value to true: 
SET allow_system_table_mods = 'true';
Fix gp_segment_configuration so that the hostname and address column values are the same: 
UPDATE gp_segment_configuration SET hostname = address;
Stop and start ADB in a standard mode:
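A possible sequence (gpstop -m stops the utility-mode master; then the whole cluster is started normally): 
$ gpstop -am
$ gpstart -a
 |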