ADH releases
2.1.7
Date: 20.12.2022
Added the livy-spark3 component to the Spark3 service
Added Hive delegation token
Added the Apply configs from ADCM checkbox for all services
Flink build 1.15.1 is available
Added the ability to connect to Flink JobManager in the high availability mode
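Connecting to a JobManager in HA mode relies on ZooKeeper-based leader election rather than a fixed address. A minimal sketch of the relevant flink-conf.yaml settings in stock Flink 1.15 (the hosts, storage path, and cluster ID are placeholders, not values from this release):

```yaml
# ZooKeeper-based high availability (hypothetical hosts and paths)
high-availability: zookeeper
high-availability.zookeeper.quorum: zk1:2181,zk2:2181,zk3:2181
high-availability.storageDir: hdfs:///flink/ha/
high-availability.cluster-id: /adh-flink
```

With these settings, clients resolve the current leader through ZooKeeper instead of a static jobmanager.rpc.address.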
Fixed the High Availability mode activation for YARN ResourceManager
Fixed: failed to enable SSL with Airflow2 on AltLinux
Fixed: Flink TaskManager doesn't start in the High Availability mode
Fixed: no metrics when the Monitoring service was installed in a wrong order
Fixed: missing Spark3 actions Add Spark3 Livy, Remove Spark3 Livy
Fixed: Flink upgrade from 2.1.6 to 2.1.7 fails if Job Manager is not collocated with the HDFS Client
Fixed: missing parameter
Fixed: Flink installation failed
Fixed the Manage ranger plugin action in HBase
Fixed SSL settings for the following services:
Fixed: the Enable SSL action failed due to absence of ranger-hbase-plugin
Fixed the upgrade from 2.1.6.b4-1 to 2.1.7.b1-pre_rc
Fixed: incorrect minimum version for an upgrade
Fixed: the Enable HA/Disable HA actions lead to a configuration divergence of hive-site.xml on servers
Fixed the Disable HA HiveServer2 action error
Fixed the HBase SASL error while connecting to a kerberized ZooKeeper (no hbase-jaas.conf)
Fixed the error when expanding Hive Server 2 on a host with Hive Metastore
Fixed the Add Hive Metastore action
Fixed: no description for the Reliability Control → timeout field
Fixed: the Scheduler type does not pass to the yarn_scheduler_jmx filter
Fixed: passwords shown in the Ansible log during the Enable SSL action
Fixed: wrong Spnego keytab value when expanding Hive Server 2
Fixed the components list on the Hosts - Components page
Fixed: need to install the jdbc-mysql driver on nodes with the hive-client service when internal MariaDB is used
Fixed: the Enable Ranger plugin action for Hive failed if no YARN service and policies are defined in Ranger
Fixed: enabling SSL repeatedly failed
Fixed: the Spark Thriftserver check fails due to FairScheduler being used
Fixed: YARN reconfiguration fails if FairScheduler is enabled
Fixed errors with the order of hosts decommissioning
Fixed the bug that prevented running Flink on YARN
Added package checks optimizations for the installation
Added a cleanup of MIT credentials after disabling Kerberos in an ADH cluster for the following services:
Passwords are hidden for the Config check and Enable SSL actions for the following services:
Refactored Livy impersonation options
Added the ability to configure Livy impersonation
Added additional
The Disable HA HiveServer2 action uses the
Performed optimizations of actions
Added the ability to delete a service in the
Added
Fixed naming for service configs (HDFS, HBase, Hive)
Fixed naming for ADH Service actions
Fixed naming and typos for ADH install actions
Changed the sequence of actions for Kerberos, SSL, and cluster installation
2.1.6
Date: 16.09.2022
Added support for AltLinux 8.4
Added support for customization of ldap.conf via ADCM
Improved error handling when using cURL
Refactored Hive logging
Fixed: cannot change parameters in ranger-yarn-policymgr-ssl.xml (YARN)
Fixed:
Fixed a configuration name after Flink restarts
Fixed: the Zeppelin user does not exist when enabling Ranger for YARN
Fixed: cannot change the nameservice after a faulty installation (HDFS)
Fixed: a cluster upgrade always switches the state to
Fixed: a Spark 3 upgrade does not disable old repositories on Alt Linux
Fixed: the Reconfigure Kerberos action has no
Fixed: YARN Resource Manager does not start on Alt 8.4
Fixed
Fixed inconsistency between security settings in Spark and Spark3 configs
Changed check code due to the Scala version upgrade
2.1.4
Date: 08.08.2022
Added the ability to specify external nameservices
Added the ability to connect to HiveServer2 in the fault-tolerant mode
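Fault-tolerant connections to HiveServer2 are typically made through ZooKeeper service discovery, so a client attaches to any live instance instead of one fixed host. A hedged example of such a connection string in stock Hive (the ZooKeeper hosts and namespace below are placeholders):

```shell
# Illustrative only: discover a live HiveServer2 instance via ZooKeeper
beeline -u "jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2"
```

If the instance a client is connected to goes down, reconnecting through the same URL picks another registered instance.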
Refactored cluster component states
Refactored the order of the Stop, Start, and Restart actions for the HDFS service
Enhanced monitoring metrics collection by YARN queues
Removed the read-only attribute from the
Fixed the cluster kerberization status error after a bundle upgrade
Fixed: an Ansible variable is not resolved during HDFS installation
The
Fixed permissions for the
Date: 16.06.2022
The ability to install ADH components from a custom Docker registry is added
The Rewrite current service SSL parameters checkbox is added for the Enable SSL action
New parameters are added for the Zeppelin Hive interpreter to enable SSL and Kerberos
Retries for generating Kerberos principals are implemented
Custom authentication (LDAP/AD) is enabled for HiveServer2
The Move action is added for Spark Livy Server
The Move action is added for Spark History Server
The Move action is added for Flink Job Manager
The Move action is added for Sqoop Metastore
The Move action is added for YARN Timeline Server
The Move action is added for YARN MapReduce History Server
The Ranger plugin for Solr authorization is added
The ability to remove services from the cluster is added
The ability to customize configuration files via ADCM is added
The support of Kerberos REALM is added
The Solr connection for audits is changed from the Solr server to a ZK node
Changing the property type from
SSL default configuration parameters are changed from invisible to read-only
Fixed: duplicate dictionary keys in the config.yaml file that did not pass the new YAML validation in ADCM
Fixed: the error with Zeppelin installation without Hive
Fixed: users could change some read-only Kerberos-related parameters in services
Fixed failing jobs when enabling the GPU on YARN property
Fixed the error with applying the Remove service action to Spark3
Fixed the error with an incorrect value in interpreter.json in Zeppelin when SSL is off
Fixed: after enabling SSL, policies did not work in Ranger
Fixed: the Container DN format when applying Enable Kerberos
Fixed the error with incorrect saving of the
Fixed: YARN failed to copy container-executor.cfg
Fixed: job status and result did not match when deleting optional (unnecessary) settings in the ZooKeeper service configuration
Fixed: the Check action applied to Flink failed if hosts in ADCM had uppercase letters
Fixed: services did not collect policies from Ranger in SSL
Fixed: applying the Remove internal database action actually removed the service itself
A fixed Ranger Solr plugins repository is added for ADH 2.1.4
The order of bundle upgrades is changed from particular to general
Dependencies between components and services at the ADCM level are implemented
Ranger Plugins are bumped to 1.0.3
The ability to download ADH offline packs from the Arenadata source directory to the customer proxy repository is added
Date: 31.03.2022
The Kerberos authentication is enabled for Web UI
SSL for Ranger plugins is enabled
SSL for Flink is enabled
SSL for Sqoop is enabled
The rollback operation is enabled in case the kerberization process fails
SSL for Zeppelin is enabled
SSL for Airflow is enabled
SSL for Spark is enabled
SSL for Solr is enabled
SSL for Hive is enabled
SSL for HBase is enabled
SSL for YARN is enabled
The ability to configure SSL in the Hadoop clusters is added
SSL for HDFS is enabled
The Custom hive-site.xml block is placed after the hive-site.xml block in the configuration settings
The links to NameNodes and HttpFS are moved to the top of the HDFS web links list
The order of cluster stop actions is reversed
The Reconfig and restart action is replaced by the Restart action that runs three operations: stops the service, applies configuration parameters, and starts the service
The ability to execute the resourcemanager_enable_ha action without changing
The ability to execute the resourcemanager_expand action without enabling High Availability (HA) is disabled for Resource Manager, since it does not work without enabling HA
Fixed: the parameters from the httpfs-site.xml configuration file did not apply to the HttpFS service
Fixed: SQL queries launched from Spark3 or Spark2 did not work correctly with the Ranger Hive plugin enabled
Fixed: the
Fixed: the Phoenix Query Server could not work in the thin mode in a kerberized environment
Fixed: the error with running spark-thrift-server-checker after enabling and disabling Kerberos
Fixed: the error with saving the Flink configuration parameters after installation in a kerberized environment
The ability to work with kerberized ADH clusters is fixed for the Windows operating system
Fixed the
Fixed the error with checking privileges to the
The cluster installation errors at the HBase check stage are fixed
Fixed: application logs were unavailable in the legacy Resource Manager UI of kerberized clusters
Fixed: kerberization on AD failed if two instances of Ranger Admin were installed
Configuring SSL settings is added before enabling SSL in autotests
Web links are rewritten to support the http/https schema change
Date: 21.12.2021
Refactoring of the database for Hive Metastore checks is done
The error with the
Fixed the error with MapReduce jobs launched in the kerberized cluster not under the
Fixed: mapped but not installed services caused errors during the installation of other services
The HTTP mode is added for HiveServer2
The AD/LDAP/SIMPLE authorization is added for Zeppelin
The HBase REST Server component is added for HBase
The ability to use Active Directory as Kerberos storage is implemented
The ability to set a Kerberos principal for running Spark jobs via YARN is added. Before that, Spark always launched tasks using the
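In stock Spark, submitting a YARN job under a dedicated principal is done with spark-submit's built-in Kerberos options; a hedged sketch (the principal, keytab path, and application jar are placeholders):

```shell
# Illustrative only: run a Spark-on-YARN job under a specific principal
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --principal svc_spark@EXAMPLE.COM \
  --keytab /etc/security/keytabs/svc_spark.keytab \
  app.jar
```

Passing --principal and --keytab also lets Spark renew delegation tokens for long-running jobs instead of relying on the submitter's ticket cache.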
Fixed the error with DataNodes expanding via the Add DataNode action in the kerberized environment
Fixed the error with the Livy interpreter working in the kerberized environment
Fixed the error with the Hive interpreter working in the kerberized environment
Fixed the error with Zeppelin checks after Kerberos activation
Fixed the error with removing the monitoring component jmxtrans
Fixed: the Enable Resource Manager HA action failed in clusters with Kerberos and the Ranger plugin enabled
Fixed the error with Airflow installation in the kerberized environment
Fixed the error with enabling Kerberos after its disabling (enable → disable → enable)
The full stack testing for using the RedHat 7.9 enterprise license in ADH is added
Date: 01.11.2021
The Reinstall status-checker action is implemented. It runs the status-checker deployment scripts for services as well as for Docker containers
The Solr check is changed: the number of live Nodes is compared instead of lists
The timeout/retry count is increased for the Zeppelin check
Fixed: the bundle version update error
Fixed: the Ranger plugin worked incorrectly in the case of using some characters in the cluster name
Fixed: the error with the inconsistent DataNodes maintenance state after upgrading ADH from 2.1.3 to 2.1.4
Fixed: keytab permissions changed during some actions
Fixed: the error with per-service installation in the kerberized ADH clusters
Fixed: the error with parsing the list of containers by the Docker status checker in Airflow
The error with the heap size test is fixed
The broken compatibility with the current dev version of ADCM is fixed
The test logic for per-service installation in the kerberized ADH clusters is changed: before installing each service, it is necessary to add the service to the cluster and add its components to hosts (instead of adding all components to all hosts)
Date: 30.09.2021
The MIT Kerberos integration is implemented in ADCM
The ability to add a custom port for Kerberos Server is added
The Ranger plugin and kerberized YARN are integrated
The Ranger plugin and kerberized Hive are integrated
The Ranger plugin and kerberized HBase are integrated
The Ranger plugin and kerberized HDFS are integrated
The Ranger plugin is made operable on kerberized services
The split memory option is added for Hive services: resource management options can be configured for HiveMetastore and HiveServer2 separately
The edit memory size option is added for Flink components
The edit memory size option is added for Solr components
The edit memory size option is added for Sqoop components
The edit memory size option is added for Spark components
The edit memory size option is added for Zeppelin components
The edit memory size option is added for HBase components
The edit memory size option is added for YARN components
The Add/Remove actions are added for YARN Timeline Server
The Add/Remove actions are added for Sqoop Metastore
The edit memory size option is added for HDFS components
The ADH memory management option is added
The Add/Remove actions are added for Flink Job Manager
The Add/Remove actions are added for Spark Thrift Server
The Add/Remove actions are added for Spark Livy Server
The Add/Remove actions are added for Spark History Server
The Add/Remove actions are added for Hive Tez UI
The Move action is added for YARN MapReduce History Server
Kerberos is implemented for ADH in ADCM
The ability to move any service component to another Node or remove it from the cluster is added
The unnecessary repository/packages check at the HDFS installation step for ADH EE is removed
The path for docker-status-checker files is changed
Fixed the error with the
Fixed the error with Solr not working after applying host actions
Fixed the error with the Enable Resource Manager HA action in the kerberized environment
Fixed the error with Spark expanding/shrinking in the kerberized environment
Fixed the error with Solr shrinking in the kerberized environment
Fixed the error with YARN Node Manager expanding in the kerberized environment
Fixed the error
Fixed the error with Flink Server expanding/shrinking
Fixed the error with Flink JobManager Server Port 6123 availability
Fixed the error with Sqoop expanding in the kerberized environment
Fixed the error with YARN Server expanding/shrinking
Fixed: the Solr role tried to import the absent monitoring role even with the monitoring service not installed
Fixed the error with running the Install service action for Solr
Fixed the error with Spark Livy Server expanding in the kerberized environment
Fixed the error with Spark Thrift Server shutting down in the kerberized environment
The Solr shared memory reservation error is fixed
Fixed the error with Solr kerberization with no Hadoop services added
Fixed the error with starting jmxtrans after a host reboot
The incorrect URL for the Hive Server Web UI is fixed
Fixed the error with opening the link to the HiveServer2 UI after ADH installation
Fixed the error with availability of host actions after the cluster upgrade from 2.1.3.0 to 2.1.4.b2
Fixed: the Enterprise cluster installation failing during the per-service action
Fixed: the Reconfig and restart action failed for the Monitoring service with Airflow installed
Fixed: the Spark ThriftServer process did not stop after the Spark context was killed
To speed up autotests and the development process, the packages check is made optional for the specified environments
The http and registry versions are bumped to the current ET release
The specifications for new Spark and YARN MapReduce History Server actions are added
Date: 20.07.2021
The ability to use external MySQL in Airflow is added
The ability to use external PostgreSQL in Airflow is added
Host actions are added for the Spark3 service. Host actions here and below mean the actions managed at the host level
Host actions are added for the Monitoring service
Host actions are added for the Sqoop service
Host actions are added for the Airflow service
Host actions are added for the Solr service
Host actions are added for the Flink service
Host actions are added for the Zeppelin service
Host actions are added for the Spark service
Host actions are added for the MySQL service
Host actions are added for the Hive service
Host actions are added for the HBase service
Host actions are added for the YARN service
Host actions are added for the HDFS service
The Sqoop Check action is modified according to the new Hive external DB variables
Host actions are renamed
The unnecessary solr-tools.jar file is removed from the Solr submodule in the bundle, as it caused errors in CI
The error with offline installation on the RH 7.9 operating system is fixed
The error with applying the Reconfig and restart action to the Monitoring service is fixed
Fixed the error with installing MySQL on a host from which it was removed earlier
Fixed the error with the
Fixed the error with the Maintenance DataNode action that occurred due to the incorrect content of the dfs.hosts file (if another DataNode had been switched to the maintenance state earlier)
For debugging possible problems via Allure reports, log collecting is implemented for the Airflow service
Fixed the wrong description in the autotest that implements migration to the external MySQL database
Specifications for testing host actions are changed
Tests for host actions are added
Specifications and autotests are added for the ADH shrink scenarios
Date: 22.06.2021
The ability to define custom HBase environment variables is added
The action for removing MySQL from the ADH cluster is added
The ability to use external PostgreSQL in Hive Metastore is added
The ability to change the Hive Metastore
The ability to configure Java Heap for HiveServer2 is added
The ability to add/change/remove configuration options from the httpfs-site.xml file via ADCM is added
Start checks for JournalNodes and NameNodes are added
Spark 3.1.1 is implemented for ADH 2.X
The offline installation is implemented for ADH
The Check action is improved for Sqoop
The build process for Solr is changed: Arenadata repositories are used instead of external Maven repositories
The ability to use Docker Registry from the Arenadata repository is implemented
To install services (e.g. Airflow) without DNS, resolving host names in the Docker containers is implemented
Airflow installation without DNS is implemented
Implemented the DataNode check/wait for membership in the cluster from the DataNode itself
The Spark component Spark History Server is made mandatory
Refactoring of the ZooKeeper service is done
Fixed: Hive installation failed when adding the Tez component without Tez UI
Fixed: the duplicate key
Fixed: YARN applications could not run jobs after the Ranger plugin was enabled
Fixed the error with the Enable Resource Manager HA action in the ADH Community Edition
Fixed the error with Sqoop installation after Ranger was installed
Fixed problems with HBase logging
Docker images in the package specifications are changed according to the new naming convention
Packages for the 2.1.4 release are uploaded to the Google repository
Fixed the log collecting for HttpFS during autotests
The repository for the 2.1.4 version of the product is created
The ability to update a bundle without rebuilding packages is added
Unnecessary garbage files are removed from the bundle build archive
To resolve ansible
Tests for checking the integration between ADH and Ranger are added
ADH 2.1.3.1 is built
2.1.3
Date: 14.01.2021
The Remove/Add Hive Tez actions are added
The Add diamond and Remove diamond actions are added
Ranger 2.0.0 is built
The logic of the YARN Resource Manager expanding is changed
The validation logic for Spark Client is changed from
ADPS integration: the new ADPS bundle that contains Ranger has to be re-integrated with ADH after moving Ranger from it
Fixed the error with closing Hive tasks after finishing checks
Livy checks are temporarily disabled
Fixed the error with a bad cluster name that occurred when creating the HDFS service via Ranger Admin
Fixed the error with the HDFS action Remove Client
Fixed unsuccessful Hive CLI checks after the Ranger plugin was enabled
Fixed the error with connecting multiple clusters to one ADPS
The repository for plugins is added to the release bundle
Packages for the 2.1.3.0 release are uploaded to the Google repository
ADH 2.1.3.0 is built
ADH is bumped to 2.1.3.0
New repositories for Ranger plugins are added
Specifications on the workaround for the error with HBase expanding after HDFS expanding are edited
Specifications on expanding ADH services are created
2.1.2
Date: 19.11.2020
Client components for Flink are added
Client components for HDFS are added
Client components for YARN are added
The timeout for the
The default port number for MySQL in the Airflow Metastore is changed
The volume for Hadoop configurations (e.g. /etc/hadoop/conf/, /etc/hive/conf, etc.) in Docker images with Airflow is increased
Fixed the race condition within Sqoop checks (part 2, with multiple Clients)
Fixed the cluster installation error that occurred when MySQL was being installed
Fixed the cluster installation error that occurred during checks after installing Spark
Packages for the 2.1.2.5 release are uploaded to the Google repository
Building the offline package for ADH to the ADH repository is implemented
ADH is bumped to 2.1.2.5
Tests for the YARN/HDFS Client are created
Specifications for autotests of the YARN/HDFS Client are added
Changes in the Hadoop tests related to uploading bundles are made
Autotests for Airflow are created
All file accesses are made independent from the current working directory in autotests
The dev repository for ADH 2.1.2.5 is initialized
Date: 15.09.2020
The ADH bundle is divided into community and enterprise versions
The High Availability for NameNodes is implemented
Fixed the error that occurred at the Restart NameNodes step during the Remove NameNode action
Fixed the error with checking Hive Tez on multiple hosts
Fixed the error with switching dynamic allocation
Fixed: ZKFC ignored the
Packages for the 2.1.2.3 release are uploaded to the Google repository
All specifications and BOMs related to ADH20 are moved to prj_adh. Publishing of artifacts to the Artifactory is changed
The release and develop repositories are segregated in bundles
Date: 05.06.2020
The
The race condition within Sqoop checks is fixed
Fixed the error with running the cluster Check action
Packages for the 2.1.2.2 release are uploaded to the Google repository
ADH is bumped to 2.1.2.2
Nginx is copied from the EPEL repository to the ADH2 repository
Date: 21.05.2020
Sqoop deployment is ported to ALT Linux
Solr deployment is ported to ALT Linux
Flink deployment is ported to ALT Linux
The public ALT Linux repository for ZooKeeper 3.4.14 is created
Airflow deployment is ported to ALT Linux
The ability to set
Sqoop is added into the ADH bundle
ADH 2.X packages are built for ALT Linux
Solr 8.2.0 is added for ADH 2.2
Refactoring of the ADH deployment process for ALT Linux is made
The error with commissioning/decommissioning Nodes via ADCM is fixed
Fixed the error
Fixed the ordering of Generic components
Fixed: web links in ADCM did not refresh after HDFS DataNodes or YARN Node Manager shrinking
Fixed the error that occurred with YARN 3.1.2 in ALT Linux during Ansible tasks
Fixed the absence of
The /var/run/sqoop directory is created for Sqoop Metastore
The missing dependency for Flink-related packages is added
Fixed the error with installing HBase and Solr when using the external ZooKeeper
Airflow deployment is disabled (visible in ADCM and only for ALT Linux)
The public repository for the release is changed
Packages for the 2.1.2.1 release are uploaded to the Google repository
Changes for libisal are merged
Changes for bigtop-groovy, bigtop-jsvc, bigtop-tomcat, and bigtop-utils are merged
Changes for Bigtop are merged
Changes for Livy are merged
Changes for ZooKeeper are merged
Changes for Zeppelin are merged
Changes for Spark are merged
Changes for Phoenix are merged
Changes for Tez are merged
Changes for Hive are merged
Changes for HBase are merged
Changes for Hadoop are merged
Bigtop branches for CentOS and ALT Linux are manually merged
The repository URL is changed to 2.1.2
Autotests for ADH services are reviewed according to the current stack version
Date: 19.02.2020
The ability to configure Hive ACID is added
SELinux is disabled for all components during installation
Support of Flink 1.8.0 is implemented for ADCM
Flink is added into the ADH bundle
The logic of the Shrink action is improved
GPU support is enabled for YARN
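In stock Hadoop 3.x, GPU scheduling on YARN is switched on through the GPU resource plugin; a hedged sketch of the relevant yarn-site.xml fragment (standard Hadoop property names, not values taken from the ADH bundle):

```xml
<!-- Illustrative only: enable the GPU resource plugin on NodeManagers -->
<property>
  <name>yarn.nodemanager.resource-plugins</name>
  <value>yarn.io/gpu</value>
</property>
```

In addition, yarn.io/gpu has to be declared as a resource type so the scheduler can account for it; applications then request GPUs alongside memory and vcores.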
Airflow is added into the ADH bundle
The UI link is added for Solr on the main ADCM page
The Shrink/Expand actions are implemented for HDFS HttpFS
HDFS HttpFS checks are implemented
The Solr Cloud Mode is implemented
The Solr deployment is implemented
Solr is added into the ADH bundle
Tez libraries are installed on Hive Client Nodes
Fixed: it was impossible to use Hive with Tez due to the configuration mismatch
Fixed the error with saving configurations for HDFS and YARN
Fixed the error with HBase checks after installation
Fixed the error with YARN checks in the HA mode
Tests/example DAGs for checking the Airflow functionality are added
Tests for checking the Solr functionality are added
2.1.1
Date: 21.11.2019
YARN Scheduler configuration is implemented
HDFS mover is implemented
The cluster-wide Install button is added to the ADCM UI
The ability to define the external ZooKeeper in the core-site.xml file is added
The ability to add custom/advanced configuration parameters to the *-site.xml files is added
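Custom parameters added this way end up as standard Hadoop property entries in the corresponding *-site.xml file. For illustration only, a property that could be injected into hdfs-site.xml (the name and value are merely an example, not something this release ships):

```xml
<!-- Example custom property in hdfs-site.xml -->
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
```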
YARN Node labels are implemented
HDFS HttpFS is implemented
HDFS Short-Circuit Local Reads are implemented
HDFS Disk Balancer is implemented
HDFS Balancer is implemented
The *-site.xml files are unified
Asserts and fails are replaced with adcm_check
Monitoring is refactored: code/dashboards are unified, metrics are redesigned, etc.
The hostname variable is removed from the Zeppelin PID definition
The HDFS dashboard is divided into HDFS and YARN dashboards in Grafana
Hadoop PID file names are changed
Manual testing of the ADH 2.1 installation is performed according to the documentation
2.1.0
Date: 10.10.2019
Implemented the ability to get a status for the following services:
Implemented service management for the following services:
Prepared deployment scripts for the following services:
Implemented service checks for the following services:
Implemented the deployment of the following services:
The following builds are available:
Monitoring features are implemented for the following services:
Necessary configurations for Hive/Tez are added
The
Necessary configurations for Hadoop services are added
Quick links for services are added
The HDFS rack awareness is implemented via custom scripts
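Rack awareness in HDFS is driven by a topology script, referenced by the net.topology.script.file.name property in core-site.xml: Hadoop invokes it with one or more node IPs or hostnames as arguments and expects one rack path per argument on stdout. A minimal sketch with a purely hypothetical IP-to-rack mapping (a real script would read a site-specific table):

```python
#!/usr/bin/env python3
# Hypothetical HDFS topology script: maps node addresses to rack paths.
# Hadoop calls it with node IPs/hostnames as arguments and reads one
# rack path per argument from stdout.
import sys

# Illustrative mapping only; not taken from any real deployment.
RACK_MAP = {
    "10.0.1.11": "/dc1/rack1",
    "10.0.1.12": "/dc1/rack1",
    "10.0.2.21": "/dc1/rack2",
}

def rack_for(node: str) -> str:
    # Unknown nodes fall back to Hadoop's conventional default rack.
    return RACK_MAP.get(node, "/default-rack")

if __name__ == "__main__":
    for node in sys.argv[1:]:
        print(rack_for(node))
```

Hadoop's script-based mapping caches results, so the script only has to be deterministic, not fast.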
The YARN and MapReduce services are combined into a single one
The Resource Manager High Availability is implemented
Checks for Decommission/Recommission for Node Managers are implemented
Checks for Decommission/Recommission for DataNodes are implemented
Zeppelin is bumped to 0.8.1
Zeppelin is implemented for ADCM
Tez UI is implemented for ADCM
The ability to add a new Node Manager to ADH is added
The ability to add new DataNodes to ADH clusters is added
Spark is implemented for ADCM
Ranger is bumped to 1.1
The ZooKeeper Quorum configuration is added
The MySQL role is added to the ADH bundle (as a service)
Multiple configuration directories for Nodes are implemented
The YARN log aggregation is enabled
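Log aggregation moves finished containers' logs from NodeManager local disks into HDFS. In stock Hadoop it is controlled by a yarn-site.xml switch like the following (the property name is standard Hadoop; shown here only as a sketch):

```xml
<!-- Illustrative only: collect container logs into HDFS after an app finishes -->
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
```

Once enabled, aggregated logs for a completed application can be retrieved with the standard `yarn logs -applicationId <id>` command instead of visiting each node.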
Spark and Hive roles are reviewed
The Hadoop role is divided into HDFS, YARN, and MapReduce
The Hadoop role for ADCM is refactored
Separate roles for Hadoop are implemented
The ZooKeeper service role is ported from the ADS Bundle
Basic YARN service features are refactored
Fixed the error with the
Pre-release preparations are made for ADH 2.1.0
The EULA.txt file is added to the bundle root
The repository for ZooKeeper packages is added to ADH
All ADH bundle submodules are switched to Master
Documentation on Decommission/Recommission/HA is prepared
Documentation on HBase deployment via ADCM is prepared
Documentation on Spark deployment via ADCM is prepared
Documentation on Hive deployment via ADCM is prepared
Documentation on YARN deployment via ADCM is prepared
Documentation on HDFS deployment via ADCM is prepared
Documentation for the ADH bundle is prepared
Spark autotests are implemented
Hive autotests are implemented
YARN autotests are implemented
HDFS autotests are implemented
Smoke tests for the Livy Server service check are prepared
Smoke tests for the Spark Thrift Server service check are prepared
Smoke tests for the Spark Server service check are prepared
Smoke tests for the MySQL service check are prepared
Smoke tests for the HBase service check are prepared
Smoke tests for the Phoenix service check are prepared
Smoke tests for the Hive service check are prepared
Smoke tests for the HDFS service check are prepared
The latest stable packages for ADH are built