Core configuration parameters

To configure the service, use the following configuration parameters in ADCM.

NOTE
  • Some parameters become visible in the ADCM UI only after the Advanced flag is set.

  • Parameters set in the Custom group overwrite existing parameters, even read-only ones.

core-site.xml
Parameter Description Default value

fs.defaultFS

Name of the default file system. A URI whose scheme and authority determine the file system implementation

hdfs://hdfs

fs.trash.checkpoint.interval

Number of minutes between trash checkpoints. Should be less than or equal to fs.trash.interval. If set to zero, the value of fs.trash.interval is used. Every time the checkpointer runs, it creates a new checkpoint and removes checkpoints created more than fs.trash.interval minutes ago

60

fs.trash.interval

Number of minutes after which the checkpoint gets deleted. If zero, the trash feature is disabled. This option may be configured both on the server and the client. If trash is disabled on the server side, the client side configuration is checked. If trash is enabled on the server side, the value configured on the server is used and the client configuration value is ignored

1440
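
The two trash parameters above work together: fs.trash.interval sets how long deleted files are retained, fs.trash.checkpoint.interval sets how often retention is enforced. A minimal core-site.xml fragment using the default values listed above (a sketch, not a required configuration):

```xml
<!-- Keep deleted files in trash for one day (1440 minutes),
     running the checkpointer every hour (60 minutes).
     Values match the defaults documented above. -->
<property>
  <name>fs.trash.interval</name>
  <value>1440</value>
</property>
<property>
  <name>fs.trash.checkpoint.interval</name>
  <value>60</value>
</property>
```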

hadoop.tmp.dir

A base for temporary directories

/tmp/hadoop-${user.name}

hadoop.zk.address

ZooKeeper server host and port

 — 

io.file.buffer.size

Size of the buffer to use in sequence files. The size value should be a multiple of hardware page size (e.g. 4096 on Intel x86) and it determines how much data is buffered during the read and write operations

131072

net.topology.script.file.name

Name of the script invoked to resolve DNS names to NetworkTopology names (for example, mapping a host name to a rack path)

 — 

hadoop.proxyuser.hbase.groups

Comma-separated list of groups whose users can be impersonated by HBase

*

hadoop.proxyuser.hbase.hosts

Comma-separated list of hosts from which HBase is allowed to impersonate users

*
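
All hadoop.proxyuser.&lt;user&gt;.groups and hadoop.proxyuser.&lt;user&gt;.hosts parameters below follow the same pattern. A hedged sketch of tightening the default "*" for HBase (the group and host names are placeholders, not values required by the service):

```xml
<!-- Restrict impersonation by the hbase proxy user to members of the
     "hadoop" group, connecting from two named hosts. Group and host
     names are hypothetical examples. -->
<property>
  <name>hadoop.proxyuser.hbase.groups</name>
  <value>hadoop</value>
</property>
<property>
  <name>hadoop.proxyuser.hbase.hosts</name>
  <value>host1.example.com,host2.example.com</value>
</property>
```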

hadoop.proxyuser.hue.groups

Comma-separated list of groups whose users can be impersonated by HUE

*

hadoop.proxyuser.hue.hosts

Comma-separated list of hosts from which HUE is allowed to impersonate users

*

hadoop.proxyuser.hbase-phoenix_queryserver.groups

Comma-separated list of groups whose users can be impersonated by Phoenix Query Server

*

hadoop.proxyuser.hbase-phoenix_queryserver.hosts

Comma-separated list of hosts from which Phoenix Query Server is allowed to impersonate users

*

hadoop.proxyuser.hive.groups

Comma-separated list of groups whose users can be impersonated by Hive

*

hadoop.proxyuser.hive.hosts

Comma-separated list of hosts from which Hive is allowed to impersonate users

*

hadoop.proxyuser.httpfs.groups

Comma-separated list of groups whose users can be impersonated by HttpFS

*

hadoop.proxyuser.httpfs.hosts

Comma-separated list of hosts from which HttpFS is allowed to impersonate users

*

hadoop.proxyuser.HTTP.groups

Comma-separated list of groups whose users can be impersonated by HTTP keytab services

*

hadoop.proxyuser.HTTP.hosts

Comma-separated list of hosts from which HTTP keytab services are allowed to impersonate users

*

hadoop.proxyuser.knox.groups

Comma-separated list of groups whose users can be impersonated by Knox

*

hadoop.proxyuser.knox.hosts

Comma-separated list of hosts from which Knox is allowed to impersonate users

*

hadoop.proxyuser.kyuubi.groups

Comma-separated list of groups whose users can be impersonated by Kyuubi

*

hadoop.proxyuser.kyuubi.hosts

Comma-separated list of hosts from which Kyuubi is allowed to impersonate users

*

hadoop.proxyuser.livy.groups

Comma-separated list of groups whose users can be impersonated by Livy

*

hadoop.proxyuser.livy.hosts

Comma-separated list of hosts from which Livy is allowed to impersonate users

*

hadoop.proxyuser.yarn.groups

Comma-separated list of groups whose users can be impersonated by YARN

*

hadoop.proxyuser.yarn.hosts

Comma-separated list of hosts from which YARN is allowed to impersonate users

*

hadoop.proxyuser.zeppelin.groups

Comma-separated list of groups whose users can be impersonated by Zeppelin

*

hadoop.proxyuser.zeppelin.hosts

Comma-separated list of hosts from which Zeppelin is allowed to impersonate users

*

hadoop.proxyuser.trino.groups

Comma-separated list of groups whose users can be impersonated by Trino

*

hadoop.proxyuser.trino.hosts

Comma-separated list of hosts from which Trino is allowed to impersonate users

*

fs.s3a.endpoint

AWS S3 endpoint URL

 — 

fs.s3a.access.key

AWS S3 access key

 — 

fs.s3a.secret.key

AWS S3 secret key

 — 

fs.s3a.impl

AWS S3 filesystem class

org.apache.hadoop.fs.s3a.S3AFileSystem

fs.s3a.fast.upload

Defines whether the Fast Upload feature is enabled

true

fs.s3a.connection.ssl.enabled

Defines whether SSL for connections to AWS services is enabled

false

fs.s3a.path.style.access

Defines whether S3 path-style access is enabled

true
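
The fs.s3a.* parameters above typically change together when pointing the cluster at an S3-compatible store. A sketch with placeholder endpoint and credentials (none of these values come from the service defaults):

```xml
<!-- Hypothetical core-site.xml fragment: connect S3A to an on-premise
     S3-compatible endpoint. The URL and keys below are placeholders. -->
<property>
  <name>fs.s3a.endpoint</name>
  <value>http://s3.example.internal:9000</value>
</property>
<property>
  <name>fs.s3a.access.key</name>
  <value>EXAMPLE_ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>EXAMPLE_SECRET_KEY</value>
</property>
<property>
  <name>fs.s3a.path.style.access</name>
  <value>true</value>
</property>
```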

hadoop.proxyuser.om.groups

Comma-separated list of groups whose users can be impersonated by Ozone Manager

*

hadoop.proxyuser.om.hosts

Comma-separated list of hosts from which Ozone Manager is allowed to impersonate users

*

ha.zookeeper.quorum

A comma-separated list of ZooKeeper server addresses that are to be used by ZKFailoverController in automatic failover

 — 

ipc.client.fallback-to-simple-auth-allowed

Controls whether the client accepts an instruction to switch to SASL SIMPLE (insecure) authentication. When set to false, the client does not fall back to SIMPLE authentication and aborts the connection

true

hadoop.security.authentication

Authentication type. Possible values:

  • simple — no authentication;

  • kerberos — Kerberos authentication.

simple

hadoop.security.authorization

Controls whether to allow the RPC service-level authorization

false

hadoop.rpc.protection

List of the active protection features. Possible values:

  • authentication — authentication only;

  • integrity — integrity check in addition to authentication;

  • privacy — data encryption in addition to integrity.

authentication

hadoop.security.auth_to_local

General rule for mapping principal names to local user names

 — 
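
The value of hadoop.security.auth_to_local uses the Kerberos RULE syntax. A hedged sketch for a hypothetical EXAMPLE.COM realm (the realm and user names are placeholders):

```xml
<!-- Map two-component principals such as hdfs/host@EXAMPLE.COM to the
     local user hdfs, and strip the realm from one-component principals
     like alice@EXAMPLE.COM. EXAMPLE.COM is a placeholder realm. -->
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[2:$1@$0](hdfs@EXAMPLE.COM)s/.*/hdfs/
    RULE:[1:$1@$0](.*@EXAMPLE.COM)s/@.*//
    DEFAULT
  </value>
</property>
```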

User managed hadoop.security.auth_to_local

Controls whether to enable automatic generation of hadoop.security.auth_to_local

false

hadoop.http.authentication.type

Defines the authentication used for the HTTP web-consoles. Possible values:

  • simple — simple authentication;

  • kerberos — Kerberos authentication;

  • [AUTHENTICATION_HANDLER_CLASSNAME] — custom authentication.

simple

hadoop.http.authentication.kerberos.principal

Kerberos principal to be used with the kerberos authentication. The principal short name must be HTTP, per the Kerberos HTTP SPNEGO specification. _HOST (if present) is replaced with the bind address of the HTTP server

HTTP/_HOST@$LOCALHOST

hadoop.http.authentication.kerberos.keytab

Location of the keytab file with the credentials for the Kerberos principal

/etc/security/keytabs/HTTP.service.keytab
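
Put together, switching the web consoles to Kerberos involves the three parameters above. A sketch assuming a hypothetical EXAMPLE.COM realm; the keytab path matches the default documented above:

```xml
<!-- Enable Kerberos (SPNEGO) authentication for the HTTP web consoles.
     EXAMPLE.COM is a placeholder realm; _HOST is expanded by Hadoop. -->
<property>
  <name>hadoop.http.authentication.type</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.http.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>hadoop.http.authentication.kerberos.keytab</name>
  <value>/etc/security/keytabs/HTTP.service.keytab</value>
</property>
```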

ha.zookeeper.acl

ACLs for all znodes

 — 

hadoop.http.filter.initializers

To enable authentication for the HTTP web consoles, add the org.apache.hadoop.security.AuthenticationFilterInitializer initializer class to this property

org.apache.hadoop.security.AuthenticationFilterInitializer

hadoop.http.authentication.signature.secret.file

The signature secret file for signing the authentication tokens. If not set, a random secret is generated at startup. The same secret should be used by all nodes in the cluster (JobTracker, NameNode, DataNode, and TaskTracker). This file should be readable only by the Unix user running the daemons

/etc/security/http_secret

hadoop.http.authentication.cookie.domain

The domain to use for the HTTP cookie that stores the authentication token. For authentication to work properly across all nodes in the cluster, the domain must be set correctly. There is no default value; if unset, the HTTP cookie has no domain and works only with the hostname that issued it

 — 

hadoop.ssl.require.client.cert

Controls whether the client certificates are required

false

hadoop.ssl.hostname.verifier

The hostname verifier to provide for HttpsURLConnections. Possible values:

  • DEFAULT

  • STRICT

  • STRICT_IE6

  • DEFAULT_AND_LOCALHOST

  • ALLOW_ALL

DEFAULT

hadoop.ssl.keystores.factory.class

The KeyStoresFactory implementation to use

org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory

hadoop.ssl.server.conf

Resource file from which the SSL server keystore information will be extracted. This file is looked up in the CLASSPATH. Typically, it should be in Hadoop’s conf/ directory

ssl-server.xml

hadoop.ssl.client.conf

Resource file from which the SSL client keystore information will be extracted. This file is looked up in the CLASSPATH. Typically, it should be in Hadoop’s conf/ directory

ssl-client.xml

hadoop.ssl.enabled.protocols

The supported SSL protocols

TLSv1.2

fs.AbstractFileSystem.ofs.impl

AbstractFileSystem for Rooted Ozone file system (ofs) URI

org.apache.hadoop.fs.ozone.RootedOzFs

fs.ofs.impl

The implementation class of the ofs file system

org.apache.hadoop.fs.ozone.RootedOzoneFileSystem

ssl-server.xml
Parameter Description Default value

ssl.server.truststore.location

Location of the truststore file to be used by NameNodes and DataNodes

 — 

ssl.server.truststore.password

Truststore password

 — 

ssl.server.truststore.type

Truststore file format

jks

ssl.server.truststore.reload.interval

Truststore reload check interval in milliseconds

10000

ssl.server.keystore.location

Location of the keystore file to be used by NameNodes and DataNodes

 — 

ssl.server.keystore.password

Keystore password

 — 

ssl.server.keystore.keypassword

Password to the key in the keystore

 — 

ssl.server.keystore.type

Keystore file format

jks
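
The keystore and truststore parameters above are usually set together. A hedged sketch of an ssl-server.xml fragment; the paths and passwords are placeholders, not defaults of the service:

```xml
<!-- Hypothetical ssl-server.xml fragment: JKS keystore and truststore
     with placeholder locations and passwords. -->
<property>
  <name>ssl.server.keystore.location</name>
  <value>/etc/security/serverKeys/keystore.jks</value>
</property>
<property>
  <name>ssl.server.keystore.password</name>
  <value>changeit</value>
</property>
<property>
  <name>ssl.server.truststore.location</name>
  <value>/etc/security/serverKeys/truststore.jks</value>
</property>
<property>
  <name>ssl.server.truststore.password</name>
  <value>changeit</value>
</property>
```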

ssl-client.xml
Parameter Description Default value

ssl.client.truststore.location

Location of the truststore file to be used by NameNodes and DataNodes

 — 

ssl.client.truststore.password

Truststore password

 — 

ssl.client.truststore.type

Truststore file format

jks

ssl.client.truststore.reload.interval

Truststore reload check interval in milliseconds

10000

ssl.client.keystore.location

Location of the keystore file to be used by NameNodes and DataNodes

 — 

ssl.client.keystore.password

Keystore password

 — 

ssl.client.keystore.keypassword

Password to the key in the keystore

 — 

ssl.client.keystore.type

Keystore file format

jks

Other
Parameter Description Default value

Custom core-site.xml

In this section you can define values for custom parameters that are not displayed in ADCM UI, but are allowed in the core-site.xml configuration file

 — 

Custom hadoop-env.sh

In this section you can define values for custom parameters that are not displayed in ADCM UI, but are allowed in the hadoop-env.sh configuration file

 — 

Custom ssl-server.xml

In this section you can define values for custom parameters that are not displayed in ADCM UI, but are allowed in the ssl-server.xml configuration file

 — 

Custom ssl-client.xml

In this section you can define values for custom parameters that are not displayed in ADCM UI, but are allowed in the ssl-client.xml configuration file

 — 

Custom log4j.properties

Custom logging properties

log.conf

The Configuration server component
nginx.conf
Parameter Description Default value

root_config_dir

Root location for storing configurations

/srv/config

nginx_http_port

HTTP port for Nginx

9998

nginx_https_port

HTTPS port for Nginx

9998

ssl_certificate

Path to SSL certificate for Nginx

/etc/ssl/certs/host_cert.cert

ssl_certificate_key

Path to SSL certificate key for Nginx

/etc/ssl/host_cert.key

ssl_protocols

SSL protocol version to be supported for the SSL transport

TLSv1.2
