Hue configuration parameters

To configure the service, use the following configuration parameters in ADCM.

NOTE
  • Some of the parameters become visible in the ADCM UI after the Advanced flag has been set.

  • The parameters that are set in the Custom group will overwrite the existing parameters even if they are read-only.

The HUE Server component
hue.ini syntax

The hue.ini configuration file displayed in ADCM uses a syntax different from the original one. In the original file, the nesting level is determined by enclosing section names in the corresponding number of square brackets. Example:

[notebook]
show_notebooks=true
[[interpreters]]
[[[mysql]]]
name = MySQL
interface=sqlalchemy
options='{"url": "mysql://root:secret@database:3306/hue"}'
[[[hive]]]
name=Hive
interface=hiveserver2

In ADCM, the nesting level is determined by separating the section names with periods. The structure from the above example looks as follows:

notebook.show_notebooks: true
notebook.interpreters.mysql.name: MySQL
notebook.interpreters.mysql.interface: sqlalchemy
notebook.interpreters.mysql.options: '{"url": "mysql://root:secret@database:3306/hue"}'
notebook.interpreters.hive.name: Hive
notebook.interpreters.hive.interface: hiveserver2
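For illustration, the mapping between the two notations can be sketched in a few lines of Python (a minimal sketch, not part of ADCM; the function name is hypothetical):

```python
def to_hue_ini(params: dict) -> str:
    """Convert ADCM-style dotted keys (e.g. "notebook.interpreters.hive.name")
    into hue.ini text, where nesting depth equals the number of square brackets."""
    lines = []
    opened = []  # section path already emitted, e.g. ["notebook", "interpreters"]
    for key, value in params.items():
        *sections, option = key.split(".")
        # Emit any section headers not yet opened along the current path
        for depth, name in enumerate(sections, start=1):
            if opened[:depth] != sections[:depth]:
                opened = sections[:depth]
                lines.append("[" * depth + name + "]" * depth)
        lines.append(f"{option}={value}")
    return "\n".join(lines)
```

Feeding it the keys from the example above reproduces the original bracketed hue.ini structure.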
hue.ini
Parameter Description Default value

desktop.enable_prometheus

Defines whether Prometheus metrics are enabled

false

desktop.http_host

HUE Server listening IP address

0.0.0.0

desktop.http_port

HUE Server listening port

8000

desktop.use_cherrypy_server

Defines whether CherryPy (true) or Gunicorn (false) is used as the web server

false

desktop.gunicorn_work_class

Gunicorn work class: gevent, eventlet, gthread, or sync

gthread

desktop.secret_key

Random string used for secure hashing in the session store

jFE93j;2[290-eiw.KEiwN2s3['d;/.q[eIW^y#e=+Iei*@Mn<qW5o

desktop.enable_xff_for_hive_impala

Defines whether the X-Forwarded-For header is used if Hive or Impala require it

false

desktop.enable_x_csrf_token_for_hive_impala

Defines whether the X-CSRF-Token header is used if Hive or Impala require it

false

desktop.app_blacklist

Comma-separated list of apps not to load at server startup

security,pig,sqoop,oozie,hbase,search

desktop.auth.backend

Comma-separated list of authentication backend combinations in order of priority

desktop.auth.backend.AllowFirstUserDjangoBackend

desktop.database.host

HUE Server database host name or IP address

{{ groups['adpg.adpg'][0] | d(omit) }}

desktop.database.port

HUE Server database network port

5432

desktop.database.engine

Engine used by the HUE Server database

postgresql_psycopg2

desktop.database.user

Admin username for the HUE Server database

hue

desktop.database.name

HUE Server database name

hue

desktop.database.password

Password for the desktop.database.user username

 — 

desktop.auth_username

Username for authentication in HUE UI

 — 

desktop.auth_password

Password for authentication in HUE UI

 — 

Interpreter Impala
Parameter Description Default value

notebook.interpreters.impala.name

Impala interpreter name

impala

notebook.interpreters.impala.interface

Interface for the Impala interpreter

hiveserver2

impala.server_host

Host of the Impala Server (one of the Impala Daemon components)

 — 

impala.server_port

Port of the Impala Server

21050

impala.impersonation_enabled

Enables the impersonation mechanism during interaction with Impala

true

impala.impala_conf_dir

Path to the Impala configuration directory that contains the impalad_flags file

/etc/hue/conf

impala.ssl.cacerts

Path to the CA certificates

/etc/pki/tls/certs/ca-bundle.crt

impala.ssl.validate

Defines whether HUE should validate certificates received from the server

false

impala.ssl.enabled

Enables SSL communication for this server

false

impala.impala_principal

Kerberos principal name for Impala

 — 

impala.auth_username

Username for authentication in Impala

 — 

impala.auth_password

Password for authentication in Impala

 — 

Interpreter HDFS
Parameter Description Default value

hadoop.hdfs_clusters.default.webhdfs_url

WebHDFS or HttpFS endpoint link for accessing HDFS data

 — 

hadoop.hdfs_clusters.default.hadoop_conf_dir

Path to the directory of the Hadoop configuration files

/etc/hadoop/conf

hadoop.hdfs_clusters.default.security_enabled

Defines whether the HDFS cluster is secured by Kerberos

false

hadoop.hdfs_clusters.default.ssl_cert_ca_verify

Defines whether to verify SSL certificates against the CA

false

Interpreter Hive
Parameter Description Default value

notebook.interpreters.hive.name

Hive interpreter name

hive

notebook.interpreters.hive.interface

Interface for the Hive interpreter

hiveserver2

beeswax.hive_discovery_hs2

Defines whether to use service discovery for HiveServer2

true

beeswax.hive_conf_dir

Path to the Hive configuration directory containing the hive-site.xml file

/etc/hive/conf

beeswax.use_sasl

Defines whether to use the SASL framework to establish a connection to the host

true

beeswax.hive_discovery_hiveserver2_znode

Hostname of the znode of the HiveServer2 if Hive is using ZooKeeper service discovery mode

hive.server2.zookeeper.namespace

libzookeeper.ensemble

List of ZooKeeper ensemble member hosts and ports

host1:2181,host2:2181,host3:2181

libzookeeper.principal_name

Kerberos principal name for ZooKeeper

 — 

beeswax.auth_username

Username for authentication in Hive

 — 

beeswax.auth_password

Password for authentication in Hive

 — 

Interpreter YARN
Parameter Description Default value

hadoop.yarn_clusters.default.resourcemanager_host

Network address of the host where the Resource Manager is running

 — 

hadoop.yarn_clusters.default.resourcemanager_port

Port that the Resource Manager IPC listens on

8031

hadoop.yarn_clusters.default.submit_to

Defines whether the jobs are submitted to this cluster

true

hadoop.yarn_clusters.default.logical_name

Resource Manager logical name (required for High Availability mode)

 — 

hadoop.yarn_clusters.default.security_enabled

Defines whether the YARN cluster is secured by Kerberos

false

hadoop.yarn_clusters.default.ssl_cert_ca_verify

Defines whether to verify the SSL certificates from YARN REST APIs against the CA when using the secure mode (HTTPS)

false

hadoop.yarn_clusters.default.resourcemanager_api_url

URL of the Resource Manager API

 — 

hadoop.yarn_clusters.default.proxy_api_url

URL of the ProxyServer API

 — 

hadoop.yarn_clusters.default.history_server_api_url

URL of the History Server API

 — 

hadoop.yarn_clusters.default.spark_history_server_url

URL of the Spark History Server

 — 

hadoop.yarn_clusters.default.spark_history_server_security_enabled

Defines whether the Spark History Server is secured by Kerberos

false

hadoop.yarn_clusters.ha.resourcemanager_host

Network address of the host where the Resource Manager is running (High Availability mode)

 — 

hadoop.yarn_clusters.ha.resourcemanager_port

Port that the Resource Manager IPC listens on (High Availability mode)

 — 

hadoop.yarn_clusters.ha.logical_name

Resource Manager logical name (required for High Availability mode)

 — 

hadoop.yarn_clusters.ha.security_enabled

Defines whether the YARN cluster is secured by Kerberos (High Availability mode)

false

hadoop.yarn_clusters.ha.submit_to

Defines whether the jobs are submitted to this cluster (High Availability mode)

true

hadoop.yarn_clusters.ha.ssl_cert_ca_verify

Defines whether to verify the SSL certificates from YARN REST APIs against the CA when using the secure mode (HTTPS) (High Availability mode)

false

hadoop.yarn_clusters.ha.resourcemanager_api_url

URL of the Resource Manager API (High Availability mode)

 — 

hadoop.yarn_clusters.ha.history_server_api_url

URL of the History Server API (High Availability mode)

 — 

hadoop.yarn_clusters.ha.spark_history_server_url

URL of the Spark History Server (High Availability mode)

 — 

hadoop.yarn_clusters.ha.spark_history_server_security_enabled

Defines whether the Spark History Server is secured by Kerberos (High Availability mode)

false

Interpreter Kyuubi
Parameter Description Default value

notebook.dbproxy_extra_classpath

Classpath to be appended to the default DBProxy server classpath

/usr/share/java/kyuubi-hive-jdbc.jar

notebook.interpreters.kyuubi.name

Kyuubi interpreter name

Kyuubi[Spark3]

notebook.interpreters.kyuubi.options

Special parameters for connection to the Kyuubi server

 — 

notebook.interpreters.kyuubi.interface

Interface for the Kyuubi service

jdbc

Interpreter Trino
Parameter Description Default value

notebook.interpreters.trino.name

Trino interpreter name

Trino

notebook.interpreters.trino.interface

Interface for the Trino service

trino

notebook.interpreters.trino.options

Special parameters for connection to the Trino server

{
    "url": "",
    "auth_type": "basic",
    "kerberos_principal": "",
    "kerberos_service_name": "HTTP",
    "kerberos_force_preemptive": true,
    "kerberos_delegate": true,
    "ssl_cert_ca_verify": false
}
Interpreter Ozone
Parameter Description Default value

desktop.ozone.default.webhdfs_url

WebHDFS or HttpFS endpoint for accessing Ozone data

 — 

desktop.ozone.default.ozone_conf_dir

Path to the Ozone configuration directory

/etc/ozone/conf

desktop.ozone.default.security_enabled

Defines whether the Ozone cluster is secured by Kerberos

false

desktop.ozone.default.ssl_cert_ca_verify

Defines whether to verify SSL certificates against the CA

false

desktop.ozone.default.fs_defaultfs

Ozone service ID

 — 

hue.ini kerberos config
Parameter Description Default value

desktop.kerberos.hue_keytab

Path to HUE Kerberos keytab file

 — 

desktop.kerberos.hue_principal

Kerberos principal name for HUE

 — 

desktop.kerberos.kinit_path

Path to kinit utility

/usr/bin/kinit

desktop.kerberos.reinit_frequency

Time interval in seconds for HUE to renew its keytab

3600

desktop.kerberos.ccache_path

Path to cached Kerberos credentials

/tmp/hue_krb5_ccache

desktop.kerberos.krb5_renewlifetime_enabled

This must be set to false if the renew_lifetime parameter in krb5.conf file is set to 0m

false

desktop.auth.auth

Authentication type

 — 

Authentication on WEB UIs
Parameter Description Default value

desktop.kerberos.kerberos_auth

Defines whether to use Kerberos authentication for HTTP clients based on the current ticket

false

desktop.kerberos.spnego_principal

Default Kerberos principal name for the HTTP client

 — 

hue.ini SSL config
Parameter Description Default value

desktop.ssl_certificate

Path to the SSL certificate file

/etc/ssl/certs/host_cert.cert

desktop.ssl_private_key

Path to the SSL RSA private key file

/etc/ssl/host_cert.key

desktop.ssl_password

SSL certificate password

 — 

desktop.ssl_no_renegotiation

Disables all renegotiation in TLSv1.2 and earlier

true

desktop.ssl_validate

Defines whether HUE should validate certificates received from the server

false

desktop.ssl_cacerts

Path to the CA certificates

/etc/pki/tls/certs/ca-bundle.crt

desktop.session.secure

Defines whether the cookie containing the user’s session ID and the CSRF cookie use the secure flag

true

desktop.session.http_only

Defines whether the cookie containing the user’s session ID and the CSRF cookie use the HttpOnly flag

false

LDAP security
Parameter Description Default value

desktop.ldap.ldap_url

URL of the LDAP server

 — 

desktop.ldap.base_dn

The search base for finding users and groups

"DC=mycompany,DC=com"

desktop.ldap.nt_domain

The NT domain used for LDAP authentication

mycompany.com

desktop.ldap.ldap_cert

Certificate files in PEM format for the CA that HUE will trust for authentication over TLS

 — 

desktop.ldap.use_start_tls

Set this to true if you are not using Secure LDAP (LDAPS) but want to establish secure connections using TLS

true

desktop.ldap.bind_dn

Distinguished name of the user to bind as

"CN=ServiceAccount,DC=mycompany,DC=com"

desktop.ldap.bind_password

Password of the bind user

 — 

desktop.ldap.ldap_username_pattern

Pattern for username search. Specify the <username> placeholder for this parameter

"uid=<username>,ou=People,dc=mycompany,dc=com"

desktop.ldap.create_users_on_login

Defines whether to create users in HUE when they try to log in with their LDAP credentials

true

desktop.ldap.sync_groups_on_login

Defines whether to synchronize user groups when users log in

true

desktop.ldap.login_groups

A comma-separated list of LDAP groups containing users that are allowed to log in

 — 

desktop.ldap.ignore_username_case

Defines whether to ignore the case of usernames when searching for existing users

true

desktop.ldap.force_username_lowercase

Defines whether to force lowercase usernames when creating new users from LDAP

true

desktop.ldap.force_username_uppercase

Defines whether to force uppercase usernames when creating new users from LDAP. This parameter cannot be combined with desktop.ldap.force_username_lowercase

false

desktop.ldap.search_bind_authentication

Enables search bind authentication

true

desktop.ldap.subgroups

Specifies the kind of subgrouping to use: nested or subordinate (deprecated)

nested

desktop.ldap.nested_members_search_depth

Number of levels to search for nested members

10

desktop.ldap.follow_referrals

Defines whether to follow referrals

false

desktop.ldap.users.user_filter

Base filter for users search

"objectclass=*"

desktop.ldap.users.user_name_attr

The username attribute in the LDAP schema

sAMAccountName

desktop.ldap.groups.group_filter

Base filter for groups search

"objectclass=*"

desktop.ldap.groups.group_name_attr

The group name attribute in the LDAP schema

cn

desktop.ldap.groups.group_member_attr

The attribute of the group object that identifies the group members

member

Others
Parameter Description Default value

Enable custom ulimits

Switch on the corresponding toggle button to specify resource limits (ulimits) for the current process. If you do not set these values, the default system settings are used. Ulimit settings are described in the table below

[Service]
LimitCPU=
LimitFSIZE=
LimitDATA=
LimitSTACK=
LimitCORE=
LimitRSS=
LimitNOFILE=
LimitAS=
LimitNPROC=
LimitMEMLOCK=
LimitLOCKS=
LimitSIGPENDING=
LimitMSGQUEUE=
LimitNICE=
LimitRTPRIO=
LimitRTTIME=

Custom hue.ini

In this section you can define values for custom parameters that are not displayed in the ADCM UI but are allowed in the hue.ini configuration file. The list of available parameters can be found in the HUE documentation

 — 

log.conf

A configuration file with definitions for various logging objects

log.conf

Custom impalad_flags

Custom parameter values that overwrite the original ones

 — 
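As an illustration for the Custom hue.ini group, parameters are given in the same dotted notation described above. The entries below are hypothetical examples of valid hue.ini parameters, not defaults shipped with the service:

```
desktop.django_debug_mode: false
desktop.collect_usage: false
```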

Ulimit settings
Parameter Description Corresponding option of the ulimit command in CentOS

LimitCPU

A limit in seconds on the amount of CPU time that a process can consume

cpu time (-t)

LimitFSIZE

Maximum size of files that a process can create, in 512-byte blocks

file size (-f)

LimitDATA

Maximum size of a process’s data segment, in kilobytes

data seg size (-d)

LimitSTACK

Maximum stack size allocated to a process, in kilobytes

stack size (-s)

LimitCORE

Maximum size of a core dump file allowed for a process, in 512-byte blocks

core file size (-c)

LimitRSS

The maximum amount of RAM (resident set size) that can be allocated to a process, in kilobytes

max memory size (-m)

LimitNOFILE

Maximum number of open file descriptors allowed for the process

open files (-n)

LimitAS

Maximum size of the process virtual memory (address space), in kilobytes

virtual memory (-v)

LimitNPROC

Maximum number of processes

max user processes (-u)

LimitMEMLOCK

Maximum memory size that can be locked for the process, in kilobytes. Memory locking ensures the memory is always in RAM and a swap file is not used

max locked memory (-l)

LimitLOCKS

Maximum number of files locked by a process

file locks (-x)

LimitSIGPENDING

Maximum number of signals that are pending for delivery to the calling thread

pending signals (-i)

LimitMSGQUEUE

Maximum number of bytes in POSIX message queues. POSIX message queues allow processes to exchange data in the form of messages

POSIX message queues (-q)

LimitNICE

Maximum NICE priority level that can be assigned to a process

scheduling priority (-e)

LimitRTPRIO

Maximum real-time scheduling priority level

real-time priority (-r)

LimitRTTIME

A limit, in microseconds, on the amount of CPU time that a process scheduled under a real-time policy may consume without making a blocking system call

 — 
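The Limit*= directives above correspond to kernel resource limits (rlimits). Whether they took effect for a running process can be checked with Python's standard resource module (a quick sketch, not ADCM-specific; the directive-to-rlimit mapping shown is an illustrative subset):

```python
import resource

# Map a few systemd Limit*= directives to their kernel rlimit counterparts.
directives = {
    "LimitNOFILE": resource.RLIMIT_NOFILE,  # open files (-n)
    "LimitNPROC": resource.RLIMIT_NPROC,    # max user processes (-u)
    "LimitCORE": resource.RLIMIT_CORE,      # core file size (-c)
}

for name, rlimit in directives.items():
    # getrlimit returns the (soft, hard) limits in effect for this process
    soft, hard = resource.getrlimit(rlimit)
    print(f"{name}: soft={soft}, hard={hard}")
```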
