YARN configuration parameters
To configure the service, use the following configuration parameters in ADCM.
| Parameter | Description | Default value |
|---|---|---|
| mapreduce.application.classpath | Classpath for MapReduce applications. A list of files/directories to be added to the classpath. To add more items to the classpath, click the plus icon. Parameter expansion markers are replaced by the NodeManager on container launch, based on the underlying OS | |
| mapreduce.cluster.local.dir | Local directory where MapReduce stores intermediate data files. May be a comma-separated list of directories on different devices in order to spread disk I/O. Directories that do not exist are ignored | /srv/hadoop-yarn/mr-local |
| mapreduce.framework.name | Runtime framework for executing MapReduce jobs. Can be one of local, classic, or yarn | yarn |
| mapreduce.jobhistory.address | MapReduce JobHistory Server IPC address (<host>:<port>) | — |
| mapreduce.jobhistory.bind-host | The address the MapReduce JobHistory Server binds to. Setting the value to 0.0.0.0 makes the server listen on all network interfaces | 0.0.0.0 |
| mapreduce.jobhistory.webapp.address | MapReduce JobHistory Server web UI address (<host>:<port>) | — |
| mapreduce.map.env | Environment variables for the map task processes added by a user, specified as a comma-separated list | HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce |
| mapreduce.reduce.env | Environment variables for the reduce task processes added by a user, specified as a comma-separated list | HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce |
| yarn.app.mapreduce.am.env | Environment variables for the MapReduce Application Master processes added by a user | HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce |
| yarn.app.mapreduce.am.staging-dir | The staging directory used while submitting jobs | /user |
| mapreduce.jobhistory.keytab | The location of the Kerberos keytab file for the MapReduce JobHistory Server | /etc/security/keytabs/mapreduce-historyserver.service.keytab |
| mapreduce.jobhistory.principal | Kerberos principal name for the MapReduce JobHistory Server | mapreduce-historyserver/_HOST@REALM |
| mapreduce.jobhistory.http.policy | Configures the HTTP endpoint for the JobHistory Server web UI. The supported values are HTTP_ONLY (the service is provided only via HTTP) and HTTPS_ONLY (the service is provided only via HTTPS) | HTTP_ONLY |
| mapreduce.jobhistory.webapp.https.address | The HTTPS address where the MapReduce JobHistory Server web application is running | 0.0.0.0:19890 |
| mapreduce.shuffle.ssl.enabled | Defines whether to use SSL for the Shuffle HTTP endpoints | false |
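
The parameters above end up in mapred-site.xml on the cluster hosts. As a minimal sketch of how a few of them might look in that file, assuming a hypothetical JobHistory Server host and the usual upstream ports (10020 for IPC, 19888 for the web UI):

```xml
<!-- mapred-site.xml: illustrative sketch; jobhistory.example.com is a placeholder host -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>jobhistory.example.com:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>jobhistory.example.com:19888</value>
  </property>
</configuration>
```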
| Parameter | Description | Default value |
|---|---|---|
| xasecure.audit.destination.solr.batch.filespool.dir | Spool directory path | /srv/ranger/hdfs_plugin/audit_solr_spool |
| xasecure.audit.destination.solr.urls | A URL of the Solr server to store audit events. Leave this property value empty or set it to NONE when using ZooKeeper to connect to Solr | — |
| xasecure.audit.destination.solr.zookeepers | Specifies the ZooKeeper connection string for the Solr destination | — |
| xasecure.audit.destination.solr.force.use.inmemory.jaas.config | Whether to use an in-memory JAAS configuration file to connect to Solr | — |
| xasecure.audit.is.enabled | Enables Ranger audit | true |
| xasecure.audit.jaas.Client.loginModuleControlFlag | Specifies whether the success of the module is required, requisite, sufficient, or optional | — |
| xasecure.audit.jaas.Client.loginModuleName | Name of the authenticator class | — |
| xasecure.audit.jaas.Client.option.keyTab | Name of the keytab file to get the principal's secret key | — |
| xasecure.audit.jaas.Client.option.principal | Name of the principal to be used | — |
| xasecure.audit.jaas.Client.option.serviceName | Name of a user or a service that wants to log in | — |
| xasecure.audit.jaas.Client.option.storeKey | Set this to true if you want the keytab or the principal's key to be stored in the Subject's private credentials | false |
| xasecure.audit.jaas.Client.option.useKeyTab | Set this to true if you want the module to get the principal's key from the keytab | false |
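
For illustration, a ranger-yarn-audit.xml fragment with Solr auditing via ZooKeeper could look like the sketch below; the ZooKeeper quorum and spool directory are hypothetical placeholders, not defaults taken from this table:

```xml
<!-- ranger-yarn-audit.xml: illustrative sketch; the ZooKeeper quorum and spool directory are placeholders -->
<configuration>
  <property>
    <name>xasecure.audit.is.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>xasecure.audit.destination.solr.zookeepers</name>
    <value>zk1.example.com:2181,zk2.example.com:2181/ranger_audits</value>
  </property>
  <property>
    <name>xasecure.audit.destination.solr.batch.filespool.dir</name>
    <value>/srv/ranger/yarn_plugin/audit_solr_spool</value>
  </property>
</configuration>
```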
| Parameter | Description | Default value |
|---|---|---|
| ranger.plugin.yarn.policy.rest.url | The URL of Ranger Admin | — |
| ranger.plugin.yarn.service.name | The name of the Ranger service containing policies for this instance | — |
| ranger.plugin.yarn.policy.cache.dir | The directory where Ranger policies are cached after successful retrieval from the source | /srv/ranger/yarn/policycache |
| ranger.plugin.yarn.policy.pollIntervalMs | Defines how often to poll for changes in policies (in milliseconds) | 30000 |
| ranger.plugin.yarn.policy.rest.client.connection.timeoutMs | The YARN plugin RangerRestClient connection timeout (in milliseconds) | 120000 |
| ranger.plugin.yarn.policy.rest.client.read.timeoutMs | The YARN plugin RangerRestClient read timeout (in milliseconds) | 30000 |
| ranger.add-yarn-authorization | If set to true, native YARN ACLs are checked in addition to Ranger policies; if set to false, only Ranger policies are used for authorization | false |
| ranger.plugin.yarn.policy.rest.ssl.config.file | Path to the RangerRestClient SSL config file for the YARN plugin | /etc/yarn/conf/ranger-yarn-policymgr-ssl.xml |
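
As an illustration, the two required plugin parameters might be filled in as in the sketch below; the Ranger Admin URL and the service name are hypothetical placeholders:

```xml
<!-- ranger-yarn-security.xml: illustrative sketch; the URL and service name are placeholders -->
<configuration>
  <property>
    <name>ranger.plugin.yarn.policy.rest.url</name>
    <value>http://ranger-admin.example.com:6080</value>
  </property>
  <property>
    <name>ranger.plugin.yarn.service.name</name>
    <value>adh_yarn</value>
  </property>
  <property>
    <name>ranger.plugin.yarn.policy.cache.dir</name>
    <value>/srv/ranger/yarn/policycache</value>
  </property>
</configuration>
```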
| Parameter | Description | Default value |
|---|---|---|
| yarn.application.classpath | The classpath for YARN applications. A list of files/directories to be added to the classpath. To add more items to the classpath, click the plus icon | |
| yarn.cluster.max-application-priority | Defines the maximum application priority in a cluster. Leaf queue-level priority: each leaf queue provides a default priority set by the administrator. The queue default priority is used for any application submitted without a specified priority. $HADOOP_HOME/etc/hadoop/capacity-scheduler.xml is the configuration file for queue-level priority | 0 |
| yarn.log.server.url | The URL of the log aggregation server | — |
| yarn.log-aggregation-enable | Whether to enable log aggregation. Log aggregation collects logs from each container and moves them onto a file system, for example HDFS, after the application processing completes. Users can configure the yarn.nodemanager.remote-app-log-dir and yarn.nodemanager.remote-app-log-dir-suffix properties to determine where these logs are moved to | true |
| yarn.log-aggregation.retain-seconds | Defines how long to keep aggregated logs before deleting them. The value -1 disables the deletion. Be careful: setting this too small spams the NameNode | 172800 |
| yarn.nodemanager.local-dirs | The list of directories to store localized files in. An application's localized file directory is found in ${yarn.nodemanager.local-dirs}/usercache/${user}/appcache/application_${appid}. Individual containers' work directories, called container_${contid}, are subdirectories of this | /srv/hadoop-yarn/nm-local |
| yarn.node-labels.enabled | Enables the node labels feature | true |
| yarn.node-labels.fs-store.root-dir | The URI for NodeLabelManager | hdfs:///system/yarn/node-labels |
| yarn.timeline-service.bind-host | The actual address the server binds to. If this optional address is set, the RPC and webapp servers bind to this address and the ports specified in yarn.timeline-service.address and yarn.timeline-service.webapp.address, respectively. This is most useful for making the service listen on all interfaces by setting it to 0.0.0.0 | 0.0.0.0 |
| yarn.timeline-service.leveldb-timeline-store.path | Store file name for the LevelDB Timeline store | /srv/hadoop-yarn/leveldb-timeline-store |
| yarn.nodemanager.address | The address of the container manager in the NodeManager | 0.0.0.0:8041 |
| yarn.nodemanager.aux-services | A comma-separated list of services, where a service name should only contain the characters a-zA-Z0-9_ and cannot start with a number | mapreduce_shuffle,spark_shuffle |
| yarn.nodemanager.aux-services.mapreduce_shuffle.class | The auxiliary service class to use | org.apache.hadoop.mapred.ShuffleHandler |
| yarn.nodemanager.aux-services.spark_shuffle.class | The class name of YarnShuffleService, an external shuffle service for Spark3 on YARN | org.apache.spark.network.yarn.YarnShuffleService |
| yarn.nodemanager.aux-services.spark_shuffle.classpath | The classpath for the external Spark3 shuffle service in YARN. A list of files/directories to be added to the classpath. To add more items to the classpath, click the plus icon | |
| yarn.nodemanager.recovery.enabled | Enables the NodeManager to recover after starting | true |
| yarn.nodemanager.recovery.dir | The local filesystem directory in which the NodeManager stores state when recovery is enabled | /srv/hadoop-yarn/nm-recovery |
| yarn.nodemanager.remote-app-log-dir | Defines a directory for log aggregation | /logs |
| yarn.nodemanager.resource-plugins | Enables additional discovery/isolation of resources on the NodeManager. By default, this parameter is empty. Acceptable values: yarn.io/gpu, yarn.io/fpga | — |
| yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables | The path to the GPU discovery executable (nvidia-smi) that the NodeManager runs to collect GPU information when the GPU resource plugin is enabled | /usr/bin/nvidia-smi |
| yarn.nodemanager.resource.detect-hardware-capabilities | Enables auto-detection of node capabilities such as memory and CPU | true |
| yarn.nodemanager.vmem-check-enabled | Whether virtual memory limits will be enforced for containers | false |
| yarn.resource-types | The resource types to be used for scheduling. Use resource-types.xml to specify details about the individual resource types | — |
| yarn.resourcemanager.bind-host | The actual address the server binds to. If this optional address is set, the RPC and webapp servers bind to this address and the ports specified in yarn.resourcemanager.address and yarn.resourcemanager.webapp.address, respectively | 0.0.0.0 |
| yarn.resourcemanager.cluster-id | Name of the cluster. In the High Availability mode, this parameter is used to ensure that the Resource Manager participates in leader election for this cluster and does not affect other clusters | — |
| yarn.resource-types.memory-mb.increment-allocation | The FairScheduler grants memory in increments of this value. If you submit a task with a resource request that is not a multiple of this value, the request is rounded up to the nearest increment | 1024 |
| yarn.resource-types.vcores.increment-allocation | The FairScheduler grants vcores in increments of this value. If you submit a task with a resource request that is not a multiple of this value, the request is rounded up to the nearest increment | 1 |
| yarn.resourcemanager.ha.enabled | Enables Resource Manager High Availability. When enabled, the Resource Manager starts in the standby mode by default and transitions to the active mode when prompted; the nodes of the Resource Manager ensemble are listed in yarn.resourcemanager.ha.rm-ids | false |
| yarn.resourcemanager.ha.rm-ids | The list of Resource Manager nodes in the cluster when High Availability is enabled. See the description of yarn.resourcemanager.ha.enabled for details on how this list is used | — |
| yarn.resourcemanager.hostname | The host name of the Resource Manager | — |
| yarn.resourcemanager.leveldb-state-store.path | The local path where the Resource Manager state is stored when using org.apache.hadoop.yarn.server.resourcemanager.recovery.LeveldbRMStateStore as the value of yarn.resourcemanager.store.class | /srv/hadoop-yarn/leveldb-state-store |
| yarn.resourcemanager.monitor.capacity.queue-management.monitoring-interval | Time between invocations of the QueueManagementDynamicEditPolicy policy (in milliseconds) | 1500 |
| yarn.resourcemanager.reservation-system.enable | Enables the ReservationSystem in the ResourceManager | false |
| yarn.resourcemanager.reservation-system.planfollower.time-step | The frequency of the PlanFollower timer (in milliseconds). A large value is expected | 1000 |
| Resource scheduler | The type of a pluggable scheduler for Hadoop. Available values: CapacityScheduler, FairScheduler | CapacityScheduler |
| yarn.resourcemanager.scheduler.monitor.enable | Enables a set of periodic monitors (specified in yarn.resourcemanager.scheduler.monitor.policies) that affect the scheduler | false |
| yarn.resourcemanager.scheduler.monitor.policies | The list of SchedulingEditPolicy classes that interact with the scheduler. A particular module may be incompatible with the scheduler, other policies, or a configuration of either | org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy |
| yarn.resourcemanager.monitor.capacity.preemption.observe_only | If set to true, run the policy, but do not affect the cluster with preemption and kill events | false |
| yarn.resourcemanager.monitor.capacity.preemption.monitoring_interval | Time between invocations of the ProportionalCapacityPreemptionPolicy policy (in milliseconds) | 3000 |
| yarn.resourcemanager.monitor.capacity.preemption.max_wait_before_kill | Time between requesting a preemption from an application and killing the container (in milliseconds) | 15000 |
| yarn.resourcemanager.monitor.capacity.preemption.total_preemption_per_round | Maximum percentage of resources preempted in a single round. By controlling this value, one can throttle the pace at which containers are reclaimed from the cluster. After computing the total desired preemption, the policy scales it back within this limit | 0.1 |
| yarn.resourcemanager.monitor.capacity.preemption.max_ignored_over_capacity | Maximum amount of resources above the target capacity ignored for preemption. This defines a deadzone around the target capacity that helps to prevent thrashing and oscillations around the computed target balance. High values slow the time to capacity and (absent natural completions) may prevent convergence to guaranteed capacity | 0.1 |
| yarn.resourcemanager.monitor.capacity.preemption.natural_termination_factor | Given a computed preemption target, account for containers naturally expiring and preempt only this percentage of the delta. This determines the rate of geometric convergence into the deadzone (max_ignored_over_capacity) | 0.2 |
| yarn.resourcemanager.nodes.exclude-path | Path to the file with nodes to exclude | /etc/hadoop/conf/exclude-path.xml |
| yarn.resourcemanager.nodes.include-path | Path to the file with nodes to include | /etc/hadoop/conf/include-path |
| yarn.resourcemanager.recovery.enabled | Enables the Resource Manager to recover state after starting. If set to true, yarn.resourcemanager.store.class must be specified | true |
| yarn.resourcemanager.store.class | The class to use as the persistent store. If org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore is used, the store is implicitly fenced, meaning a single Resource Manager is able to use the store at any point in time | — |
| yarn.resourcemanager.system-metrics-publisher.enabled | Controls whether YARN system metrics are published on the Timeline Server by the Resource Manager | true |
| yarn.scheduler.fair.user-as-default-queue | Defines whether to use the username associated with the allocation as the default queue name in the event that a queue name is not specified. If this is set to false or unset, all jobs have a shared default queue named default | true |
| yarn.scheduler.fair.preemption | Defines whether to use preemption | false |
| yarn.scheduler.fair.preemption.cluster-utilization-threshold | The utilization threshold after which the preemption kicks in. The utilization is computed as the maximum ratio of usage to capacity among all resources | 0.8f |
| yarn.scheduler.fair.sizebasedweight | Defines whether to assign shares to individual apps based on their size, rather than providing an equal share to all apps regardless of size. When set to true, apps are weighted by the natural logarithm of one plus the app's total requested memory, divided by the natural logarithm of 2 | false |
| yarn.scheduler.fair.assignmultiple | Defines whether to allow multiple container assignments in one heartbeat | false |
| yarn.scheduler.fair.dynamic.max.assign | If assignmultiple is true, defines whether to dynamically determine the amount of resources that can be assigned in one heartbeat. When turned on, about half of the unallocated resources on the node are allocated to containers in a single heartbeat | true |
| yarn.scheduler.fair.max.assign | If assignmultiple is true and dynamic.max.assign is false, the maximum number of containers that can be assigned in one heartbeat. The value -1 sets no limit | -1 |
| yarn.scheduler.fair.locality.threshold.node | For applications that request containers on particular nodes, this parameter defines the number of scheduling opportunities since the last container assignment to wait before accepting a placement on another node. Expressed as a floating-point number between 0 and 1, which, as a fraction of the cluster size, is the number of scheduling opportunities to pass up. The value -1.0 means do not pass up any scheduling opportunities | -1.0 |
| yarn.scheduler.fair.locality.threshold.rack | For applications that request containers on particular racks, the number of scheduling opportunities since the last container assignment to wait before accepting a placement on another rack. Expressed as a floating-point number between 0 and 1, which, as a fraction of the cluster size, is the number of scheduling opportunities to pass up. The value -1.0 means do not pass up any scheduling opportunities | -1.0 |
| yarn.scheduler.fair.allow-undeclared-pools | If set to true, new queues can be created at application submission time, whether because they are specified as the application's queue by the submitter or because they are placed there by the user-as-default-queue property. If set to false, any time an application would be placed in a queue that is not specified in the allocations file, it is placed in the default queue instead | true |
| yarn.scheduler.fair.update-interval-ms | Time interval at which to lock the scheduler and recalculate fair shares, recalculate demand, and check whether anything is due for preemption | 500 |
| yarn.scheduler.minimum-allocation-mb | Minimum allocation for every container request at the Resource Manager (in MB). Memory requests lower than this throw an InvalidResourceRequestException | 1024 |
| yarn.scheduler.maximum-allocation-mb | Maximum allocation for every container request at the Resource Manager (in MB). Memory requests higher than this throw an InvalidResourceRequestException | 4096 |
| yarn.scheduler.minimum-allocation-vcores | Minimum allocation for every container request at the Resource Manager, in terms of virtual CPU cores. Requests lower than this throw an InvalidResourceRequestException | 1 |
| yarn.scheduler.maximum-allocation-vcores | Maximum allocation for every container request at the Resource Manager, in terms of virtual CPU cores. Requests higher than this throw an InvalidResourceRequestException | 2 |
| yarn.timeline-service.enabled | On the server side, this parameter indicates whether the Timeline service is enabled. On the client side, it can be used to indicate whether the client wants to use the Timeline service. If it is set on the client side along with security, the YARN client tries to fetch delegation tokens for the Timeline Server | true |
| yarn.timeline-service.hostname | The hostname of the Timeline service web application | — |
| yarn.timeline-service.http-cross-origin.enabled | Enables cross-origin support (CORS) for the Timeline Server | true |
| yarn.webapp.ui2.enable | On the server side, indicates whether the new YARN web UI v2 is enabled | true |
| yarn.resourcemanager.proxy-user-privileges.enabled | If set to true, the Resource Manager has proxy-user privileges and can request new HDFS delegation tokens on behalf of users, which is required by long-running services whose tokens eventually expire | false |
| yarn.resourcemanager.webapp.spnego-principal | The Kerberos principal to be used for the SPNEGO filter of the Resource Manager web UI | HTTP/_HOST@REALM |
| yarn.resourcemanager.webapp.spnego-keytab-file | The Kerberos keytab file to be used for the SPNEGO filter of the Resource Manager web UI | /etc/security/keytabs/HTTP.service.keytab |
| yarn.nodemanager.linux-container-executor.group | The UNIX group that the linux-container-executor should run as | yarn |
| yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled | A flag to enable override of the default Kerberos authentication filter with the RM authentication filter to allow authentication using delegation tokens (with fallback to Kerberos if the tokens are missing). Only applicable when the HTTP authentication type is kerberos | false |
| yarn.resourcemanager.principal | The Kerberos principal for the Resource Manager | yarn-resourcemanager/_HOST@REALM |
| yarn.resourcemanager.keytab | The keytab for the Resource Manager | /etc/security/keytabs/yarn-resourcemanager.service.keytab |
| yarn.resourcemanager.webapp.https.address | The HTTPS address of the Resource Manager web application. If only a host is provided as the value, the web application is served on a random port | ${yarn.resourcemanager.hostname}:8090 |
| yarn.nodemanager.principal | The Kerberos principal for the NodeManager | yarn-nodemanager/_HOST@REALM |
| yarn.nodemanager.keytab | The keytab for the NodeManager | /etc/security/keytabs/yarn-nodemanager.service.keytab |
| yarn.nodemanager.webapp.spnego-principal | The Kerberos principal to be used for the SPNEGO filter of the NodeManager web interface | HTTP/_HOST@REALM |
| yarn.nodemanager.webapp.spnego-keytab-file | The Kerberos keytab file to be used for the SPNEGO filter of the NodeManager web interface | /etc/security/keytabs/HTTP.service.keytab |
| yarn.nodemanager.webapp.https.address | The HTTPS address of the NodeManager web application | 0.0.0.0:8044 |
| yarn.timeline-service.http-authentication.type | Defines the authentication used for the Timeline Server HTTP endpoint. Supported values are simple, kerberos, or the class name of a custom authentication handler | simple |
| yarn.timeline-service.http-authentication.simple.anonymous.allowed | Indicates if anonymous requests are allowed by the Timeline Server when using simple authentication | true |
| yarn.timeline-service.http-authentication.kerberos.keytab | The Kerberos keytab to be used for the Timeline Server (Collector/Reader) HTTP endpoint | /etc/security/keytabs/HTTP.service.keytab |
| yarn.timeline-service.http-authentication.kerberos.principal | The Kerberos principal to be used for the Timeline Server (Collector/Reader) HTTP endpoint | HTTP/_HOST@REALM |
| yarn.timeline-service.principal | The Kerberos principal for the timeline reader. The NodeManager principal is used for the timeline collector, as it runs as an auxiliary service inside the NodeManager | yarn/_HOST@REALM |
| yarn.timeline-service.keytab | The Kerberos keytab for the timeline reader. The NodeManager keytab is used for the timeline collector, as it runs as an auxiliary service inside the NodeManager | /etc/security/keytabs/yarn.service.keytab |
| yarn.timeline-service.delegation.key.update-interval | The update interval for delegation keys (in milliseconds) | 86400000 |
| yarn.timeline-service.delegation.token.renew-interval | The renewal interval for delegation tokens (in milliseconds) | 86400000 |
| yarn.timeline-service.delegation.token.max-lifetime | The maximum delegation token lifetime (in milliseconds) | 86400000 |
| yarn.timeline-service.client.best-effort | Defines whether a failure to obtain a delegation token should be considered an application failure (false) or the client should attempt to continue to publish information without it (true) | false |
| yarn.timeline-service.webapp.https.address | The HTTPS address of the Timeline service web application | ${yarn.timeline-service.hostname}:8190 |
| yarn.http.policy | Configures the HTTP endpoint for YARN daemons. The supported values are HTTP_ONLY (the service is provided only via HTTP) and HTTPS_ONLY (the service is provided only via HTTPS) | HTTP_ONLY |
| yarn.nodemanager.container-executor.class | Name of the container-executor Java class | org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor |
| yarn.nodemanager.recovery.supervised | Specifies whether the NodeManager runs under supervision. If set to true, a NodeManager that supports recovery does not clean up containers on exit, assuming it will be restarted immediately and will recover them | true |
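
For reference, a yarn-site.xml fragment combining a few of the log-aggregation and scheduler parameters from this table might look like the sketch below; the values simply repeat the defaults above and are illustrative, not recommendations:

```xml
<!-- yarn-site.xml: illustrative sketch; values mirror the defaults listed in the table -->
<configuration>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/logs</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>4096</value>
  </property>
</configuration>
```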
CAUTION
In Astra Linux, regular user UIDs can start from 100. For YARN to work correctly on Astra Linux, set the min.user.id parameter to a value not greater than the minimum UID of regular users (for example, 100).
| Parameter | Description | Default value |
|---|---|---|
| banned.users | A comma-separated list of users who cannot run applications | bin |
| min.user.id | Prevents other super-users: users with a UID below this value are not allowed to run containers | 500 |
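
As a minimal sketch of what a rendered container-executor.cfg could contain, assuming the defaults above and the linux-container-executor group listed in the yarn-site.xml table (the exact file is generated from the template configured below):

```ini
# container-executor.cfg: minimal sketch based on the defaults shown above
yarn.nodemanager.linux-container-executor.group=yarn
banned.users=bin
min.user.id=500
# allowed.system.users is a commonly present key; left empty here for illustration
allowed.system.users=
```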
| Parameter | Description | Default value |
|---|---|---|
| yarn.nodemanager.webapp.cross-origin.enabled | Enables cross-origin support for NodeManager web services | true |
| yarn.resourcemanager.webapp.cross-origin.enabled | Enables cross-origin support for ResourceManager web services | true |
| yarn_site.enable_cors.active | Enables CORS (Cross-Origin Resource Sharing) | true |
| Parameter | Description | Default value |
|---|---|---|
| YARN_RESOURCEMANAGER_OPTS | YARN ResourceManager heap memory. Sets the initial (-Xms) and maximum (-Xmx) Java heap size for the ResourceManager | -Xms1G -Xmx8G |
| YARN_NODEMANAGER_OPTS | YARN NodeManager heap memory. Sets the initial (-Xms) and maximum (-Xmx) Java heap size for the NodeManager | — |
| YARN_TIMELINESERVER_OPTS | YARN Timeline Server heap memory. Sets the initial (-Xms) and maximum (-Xmx) Java heap size for the Timeline Server | -Xms700m -Xmx8G |
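
These variables typically end up as export lines in yarn-env.sh; a sketch assuming the default values above:

```shell
# yarn-env.sh: sketch of how the heap options above are commonly exported
export YARN_RESOURCEMANAGER_OPTS="-Xms1G -Xmx8G"
export YARN_TIMELINESERVER_OPTS="-Xms700m -Xmx8G"
```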
| Parameter | Description | Default value |
|---|---|---|
| DECOMMISSIONED | The list of hosts in the DECOMMISSIONED state | — |
| Parameter | Description | Default value |
|---|---|---|
| xasecure.policymgr.clientssl.keystore | Path to the keystore file used by Ranger | — |
| xasecure.policymgr.clientssl.keystore.credential.file | Path to the keystore credentials file | /etc/yarn/conf/ranger-yarn.jceks |
| xasecure.policymgr.clientssl.truststore.credential.file | Path to the truststore credentials file | /etc/yarn/conf/ranger-yarn.jceks |
| xasecure.policymgr.clientssl.truststore | Path to the truststore file used by Ranger | — |
| xasecure.policymgr.clientssl.keystore.password | Password for the keystore file | — |
| xasecure.policymgr.clientssl.truststore.password | Password for the truststore file | — |
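
For illustration, a ranger-yarn-policymgr-ssl.xml fragment could look like the sketch below; the keystore and truststore paths are hypothetical placeholders, while the credential file paths repeat the defaults from the table:

```xml
<!-- ranger-yarn-policymgr-ssl.xml: illustrative sketch; keystore/truststore paths are placeholders -->
<configuration>
  <property>
    <name>xasecure.policymgr.clientssl.keystore</name>
    <value>/etc/yarn/conf/ranger-plugin-keystore.jks</value>
  </property>
  <property>
    <name>xasecure.policymgr.clientssl.keystore.credential.file</name>
    <value>/etc/yarn/conf/ranger-yarn.jceks</value>
  </property>
  <property>
    <name>xasecure.policymgr.clientssl.truststore</name>
    <value>/etc/yarn/conf/ranger-plugin-truststore.jks</value>
  </property>
  <property>
    <name>xasecure.policymgr.clientssl.truststore.credential.file</name>
    <value>/etc/yarn/conf/ranger-yarn.jceks</value>
  </property>
</configuration>
```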
| Parameter | Description | Default value |
|---|---|---|
| HADOOP_JOB_HISTORYSERVER_OPTS | MapReduce History Server heap memory. Sets the initial (-Xms) and maximum (-Xmx) Java heap size for the MapReduce History Server | -Xms700m -Xmx8G |
| Parameter | Description | Default value |
|---|---|---|
| GPU on YARN | Defines whether to use GPU on YARN | false |
| capacity-scheduler.xml | The content of the capacity-scheduler.xml file, which is used by CapacityScheduler | |
| fair-scheduler.xml | The content of the fair-scheduler.xml file, which is used by FairScheduler | |
| Custom mapred-site.xml | In this section, you can define values for custom parameters that are not displayed in the ADCM UI but are allowed in the mapred-site.xml configuration file | — |
| Ranger plugin enabled | Defines whether the Ranger plugin is enabled | false |
| Custom yarn-site.xml | In this section, you can define values for custom parameters that are not displayed in the ADCM UI but are allowed in the yarn-site.xml configuration file | — |
| Custom ranger-yarn-audit.xml | In this section, you can define values for custom parameters that are not displayed in the ADCM UI but are allowed in the ranger-yarn-audit.xml configuration file | — |
| Custom ranger-yarn-security.xml | In this section, you can define values for custom parameters that are not displayed in the ADCM UI but are allowed in the ranger-yarn-security.xml configuration file | — |
| Custom ranger-yarn-policymgr-ssl.xml | In this section, you can define values for custom parameters that are not displayed in the ADCM UI but are allowed in the ranger-yarn-policymgr-ssl.xml configuration file | — |
| Custom mapred-env.sh | In this section, you can define values for custom parameters that are not displayed in the ADCM UI but are allowed in the mapred-env.sh configuration file | — |
| Custom yarn-env.sh | In this section, you can define values for custom parameters that are not displayed in the ADCM UI but are allowed in the yarn-env.sh configuration file | — |
| container-executor.cfg template | The template for the container-executor.cfg configuration file | — |
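
If CapacityScheduler is selected, the capacity-scheduler.xml content mentioned above usually defines the queue hierarchy. The fragment below is a hypothetical sketch of a two-queue layout, not a default shipped with the service:

```xml
<!-- capacity-scheduler.xml: hypothetical sketch of a two-queue layout -->
<configuration>
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default,analytics</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>70</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.analytics.capacity</name>
    <value>30</value>
  </property>
</configuration>
```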
| Parameter | Description | Default value |
|---|---|---|
| Java agent path | Path to the JMX Prometheus Java agent | /usr/lib/adh-utils/jmx/jmx_prometheus_javaagent.jar |
| Prometheus metrics port | Port on which to expose YARN NodeManager metrics in the Prometheus format | 9205 |
| Mapping config path | Path to the metrics mapping configuration file | /etc/hadoop/conf/jmx_yarn_nodemanager_metric_config.yml |
| Mapping config | Metrics mapping configuration file | |
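
These three settings combine into the standard jmx_prometheus_javaagent JVM flag of the form -javaagent:&lt;agent jar&gt;=&lt;port&gt;:&lt;mapping config&gt;. A sketch of what that flag would look like for the NodeManager, assuming the defaults above (how and where the flag is appended to the JVM options is managed by ADCM):

```shell
# Sketch: how the JMX Prometheus exporter settings above combine into a JVM flag
# (the exact variable the flag is appended to is managed by ADCM)
export YARN_NODEMANAGER_OPTS="$YARN_NODEMANAGER_OPTS \
  -javaagent:/usr/lib/adh-utils/jmx/jmx_prometheus_javaagent.jar=9205:/etc/hadoop/conf/jmx_yarn_nodemanager_metric_config.yml"
```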
| Parameter | Description | Default value |
|---|---|---|
| Java agent path | Path to the JMX Prometheus Java agent | /usr/lib/adh-utils/jmx/jmx_prometheus_javaagent.jar |
| Prometheus metrics port | Port on which to expose YARN ResourceManager metrics in the Prometheus format | 9204 |
| Mapping config path | Path to the metrics mapping configuration file | /etc/hadoop/conf/jmx_yarn_resourcemanager_metric_config.yml |
| Mapping config | Metrics mapping configuration file | |
| Parameter | Description | Default value |
|---|---|---|
| Java agent path | Path to the JMX Prometheus Java agent | /usr/lib/adh-utils/jmx/jmx_prometheus_javaagent.jar |
| Prometheus metrics port | Port on which to expose YARN Timeline Server metrics in the Prometheus format | 9206 |
| Mapping config path | Path to the metrics mapping configuration file | /etc/hadoop/conf/jmx_yarn_timelineserver_metric_config.yml |
| Mapping config | Metrics mapping configuration file | |