ADS configuration parameters

This article describes the parameters that can be configured for ADS services via ADCM. To read about the configuration process, refer to the relevant articles: Online installation, Offline installation.

NOTE
Some of the parameters become visible in the ADCM UI only after the Show advanced flag is set.

Kafka

kafka-env.sh

 

Kafka service environment variable settings

Parameter Description Default value

PID_DIR

The directory to store the Kafka process ID

/var/run/kafka

LOG_DIR

The directory for logs

/var/log/kafka

JMX_PORT

Port on which Kafka sends JMX metrics

9999
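
With the defaults above, the rendered kafka-env.sh contains entries similar to the following (a minimal sketch; the actual file generated by ADCM may include additional variables):

export PID_DIR=/var/run/kafka
export LOG_DIR=/var/log/kafka
export JMX_PORT=9999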

server.properties

 

Parameter Description Default value

auto.create.topics.enable

Enables automatic topic creation

false

auto.leader.rebalance.enable

Enables automatic leader balancing in the background at regular intervals

true

queued.max.requests

The number of queued requests allowed before the network threads are blocked

500

num.network.threads

The number of threads used by the server to receive requests from the network and send responses to the network

3

num.io.threads

Sets the number of threads spawned for IO operations

8

unclean.leader.election.enable

Specifies whether replicas that are not in the ISR set can be elected as leader as a last resort, even though doing so may result in data loss

false

offsets.topic.replication.factor

The replication factor for the offsets topic (set higher to ensure availability). Internal topic creation does not occur until the cluster size meets this replication factor requirement

1

transaction.state.log.min.isr

Overrides the min.insync.replicas configuration for a transaction topic

1

transaction.state.log.replication.factor

The replication factor for the transaction topic (set higher to ensure availability). Internal topic creation fails until the cluster size meets this replication factor requirement

1

zookeeper.connection.timeout.ms

The max time that the client waits to establish a connection to ZooKeeper. If not set, the value in zookeeper.session.timeout.ms is used (in ms)

30000

zookeeper.session.timeout.ms

ZooKeeper session timeout (in ms)

30000

zookeeper.sync.time.ms

How far a ZooKeeper follower can be behind a ZooKeeper leader (in ms)

2000

security.inter.broker.protocol

Security protocol used to communicate between brokers

PLAINTEXT

ssl.keystore.location

The location of the keystore file. This is optional for client and can be used for two-way authentication for client

 — 

ssl.keystore.password

The store password for the keystore file. This is optional for client and only needed if ssl.keystore.location is configured

 — 

ssl.key.password

The password of the private key in the keystore file. This is optional for client

 — 

ssl.keystore.type

The file format of the keystore file. This is optional for client

 — 

ssl.truststore.location

The location of the truststore file

 — 

ssl.truststore.password

The store password for the truststore file. This is optional for client and only needed if ssl.truststore.location is configured

 — 

ssl.truststore.type

The file format of the truststore file

 — 

log.dirs

The directories in which the Kafka log data (topic partitions) is stored

/kafka-logs

listeners

A comma-separated list of URIs the broker listens on and their listener names. Only changing the port is supported. Changing the protocol or adding a listener may cause errors

PLAINTEXT://:9092

default.replication.factor

The default replication factor for automatically created topics

1

num.partitions

The default number of log partitions per topic

1

delete.topic.enable

Enables topic deletion. Deleting a topic has no effect if this option is turned off

true

log.retention.hours

The number of hours to keep a log file before deleting it

168

log.roll.hours

The maximum time before a new log segment is rolled out

168

log.cleanup.policy

Log cleanup policy

delete

log.cleanup.interval.mins

The interval at which the log cleaner checks for logs eligible for deletion. A log file is deleted if it has not been modified within the time specified by the log.retention.hours parameter (168 hours by default)

10

log.cleaner.min.compaction.lag.ms

The minimum time a message remains uncompacted in the log. Only applicable for logs that are being compacted (in ms)

0

log.cleaner.delete.retention.ms

The amount of time to retain tombstone markers for log compacted topics (in ms)

86400000

log.cleaner.enable

Enables running the log cleaning process on the server

true

Add key, value

The parameters and values entered in this field override the parameters specified in the ADCM user interface. This field also allows you to set values for any parameters that are not displayed in the interface but are allowed in the server.properties configuration file

 — 
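
For example, to change the broker port and to set a parameter that has no dedicated field in the UI, the resulting server.properties could contain entries like the following (an illustrative sketch; message.max.bytes is used here only as an example of a custom key-value entry):

listeners=PLAINTEXT://:9093
message.max.bytes=10485760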

Parameter Description Default value

Kafka Ranger plugin enabled

Indicates whether Ranger Kafka Plugin is enabled (auto-populated)

false

ranger-kafka-audit.xml

 
Apache Ranger options

Parameter Description Default value

xasecure.audit.destination.solr.batch.filespool.dir

The directory for Solr audit spool

/srv/ranger/kafka_plugin/audit_solr_spool

xasecure.audit.destination.solr.urls

Specifies the Solr URL. Do not set this parameter when using ZooKeeper to connect to Solr

 — 

xasecure.audit.destination.solr.zookeepers

The ZooKeeper connection string used to connect to Solr for Ranger plugin audits

 — 

Add key, value

The parameters and values entered in this field override the parameters specified in the ADCM user interface. This field also allows you to set values for any parameters that are not displayed in the interface but are allowed in the ranger-kafka-audit.xml configuration file

 — 

ranger-kafka-security.xml

 
Apache Ranger options

Parameter Description Default value

ranger.plugin.kafka.policy.rest.url

URL to Ranger Admin

 — 

ranger.plugin.kafka.service.name

Name of the Ranger Service containing policies for this Kafka instance

 — 

ranger.plugin.kafka.policy.cache.dir

The directory where Ranger policies are cached after successful retrieval from the source

/srv/ranger/kafka/policycache

ranger.plugin.kafka.policy.pollIntervalMs

How often to poll for changes in policies (in ms)

30000

ranger.plugin.kafka.policy.rest.client.connection.timeoutMs

Kafka plugin RangerRestClient connection timeout (in ms)

120000

ranger.plugin.kafka.policy.rest.client.read.timeoutMs

Kafka plugin RangerRestClient read timeout (in ms)

30000

ranger.plugin.kafka.policy.rest.ssl.config.file

Location of the file containing SSL data for connecting to Ranger Admin

/etc/kafka/conf/ranger-policymgr-ssl.xml

Add key, value

The parameters and values entered in this field override the parameters specified in the ADCM user interface. This field also allows you to set values for any parameters that are not displayed in the interface but are allowed in the ranger-kafka-security.xml configuration file

 — 

ranger-policymgr-ssl.xml

 
Apache Ranger options

Parameter Description Default value

xasecure.policymgr.clientssl.keystore

The location of the keystore file

 — 

xasecure.policymgr.clientssl.keystore.password

The keystore password

 — 

xasecure.policymgr.clientssl.truststore

The location of the truststore file

 — 

xasecure.policymgr.clientssl.truststore.password

The truststore password

 — 

xasecure.policymgr.clientssl.keystore.credential.file

Location of the keystore credential file

/etc/kafka/conf/keystore.jceks

xasecure.policymgr.clientssl.truststore.credential.file

Location of the truststore credential file

/etc/kafka/conf/truststore.jceks

Add key, value

The parameters and values entered in this field override the parameters specified in the ADCM user interface. This field also allows you to set values for any parameters that are not displayed in the interface but are allowed in the ranger-policymgr-ssl.xml configuration file

 — 

JMX auth

 
Enables authentication for JMX in the Kafka broker (use when it is necessary to protect access to the JMX port of Kafka brokers)

Parameter Description

Username

User name for authentication in JMX

Password

User password for authentication in JMX

JAAS template

 

    The custom jaas.conf file template is intended for specifying the user data used to connect clients of other services to the current service (paths to keytab files, the useTicketCache parameter, and others). For more information, see Configure a custom jaas.conf.

    Default value:

{% if cluster.config.kerberos_client and cluster.config.kerberos_client.enable_kerberos %}
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    useTicketCache=false
    keyTab="{{ cluster.config.kerberos_client.keytab_dir }}/kafka.service.keytab"
    principal="kafka/{{ ansible_fqdn }}@{{ cluster.config.kerberos_client.realm }}";
};
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    useTicketCache=false
    keyTab="{{ cluster.config.kerberos_client.keytab_dir }}/kafka.service.keytab"
    principal="kafka/{{ ansible_fqdn }}@{{ cluster.config.kerberos_client.realm }}";
};
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    useTicketCache=false
    serviceName="kafka"
    keyTab="{{ cluster.config.kerberos_client.keytab_dir }}/kafka.service.keytab"
    principal="kafka/{{ ansible_fqdn }}@{{ cluster.config.kerberos_client.realm }}";
};
{%- elif cluster.config.sasl_plain_auth_default_config is not none %}
    {%- set credential = cluster.config.sasl_plain_auth_default_config.sasl_plain_users_data %}
KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="kafka"
  password="{{ credential['kafka'] }}"
{% for user, password in credential.items() %}
  user_{{ user }}="{{ password }}"{% if loop.index != loop | length %}
{% endif %}
{% endfor %};
};
{% endif %}

 

Kafka Broker component configuration parameters:

log4j properties configuration

 

Parameter Description Default value

log4j.rootLogger

Setting the logging level

INFO

log4j.logger.org.apache.zookeeper

Change to adjust ZooKeeper client logging

INFO

log4j.logger.kafka

Change to adjust the general broker logging level (output to server.log and stdout). See also log4j.logger.org.apache.kafka

INFO

log4j.logger.org.apache.kafka

Change to adjust the general broker logging level (output to server.log and stdout). See also log4j.logger.kafka

INFO

log4j.logger.kafka.request.logger

Change to DEBUG or TRACE to enable request logging

WARN

log4j.logger.kafka.controller

Setting the Kafka controller logging level

TRACE

log4j.logger.kafka.log.LogCleaner

Setting the Kafka log cleaner logging level

INFO

log4j.logger.state.change.logger

Setting the state change logging level

INFO

log4j.logger.kafka.authorizer.logger

Access denials are logged at INFO level, change to DEBUG to also log allowed accesses

INFO

log4j advanced properties configuration

 

Parameter Description Default value

log4j.logger.kafka.network.Processor

Configures the processor network threads logging level

TRACE

log4j.logger.kafka.server.KafkaApis

Configures the KafkaApis logging level (processing requests to the Kafka broker)

TRACE

log4j.logger.kafka.network.RequestChannel

Configures the logging level for requests in a queue

TRACE

log4j_properties_template

 

    The custom log4j_properties file template is intended for specifying custom logging parameters.

    Default value:

{% set kafka_broker_log4j_properties_configuration = services.kafka.BROKER.config.log4j_properties_configuration %}

log4j.rootLogger={{ kafka_broker_log4j_properties_configuration['log4j.rootLogger'] }}, stdout, kafkaAppender

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.requestAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.requestAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.logger.org.apache.zookeeper={{ kafka_broker_log4j_properties_configuration['log4j.logger.org.apache.zookeeper'] }}

log4j.logger.kafka={{ kafka_broker_log4j_properties_configuration['log4j.logger.kafka'] }}
log4j.logger.org.apache.kafka={{ kafka_broker_log4j_properties_configuration['log4j.logger.org.apache.kafka'] }}

log4j.logger.kafka.request.logger={{ kafka_broker_log4j_properties_configuration['log4j.logger.kafka.request.logger'] }}, requestAppender
log4j.additivity.kafka.request.logger=false

{% if services.kafka.BROKER.config.log4j_advanced_properties_configuration['log4j.logger.kafka.network.Processor'] is defined %}
{% set kafka_broker_log4j_advanced_properties_configuration = services.kafka.BROKER.config.log4j_advanced_properties_configuration %}
log4j.logger.kafka.network.Processor={{ kafka_broker_log4j_advanced_properties_configuration['log4j.logger.kafka.network.Processor'] }}, requestAppender
log4j.logger.kafka.server.KafkaApis={{ kafka_broker_log4j_advanced_properties_configuration['log4j.logger.kafka.server.KafkaApis'] }}, requestAppender
log4j.additivity.kafka.server.KafkaApis=false

log4j.logger.kafka.network.RequestChannel$={{ kafka_broker_log4j_advanced_properties_configuration['log4j.logger.kafka.network.RequestChannel'] }}, requestAppender
{% else %}
log4j.logger.kafka.network.RequestChannel$=WARN, requestAppender
{% endif %}
log4j.additivity.kafka.network.RequestChannel$=false


log4j.logger.kafka.controller={{ kafka_broker_log4j_properties_configuration['log4j.logger.kafka.controller'] }}, controllerAppender
log4j.additivity.kafka.controller=false

log4j.logger.kafka.log.LogCleaner={{ kafka_broker_log4j_properties_configuration['log4j.logger.kafka.log.LogCleaner'] }}, cleanerAppender
log4j.additivity.kafka.log.LogCleaner=false

log4j.logger.state.change.logger={{ kafka_broker_log4j_properties_configuration['log4j.logger.state.change.logger'] }}, stateChangeAppender
log4j.additivity.state.change.logger=false

log4j.logger.kafka.authorizer.logger={{ kafka_broker_log4j_properties_configuration['log4j.logger.kafka.authorizer.logger'] }}, authorizerAppender
log4j.additivity.kafka.authorizer.logger=false

tools log4j properties configuration

 

Parameter Description Default value

log4j.rootLogger

Setting the logging level

WARN

tools_log4j_properties_template

 

    The custom tools_log4j_properties file template is intended for specifying custom logging parameters.

    Default value:

{% set kafka_broker_tools_log4j_properties_configuration = services.kafka.BROKER.config.tools_log4j_properties_configuration %}

log4j.rootLogger={{ kafka_broker_tools_log4j_properties_configuration['log4j.rootLogger'] }}, stderr

log4j.appender.stderr=org.apache.log4j.ConsoleAppender
log4j.appender.stderr.layout=org.apache.log4j.PatternLayout
log4j.appender.stderr.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.stderr.Target=System.err

Kafka Connect

Main

 

Parameter Description Default value

tasks.max

The maximum number of tasks that should be created for this connector

10

offset-syncs.topic.replication.factor

Replication factor for internal offset-syncs.topic

1

checkpoint.topic.replication.factor

Replication factor used for internal checkpoint.topic

1

sync.topic.acls.enabled

Enable monitoring of the source cluster for ACL changes

false

kafka-connect-env.sh

 

Parameter Description Default value

KAFKA_HEAP_OPTS

Heap size allocated to the Kafka Connect process

-Xms256M -Xmx2G

LOG_DIR

The directory for logs

/var/log/kafka

KAFKA_LOG4J_OPTS

Environment variable for LOG4J logging configuration

-Dlog4j.configuration=file:/etc/kafka/conf/connect-distributed-log4j.properties

connect-distributed.properties

 

Parameter Description Default value

group.id

A unique string that identifies a Kafka Connect group, to which this connector belongs

mm-connect

rest.port

Port on which the Kafka Connect REST API listens

8083

plugin.path

Path to Kafka Connect plugin

 — 

config.storage.replication.factor

Replication factor for config.storage.topic

1

offset.storage.replication.factor

Replication factor for offset.storage.topic

1

status.storage.replication.factor

Replication factor for status.storage.topic

1

offset.flush.interval.ms

Interval at which to try committing offsets for tasks

1000

key.converter

Converter class for key Connect data

org.apache.kafka.connect.converters.ByteArrayConverter

value.converter

Converter class for value Connect data

org.apache.kafka.connect.converters.ByteArrayConverter

connector.client.config.override.policy

Class name or alias of implementation of ConnectorClientConfigOverridePolicy

None

Add key, value

The parameters and values entered in this field override the parameters specified in the ADCM user interface. This field also allows you to set values for any parameters that are not displayed in the interface but are allowed in the connect-distributed.properties configuration file

 — 

connect-distributed-log4j.properties

 

Parameter Description Default value

rootLogger

Logging level

INFO

MaxBackupIndex

Maximum number of saved files

10

MaxFileSize

Maximum file size

100MB

Setting the structure of the logging configuration file for Kafka Connect

Logger Default package names Default event level

loggers

org.apache.zookeeper

ERROR

org.I0Itec.zkclient

ERROR

org.reflections

ERROR

connect_distributed_log4j_properties_template

 

    The connect_distributed_log4j_properties template is intended for specifying custom logging parameters.

    Default value:

{% set connect_distributed_log4j_properties = services.kafka_connect.config.connect_distributed_log4j_properties_content %}

log4j.rootLogger={{ connect_distributed_log4j_properties['rootLogger'] }}, connectDRFA, connectRFA, stdout

# Send the logs to the console.
#
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout

#
# The `%X{connector.context}` parameter in the layout includes connector-specific and task-specific information
# in the log message, where appropriate. This makes it easier to identify those log messages that apply to a
# specific connector. Simply add this parameter to the log layout configuration below to include the contextual information.
#
#log4j.appender.stdout.layout.ConversionPattern=[%d] %p %X{connector.context}%m (%c:%L)%n
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c:%L)%n
connect.log.pattern=[%d] %p %m (%c:%L)%n

{% for key, value in connect_distributed_log4j_properties['loggers'] | dictsort -%}
log4j.logger.{{ key }}={{ value }}
{% endfor -%}

log4j.appender.connectDRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.connectDRFA.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.connectDRFA.File=${kafka.logs.dir}/connect-distributed.log
log4j.appender.connectDRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.connectDRFA.layout.ConversionPattern=${connect.log.pattern}
log4j.appender.connectDRFA.MaxBackupIndex={{ connect_distributed_log4j_properties['MaxBackupIndex'] }}

log4j.appender.connectRFA=org.apache.log4j.RollingFileAppender
log4j.appender.connectRFA.File=${kafka.logs.dir}/connect-distributed.log
log4j.appender.connectRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.connectRFA.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.connectRFA.Append=true
log4j.appender.connectRFA.MaxBackupIndex={{ connect_distributed_log4j_properties['MaxBackupIndex'] }}
log4j.appender.connectRFA.MaxFileSize={{ connect_distributed_log4j_properties['MaxFileSize'] }}

SSL

 

Parameter Description Default value

ssl.keystore.location

The location of the keystore file. This is optional for client and can be used for two-way authentication for client

 — 

ssl.keystore.password

The store password for the keystore file. This is optional for client and only needed if ssl.keystore.location is configured

 — 

ssl.key.password

The password of the private key in the keystore file. This is optional for client

 — 

ssl.keystore.type

The file format of the keystore file. This is optional for client

 — 

ssl.truststore.location

The location of the truststore file

 — 

ssl.truststore.password

The store password for the truststore file. This is optional for client and only needed if ssl.truststore.location is configured

 — 

ssl.truststore.type

The file format of the truststore file

 — 

JAAS template

 

    The custom jaas.conf file template is intended for specifying the user data used to connect clients of other services to the current service (paths to keytab files, the useTicketCache parameter, and others). For more information, see Configure a custom jaas.conf.

    Default value:

{% if cluster.config.kerberos_client and cluster.config.kerberos_client.enable_kerberos %}
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    useTicketCache=false
    serviceName="kafka"
    keyTab="{{ cluster.config.kerberos_client.keytab_dir }}/kafka-connect.service.keytab"
    principal="kafka-connect/{{ ansible_fqdn }}@{{ cluster.config.kerberos_client.realm }}";
};
{%- elif cluster.config.sasl_plain_auth_default_config is not none %}
    {%- set credential = cluster.config.sasl_plain_auth_default_config.sasl_plain_users_data %}
KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="kafka-connect"
  password="{{ credential['kafka-connect'] }}";
};
{%- endif %}

Kafka-Manager

Main

 

Parameter Description Default value

Kafka-Manager port

Port on which Kafka-Manager listens. Specified via JAVA_OPTS in the kafka-manager-env file

9000
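
Kafka-Manager is a Play framework application, so the port is typically passed as a JVM property via JAVA_OPTS in the kafka-manager-env file, for example (a sketch assuming the standard Play http.port property):

JAVA_OPTS="-Dhttp.port=9000"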

application.conf

 

Parameter Description Default value

ssl_active

Option that is used to enable SSL

false

play.server.https.keyStore.path

The location of the keystore file for the Play framework server

/tmp/keystore.jks

play.server.https.keyStore.password

The keystore password for the Play framework server

 — 

play.server.https.keyStore.type

The file format of the keystore file for the Play framework server

JKS

kafka-manager.consumer.properties.file

Path to the consumer properties file used by Kafka-Manager

/cmak/conf/consumer.properties

Kafka-Manager play.http.secret.key

Secret key for the Play framework server, must be at least 32 characters

 — 

play.http.session.maxAge

Maximum age of Kafka-Manager session cookie

1h

play.i18n.langs

Tags for specifying application languages — specially formatted strings that identify a specific language

en

play.http.requestHandler

The request handler class to use; the default is Play's own implementation of the lowest-level Play framework HTTP pipeline API

play.http.DefaultHttpRequestHandler

play.http.context

The HTTP context path (request context) of the Play framework server

/

play.application.loader

Full name of the Kafka-Manager application loader class

loader.KafkaManagerLoader

pinned-dispatcher.type

Akka dispatcher type

PinnedDispatcher

pinned-dispatcher.executor

Akka dispatcher thread pool executor

thread-pool-executor

application.features

Kafka-Manager application features:

  • KMClusterManagerFeature — allows you to add, update and delete a cluster from Kafka-Manager;

  • KMTopicManagerFeature — allows you to add, update and delete topics from the Kafka cluster;

  • KMPreferredReplicaElectionFeature — allows you to elect a preferred replica for a Kafka cluster;

  • KMReassignPartitionsFeature — allows you to generate partition assignments and reassign partitions.

 — 

akka.loggers

Akka logger class

akka.event.slf4j.Slf4jLogger

akka.loglevel

Akka logging level

INFO

akka.logger-startup-timeout

Logging start timeout

60s

Additional parameters in 'key-value' format

Additional parameters in key-value format

 — 

Parameter Description Default value

Kafka-Manager env

Custom environment variables

 — 

consumer.properties

 

Parameter Description Default value

key.deserializer

API class used to deserialize the entry key

org.apache.kafka.common.serialization.ByteArrayDeserializer

value.deserializer

API class used to deserialize the entry value

org.apache.kafka.common.serialization.ByteArrayDeserializer

ssl.keystore.location

The location of the keystore file. This is optional for client and can be used for two-way authentication for client

/tmp/keystore.jks

ssl.keystore.password

The store password for the keystore file. This is optional for client and only needed if ssl.keystore.location is configured

 — 

ssl.key.password

The password of the private key in the keystore file. This is optional for client

 — 

ssl.keystore.type

Keystore file format. This is optional for client

JKS

ssl.truststore.location

The location of the truststore file

/tmp/truststore.jks

ssl.truststore.password

Password for accessing the truststore

 — 

ssl.truststore.type

The file format of the truststore file

JKS

Default POST data for Kafka cluster

 

Parameters that are used to add a Kafka cluster to the Kafka-Manager service

Parameter Description Default value

Enable JMX Polling

Enables or disables the polling thread for JMX

true

JMX Auth Username

Username for JMX authentication

 — 

JMX Auth Password

Password for JMX authentication

 — 

JMX with SSL

Enables or disables SSL for JMX connections

false

Poll consumer information

Poll consumer information

true

Filter out inactive consumers

Filter out inactive consumers

true

Enable Logkafka

Enables or disables Logkafka

false

Enable Active OffsetCache

Enables Active OffsetCache

true

Display Broker and Topic Size

Defines whether to display broker and topic size

false

brokerViewUpdatePeriodSeconds

Broker view update period (in seconds)

30

clusterManagerThreadPoolSize

Cluster control thread pool size

10

clusterManagerThreadPoolQueueSize

Cluster control thread pool queue size

100

kafkaCommandThreadPoolSize

Kafka command thread pool size

10

kafkaCommandThreadPoolQueueSize

Kafka command thread pool queue size

100

logkafkaCommandThreadPoolSize

Logkafka command thread pool size

10

logkafkaCommandThreadPoolQueueSize

Logkafka command thread pool queue size

100

logkafkaUpdatePeriodSeconds

Logkafka update cycle time (in seconds)

30

partitionOffsetCacheTimeoutSecs

Partition offset cache timeout (in seconds)

5

brokerViewThreadPoolSize

Broker view thread pool size

10

brokerViewThreadPoolQueueSize

Broker view thread pool queue size

1000

offsetCacheThreadPoolSize

Cache offset thread pool size

10

offsetCacheThreadPoolQueueSize

Offset cache thread pool queue size

1000

kafkaAdminClientThreadPoolSize

Kafka admin client thread pool size

10

kafkaAdminClientThreadPoolQueueSize

Kafka admin client thread pool queue size

1000

kafkaManagedOffsetMetadataCheckMillis

Offset metadata check interval (in ms)

30000

kafkaManagedOffsetGroupCacheSize

Offset group cache size

1000000

kafkaManagedOffsetGroupExpireDays

Offset group expiration time (in days)

7

Security Protocol

Security Protocol

PLAINTEXT

SASL Mechanism

The SASL authentication mechanism to use

DEFAULT

jaasConfig

Configurations for JAAS Authentication

 — 

JAAS template

 

    The custom jaas.conf file template is intended for specifying the user data used to connect clients of other services to the current service (paths to keytab files, the useTicketCache parameter, and others). For more information, see Configure a custom jaas.conf.

    Default value:

{% if cluster.config.kerberos_client and cluster.config.kerberos_client.enable_kerberos %}
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    useTicketCache=false
    serviceName=kafka
    principal="kafka-manager/{{ ansible_fqdn }}@{{ cluster.config.kerberos_client.realm }}"
    keyTab="{{ cluster.config.kerberos_client.keytab_dir }}/kafka-manager.service.keytab";
};
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    useTicketCache=false
    serviceName=kafka
    principal="kafka-manager/{{ ansible_fqdn }}@{{ cluster.config.kerberos_client.realm }}"
    keyTab="{{ cluster.config.kerberos_client.keytab_dir }}/kafka-manager.service.keytab";
};
{%- elif cluster.config.sasl_plain_auth_default_config is not none %}
    {%- set credential = cluster.config.sasl_plain_auth_default_config.sasl_plain_users_data %}
KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="kafka-manager"
  password="{{ credential['kafka-manager'] }}";
};

{%- endif %}

 
Kafka-Manager component configuration parameters:

log4j properties configuration

 

    Template for customizing the log4j logging library.

    Default value:

<!--
  ~ Copyright (C) 2009-2015 Typesafe Inc. <http://www.typesafe.com>

  Maintained by ADCM

-->
<!-- The default logback configuration that Play uses if no other configuration is provided -->
<configuration>

    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>/var/log/kafka-manager/application.log</file>
        <encoder>
           <pattern>%date - [%level] - from %logger in %thread %n%message%n%xException%n</pattern>
        </encoder>

        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>/var/log/kafka-manager/application.%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>5</maxHistory>
            <totalSizeCap>5GB</totalSizeCap>
        </rollingPolicy>
    </appender>

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%date - [%level] %logger{15} - %message%n%xException{10}</pattern>
        </encoder>
    </appender>

    <appender name="ASYNCFILE" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="FILE" />
    </appender>

    <appender name="ASYNCSTDOUT" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="STDOUT" />
    </appender>

    <logger name="play" level="INFO" />
    <logger name="application" level="INFO" />
    <logger name="kafka.manager" level="INFO" />

    <!-- Off these ones as they are annoying, and anyway we manage configuration ourself -->
    <logger name="com.avaje.ebean.config.PropertyMapLoader" level="OFF" />
    <logger name="com.avaje.ebeaninternal.server.core.XmlConfigLoader" level="OFF" />
    <logger name="com.avaje.ebeaninternal.server.lib.BackgroundThread" level="OFF" />
    <logger name="com.gargoylesoftware.htmlunit.javascript" level="OFF" />
    <logger name="org.apache.zookeeper" level="INFO"/>

    <root level="WARN">
        <appender-ref ref="ASYNCFILE" />
        <appender-ref ref="ASYNCSTDOUT" />
    </root>

</configuration>

Kafka REST Proxy

Main

 

Parameter Description Default value

rest_listener_port

REST Proxy listener port. Specified as listeners in kafka-rest.properties file

8082
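
With the default port, the corresponding entry in kafka-rest.properties looks roughly as follows (a sketch; the bind address may differ in your installation):

listeners=http://0.0.0.0:8082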

kafka-env.sh

 

Parameter Description Default value

LOG_DIR

The directory to store the logs

/var/log/kafka-rest

JMX_PORT

Port on which Kafka REST Proxy sends JMX metrics

9998

KAFKAREST_HEAP_OPTS

Heap size allocated to the Kafka REST Proxy process

-Xmx1024M

KAFKAREST_JMX_OPTS

JVM options related to JMX (authentication, connection, SSL)

-Dcom.sun.management.jmxremote

-Dcom.sun.management.jmxremote.authenticate=false

-Dcom.sun.management.jmxremote.ssl=false

KAFKAREST_OPTS

Additional JVM options for the Kafka REST Proxy process (for example, the path to the JAAS configuration file)

-Djava.security.auth.login.config=/etc/kafka-rest/jaas_config.conf

Basic Auth properties

 

Parameter Description Default value

authentication.method

Authentication method

BASIC

authentication.roles

Defines a comma-separated list of user roles. To log in to the Kafka REST Proxy server, the authenticated user must belong to at least one of these roles. For more information, see Basic authentication

admin

authentication.realm

Corresponds to a section of the jaas_config.conf file that defines how the server authenticates users; it must be passed as a parameter to the JVM during server startup

KafkaRest
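
The realm refers to a section of the JAAS configuration that reads users from a password.properties file. Assuming the Jetty PropertyFileLoginModule format shown in the JAAS template below, an entry that defines a user with the admin role could look like this (the user name and password are placeholders):

admin: admin-secret,admin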

kafka-rest.properties

 

Parameter Description Default value

id

Unique ID for this REST server instance

kafka-rest-server

consumer.threads

The minimum number of threads to run consumer requests on. Set this value higher than the maximum number of consumers in a single consumer group

50

consumer.request.timeout.ms

The maximum total time to wait for messages for a request if the maximum request size has not yet been reached (in ms)

100

consumer.request.max.bytes

The maximum number of bytes in message keys and values returned by a single request

67108864

fetch.min.bytes

The minimum number of bytes in message keys and values returned by a single request

-1

ssl.endpoint.identification.algorithm

The endpoint identification algorithm to validate the server hostname using the server certificate

 — 

ssl.keystore.location

Used for HTTPS. Location of the keystore file to use for SSL

 — 

ssl.keystore.type

The file format of the keystore

 — 

ssl.keystore.password

Used for HTTPS. The store password for the keystore file

 — 

ssl.key.password

Used for HTTPS. The password of the private key in the keystore file

 — 

ssl.truststore.location

Used for HTTPS. Location of the truststore. Required only to authenticate HTTPS clients

 — 

ssl.truststore.type

The file format of the truststore

 — 

ssl.truststore.password

Used for HTTPS. The store password for the truststore file

 — 

client.ssl.keystore.location

The location of the SSL keystore file

 — 

client.ssl.keystore.password

The password to access the keystore

 — 

client.ssl.key.password

The password of the key contained in the keystore

 — 

client.ssl.keystore.type

The file format of the keystore

 — 

client.ssl.truststore.location

The location of the SSL truststore file

 — 

client.ssl.truststore.password

The password to access the truststore

 — 

client.ssl.truststore.type

The file format of the truststore

 — 

client.ssl.endpoint.identification.algorithm

The endpoint identification algorithm to validate the server

 — 

Add key, value

The parameters and values entered in this field override the parameters specified in the ADCM user interface. This field also allows you to set values for any parameters that are not displayed in the interface but are allowed in the kafka-rest.properties configuration file

 — 

JAAS template

 

    The custom jaas.conf file template is intended for specifying the user data used to connect clients of other services to the current service (paths to keytab files, the useTicketCache parameter, and others). For more information, see Configure a custom jaas.conf.

    Default value:

{% if cluster.config.basic_auth_default_config is not none %}
{{ services.kafka_rest.config.basic_auth_properties_content['authentication.realm'] }} {
  org.eclipse.jetty.jaas.spi.PropertyFileLoginModule required
  file="{{ rest_home_path }}/config/password.properties"
  debug="false";
};
{% endif %}
{% if cluster.config.kerberos_client and cluster.config.kerberos_client.enable_kerberos %}
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    useTicketCache=false
    serviceName="kafka"
    keyTab="{{ cluster.config.kerberos_client.keytab_dir }}/kafka-rest.service.keytab"
    principal="kafka-rest/{{ ansible_fqdn }}@{{ cluster.config.kerberos_client.realm }}";
};
{%- elif cluster.config.sasl_plain_auth_default_config is not none %}
    {%- set credential = cluster.config.sasl_plain_auth_default_config.sasl_plain_users_data %}
KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="kafka-rest"
  password="{{ credential['kafka-rest'] }}";
};
{% endif %}
Principal Propagation

 

Parameter Description Default value

JAAS Entry

A section of the user jaas.conf file that specifies the list of users to authenticate to Kafka. For details, see Work with kafka-rest-security

KafkaClient

ksqlDB

Main

 

Parameter Description Default value

Listener port

ksqlDB server listener port. Specified as listeners in ksql-server.properties file

8088
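
With the default port, the listeners entry in ksql-server.properties looks roughly as follows (a sketch; the bind address may differ in your installation):

listeners=http://0.0.0.0:8088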

ksql-env.sh

 

Parameter Description Default value

LOG_DIR

The directory for storing logs

/var/log/ksql

JMX_PORT

Port on which ksqlDB-server sends JMX metrics

10099

KSQL_HEAP_OPTS

Heap size allocated to the ksqlDB-server process

-Xmx3g

KSQL_JVM_PERFORMANCE_OPTS

JVM performance-related options

-server

-XX:+UseConcMarkSweepGC

-XX:+CMSClassUnloadingEnabled

-XX:+CMSScavengeBeforeRemark

-XX:+ExplicitGCInvokesConcurrent

-XX:NewRatio=1 -Djava.awt.headless=true

CLASSPATH

A setting for the Java Virtual Machine or Java compiler that specifies the location of custom classes and packages

/usr/lib/ksql/libs/*

KSQL_CLASSPATH

Path to Java deployment of ksqlDB Server and related Java classes

${CLASSPATH}

KSQL_OPTS

Environment variable that specifies the configuration settings for the ksqlDB server. Properties set with KSQL_OPTS take precedence over those specified in the ksqlDB configuration file

-Djava.security.auth.login.config=/etc/ksqldb/jaas_config.conf

Basic Auth properties

 

Parameter Description Default value

authentication.method

Authentication method

BASIC

authentication.roles

Defines a comma-separated list of user roles. To log in to the ksqlDB server, the authenticated user must belong to at least one of these roles. For more information, see Basic authentication

admin

authentication.realm

Corresponds to a section of the jaas_config.conf file that defines how the server authenticates users; it must be passed as a parameter to the JVM during server startup

KsqlServer-Props

Path to password.properties

Path to password.properties

/etc/ksqldb/password.properties

server.properties

 

Parameter Description Default value

application.id

Application ID

ksql-server

ksql.internal.topic.replicas

The replication factor for the ksqlDB Server's internal topics

1

ksql.streams.state.dir

The storage directory for stateful operation

/usr/lib/ksql/state

ksql.streams.replication.factor

The replication factor for the underlying internal topics of Kafka Streams

1

ksql.streams.topic.min.insync.replicas

The minimum number of in-sync replicas that must acknowledge writes to the internal topics

2

ksql.streams.num.standby.replicas

Number of replicas for stateful operations

1

ksql.streams.producer.acks

Number of brokers that need to acknowledge receipt of a message before it is considered a successful write

all

ksql.streams.producer.delivery.timeout.ms

The batch expiry (in ms)

2147483647

ksql.streams.producer.max.block.ms

Maximum allowable time for the producer to block (in ms)

9223372036854775000

ssl.endpoint.identification.algorithm

Endpoint identification algorithm for server validation

 — 

ssl.keystore.location

Used for HTTPS. Location of the keystore file to use for SSL

 — 

ssl.keystore.type

The file format of the keystore file

 — 

ssl.keystore.password

Used for HTTPS. The store password for the keystore file

 — 

ssl.key.password

Used for HTTPS. The password of the private key in the keystore file

 — 

ssl.truststore.location

Location of the truststore file

 — 

ssl.truststore.type

File format of the truststore file

 — 

ssl.truststore.password

Used for HTTPS. The store password for the truststore file

 — 

ksql.schema.registry.ssl.keystore.location

The location of the SSL keystore file

 — 

ksql.schema.registry.ssl.keystore.password

The password to access the keystore

 — 

ksql.schema.registry.ssl.key.password

The password of the key contained in the keystore

 — 

ksql.schema.registry.ssl.keystore.type

The file format of the keystore

 — 

ksql.schema.registry.ssl.truststore.location

The location of the SSL truststore file

 — 

ksql.schema.registry.ssl.truststore.password

The password to access the truststore

 — 

ksql.schema.registry.ssl.truststore.type

The file format of the truststore

 — 

Add key, value

The parameters and values entered in this field override the parameters specified in the ADCM user interface. This field also allows you to set values for any parameters that are not displayed in the interface but are allowed in the server.properties configuration file

 — 

connect.properties

 

Parameter Description Default value

group.id

The group ID is a unique identifier for the set of workers

ksql-connect-cluster

key.converter

The converters specify the format of data in Kafka and how to translate it into Connect data

org.apache.kafka.connect.storage.StringConverter

key.converter.schema.registry.url

Schema Registry URL used by the key converter

http://localhost:8081

value.converter

Converter class for value Connect data

io.confluent.connect.avro.AvroConverter

value.converter.schema.registry.url

Schema Registry URL used by the value converter

http://localhost:8081

config.storage.topic

The name of the internal topic for storing configurations

ksql-connect-configs

offset.storage.topic

The internal topic used to store connector offsets

ksql-connect-offsets

status.storage.topic

The internal topic used to store connector and task statuses

ksql-connect-statuses

config.storage.replication.factor

Replication factor for config.storage.topic

1

offset.storage.replication.factor

Replication factor for offset.storage.topic

1

status.storage.replication.factor

Replication factor for status.storage.topic

1

internal.key.converter

A converter class for the internal keys of Connect records

org.apache.kafka.connect.json.JsonConverter

internal.value.converter

A converter class for the internal values of Connect records

org.apache.kafka.connect.json.JsonConverter

internal.key.converter.schemas.enable

Enables schemas for internal Connect data

false

Add key, value

The parameters and values entered in this field override the parameters specified in the ADCM user interface. This field also allows you to set values for any parameters that are not displayed in the interface but are allowed in the connect.properties configuration file

 — 

datagen.properties

 

Parameter Description Default value

interceptor.classes

A list of interceptor classes for the producer. If you are not currently using interceptors, add this entry to the Java Properties object used to create the producer

io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor

Add key, value

The parameters and values entered in this field override the parameters specified in the ADCM user interface. This field also allows you to set values for any parameters that are not displayed in the interface but are allowed in the datagen.properties configuration file

 — 

JAAS template

 

    The custom jaas.conf file template is intended for specifying the user data used to connect clients of other services to the current service (paths to keytab files, the useTicketCache parameter, and others). For more information, see Configure a custom jaas.conf.

    Default value:

{% if cluster.config.basic_auth_default_config is not none %}
{{ services.ksql.config.basic_auth_properties_content['authentication.realm'] }} {
  org.eclipse.jetty.jaas.spi.PropertyFileLoginModule required
  file="{{ ksql_home_path }}/config/password.properties"
  debug="false";
};
{% endif %}
{% if cluster.config.kerberos_client and cluster.config.kerberos_client.enable_kerberos %}
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    useTicketCache=false
    serviceName="kafka"
    keyTab="{{ cluster.config.kerberos_client.keytab_dir }}/ksql-server.service.keytab"
    principal="ksql-server/{{ ansible_fqdn }}@{{ cluster.config.kerberos_client.realm }}";
};
{%- elif cluster.config.sasl_plain_auth_default_config is not none %}
    {%- set credential = cluster.config.sasl_plain_auth_default_config.sasl_plain_users_data %}
KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="ksql-server"
  password="{{ credential['ksql-server'] }}";
};
{% endif %}

 

ksqlDB Server component configuration parameters:

log4j properties configuration

 

Parameter Description Default value

log4j.rootLogger

Setting the logging level

INFO

log4j.logger.org.reflections

Setting the Reflections logging level

ERROR

log4j.logger.org.apache.kafka.streams

Setting the logging level of Kafka Streams

INFO

log4j.logger.kafka

Change to adjust the general broker logging level (output to server.log and stdout). See also log4j.logger.org.apache.kafka

WARN

log4j.logger.org.apache.zookeeper

Change to adjust ZooKeeper client logging

WARN

log4j.logger.org.apache.kafka

Change to adjust the general broker logging level (output to server.log and stdout). See also log4j.logger.kafka

WARN

log4j.logger.org.I0Itec.zkclient

Change to adjust ZooKeeper client logging level

WARN

log4j.logger.io.confluent.ksql.rest.server.resources.KsqlResource

 

Parameter Description Default value

log4j.logger.io.confluent.ksql.rest.server.resources.KsqlResource

Set this level to stop ksqlDB from logging every request it receives

WARN

log4j.logger.io.confluent.ksql.util.KsqlConfig

 

Parameter Description Default value

log4j.logger.io.confluent.ksql.util.KsqlConfig

Enable to avoid the logs being spammed with KsqlConfig values

WARN

log4j_properties_template

 

    Template for customizing the log4j logging library.

    Default value:

# Maintained by ADCM
{% set ksql_server_log4j_properties_configuration = services.ksql.SERVER.config.log4j_properties_configuration %}

log4j.rootLogger={{ ksql_server_log4j_properties_configuration['log4j.rootLogger'] }}, stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c:%L)%n

log4j.appender.streams=org.apache.log4j.ConsoleAppender
log4j.appender.streams.layout=org.apache.log4j.PatternLayout
log4j.appender.streams.layout.ConversionPattern=[%d] %p %m (%c:%L)%n

log4j.logger.org.reflections={{ ksql_server_log4j_properties_configuration['log4j.logger.org.reflections'] }}, stdout

{% if services.ksql.SERVER.config.log4j_logger_io_confluent_ksql_rest_server_resources_KsqlResource['log4j.logger.io.confluent.ksql.rest.server.resources.KsqlResource'] is defined %}
log4j.logger.io.confluent.ksql.rest.server.resources.KsqlResource={{ services.ksql.SERVER.config.log4j_logger_io_confluent_ksql_rest_server_resources_KsqlResource['log4j.logger.io.confluent.ksql.rest.server.resources.KsqlResource'] }}
{% endif %}
{% if services.ksql.SERVER.config.log4j_logger_io_confluent_ksql_util_KsqlConfig['log4j.logger.io.confluent.ksql.util.KsqlConfig'] is defined %}
log4j.logger.io.confluent.ksql.util.KsqlConfig={{ services.ksql.SERVER.config.log4j_logger_io_confluent_ksql_util_KsqlConfig['log4j.logger.io.confluent.ksql.util.KsqlConfig'] }}
{% endif %}

log4j.logger.org.apache.kafka.streams={{ ksql_server_log4j_properties_configuration['log4j.logger.org.apache.kafka.streams'] }}, streams
log4j.additivity.org.apache.kafka.streams=false

log4j.logger.kafka={{ ksql_server_log4j_properties_configuration['log4j.logger.kafka'] }}, stdout
log4j.logger.org.apache.zookeeper={{ ksql_server_log4j_properties_configuration['log4j.logger.org.apache.zookeeper'] }}, stdout
log4j.logger.org.apache.kafka={{ ksql_server_log4j_properties_configuration['log4j.logger.org.apache.kafka'] }}, stdout
log4j.logger.org.I0Itec.zkclient={{ ksql_server_log4j_properties_configuration['log4j.logger.org.I0Itec.zkclient'] }}, stdout

log4j_rolling_properties_template

 

    Template for customizing the log4j_rolling_properties logging file.

    Default value:

# Maintained by ADCM
{% set broker_port = (services.kafka.config.Main.listeners | regex_replace('.*:(\\d+)$', '\\1')) %}
{% set broker_hosts_with_port = services.kafka.config.bootstrap_servers_without_protocol %}
log4j.rootLogger=INFO, main

# appenders
log4j.appender.main=org.apache.log4j.RollingFileAppender
log4j.appender.main.File=${ksql.log.dir}/ksql.log
log4j.appender.main.layout=org.apache.log4j.PatternLayout
log4j.appender.main.layout.ConversionPattern=[%d] %p %m (%c:%L)%n
log4j.appender.main.MaxFileSize=10MB
log4j.appender.main.MaxBackupIndex=5
log4j.appender.main.append=true

log4j.appender.streams=org.apache.log4j.RollingFileAppender
log4j.appender.streams.File=${ksql.log.dir}/ksql-streams.log
log4j.appender.streams.layout=org.apache.log4j.PatternLayout
log4j.appender.streams.layout.ConversionPattern=[%d] %p %m (%c:%L)%n

log4j.appender.kafka=org.apache.log4j.RollingFileAppender
log4j.appender.kafka.File=${ksql.log.dir}/ksql-kafka.log
log4j.appender.kafka.layout=org.apache.log4j.PatternLayout
log4j.appender.kafka.layout.ConversionPattern=[%d] %p %m (%c:%L)%n
log4j.appender.kafka.MaxFileSize=10MB
log4j.appender.kafka.MaxBackupIndex=5
log4j.appender.kafka.append=true

log4j.appender.kafka_appender=org.apache.kafka.log4jappender.KafkaLog4jAppender
log4j.appender.kafka_appender.layout=io.confluent.common.logging.log4j.StructuredJsonLayout
log4j.appender.kafka_appender.BrokerList=
{%- for host_with_port in broker_hosts_with_port.split(',') -%}
    {% if loop.index > 1 %},{% endif -%}
    {{ ('ssl' in cluster.multi_state) | ternary('https', 'http') }}://{{ host_with_port -}}
{% endfor %}

log4j.appender.kafka_appender.Topic=default_ksql_processing_log
log4j.appender.kafka_appender.SyncSend=true
log4j.appender.kafka_appender.IgnoreExceptions=false


{% if cluster.edition == 'enterprise' %}
{% set sasl_protocol = services.kafka.config['listeners_option']['sasl_protocol'] | d('none') %}
{% set ssl_enable = services.kafka.config['listeners_option']['ssl_enable'] | d(False) %}
log4j.appender.kafka_appender.SecurityProtocol={{ sasl_protocol | kafka_protocol(ssl_enable) }}
log4j.appender.kafka_appender.SaslMechanism={{ sasl_protocol | normalize_sasl_protocol }}

{% if sasl_protocol | normalize_sasl_protocol == 'PLAIN' %}
log4j.appender.kafka_appender.clientJaasConf=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username=ksql-server \
    password="{{ cluster.config.sasl_plain_auth_default_config.sasl_plain_users_data['ksql-server'] }}";
{% endif %}

{% if sasl_protocol | normalize_sasl_protocol == 'GSSAPI' %}
log4j.appender.kafka_appender.SaslKerberosServiceName=kafka
log4j.appender.kafka_appender.clientJaasConf=com.sun.security.auth.module.Krb5LoginModule required \
    useKeyTab=true \
    storeKey=true \
    keyTab="{{ cluster.config.kerberos_client.keytab_dir }}/ksql-server.service.keytab" \
    principal="ksql-server/{{ ansible_fqdn }}@{{ cluster.config.kerberos_client.realm }}" \
    serviceName="kafka";
{% endif %}

{% if ssl_enable %}
log4j.appender.kafka_appender.SslTruststoreLocation={{ services.kafka.config.server_properties_content['ssl.truststore.location'] }}
log4j.appender.kafka_appender.SslTruststorePassword={{ services.kafka.config.server_properties_content['ssl.truststore.password'] }}
{% endif %}
{% endif %}
# loggers

log4j.logger.org.reflections=ERROR, main

# Uncomment the following line to stop ksqlDB from logging out each request it receives:
#log4j.logger.io.confluent.ksql.rest.server.resources.KsqlResource=WARN

# And this one to avoid the logs being spammed with KsqlConfig values.
# Though these can be useful for debugging / investigations.
#log4j.logger.io.confluent.ksql.util.KsqlConfig=WARN

## ksqlDB Processing logs:
log4j.logger.processing=WARN, kafka_appender
log4j.additivity.processing=false

## Kafka Streams logs:
log4j.logger.org.apache.kafka.streams=INFO, streams
log4j.additivity.org.apache.kafka.streams=false

## Kafka Clients logs:
log4j.logger.org.apache.kafka.clients=INFO, clients
log4j.additivity.org.apache.kafka.clients=false

## Kafka Connect logs:
log4j.logger.org.apache.kafka.connect=INFO, connect
log4j.additivity.org.apache.kafka.connect=false

## Other Kafka logs:
log4j.logger.kafka=WARN, kafka
log4j.additivity.kafka=false

log4j.logger.org.apache.zookeeper=WARN, kafka
log4j.additivity.org.apache.zookeeper=false

log4j.logger.org.apache.kafka=WARN, kafka
log4j.additivity.org.apache.kafka=false

log4j.logger.org.I0Itec.zkclient=WARN, kafka
log4j.additivity.org.I0Itec.zkclient=false

# To achieve high throughput on pull queries, avoid logging every request from Jetty
log4j.logger.io.confluent.rest-utils.requests=WARN

ksql_migrations_log4j_properties_template

 

    Template for customizing the ksql_migrations_log4j_properties logging file.

    Default value:

# Root logger -- disable all non-migrations-tool logging
log4j.rootLogger=OFF

# Migrations tool logger
log4j.logger.io.confluent.ksql.tools.migrations=INFO, console

log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.Target=System.out
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.consoleAppender.layout.ConversionPattern=[%t] %-5p %c %x - %m%n

log4j_file_properties_template

 

    Template for customizing the log4j_file_properties logging file.

    Default value:

#
# Copyright 2018 Confluent Inc.
#
# Licensed under the Confluent Community License (the "License"); you may not use
# this file except in compliance with the License.  You may obtain a copy of the
# License at
#
# http://www.confluent.io/confluent-community-license
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OF ANY KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations under the License.
#

# For the general syntax of property based configuration files see
# the documentation of org.apache.log4j.PropertyConfigurator.

log4j.rootLogger=WARN, default.file

log4j.appender.default.file=io.confluent.ksql.util.TimestampLogFileAppender
log4j.appender.default.file.ImmediateFlush=true
log4j.appender.default.file.append=false

log4j.appender.default.file.file=${ksql.log.dir}/ksql-cli/cli-%timestamp.log
log4j.appender.default.file.layout=org.apache.log4j.PatternLayout
log4j.appender.default.file.layout.ConversionPattern=[%d] %p %m (%c:%L)%n

log4j_silent_properties_template

 

    Template for customizing the log4j_silent_properties logging file.

    Default value:

#
# Copyright 2018 Confluent Inc.
#
# Licensed under the Confluent Community License (the "License"); you may not use
# this file except in compliance with the License.  You may obtain a copy of the
# License at
#
# http://www.confluent.io/confluent-community-license
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OF ANY KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations under the License.
#

log4j.rootLogger=OFF

MiNiFi

Main

 

Parameter Description Default value

MiNiFi C2 Server port

MiNiFi C2 Server HTTP port

10080

nifi.minifi.notifier.ingestors.pull.http.query

Query string to pull configuration

minifi

minifi-env.sh

 

Parameter Description Default value

MINIFI_HOME

The directory for installing MiNiFi

/usr/lib/minifi

MINIFI_PID_DIR

The directory to store the MiNiFi process ID

/var/run/minifi

MINIFI_LOG_DIR

The directory to store the logs

/var/log/minifi

MiNiFi Agent bootstrap.conf

 

Parameter Description Default value

MiNiFi Agent Heap size

Agent heap size

256m

nifi.minifi.notifier.ingestors.pull.http.period.ms

Update check period (in ms)

300000

nifi.minifi.status.reporter.log.query

Query string for reporting the status of a MiNiFi instance (see the sketch after this table). Supported query parts:

  • health — the instance run status, active threads, presence or absence of bulletins, and any validation errors;

  • bulletins — a list of all current bulletins (if any);

  • stats — the current state of the instance, including bytes read/written and FlowFiles sent/transferred.

instance:

  • health;

  • bulletins.

nifi.minifi.status.reporter.log.level

Log level at which the status is logged

INFO

nifi.minifi.status.reporter.log.period

Delay between each request (in ms)

60000

nifi.minifi.security.keystore

The full path and name of the keystore

 — 

nifi.minifi.security.keystoreType

The keystore type

 — 

nifi.minifi.security.keystorePasswd

The keystore password

 — 

nifi.minifi.security.keyPasswd

The key password

 — 

nifi.minifi.security.truststore

The full path and name of the truststore

 — 

nifi.minifi.security.truststoreType

The truststore type

 — 

nifi.minifi.security.truststorePasswd

The truststore password

 — 

nifi.minifi.security.ssl.protocol

Security protocol

 — 

nifi.minifi.notifier.ingestors.pull.http.keystore.location

The full path and name of the keystore

 — 

nifi.minifi.notifier.ingestors.pull.http.keystore.type

The keystore type

 — 

nifi.minifi.notifier.ingestors.pull.http.keystore.password

The keystore password

 — 

nifi.minifi.notifier.ingestors.pull.http.truststore.location

The full path and name of the truststore

 — 

nifi.minifi.notifier.ingestors.pull.http.truststore.type

The truststore type

 — 

nifi.minifi.notifier.ingestors.pull.http.truststore.password

The truststore password

 — 
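
For orientation, below is a minimal sketch of how the status reporter and update check settings from this table might look once rendered into the agent bootstrap.conf. It uses only the defaults listed above, with the default query written in its comma-separated form; ADCM writes additional properties that are not shown here.

# Check the C2 server for an updated configuration every 5 minutes (300000 ms)
nifi.minifi.notifier.ingestors.pull.http.period.ms=300000

# Log instance health and bulletins at INFO level every 60 seconds (60000 ms)
nifi.minifi.status.reporter.log.query=instance:health,bulletins
nifi.minifi.status.reporter.log.level=INFO
nifi.minifi.status.reporter.log.period=60000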

MiNiFi Agent logback.xml

 
Settings for logging levels and log rotation for MiNiFi

Parameter Description Default value

app_file_max_history

Maximum number of files for applications

10

boot_file_max_history

Maximum number of files for Boot

5

status_file_max_history

Maximum number of files for statuses

5

root_level

Event Level

INFO

Setting the structure of the logging configuration file for MiNiFi

Logger Default package names Default event level

app_loggers

org.apache.nifi

INFO

org.apache.nifi.processors

WARN

org.apache.nifi.processors.standard.LogAttribute

INFO

org.apache.nifi.controller.repository.StandardProcessSession

WARN

bootstrap_loggers

org.apache.nifi.bootstrap

INFO

org.apache.nifi.bootstrap.Command

INFO

org.apache.nifi.StdOut

INFO

org.apache.nifi.StdErr

ERROR

status_loggers

org.apache.nifi.minifi.bootstrap.status.reporters.StatusLogger

INFO

MiNiFi Agent logback.xml template

 

    Template for customizing the MiNiFi Agent logback.xml file.

    Default value:

<?xml version="1.0" encoding="UTF-8"?>
<!--
  Maintained by ADCM
-->
<configuration scan="true" scanPeriod="30 seconds">
    <contextListener class="ch.qos.logback.classic.jul.LevelChangePropagator">
        <resetJUL>true</resetJUL>
    </contextListener>
    {% set logback = services.minifi.config['minifi_agent_logback_content'] %}
    <appender name="APP_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${org.apache.nifi.minifi.bootstrap.config.log.dir}/minifi-app.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <!--
              For daily rollover, use 'app_%d.log'.
              For hourly rollover, use 'app_%d{yyyy-MM-dd_HH}.log'.
              To GZIP rolled files, replace '.log' with '.log.gz'.
              To ZIP rolled files, replace '.log' with '.log.zip'.
            -->
            <fileNamePattern>${org.apache.nifi.minifi.bootstrap.config.log.dir}/minifi-app_%d{yyyy-MM-dd_HH}.%i.log.gz</fileNamePattern>
            <!-- Keep 10 rolling periods worth of log files-->
            <maxHistory>{{ logback.app_file_max_history }}</maxHistory>
            <!-- Max size each log file will be-->
            <maxFileSize>1MB</maxFileSize>
            <!-- Provide a cap of 10 MB across all archive files -->
            <totalSizeCap>10MB</totalSizeCap>
        </rollingPolicy>
        <immediateFlush>true</immediateFlush>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="BOOTSTRAP_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${org.apache.nifi.minifi.bootstrap.config.log.dir}/minifi-bootstrap.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!--
              For daily rollover, use 'user_%d.log'.
              For hourly rollover, use 'user_%d{yyyy-MM-dd_HH}.log'.
              To GZIP rolled files, replace '.log' with '.log.gz'.
              To ZIP rolled files, replace '.log' with '.log.zip'.
            -->
            <fileNamePattern>${org.apache.nifi.minifi.bootstrap.config.log.dir}/minifi-bootstrap_%d.log.gz</fileNamePattern>
            <!-- Keep 5 rolling periods worth of logs-->
            <maxHistory>{{ logback.boot_file_max_history }}</maxHistory>
        </rollingPolicy>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
        </encoder>
    </appender>

        <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
                <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
        </encoder>
        </appender>

    <appender name="STATUS_LOG_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${org.apache.nifi.minifi.bootstrap.config.log.dir}/minifi-status.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!--
            For daily rollover, use 'user_%d.log'.
            For hourly rollover, use 'user_%d{yyyy-MM-dd_HH}.log'.
            To GZIP rolled files, replace '.log' with '.log.gz'.
            To ZIP rolled files, replace '.log' with '.log.zip'.
            -->
            <fileNamePattern>${org.apache.nifi.minifi.bootstrap.config.log.dir}/minifi-status_%d.log</fileNamePattern>
            <!-- keep 5 log files worth of history -->
            <maxHistory>{{ logback.status_file_max_history }}</maxHistory>
        </rollingPolicy>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
        </encoder>
    </appender>


    <!-- valid logging levels: TRACE, DEBUG, INFO, WARN, ERROR -->

    {% for key, value in logback.app_loggers | dictsort -%}
    <logger name="{{ key }}" level="{{ value }}"/>
    {% endfor -%}

    <!-- Logger for managing logging statements for jetty -->
    <logger name="org.eclipse.jetty" level="INFO"/>

    <!-- Suppress non-error messages due to excessive logging by class or library -->
    <logger name="com.sun.jersey.spi.container.servlet.WebComponent" level="ERROR"/>
    <logger name="com.sun.jersey.spi.spring" level="ERROR"/>
    <logger name="org.springframework" level="ERROR"/>

    <!-- Suppress non-error messages due to known warning about redundant path annotation (NIFI-574) -->
    <logger name="com.sun.jersey.spi.inject.Errors" level="ERROR"/>

    <!--
        Logger for capturing Bootstrap logs and MiNiFi's standard error and standard out.
    -->

    {% for key, value in logback.bootstrap_loggers | dictsort -%}
    <logger name="{{ key }}" level="{{ value }}" additivity="false">
        <appender-ref ref="BOOTSTRAP_FILE"/>
    {% if key == "org.apache.nifi.minifi.bootstrap.Command" %}
        <appender-ref ref="CONSOLE" />
    {% endif -%}
    </logger>
    {% endfor -%}

    {% for key, value in logback.status_loggers | dictsort -%}
    <logger name="{{ key }}" level="{{ value }}" additivity="false">
        <appender-ref ref="STATUS_LOG_FILE" />
    </logger>
    {% endfor -%}

    <root level="{{ logback.root_level }}">
        <appender-ref ref="APP_FILE"/>
    </root>

</configuration>
MiNiFi Agent state-management.xml template

 

    Template for customizing the MiNiFi Agent state-management.xml file.

    Default value:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!--
  Maintained by ADCM
-->
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at
      http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<!--
  This file provides a mechanism for defining and configuring the State Providers
  that should be used for storing state locally and across a NiFi cluster. In order
  to use a specific provider, it must be configured here and its identifier
  must be specified in the nifi.properties file.
-->
<stateManagement>
    <!--
        State Provider that stores state locally in a configurable directory. This Provider requires the following properties:

        Directory - the directory to store components' state in. If the directory being used is a sub-directory of the NiFi installation, it
                    is important that the directory be copied over to the new version when upgrading NiFi.
     -->
    <local-provider>
        <id>local-provider</id>
        <class>org.apache.nifi.controller.state.providers.local.WriteAheadLocalStateProvider</class>
        <property name="Directory">./state/local</property>
    </local-provider>

    <!--
        State Provider that is used to store state in ZooKeeper. This Provider requires the following properties:

        Root Node - the root node in ZooKeeper where state should be stored. The default is '/nifi', but it is advisable to change this to a different value if not using
                   the embedded ZooKeeper server and if multiple NiFi instances may all be using the same ZooKeeper Server.

        Connect String - A comma-separated list of host:port pairs to connect to ZooKeeper. For example, myhost.mydomain:2181,host2.mydomain:5555,host3:6666

        Session Timeout - Specifies how long this instance of NiFi is allowed to be disconnected from ZooKeeper before creating a new ZooKeeper Session. Default value is "30 seconds"

        Access Control - Specifies which Access Controls will be applied to the ZooKeeper ZNodes that are created by this State Provider. This value must be set to one of:
                            - Open  : ZNodes will be open to any ZooKeeper client.
                            - CreatorOnly  : ZNodes will be accessible only by the creator. The creator will have full access to create children, read, write, delete, and administer the ZNodes.
                                             This option is available only if access to ZooKeeper is secured via Kerberos or if a Username and Password are set.

        Username - An optional username that can be used to assign Access Controls to ZNodes. ZooKeeper allows users to assign arbitrary usernames and passwords to ZNodes. These usernames
                   and passwords are not explicitly defined elsewhere but are simply associated with ZNodes, so it is important that all NiFi nodes in a cluster have the same value for the
                   Username and Password properties.

        Password - An optional password that can be used to assign Access Controls to ZNodes. This property must be set if the Username property is set. NOTE: ZooKeeper transmits passwords
                   in plain text. As a result, a Username and Password should be used only if communicate with a ZooKeeper on a localhost or over encrypted comms (such as configuring SSL
                   communications with ZooKeeper).
    -->
    <cluster-provider>
        <id>zk-provider</id>
        <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
        <property name="Connect String"></property>
        <property name="Root Node">/nifi</property>
        <property name="Session Timeout">30 seconds</property>
        <property name="Access Control">CreatorOnly</property>
        <property name="Username">nifi</property>
        <property name="Password">nifi</property>
    </cluster-provider>
</stateManagement>
MiNiFi C2 Server c2.properties

 

Parameter Description Default value

minifi.c2.server.secure

Defines whether MiNiFi C2 is secure

 — 

minifi.c2.server.keystore

The full path and name of the keystore

 — 

minifi.c2.server.keystoreType

The keystore type

 — 

minifi.c2.server.keystorePasswd

The keystore password

 — 

minifi.c2.server.keyPasswd

The key password

 — 

minifi.c2.server.truststore

The full path and name of the truststore

 — 

minifi.c2.server.truststoreType

The truststore type

 — 

minifi.c2.server.truststorePasswd

The truststore password

 — 

Add key, value

The parameters and their values entered in this field override the parameters specified in the ADCM user interface. This field also allows you to set values for any user parameters that are not displayed in the interface but are allowed in the MiNiFi C2 Server c2.properties configuration file

 — 

MiNiFi C2 Server logback.xml

 

Parameter Description Default value

log_file_max_history

Maximum number of files for applications

10

root_level

Event Level

INFO

Setting the structure of the logging configuration file for MiNiFi C2 Server

Logger Default package names Default event level

log_file_loggers

org.apache.nifi.minifi.c2

DEBUG

MiNiFi C2 Server authorizations.yaml

 

    Template for customizing the MiNiFi C2 Server authorizations.yaml file.

    Default value:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the \"License\"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an \"AS IS\" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

Default Action: deny
Paths:
  /c2/config:
    Default Action: deny
    Actions:
    - Authorization: CLASS_RASPI_3
      Query Parameters:
        class: raspi3
      Action: allow
    - Authorization: ROLE_SUPERUSER
      Action: allow

    # Default authorization lets anonymous pull any config.  Remove below to change that.
    - Authorization: ROLE_ANONYMOUS
      Action: allow

  /c2/config/contentTypes:
    Default Action: deny
    Actions:
    - Authorization: CLASS_RASPI_3
      Action: allow
    # Default authorization lets anonymous pull any config.  Remove below to change that.
    - Authorization: ROLE_ANONYMOUS
      Action: allow

  /c2/config/heartbeat:
    Default Action: deny
    Actions:
      - Authorization: CLASS_RASPI_3
        Query Parameters:
          class: raspi3
        Action: allow
      - Authorization: ROLE_SUPERUSER
        Action: allow

      # Default authorization lets anonymous pull any config.  Remove below to change that.
      - Authorization: ROLE_ANONYMOUS
        Action: allow

  /c2/config/acknowledge:
    Default Action: deny
    Actions:
      - Authorization: CLASS_RASPI_3
        Query Parameters:
          class: raspi3
        Action: allow
      - Authorization: ROLE_SUPERUSER
        Action: allow

      # Default authorization lets anonymous pull any config.  Remove below to change that.
      - Authorization: ROLE_ANONYMOUS
        Action: allow
MiNiFi C2 Server logback.xml template

 

    Template for customizing the MiNiFi C2 Server logback.xml file.

    Default value:

<?xml version="1.0" encoding="UTF-8"?>
<!--
  Maintained by ADCM
-->
{% set logback = services.minifi.config['minifi_c2_server_logback_content'] -%}
<configuration scan="true" scanPeriod="30 seconds">
    <contextListener class="ch.qos.logback.classic.jul.LevelChangePropagator">
        <resetJUL>true</resetJUL>
    </contextListener>
        <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
                <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
        </encoder>
        </appender>

    <appender name="LOG_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>/var/log/minifi-c2/minifi-c2.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>/var/log/minifi-c2/minifi-c2_%d.log</fileNamePattern>
            <!-- keep 5 log files worth of history -->
            <maxHistory>{{ logback.log_file_max_history }}</maxHistory>
        </rollingPolicy>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
        </encoder>
    </appender>

    {% for key, value in logback.log_file_loggers | dictsort -%}
    <logger name="{{ key }}" level="{{ value }}">
        <appender-ref ref="LOG_FILE"/>
    {% if key == "org.apache.nifi.minifi.c2" %}
        <appender-ref ref="CONSOLE" />
    {% endif -%}
    </logger>
    {% endfor -%}

    <root level="{{ logback.root_level }}">
        <appender-ref ref="LOG_FILE"/>
    </root>

</configuration>
Parameter Description Default value

Set service checks

Specifies whether to check availability after cluster installation

true

Monitoring Clients

Parameter Description Default value

Protocol

A transport protocol for sending metrics to the monitoring cluster. Possible values are TCP and UDP. The UDP protocol is supported by monitoring clusters of version 2.8 or later

TCP

NiFi

Parameter Description Default value

Nifi config encryption password

The password from which to derive the key for encrypting the sensitive properties. Must be at least 12 characters long

0123456789ABC

Main

 

Parameter Description Default value

Nifi UI port

NiFi Server HTTP port. Specified as the nifi.web.http.port property in the nifi.properties configuration file

9090

Nifi server Heap size

Heap size for Nifi server. Specified in the bootstrap.conf configuration file

1024m

Nifi Registry UI

Nifi Registry HTTP port. Specified as the nifi.registry.web.http.port property in the nifi-registry.properties configuration file

18080

Nifi Registry Heap size

Heap size for Nifi Registry. Specified in the bootstrap.conf configuration file

512m

nifi.queue.backpressure.count

The default value for the number of FlowFiles (the underlying NiFi processing objects) that can be queued before backpressure is applied, i.e. the source stops sending data. The value must be an integer

10000

nifi.queue.backpressure.size

The default value for the maximum amount of data that can be queued before backpressure is applied. The value must be a data size that includes the unit of measure (see the example after this table)

1 GB
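
As a brief illustration, the two backpressure defaults above translate into nifi.properties entries as follows: the count is a plain integer, while the size includes its unit of measure.

nifi.queue.backpressure.count=10000
nifi.queue.backpressure.size=1 GB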

Directories

 

NiFi service repositories location options

Parameter Description Default value

nifi.flowfile.repository.directory

FlowFile repository location

/usr/lib/nifi-server/flowfile_repository

nifi.content.repository.directory

Content repository location

/usr/lib/nifi-server/content_repository

nifi.provenance.repository.directory

Provenance repository location

/usr/lib/nifi-server/provenance_repository

nifi.database.directory

H2 database directory location

/usr/lib/nifi-server/database_repository

nifi.registry.db.directory

Location of the Registry database directory

/usr/lib/nifi-registry/database

nifi.nar.library.directory.lib

Use this parameter to add custom NARs (see the example below)

 — 
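
For example, a hypothetical entry pointing NiFi at an extra directory with custom NAR bundles could look as follows; the path is a placeholder chosen purely for illustration.

# Additional NAR library directory for custom processors
nifi.nar.library.directory.lib=/opt/nifi/custom-nars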

Parameter Description Default value

NiFi Ranger plugin enabled

Indicates whether Ranger NiFi Plugin is enabled (auto-populated)

false

ranger-nifi-audit.xml

 

Parameter Description Default value

xasecure.audit.destination.solr.batch.filespool.dir

The directory for Solr audit spool

/srv/ranger/nifi_plugin/audit_solr_spool

xasecure.audit.destination.solr.urls

Specifies the Solr URL. Do not set it when using ZooKeeper to connect to Solr

 — 

xasecure.audit.destination.solr.zookeepers

ZooKeeper connection string for the Solr destination

 — 

xasecure.audit.destination.solr.force.use.inmemory.jaas.config

Specifies whether ZooKeeper connections to Solr should use the in-memory JAAS configuration

true

xasecure.audit.jaas.Client.loginModuleControlFlag

Specifies whether the success of the module is required, requisite, sufficient, or optional

 — 

xasecure.audit.jaas.Client.loginModuleName

Class name of the authentication technology used

 — 

xasecure.audit.jaas.Client.option.keyTab

Set this to the file name of the keytab to get the principal’s secret key

 — 

xasecure.audit.jaas.Client.option.serviceName

Service name

 — 

xasecure.audit.jaas.Client.option.storeKey

Enable if you want the keytab or the principal’s key to be stored in the Subject’s private credentials

true

xasecure.audit.jaas.Client.option.useKeyTab

Enable if you want the module to get the principal’s key from the keytab

true

Add key, value

The parameters and their values entered in this field override the parameters specified in the ADCM user interface. This field also allows you to set values for any user parameters that are not displayed in the interface but are allowed in the ranger-nifi-audit.xml configuration file

 — 

ranger-nifi-security.xml

 

Parameter Description Default value

ranger.plugin.nifi.policy.rest.url

URL to Ranger Admin

 — 

ranger.plugin.nifi.service.name

Name of the Ranger service containing policies for this NiFi instance

 — 

ranger.plugin.nifi.policy.source.impl

Class to retrieve policies from the source

org.apache.ranger.admin.client.RangerAdminRESTClient

ranger.plugin.nifi.policy.cache.dir

Directory where Ranger policies are cached after successful retrieval from the source

/srv/ranger/nifi/policycache

ranger.plugin.nifi.policy.pollIntervalMs

How often to poll for changes in policies

30000

ranger.plugin.nifi.policy.rest.client.connection.timeoutMs

NiFi plugin RangerRestClient connection timeout in milliseconds

120000

ranger.plugin.nifi.policy.rest.client.read.timeoutMs

NiFi plugin RangerRestClient read timeout in milliseconds

30000

ranger.plugin.nifi.policy.rest.ssl.config.file

Path to the file containing SSL details to contact Ranger Admin

/etc/nifi/conf/ranger-nifi-policymgr-ssl.xml

Add key, value

The parameters and their values entered in this field override the parameters specified in the ADCM user interface. This field also allows you to set values for any user parameters that are not displayed in the interface but are allowed in the ranger-nifi-security.xml configuration file

 — 

ranger-nifi-policymgr-ssl.xml

 

Parameter Description Default value

xasecure.policymgr.clientssl.keystore

The location of the keystore file

 — 

xasecure.policymgr.clientssl.keystore.password

The keystore password

 — 

xasecure.policymgr.clientssl.truststore

The location of the truststore file

 — 

xasecure.policymgr.clientssl.truststore.password

The truststore password

 — 

xasecure.policymgr.clientssl.keystore.credential.file

Location of the keystore password credential file

/etc/nifi/conf/keystore.jceks

xasecure.policymgr.clientssl.truststore.credential.file

Location of the truststore password credential file

/etc/nifi/conf/truststore.jceks

Add key, value

The parameters and their values entered in this field override the parameters specified in the ADCM user interface. This field also allows you to set values for any user parameters that are not displayed in the interface but are allowed in the ranger-nifi-policymgr-ssl.xml configuration file

 — 

ranger-nifi-registry-audit.xml

 

Parameter Description Default value

xasecure.audit.destination.solr.batch.filespool.dir

The directory for Solr audit spool

/srv/ranger/nifi_registry_plugin/audit_solr_spool

xasecure.audit.destination.solr.urls

Specifies Solr URL

 — 

xasecure.audit.destination.solr.zookeepers

ZooKeeper connection string for the Solr destination

 — 

xasecure.audit.destination.solr.force.use.inmemory.jaas.config

Specifies whether ZooKeeper connections to Solr should use the in-memory JAAS configuration

 — 

xasecure.audit.jaas.Client.loginModuleControlFlag

Specifies whether the success of the module is required, requisite, sufficient, or optional

 — 

xasecure.audit.jaas.Client.loginModuleName

Class name of the authentication technology used

 — 

xasecure.audit.jaas.Client.option.keyTab

Set this to the file name of the keytab to get the principal’s secret key

 — 

xasecure.audit.jaas.Client.option.serviceName

Service name

 — 

xasecure.audit.jaas.Client.option.storeKey

Set this to true if you want the keytab or the principal’s key to be stored in the Subject’s private credentials

 — 

xasecure.audit.jaas.Client.option.useKeyTab

Set this to true if you want the module to get the principal’s key from the keytab

 — 

Add key, value

The parameters and their values entered in this field override the parameters specified in the ADCM user interface. This field also allows you to set values for any user parameters that are not displayed in the interface but are allowed in the ranger-nifi-registry-audit.xml configuration file

 — 

ranger-nifi-registry-security.xml

 

Parameter Description Default value

ranger.plugin.nifi-registry.policy.rest.url

URL to Ranger Admin used by the NiFi Registry plugin

 — 

ranger.plugin.nifi-registry.service.name

Name of the Ranger service containing policies for this NiFi-registry instance

 — 

ranger.plugin.nifi-registry.policy.source.impl

Class to retrieve policies from the source

org.apache.ranger.admin.client.RangerAdminRESTClient

ranger.plugin.nifi-registry.policy.cache.dir

The directory where Ranger policies are cached after successful retrieval from the source

/srv/ranger/nifi-registry/policycache

ranger.plugin.nifi-registry.policy.pollIntervalMs

How often to poll for changes in policies (in ms)

30000

ranger.plugin.nifi-registry.policy.rest.client.connection.timeoutMs

Nifi-registry plugin RangerRestClient connection timeout (in ms)

120000

ranger.plugin.nifi-registry.policy.rest.client.read.timeoutMs

Nifi-registry plugin RangerRestClient read timeout (in ms)

30000

ranger.plugin.nifi-registry.policy.rest.ssl.config.file

Path to the file containing SSL details to contact Ranger Admin

/etc/nifi-registry/conf/ranger-policymgr-ssl.xml

Add key, value

The parameters and their values entered in this field override the parameters specified in the ADCM user interface. This field also allows you to set values for any user parameters that are not displayed in the interface but are allowed in the ranger-nifi-registry-security.xml configuration file

 — 

ranger-nifi-registry-policymgr-ssl.xml

 

Parameter Description Default value

xasecure.policymgr.clientssl.keystore

The location of the keystore file

 — 

xasecure.policymgr.clientssl.keystore.password

The keystore password

 — 

xasecure.policymgr.clientssl.truststore

The location of the truststore file

 — 

xasecure.policymgr.clientssl.truststore.password

The truststore password

 — 

xasecure.policymgr.clientssl.keystore.credential.file

Location of keystore password credential file

/etc/nifi-registry/conf/keystore.jceks

xasecure.policymgr.clientssl.truststore.credential.file

Location of the truststore password credential file

/etc/nifi-registry/conf/truststore.jceks

Add key, value

The parameters and their values entered in this field override the parameters specified in the ADCM user interface. This field also allows you to set values for any user parameters that are not displayed in the interface but are allowed in the ranger-nifi-registry-policymgr-ssl.xml configuration file

 — 

authorizers.xml

 

Parameter Description Default value

DN NiFi’s nodes list

List of user and system identities to seed the Users File. These fields are required when enabling SSL for the first time. The list must include not only the DName of the NiFi Server component, but also the DName of the NiFi Registry and the DNames of the MiNiFi service components. For example, for an SSL-enabled cluster that initially consists of NiFi Server only, adding a MiNiFi service or a Schema Registry extension requires supplementing this list with the new DNames (see the sketch after this table). Example for a node: CN=nifi_node_hostname, OU=Arenadata, O=Arenadata, L=Moscow, ST=Moscow, C=RU

 — 

NiFi Initial Admin

ID of the primary administrator user who will be granted access to the user interface and the ability to create additional users, groups, and policies. The value of this property can be:

  • full user DN when setting Identity Strategy value of LDAP Login Identity Provider group to USE_DN;

  • only the login (name) of the user when setting the Identity Strategy value of the LDAP Login Identity Provider group to USE_USERNAME.

 — 

NiFi Initial Admin password

The password of the user designated as NiFi Initial Admin

 — 

Ranger Admin Identity

The DN of the certificate that Ranger will use to communicate with NiFi. Requires a generated SSL keystore and truststore on the Ranger host. Applies only to the NiFi Ranger plugin

 — 
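
A sketch of how the DN NiFi’s nodes list might be filled for a small SSL-enabled cluster with one NiFi Server node, a NiFi Registry, and a MiNiFi agent. The host names are placeholders; each entry follows the pattern from the example above.

CN=nifi-node-1.example.com, OU=Arenadata, O=Arenadata, L=Moscow, ST=Moscow, C=RU
CN=nifi-registry.example.com, OU=Arenadata, O=Arenadata, L=Moscow, ST=Moscow, C=RU
CN=minifi-agent-1.example.com, OU=Arenadata, O=Arenadata, L=Moscow, ST=Moscow, C=RU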

LDAP Login Identity Provider

 

Parameter Description Default value

Authentication Strategy

How the connection to the LDAP server is authenticated

ANONYMOUS

Manager DN

DN of a user that has an entry in Active Directory with the right to search users and groups. It is used to bind to an LDAP server to search for users

 — 

Manager Password

The password of the manager that is used to bind to the LDAP server to search for users

 — 

TLS - Keystore

Path to the Keystore that is used when connecting to LDAP via LDAPS or START_TLS

 — 

TLS - Keystore Password

Password for the Keystore that is used when connecting to LDAP using LDAPS or START_TLS

 — 

TLS - Keystore Type

Type of the keystore that is used when connecting to LDAP using LDAPS or START_TLS (i.e. JKS or PKCS12)

 — 

TLS - Truststore

Path to the truststore that is used when connecting to LDAP using LDAPS or START_TLS

 — 

TLS - Truststore Password

Password for the truststore that is used when connecting to LDAP using LDAPS or START_TLS

 — 

TLS - Truststore Type

Type of the truststore that is used when connecting to LDAP using LDAPS or START_TLS (i.e. JKS or PKCS12)

 — 

TLS - Client Auth

Client authentication policy when connecting to LDAP using LDAPS or START_TLS. Possible values are REQUIRED, WANT, NONE

NONE

TLS - Protocol

Protocol to use when connecting to LDAP using LDAPS or START_TLS. (i.e. TLS, TLSv1.1, TLSv1.2, etc.)

 — 

TLS - Shutdown Gracefully

Specifies whether the TLS should be shut down gracefully before the target context is closed

False

Referral Strategy

Strategy for handling referrals

FOLLOW

Connect Timeout

Duration of connect timeout

10 secs

Read Timeout

Duration of read timeout

10 secs

LDAP URL

Space-separated list of URLs of the LDAP servers (e.g. ldap://<hostname>:<port>)

 — 

User Search Base

Base DN for searching for users (e.g. ou=users,o=nifi). Required to search users

 — 

User Search Filter

Filter for searching for users against the User Search Base (e.g. sAMAccountName={0}). The user specified name is inserted into {0}

 — 

Identity Strategy

Strategy to identify users. Possible values are USE_DN and USE_USERNAME

USE_DN

Authentication Expiration

The duration of how long the user authentication is valid for. If the user never logs out, they will be required to log back in following this duration

12 hours
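
For reference, the fields above correspond to the properties of NiFi’s standard ldap-provider entry in login-identity-providers.xml. The sketch below shows roughly how a simple bind configuration might be rendered; it is illustrative only, all host names, DNs, and passwords are placeholders, and the ADCM UI labels may differ slightly from the property names in the file.

<provider>
    <identifier>ldap-provider</identifier>
    <class>org.apache.nifi.ldap.LdapProvider</class>
    <property name="Authentication Strategy">SIMPLE</property>
    <property name="Manager DN">cn=nifi-manager,ou=users,dc=example,dc=org</property>
    <property name="Manager Password">placeholder-password</property>
    <property name="Referral Strategy">FOLLOW</property>
    <property name="Connect Timeout">10 secs</property>
    <property name="Read Timeout">10 secs</property>
    <property name="Url">ldap://ldap.example.org:389</property>
    <property name="User Search Base">ou=users,dc=example,dc=org</property>
    <property name="User Search Filter">sAMAccountName={0}</property>
    <property name="Identity Strategy">USE_USERNAME</property>
    <property name="Authentication Expiration">12 hours</property>
</provider>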

LDAP UserGroupProvider

 

Parameter Description Default value

Authentication Strategy

How the connection to the LDAP server is authenticated

ANONYMOUS

Manager DN

DN of a user that has an entry in Active Directory with the right to search users and groups. It is used to bind to an LDAP server to search for users

 — 

Manager Password

The password of the manager that is used to bind to the LDAP server to search for users

 — 

TLS - Keystore

Path to the Keystore that is used when connecting to LDAP using LDAPS or START_TLS

 — 

TLS - Keystore Password

Password for the Keystore that is used when connecting to LDAP using LDAPS or START_TLS

 — 

TLS - Keystore Type

Type of the keystore that is used when connecting to LDAP using LDAPS or START_TLS (i.e. JKS or PKCS12)

 — 

TLS - Truststore

Path to the truststore that is used when connecting to LDAP using LDAPS or START_TLS

 — 

TLS - Truststore Password

Password for the truststore that is used when connecting to LDAP using LDAPS or START_TLS

 — 

TLS - Truststore Type

Type of the truststore that is used when connecting to LDAP using LDAPS or START_TLS (i.e. JKS or PKCS12)

 — 

TLS - Client Auth

Client authentication policy when connecting to LDAP using LDAPS or START_TLS. Possible values are REQUIRED, WANT, NONE

NONE

TLS - Protocol

Protocol to use when connecting to LDAP using LDAPS or START_TLS. (i.e. TLS, TLSv1.1, TLSv1.2, etc.)

 — 

TLS - Shutdown Gracefully

Specifies whether the TLS should be shut down gracefully before the target context is closed

 — 

Referral Strategy

Strategy for handling referrals

FOLLOW

Connect Timeout

Duration of connect timeout

10 secs

Read Timeout

Duration of read timeout

10 secs

LDAP URL

Space-separated list of URLs of the LDAP servers (e.g. ldap://<hostname>:<port>)

 — 

Page Size

Sets the page size when retrieving users and groups. If not specified, no paging is performed

 — 

Sync Interval

Duration of time between syncing users and groups. Minimum allowable value is 10 secs

30 mins

User Search Base

Base DN for searching for users (e.g. ou=users,o=nifi). Required to search users

 — 

User Object Class

Object class for identifying users (e.g. person). Required if searching users

 — 

User Search Scope

Search scope for searching users

ONE_LEVEL

User Search Filter

Filter for searching for users against the User Search Base (e.g. (memberof=cn=team1,ou=groups,o=nifi))

 — 

User Identity Attribute

Attribute to use to extract user identity (e.g. cn). Optional. If not set, the entire DN is used

 — 

User Group Name Attribute

Attribute to use to define group membership (e.g. memberof). Optional. If not set, group membership will not be calculated through the users.

 — 

User Group Name Attribute - Referenced Group Attribute

If blank, the value of the attribute defined in User Group Name Attribute is expected to be the full dn of the group. If not blank, this property will define the attribute of the group ldap entry that the value of the attribute defined in User Group Name Attribute is referencing (e.g. name)

 — 

Group Search Base

Base DN for searching for groups (e.g. ou=groups,o=nifi). Required to search groups

 — 

Group Object Class

Object class for identifying groups (e.g. groupOfNames). Required if searching groups

 — 

Group Search Scope

Search scope for searching groups

ONE_LEVEL

Group Search Filter

Filter for searching for groups against the Group Search Base. Optional

 — 

Group Name Attribute

Attribute to use to extract group name (e.g. cn). Optional. If not set, the entire DN is used

 — 

Group Member Attribute

Attribute to use to define group membership (e.g. member). Optional

 — 

Group Member Attribute - Referenced User Attribute

If blank, the value of the attribute defined in Group Member Attribute is expected to be the full dn of the user. If not blank, this property will define the attribute of the user ldap entry that the value of the attribute defined in Group Member Attribute is referencing (e.g. uid).

 — 

Analytics Framework

 

Analytics platform configurations

Parameter Description Default value

nifi.analytics.predict.interval

Time interval in which analytic predictions should be made (e.g. queue saturation)

3 mins

nifi.analytics.query.interval

The time interval to query for past observations (for example, the last 3 minutes of snapshots). The value must be at least 3 times the value of nifi.components.status.snapshot.frequency (see the example after this table)

5 mins

nifi.analytics.connection.model.implementation

Implementation class for the state analysis model used for connection predictions

Ordinary Least Squares

nifi.analytics.connection.model.score.name

Name of the scoring type to use to score the model

rSquared

nifi.analytics.connection.model.score.threshold

Threshold for the scoring value (the model score must be above the specified threshold)

.90
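
To illustrate the constraint on nifi.analytics.query.interval, the defaults already satisfy it: a 5-minute query window is at least three times the 1-minute snapshot frequency whose default is listed in the nifi.properties table below. A hedged sketch of the corresponding nifi.properties lines:

# 5 mins >= 3 x 1 min, so the query interval constraint holds
nifi.components.status.snapshot.frequency=1 min
nifi.analytics.query.interval=5 mins
nifi.analytics.predict.interval=3 mins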

nifi-env.sh

 
Parameters defining the NiFi service installation locations

Parameter Description Default value

NIFI_HOME

The directory for NiFi installation

/usr/lib/nifi-server

NIFI_PID_DIR

The directory to store the NiFi process ID

/var/run/nifi

NIFI_LOG_DIR

The directory to store the logs

/var/log/nifi

NIFI_ALLOW_EXPLICIT_KEYTAB

Defines whether to allow the use of the old free-form keytab properties that were kept for backwards compatibility

true

nifi.properties

 

Parameter Description Default value

nifi.flow.configuration.file

The location of the XML-based flow configuration file

/etc/nifi/conf/flow.xml.gz

nifi.flow.configuration.json.file

The location of the JSON-based flow configuration file

/etc/nifi/conf/flow.json.gz

nifi.flow.configuration.archive.enabled

Enables NiFi to automatically create a backup copy of the flow configuration whenever the flow is updated

true

nifi.cluster.node.connection.timeout

When connecting to another node in the cluster, specifies how long this node should wait before considering the connection a failure

5 sec

nifi.cluster.node.read.timeout

When communicating with another node in the cluster, specifies how long this node should wait to receive information from the remote node before considering the communication with the node a failure

5 sec

nifi.zookeeper.connect.timeout

How long to wait when connecting to ZooKeeper before considering the connection a failure

3 sec

nifi.zookeeper.session.timeout

How long to wait after losing a connection to ZooKeeper before the session is expired

3 sec

nifi.variable.registry.properties

Comma-separated list of file location paths for one or more custom property files

/etc/nifi/conf/extra-args.properties

nifi.remote.input.http.enabled

Specifies whether HTTP Site-to-Site should be enabled on this host

true

nifi.remote.input.http.transaction.ttl

Specifies how long a transaction can stay alive on the server

30 sec

nifi.remote.contents.cache.expiration

Specifies how long NiFi should cache information about a remote NiFi instance when communicating via Site-to-Site

30 secs

nifi.flow.configuration.archive.max.time

The lifespan of archived flow.xml files

30 days

nifi.flow.configuration.archive.max.storage

The total data size allowed for the archived flow.xml files

500 MB

nifi.flow.configuration.archive.max.count

The number of archive files allowed

 — 

nifi.flowcontroller.autoResumeState

Indicates whether upon restart the components on the NiFi graph should return to their last state

true

nifi.flowcontroller.graceful.shutdown.period

Indicates the shutdown period

10 sec

nifi.flowservice.writedelay.interval

When many changes are made to the flow.xml, this property specifies how long to wait before writing out the changes, so as to batch the changes into a single write

500 ms

nifi.administrative.yield.duration

If a component allows an unexpected exception to escape, it is considered a bug. As a result, the framework will pause (or administratively yield) the component for this amount of time

30 sec

nifi.bored.yield.duration

When a component has no work to do (i.e., is bored), this is the amount of time it will wait before checking to see if it has new data to work on

10 millis

nifi.ui.banner.text

The banner text that may be configured to display at the top of the User Interface

 — 

nifi.ui.autorefresh.interval

The interval at which the User Interface auto-refreshes

30 sec

nifi.state.management.provider.local

The ID of the Local State Provider to use

local-provider

nifi.state.management.provider.cluster

The ID of the Cluster State Provider to use

zk-provider

nifi.state.management.embedded.zookeeper.start

Specifies whether or not this instance of NiFi should start an embedded ZooKeeper Server

false

nifi.h2.url.append

Specifies additional arguments to add to the connection string for the H2 database

;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE

nifi.flowfile.repository.implementation

The FlowFile Repository implementation. To store flowfiles in memory instead of on disk (accepting data loss in the event of power/machine failure or a restart of NiFi), set this property to org.apache.nifi.controller.repository.VolatileFlowFileRepository

org.apache.nifi.controller.repository.WriteAheadFlowFileRepository

nifi.flowfile.repository.wal.implementation

If the repository implementation is configured to use the WriteAheadFlowFileRepository, this property can be used to specify which implementation of the Write-Ahead Log should be used

org.apache.nifi.wali.SequentialAccessWriteAheadLog

nifi.flowfile.repository.partitions

The number of partitions

256

nifi.flowfile.repository.checkpoint.interval

The FlowFile Repository checkpoint interval

2 mins

nifi.flowfile.repository.always.sync

If set to true, any change to the repository will be synchronized to the disk, meaning that NiFi will ask the operating system not to cache the information. This is very expensive and can significantly reduce NiFi performance. However, if it is false, there could be the potential for data loss if either there is a sudden power loss or the operating system crashes

false

nifi.swap.manager.implementation

The Swap Manager implementation

org.apache.nifi.controller.FileSystemSwapManager

nifi.queue.swap.threshold

The queue threshold at which NiFi starts to swap FlowFile information to disk

20000

nifi.swap.in.period

The swap in period

5 sec

nifi.swap.in.threads

The number of threads to use for swapping in

1

nifi.swap.out.period

The swap out period

5 sec

nifi.swap.out.threads

The number of threads to use for swapping out

4

nifi.content.repository.implementation

The Content Repository implementation. The default value is org.apache.nifi.controller.repository.FileSystemRepository and should only be changed with caution. To store flowfile content in memory instead of on disk (at the risk of data loss in the event of power/machine failure), set this property to org.apache.nifi.controller.repository.VolatileContentRepository

org.apache.nifi.controller.repository.FileSystemRepository

nifi.content.claim.max.appendable.size

The maximum size for a content claim

1 MB

nifi.content.claim.max.flow.files

The maximum number of FlowFiles to assign to one content claim

100

nifi.content.repository.archive.max.retention.period

If archiving is enabled, then this property specifies the maximum amount of time to keep the archived data

12 hours

nifi.content.repository.archive.max.usage.percentage

If archiving is enabled then this property must have a value that indicates the content repository disk usage percentage at which archived data begins to be removed. If the archive is empty and content repository disk usage is above this percentage, then archiving is temporarily disabled. Archiving will resume when disk usage is below this percentage

50%

nifi.content.repository.archive.enabled

To enable content archiving, set this to true. Content archiving enables the provenance UI to view or replay content that is no longer in a dataflow queue

true

nifi.content.repository.always.sync

If set to true, any change to the repository will be synchronized to the disk, meaning that NiFi will ask the operating system not to cache the information. This is very expensive and can significantly reduce NiFi performance. However, if it is false, there could be the potential for data loss if either there is a sudden power loss or the operating system crashes

false

nifi.content.viewer.url

The URL for a web-based content viewer if one is available

../nifi-content-viewer/

nifi.provenance.repository.implementation

The Provenance Repository implementation. Possible values are:

  • org.apache.nifi.provenance.WriteAheadProvenanceRepository

  • org.apache.nifi.provenance.VolatileProvenanceRepository

  • org.apache.nifi.provenance.PersistentProvenanceRepository

  • org.apache.nifi.provenance.EncryptedWriteAheadProvenanceRepository

org.apache.nifi.provenance.WriteAheadProvenanceRepository

nifi.provenance.repository.debug.frequency

Controls the number of events processed between DEBUG statements documenting the performance metrics of the repository

1_000_000

nifi.provenance.repository.encryption.key.provider.implementation

The fully-qualified class name of the key provider

 — 

nifi.provenance.repository.encryption.key.provider.location

The path to the key definition resource

 — 

nifi.provenance.repository.encryption.key.id

The active key ID to use for encryption (e.g. Key1)

 — 

nifi.provenance.repository.encryption.key

The key to use for StaticKeyProvider

 — 

nifi.provenance.repository.max.storage.time

The maximum amount of time to keep data provenance information

24 hours

nifi.provenance.repository.max.storage.size

The maximum amount of data provenance information to store at a time

1 GB

nifi.provenance.repository.rollover.time

The amount of time to wait before rolling over the latest data provenance information so that it is available in the User Interface

30 secs

nifi.provenance.repository.rollover.size

The amount of information to roll over at a time

100 MB

nifi.provenance.repository.query.threads

The number of threads to use for Provenance Repository queries

2

nifi.provenance.repository.index.threads

The number of threads to use for indexing Provenance events so that they are searchable

2

nifi.provenance.repository.compress.on.rollover

Indicates whether to compress the provenance information when rolling it over

true

nifi.provenance.repository.always.sync

If set to true, any change to the repository will be synchronized to the disk, meaning that NiFi will ask the operating system not to cache the information

false

nifi.provenance.repository.indexed.fields

A comma-separated list of the fields that should be indexed and made searchable

EventType, FlowFileUUID, Filename, ProcessorID, Relationship

nifi.provenance.repository.indexed.attributes

A comma-separated list of FlowFile Attributes that should be indexed and made searchable

 — 

nifi.provenance.repository.index.shard.size

Large values for the shard size will result in more Java heap usage when searching the Provenance Repository but should provide better performance

500 MB

nifi.provenance.repository.max.attribute.length

Indicates the maximum length that a FlowFile attribute can be when retrieving a Provenance Event from the repository. If the length of any attribute exceeds this value, it will be truncated when the event is retrieved

65536

nifi.provenance.repository.concurrent.merge.threads

Specifies the maximum number of threads that are allowed to be used for each of the storage directories

2

nifi.provenance.repository.buffer.size

The Provenance Repository buffer size

100000

nifi.components.status.repository.implementation

The Component Status Repository implementation

org.apache.nifi.controller.status.history.VolatileComponentStatusRepository

nifi.components.status.repository.buffer.size

Specifies the buffer size for the Component Status Repository

1440

nifi.components.status.snapshot.frequency

Indicates how often to present a snapshot of the components status history

1 min

nifi.web.war.directory

The location of the web.war directory

./lib

nifi.web.jetty.working.directory

The location of the Jetty working directory

./work/jetty

nifi.web.jetty.threads

The number of Jetty threads

200

nifi.web.max.header.size

The maximum size allowed for request and response headers

16 KB

nifi.web.proxy.context.path

A comma-separated list of allowed HTTP X-ProxyContextPath or X-Forwarded-Context header values to consider. By default, this value is blank meaning all requests containing a proxy context path are rejected

 — 

nifi.web.proxy.host

A comma-separated list of allowed HTTP Host header values to consider when NiFi is running securely and will be receiving requests to a different host[:port] than it is bound to. For example, when running in a Docker container or behind a proxy (e.g. localhost:18443, proxyhost:443). By default, this value is blank meaning NiFi should only allow requests sent to the host[:port] that NiFi is bound to.

 — 

nifi.sensitive.props.key

Password (source string) from which to extract the encryption key for the algorithm specified in the nifi.sensitive.props.algorithm parameter

mysensetivekey

nifi.sensitive.props.key.protected

Protected password (source string) used to obtain the encryption key for the algorithm specified in the nifi.sensitive.props.algorithm parameter

 — 

nifi.sensitive.props.algorithm

The algorithm used to encrypt sensitive properties

PBEWITHMD5AND256BITAES-CBC-OPENSSL

nifi.sensitive.props.provider

The sensitive property provider

BC

nifi.sensitive.props.additional.keys

The comma-separated list of properties to encrypt in addition to the default sensitive properties

 — 

nifi.security.user.authorizer

Specifies which of the configured Authorizers in the authorizers.xml file to use

managed-authorizer

nifi.security.ocsp.responder.url

The URL for the Online Certificate Status Protocol (OCSP) responder if one is being used

 — 

nifi.security.ocsp.responder.certificate

The location of the OCSP responder certificate if one is being used. It is blank by default

 — 

nifi.security.user.oidc.discovery.url

The discovery URL for the desired OpenId Connect Provider

 — 

nifi.security.user.oidc.connect.timeout

Connect timeout when communicating with the OpenId Connect Provider

5 secs

nifi.security.user.oidc.read.timeout

Read timeout when communicating with the OpenId Connect Provider

5 secs

nifi.security.user.oidc.client.id

The client id for NiFi after registration with the OpenId Connect Provider

 — 

nifi.security.user.oidc.client.secret

The client secret for NiFi after registration with the OpenId Connect Provider

 — 

nifi.security.user.oidc.preferred.jwsalgorithm

The preferred algorithm for validating identity tokens. If this value is blank, it will default to RS256 which is required to be supported by the OpenId Connect Provider according to the specification. If this value is HS256, HS384, or HS512, NiFi will attempt to validate HMAC protected tokens using the specified client secret. If this value is none, NiFi will attempt to validate unsecured/plain tokens. Other values for this algorithm will attempt to parse as an RSA or EC algorithm to be used in conjunction with the JSON Web Key (JWK) provided through the jwks_uri in the metadata found at the discovery URL

 — 

nifi.security.user.knox.url

The URL for the Apache Knox login page

 — 

nifi.security.user.knox.publicKey

The path to the Apache Knox public key that will be used to verify the signatures of the authentication tokens in the HTTP Cookie

 — 

nifi.security.user.knox.cookieName

The name of the HTTP Cookie that Apache Knox will generate after successful login

hadoop-jwt

nifi.security.user.knox.audiences

Optional. A comma-separated listed of allowed audiences. If set, the audience in the token must be present in this listing. The audience that is populated in the token can be configured in Knox

 — 

nifi.cluster.protocol.heartbeat.interval

The interval at which nodes should emit heartbeats to the Cluster Coordinator

5 sec

nifi.cluster.node.protocol.port

The node’s protocol port

11433

nifi.cluster.node.protocol.threads

The number of threads that should be used to communicate with other nodes in the cluster

10

nifi.cluster.node.protocol.max.threads

The maximum number of threads that should be used to communicate with other nodes in the cluster

50

nifi.cluster.node.event.history.size

When the state of a node in the cluster is changed, an event is generated and can be viewed in the Cluster page. This value indicates how many events to keep in memory for each node

25

nifi.cluster.node.max.concurrent.requests

The maximum number of outstanding web requests that can be replicated to nodes in the cluster. If this number of requests is exceeded, the embedded Jetty server will return a "409: Conflict" response

100

nifi.cluster.firewall.file

The location of the node firewall file. This is a file that may be used to list all the nodes that are allowed to connect to the cluster. It provides an additional layer of security. This value is blank by default, meaning that no firewall file is to be used

 — 

nifi.cluster.flow.election.max.wait.time

Specifies the amount of time to wait before electing a Flow as the "correct" Flow. If the number of Nodes that have voted is equal to the number specified by the nifi.cluster.flow.election.max.candidates property, the cluster will not wait this long

5 mins

nifi.cluster.load.balance.host

Specifies the hostname to listen on for incoming connections for load balancing data across the cluster. If not specified, will default to the value used by the nifi.cluster.node.address property

 — 

nifi.cluster.load.balance.port

Specifies the port to listen on for incoming connections for load balancing data across the cluster

6342

nifi.cluster.load.balance.connections.per.node

The maximum number of connections to create between this node and each other node in the cluster. For example, if there are 5 nodes in the cluster and this value is set to 4, there will be up to 20 socket connections established for load-balancing purposes (5 x 4 = 20)

4

nifi.cluster.load.balance.max.thread.count

The maximum number of threads to use for transferring data from this node to other nodes in the cluster. While a given thread can only write to a single socket at a time, a single thread is capable of servicing multiple connections simultaneously because a given connection may not be available for reading/writing at any given time

8

nifi.cluster.load.balance.comms.timeout

When communicating with another node, if this amount of time elapses without making any progress when reading from or writing to a socket, then a TimeoutException will be thrown. This will then result in the data either being retried or sent to another node in the cluster, depending on the configured Load Balancing Strategy

30 sec

nifi.remote.input.socket.port

The remote input socket port for Site-to-Site communication

10443

nifi.remote.input.secure

This indicates whether communication between this instance of NiFi and remote NiFi instances should be secure

true

nifi.security.keystore

The full path and name of the keystore

/tmp/keystore.jks

nifi.security.keystoreType

The keystore type

JKS

nifi.security.keystorePasswd

The keystore password

 — 

nifi.security.keyPasswd

The key password

 — 

nifi.security.truststore

The full path and name of the truststore

 — 

nifi.security.truststoreType

The truststore type

JKS

nifi.security.truststorePasswd

The truststore password

 — 

Add key, value

The parameters and their values entered in this field override the parameters specified in the ADCM user interface. This field also allows you to set values for any user parameters that are not displayed in the interface but are allowed in the nifi.properties configuration file (see the example after this table)

 — 
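
For example, parameters that have no dedicated fields in the UI can be supplied as key/value pairs. A minimal illustration (the keys below are standard nifi.properties parameters, used here only to demonstrate the override mechanism; the values are arbitrary):

# Entered via "Add key, value"; rendered into nifi.properties by ADCM
nifi.queue.swap.threshold=20000
nifi.flowfile.repository.checkpoint.interval=2 mins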

Nifi Server logback.xml

 
Setting logging levels and log rotation for NiFi-Server

Parameter Description Default value

app_file_max_history

Maximum number of files for applications

10

user_file_max_history

Maximum number of files for user logs

10

boot_file_max_history

Maximum number of files for bootstrap logs

5

root_level

Event level

INFO

Setting the structure of the logging configuration file for NiFi-Server

Logger Default package names Default event level

app_loggers

org.apache.nifi

INFO

org.apache.nifi.processors

WARN

org.apache.nifi.processors.standard.LogAttribute

INFO

org.apache.nifi.processors.standard.LogMessage

INFO

org.apache.nifi.controller.repository.StandardProcessSession

WARN

org.wali

WARN

org.apache.nifi.cluster

INFO

org.apache.nifi.server.JettyServer

INFO

org.eclipse.jetty

INFO

user_events_loggers

org.apache.nifi.web.security

INFO

org.apache.nifi.web.api.config

INFO

org.apache.nifi.authorization

INFO

org.apache.nifi.cluster.authorization

INFO

org.apache.nifi.web.filter.RequestLogger

INFO

bootstrap_loggers

org.apache.nifi.bootstrap

INFO

org.apache.nifi.bootstrap.Command

INFO

org.apache.nifi.StdOut

INFO

org.apache.nifi.StdErr

INFO

custom_logger

 — 

 — 
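
The custom_logger field accepts logger name/level pairs. As the template below shows, each pair is rendered into logback.xml as a logger element; for instance, a hypothetical pair with key org.apache.http and value DEBUG would produce:

<logger name="org.apache.http" level="DEBUG"/>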

NiFi logback.xml

 

    Template for customizing the NiFi logback.xml file.

    Default value:

<?xml version="1.0" encoding="UTF-8"?>
<!--
  Maintained by ADCM
-->
{% set logback = services.nifi.config['nifi_logback_content'] %}

<configuration scan="true" scanPeriod="30 seconds">
    <contextListener class="ch.qos.logback.classic.jul.LevelChangePropagator">
        <resetJUL>true</resetJUL>
    </contextListener>

    <appender name="APP_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <!--
              For daily rollover, use 'app_%d.log'.
              For hourly rollover, use 'app_%d{yyyy-MM-dd_HH}.log'.
              To GZIP rolled files, replace '.log' with '.log.gz'.
              To ZIP rolled files, replace '.log' with '.log.zip'.
            -->
            <fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app_%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
            <maxFileSize>100MB</maxFileSize>
            <!-- keep 30 log files worth of history -->
            <maxHistory>{{ logback.app_file_max_history }}</maxHistory>
        </rollingPolicy>
        <immediateFlush>true</immediateFlush>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="USER_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-user.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!--
              For daily rollover, use 'user_%d.log'.
              For hourly rollover, use 'user_%d{yyyy-MM-dd_HH}.log'.
              To GZIP rolled files, replace '.log' with '.log.gz'.
              To ZIP rolled files, replace '.log' with '.log.zip'.
            -->
            <fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-user_%d.log</fileNamePattern>
            <!-- keep 30 log files worth of history -->
            <maxHistory>{{ logback.user_file_max_history }}</maxHistory>
        </rollingPolicy>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="BOOTSTRAP_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-bootstrap.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!--
              For daily rollover, use 'user_%d.log'.
              For hourly rollover, use 'user_%d{yyyy-MM-dd_HH}.log'.
              To GZIP rolled files, replace '.log' with '.log.gz'.
              To ZIP rolled files, replace '.log' with '.log.zip'.
            -->
            <fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-bootstrap_%d.log</fileNamePattern>
            <!-- keep 5 log files worth of history -->
            <maxHistory>{{ logback.boot_file_max_history }}</maxHistory>
        </rollingPolicy>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
        </encoder>
    </appender>

    <!-- valid logging levels: TRACE, DEBUG, INFO, WARN, ERROR -->

    {% for key, value in logback.app_loggers | dictsort -%}
    <logger name="{{ key }}" level="{{ value }}"/>
    {% endfor -%}

    <logger name="org.apache.zookeeper.ClientCnxn" level="ERROR" />
    <logger name="org.apache.zookeeper.server.NIOServerCnxn" level="ERROR" />
    <logger name="org.apache.zookeeper.server.NIOServerCnxnFactory" level="ERROR" />
    <logger name="org.apache.zookeeper.server.quorum" level="ERROR" />
    <logger name="org.apache.zookeeper.ZooKeeper" level="ERROR" />
    <logger name="org.apache.zookeeper.server.PrepRequestProcessor" level="ERROR" />

    <logger name="org.apache.calcite.runtime.CalciteException" level="OFF" />

    <logger name="org.apache.curator.framework.recipes.leader.LeaderSelector" level="OFF" />
    <logger name="org.apache.curator.ConnectionState" level="OFF" />

    <!-- Suppress non-error messages due to excessive logging by class or library -->
    <logger name="com.sun.jersey.spi.container.servlet.WebComponent" level="ERROR"/>
    <logger name="com.sun.jersey.spi.spring" level="ERROR"/>
    <logger name="org.springframework" level="ERROR"/>

    <!-- Suppress non-error messages due to known warning about redundant path annotation (NIFI-574) -->
    <logger name="com.sun.jersey.spi.inject.Errors" level="ERROR"/>
    <logger name="org.glassfish.jersey.internal.Errors" level="ERROR"/>

    <!-- Suppress non-error messages due to Jetty AnnotationParser emitting a large amount of WARNS. Issue described in NIFI-5479. -->
    <logger name="org.eclipse.jetty.annotations.AnnotationParser" level="ERROR"/>

    <!--
        Logger for capturing user events. We do not want to propagate these
        log events to the root logger. These messages are only sent to the
        user-log appender.
    -->

    {% for key, value in logback.user_events_loggers | dictsort -%}
    <logger name="{{ key }}" level="{{ value }}" additivity="false">
        <appender-ref ref="USER_FILE"/>
    </logger>
    {% endfor -%}

    <!--
        Logger for capturing Bootstrap logs and NiFi's standard error and standard out.
    -->

    {% for key, value in logback.bootstrap_loggers | dictsort -%}
    <logger name="{{ key }}" level="{{ value }}" additivity="false">
        <appender-ref ref="BOOTSTRAP_FILE"/>
    {% if key == "org.apache.nifi.bootstrap.Command" %}
        <appender-ref ref="CONSOLE" />
    {% endif -%}
    </logger>
    {% endfor -%}

    <!--
        Custom Logger
    -->

    {% if logback.custom_logger is not none -%}
    {% if logback.custom_logger | length > 0 -%}
    {% for key, value in logback.custom_logger | dictsort -%}
    <logger name="{{ key }}" level="{{ value }}"/>
    {% endfor -%}
    {% endif -%}
    {% endif -%}

    <root level="{{ logback.root_level }}">
        <appender-ref ref="APP_FILE"/>
    </root>

</configuration>
NiFi state-management.xml

 

    Template for customizing the NiFi state-management.xml file.

    Default value:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!--
  Maintained by ADCM
-->
{%- if cluster.config.cluster_znode is defined and cluster.config.cluster_znode is not none %}
{% set zookeeper_connect = cluster.config.cluster_znode.split('/')[0] %}
{%- endif -%}

<stateManagement>
    <!--
        State Provider that stores state locally in a configurable directory. This Provider requires the following properties:

        Directory - the directory to store components' state in. If the directory being used is a sub-directory of the NiFi installation, it
                    is important that the directory be copied over to the new version when upgrading NiFi.
        Always Sync - If set to true, any change to the repository will be synchronized to the disk, meaning that NiFi will ask the operating system not to cache the information. This is very
                expensive and can significantly reduce NiFi performance. However, if it is false, there could be the potential for data loss if either there is a sudden power loss or the
                operating system crashes. The default value is false.
        Partitions - The number of partitions.
        Checkpoint Interval - The amount of time between checkpoints.
     -->
    <local-provider>
        <id>local-provider</id>
        <class>org.apache.nifi.controller.state.providers.local.WriteAheadLocalStateProvider</class>
        <property name="Directory">{{ nifi_home }}/conf/state/local</property>
        <property name="Always Sync">false</property>
        <property name="Partitions">16</property>
        <property name="Checkpoint Interval">2 mins</property>
    </local-provider>

    <!--
        State Provider that is used to store state in ZooKeeper. This Provider requires the following properties:

        Root Node - the root node in ZooKeeper where state should be stored. The default is '/nifi', but it is advisable to change this to a different value if not using
                   the embedded ZooKeeper server and if multiple NiFi instances may all be using the same ZooKeeper Server.

        Connect String - A comma-separated list of host:port pairs to connect to ZooKeeper. For example, myhost.mydomain:2181,host2.mydomain:5555,host3:6666

        Session Timeout - Specifies how long this instance of NiFi is allowed to be disconnected from ZooKeeper before creating a new ZooKeeper Session. Default value is "30 seconds"

        Access Control - Specifies which Access Controls will be applied to the ZooKeeper ZNodes that are created by this State Provider. This value must be set to one of:
                            - Open  : ZNodes will be open to any ZooKeeper client.
                            - CreatorOnly  : ZNodes will be accessible only by the creator. The creator will have full access to create children, read, write, delete, and administer the ZNodes.
                                             This option is available only if access to ZooKeeper is secured via Kerberos or if a Username and Password are set.
    -->
    <cluster-provider>
        <id>zk-provider</id>
        <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
        <property name="Connect String">{{ zookeeper_connect | default('') }}</property>
        <property name="Root Node">/arenadata/cluster/{{ cluster.id }}/nifi</property>
        <property name="Session Timeout">10 seconds</property>
        <property name="Access Control">Open</property>
    </cluster-provider>

    <!--
        Cluster State Provider that stores state in Redis. This can be used as an alternative to the ZooKeeper State Provider.

        This provider requires the following properties:

            Redis Mode - The type of Redis instance:
                            - Standalone
                            - Sentinel
                            - Cluster (currently not supported for state-management due to use of WATCH command which Redis does not support in clustered mode)

            Connection String - The connection string for Redis.
                        - In a standalone instance this value will be of the form hostname:port.
                        - In a sentinel instance this value will be the comma-separated list of sentinels, such as host1:port1,host2:port2,host3:port3.
                        - In a clustered instance this value will be the comma-separated list of cluster masters, such as host1:port,host2:port,host3:port.

        This provider has the following optional properties:

            Key Prefix - The prefix for each key stored by this state provider. When sharing a single Redis across multiple NiFi instances, setting a unique
                        value for the Key Prefix will make it easier to identify which instances the keys came from (default nifi/components/).

            Database Index - The database index to be used by connections created from this connection pool.
                        See the databases property in redis.conf, by default databases 0-15 will be available.

            Communication Timeout - The timeout to use when attempting to communicate with Redis.

            Cluster Max Redirects - The maximum number of redirects that can be performed when clustered.

            Sentinel Master - The name of the sentinel master, require when Mode is set to Sentinel.

            Password - The password used to authenticate to the Redis server. See the requirepass property in redis.conf.

            Pool - Max Total - The maximum number of connections that can be allocated by the pool (checked out to clients, or idle awaiting checkout).
                        A negative value indicates that there is no limit.

            Pool - Max Idle - The maximum number of idle connections that can be held in the pool, or a negative value if there is no limit.

            Pool - Min Idle - The target for the minimum number of idle connections to maintain in the pool. If the configured value of Min Idle is
                    greater than the configured value for Max Idle, then the value of Max Idle will be used instead.

            Pool - Block When Exhausted - Whether or not clients should block and wait when trying to obtain a connection from the pool when the pool
                    has no available connections. Setting this to false means an error will occur immediately when a client requests a connection and
                    none are available.

            Pool - Max Wait Time - The amount of time to wait for an available connection when Block When Exhausted is set to true.

            Pool - Min Evictable Idle Time - The minimum amount of time an object may sit idle in the pool before it is eligible for eviction.

            Pool - Time Between Eviction Runs - The amount of time between attempting to evict idle connections from the pool.

            Pool - Num Tests Per Eviction Run - The number of connections to tests per eviction attempt. A negative value indicates to test all connections.

            Pool - Test On Create - Whether or not connections should be tested upon creation (default false).

            Pool - Test On Borrow - Whether or not connections should be tested upon borrowing from the pool (default false).

            Pool - Test On Return - Whether or not connections should be tested upon returning to the pool (default false).

            Pool - Test While Idle - Whether or not connections should be tested while idle (default true).

        <cluster-provider>
            <id>redis-provider</id>
            <class>org.apache.nifi.redis.state.RedisStateProvider</class>
            <property name="Redis Mode">Standalone</property>
            <property name="Connection String">localhost:6379</property>
        </cluster-provider>
    -->

</stateManagement>
NiFi bootstrap-notification-services.xml

 

    Template for customizing the NiFi bootstrap-notification-services.xml file.

    Default value:

<?xml version="1.0"?>
<!--
  Maintained by ADCM
-->
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at
      http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<services>
    <!-- This file is used to define how interested parties are notified when events in NiFi's lifecycle occur. -->
    <!-- The format of this file is:
        <services>
            <service>
                <id>service-identifier</id>
                <class>org.apache.nifi.notifications.DesiredNotificationService</class>
                <property name="property name">property value</property>
                <property name="another property">another property value</property>
            </service>
        </services>

        This file can contain 0 to many different service definitions.
        The id can then be referenced from the bootstrap.conf file in order to configure the notification service
        to be used when particular lifecycle events occur.
    -->

<!--
     <service>
        <id>email-notification</id>
        <class>org.apache.nifi.bootstrap.notification.email.EmailNotificationService</class>
        <property name="SMTP Hostname"></property>
        <property name="SMTP Port"></property>
        <property name="SMTP Username"></property>
        <property name="SMTP Password"></property>
        <property name="SMTP TLS"></property>
        <property name="From"></property>
        <property name="To"></property>
     </service>
-->
<!--
     <service>
        <id>http-notification</id>
        <class>org.apache.nifi.bootstrap.notification.http.HttpNotificationService</class>
        <property name="URL"></property>
     </service>
-->
</services>
nifi-registry.properties

 

Parameter Description Default value

nifi.registry.web.war.directory

The location of the web war directory

./lib

nifi.registry.web.jetty.working.directory

The location of the Jetty working directory

./work/jetty

nifi.registry.web.jetty.threads

The number of Jetty threads

200

nifi.registry.security.needClientAuth

Specifies whether connecting clients must authenticate with a client certificate

false

nifi.registry.db.directory

The location of the Registry database directory

 — 

nifi.registry.db.url.append

Specifies additional arguments to add to the connection string for the Registry database

 — 

nifi.registry.db.url

The full JDBC connection string (see the external database sketch after this table)

jdbc:h2:/usr/lib/nifi-registry/database/nifi-registry-primary;AUTOCOMMIT=OFF;DB_CLOSE_ON_EXIT=FALSE;LOCK_MODE=3;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE

nifi.registry.db.driver.class

The class name of the JDBC driver

org.h2.Driver

nifi.registry.db.driver.directory

An optional directory containing one or more JARs to add to the classpath

 — 

nifi.registry.db.username

The username for the database

nifireg

nifi.registry.db.password

The password for the database

 — 

nifi.registry.db.maxConnections

The maximum number of connections for the connection pool

5

nifi.registry.db.sql.debug

Whether or not to enable debug logging for SQL statements

false

nifi.registry.security.keystore

The full path and the name of the keystore

/tmp/keystore.jks

nifi.registry.security.keystoreType

The keystore type

JKS

nifi.registry.security.keystorePasswd

The keystore password

 — 

nifi.registry.security.keyPasswd

The key password

 — 

nifi.registry.security.truststore

The full path and name of the truststore

/tmp/truststore.jks

nifi.registry.security.truststoreType

The truststore type

JKS

nifi.registry.security.truststorePasswd

The truststore password

 — 

nifi.registry.sensitive.props.additional.keys

Comma-separated list of properties for encryption in addition to the default sensitive properties

nifi.registry.db.password
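
As referenced above for nifi.registry.db.url, pointing the Registry at an external database usually means overriding several nifi.registry.db.* parameters together. A hedged sketch for PostgreSQL (the hostname, database name, and driver path are placeholders to adjust for your environment):

nifi.registry.db.url=jdbc:postgresql://db.example.com:5432/nifireg
nifi.registry.db.driver.class=org.postgresql.Driver
nifi.registry.db.driver.directory=/opt/nifi-registry/db-drivers
nifi.registry.db.username=nifireg
# nifi.registry.db.password is set via the corresponding ADCM field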

Parameter Description Default value

Nifi-Registry Flow Provider

Nifi-Registry flow provider

FileSystemFlowPersistenceProvider

FileSystem Flow Provider Configuration

Parameter Description Default value

Filesystem Flow Storage Directory

Filesystem flow storage directory

/usr/lib/nifi-registry/flow_storage

Git Flow Provider Configuration

 

Parameter Description Default value

Git Flow Storage Directory

File system path for a directory where flow contents files are persisted to. The directory must exist when NiFi Registry starts. It also must be initialized as a Git directory

/usr/lib/nifi-registry/git_flow_storage

Remote To Push

When a new flow snapshot is created, this persistence provider updates files in the specified Git directory and then creates a commit in the local repository. If Remote To Push is defined, the provider also pushes to the specified remote repository (for example, origin). To define a more detailed remote spec, such as branch names, use Refspec

 — 

Remote Access User

The username used to make push requests to the remote repository when Remote To Push is enabled and the remote repository is accessed over HTTP. If SSH is used, user authentication is done with SSH keys

 — 

Remote Access Password

The password for the Remote Access User

 — 

Remote Clone Repository

Remote repository URI to clone into the Flow Storage Directory if a local repository is not present there. If left empty, the Git directory needs to be configured as described in Initialize Git directory (a minimal sketch follows this table). If a URI is provided, Remote Access User and Remote Access Password should also be present. Currently, the default branch of the remote will be cloned

 — 
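
If Remote Clone Repository is left empty, the flow storage directory has to be prepared manually before NiFi Registry starts. A minimal sketch, assuming the default storage path and a hypothetical remote (adjust both to your environment):

# Run on the NiFi Registry host
mkdir -p /usr/lib/nifi-registry/git_flow_storage
cd /usr/lib/nifi-registry/git_flow_storage
git init
# Only needed if Remote To Push is used
git remote add origin git@git.example.com:myorg/nifi-flows.git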

Parameter Description Default value

Nifi-Registry Bundle Provider

Nifi-Registry bundle provider

FileSystemBundlePersistenceProvider

FileSystem Bundle Provider Configuration

Parameter Description Default value

Extension Bundle Storage Directory

Extension bundle storage directory

/usr/lib/nifi-registry/extension_bundles

nifi-registry-env.sh

 
Parameters that determine the location for installing the NiFi-Registry service

Parameter Description Default value

NIFI_REGISTRY_HOME

The NiFi-Registry installation directory

/usr/lib/nifi-registry

NIFI_REGISTRY_PID_DIR

The directory to store the NiFi-Registry process ID

/var/run/nifi-registry

NIFI_REGISTRY_LOG_DIR

The directory to store the logs

/var/log/nifi-registry

Nifi-Registry logback.xml

 
Setting logging levels and log rotation for NiFi-Registry

Parameter Description Default value

app_file_max_history

Maximum number of files for applications

10

events_file_max_history

Maximum number of files for events

5

boot_file_max_history

Maximum number of files for bootstrap logs

5

root_level

Event level

INFO

Setting the structure of the logging configuration file for NiFi-Registry

Logger Default package names Default event level

app_loggers

org.apache.nifi.registry

INFO

org.hibernate.SQL

WARN

org.hibernate.type

INFO

events_loggers

org.apache.nifi.registry.provider.hook.LoggingEventHookProvider

INFO

bootstrap_loggers

org.apache.nifi.registry.bootstrap

INFO

org.apache.nifi.registry.bootstrap.Command

INFO

org.apache.nifi.registry.StdOut

INFO

org.apache.nifi.registry.StdErr

ERROR

NiFi Registry logback.xml

 

    Template for customizing the NiFi Registry logback.xml file.

    Default value:

<?xml version="1.0" encoding="UTF-8"?>
<!--
  Maintained by ADCM
-->
{% set logback = services.nifi.config['nifi_registry_logback_content'] %}

<configuration scan="true" scanPeriod="30 seconds">
    <contextListener class="ch.qos.logback.classic.jul.LevelChangePropagator">
        <resetJUL>true</resetJUL>
    </contextListener>

    <appender name="APP_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${org.apache.nifi.registry.bootstrap.config.log.dir}/nifi-registry-app.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!--
              For daily rollover, use 'app_%d.log'.
              For hourly rollover, use 'app_%d{yyyy-MM-dd_HH}.log'.
              To GZIP rolled files, replace '.log' with '.log.gz'.
              To ZIP rolled files, replace '.log' with '.log.zip'.
            -->
            <fileNamePattern>${org.apache.nifi.registry.bootstrap.config.log.dir}/nifi-registry-app_%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>100MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <!-- keep 30 log files worth of history -->
            <maxHistory>{{ logback.app_file_max_history }}</maxHistory>
        </rollingPolicy>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
            <immediateFlush>true</immediateFlush>
        </encoder>
    </appender>

    <appender name="BOOTSTRAP_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${org.apache.nifi.registry.bootstrap.config.log.dir}/nifi-registry-bootstrap.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!--
              For daily rollover, use 'user_%d.log'.
              For hourly rollover, use 'user_%d{yyyy-MM-dd_HH}.log'.
              To GZIP rolled files, replace '.log' with '.log.gz'.
              To ZIP rolled files, replace '.log' with '.log.zip'.
            -->
            <fileNamePattern>${org.apache.nifi.registry.bootstrap.config.log.dir}/nifi-registry-bootstrap_%d.log</fileNamePattern>
            <!-- keep 5 log files worth of history -->
            <maxHistory>{{ logback.boot_file_max_history }}</maxHistory>
        </rollingPolicy>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="EVENTS_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${org.apache.nifi.registry.bootstrap.config.log.dir}/nifi-registry-event.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!--
              For daily rollover, use 'user_%d.log'.
              For hourly rollover, use 'user_%d{yyyy-MM-dd_HH}.log'.
              To GZIP rolled files, replace '.log' with '.log.gz'.
              To ZIP rolled files, replace '.log' with '.log.zip'.
            -->
            <fileNamePattern>${org.apache.nifi.registry.bootstrap.config.log.dir}/nifi-registry-event_%d.log</fileNamePattern>
            <!-- keep 5 log files worth of history -->
            <maxHistory>{{ logback.events_file_max_history }}</maxHistory>
        </rollingPolicy>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%date ## %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
        </encoder>
    </appender>

    <!-- valid logging levels: TRACE, DEBUG, INFO, WARN, ERROR -->

    {% for key, value in logback.app_loggers | dictsort -%}
    <logger name="{{ key }}" level="{{ value }}"/>
    {% endfor -%}

    <!--
        Logger for capturing Bootstrap logs and NiFi Registry's standard error and standard out.
    -->

    {% for key, value in logback.bootstrap_loggers | dictsort -%}
    <logger name="{{ key }}" level="{{ value }}" additivity="false">
        <appender-ref ref="BOOTSTRAP_FILE"/>
    {% if key == "org.apache.nifi.registry.bootstrap.Command" %}
        <appender-ref ref="CONSOLE" />
    {% endif -%}
    </logger>
    {% endfor -%}

    <!-- This will log all events to a separate file when the LoggingEventHookProvider is enabled in providers.xml -->

    {% for key, value in logback.events_loggers | dictsort -%}
    <logger name="{{ key }}" level="{{ value }}" additivity="false">
        <appender-ref ref="EVENTS_FILE"/>
    </logger>
    {% endfor -%}

    <root level="{{ logback.root_level }}">
        <appender-ref ref="APP_FILE"/>
    </root>

</configuration>
NiFi Registry providers.xml

 

    Template for customizing the NiFi Registry providers.xml file.

    Default value:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!--
  Maintained by ADCM
-->
<providers>

{% if services.nifi.config['registry_flow_provider'] == 'FileSystemFlowPersistenceProvider' %}
{% set provider = services.nifi.config['registry_filesystem_flow_provider'] %}
    <flowPersistenceProvider>
        <class>org.apache.nifi.registry.provider.flow.FileSystemFlowPersistenceProvider</class>
        <property name="Flow Storage Directory">{{ provider.flow_persistence_directory }}</property>
    </flowPersistenceProvider>
{% elif services.nifi.config['registry_flow_provider'] == 'GitFlowPersistenceProvider' %}
{% set provider = services.nifi.config['registry_git_flow_provider'] %}
    <flowPersistenceProvider>
        <class>org.apache.nifi.registry.provider.flow.git.GitFlowPersistenceProvider</class>
        <property name="Flow Storage Directory">{{ provider.flow_persistence_directory }}</property>
        <property name="Remote To Push">{{ provider.remote_to_push }}</property>
        <property name="Remote Access User">{{ provider.remote_access_user }}</property>
        <property name="Remote Access Password">{{ provider.remote_access_password }}</property>
        <property name="Remote Clone Repository">{{ provider.remote_clone_repository }}</property>
    </flowPersistenceProvider>
{% endif %}

    <!--
    <eventHookProvider>
    	<class>org.apache.nifi.registry.provider.hook.ScriptEventHookProvider</class>
    	<property name="Script Path"></property>
    	<property name="Working Directory"></property>
    	-->
    	<!-- Optional Whitelist Event types
        <property name="Whitelisted Event Type 1">CREATE_FLOW</property>
        <property name="Whitelisted Event Type 2">DELETE_FLOW</property>
    	-->
    <!--
    </eventHookProvider>
    -->

    <!-- This will log all events to a separate file specified by the EVENT_APPENDER in logback.xml -->
    <!--
    <eventHookProvider>
        <class>org.apache.nifi.registry.provider.hook.LoggingEventHookProvider</class>
    </eventHookProvider>
    -->

{% if services.nifi.config['registry_bundle_provider'] == 'FileSystemBundlePersistenceProvider' %}
{% set provider = services.nifi.config['registry_filesystem_bundle_provider'] %}
    <extensionBundlePersistenceProvider>
        <class>org.apache.nifi.registry.provider.extension.FileSystemBundlePersistenceProvider</class>
        <property name="Extension Bundle Storage Directory">{{ provider.bundle_persistence_directory }}</property>
    </extensionBundlePersistenceProvider>
{% endif %}

</providers>
NiFi Registry registry-aliases.xml

 

    Template for customizing the NiFi Registry registry-aliases.xml file.

    Default value:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!--
    Maintained by ADCM
-->
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at
      http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<aliases>
    <!--
    <alias>
        <internal>LOCAL_NIFI_REGISTRY</internal>
        <external>http://registry.nifi.apache.org:18080</external>
    </alias>
    -->
</aliases>
Parameter Description Default value

Set service checks

Defines whether to check availability after cluster installation

true

JAAS template file

 

    The user file template jaas.conf is intended for specifying user data for connecting clients of other services to the current service (paths to keytab files, the useTicketCache parameter, and others). For more information, see Configure a custom jaas.conf.

    Default value:

{% if cluster.config.kerberos_client and cluster.config.kerberos_client.enable_kerberos %}
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    useTicketCache=false
    principal="nifi/{{ ansible_fqdn }}@{{ cluster.config.kerberos_client.realm }}"
    keyTab="{{ cluster.config.kerberos_client.keytab_dir }}/nifi.service.keytab";
};
{% endif %}

Schema-Registry

Main

 

Parameter Description Default value

listener port

Schema-Registry listener port. Specified as listeners in the schema-registry.properties file

8081

schema-registry-env.sh

 

Parameter Description Default value

LOG_DIR

The directory for storing logs

/var/log/schema-registry

JMX_PORT

Port on which Schema-Registry sends JMX metrics

9997

SCHEMA_REGISTRY_HEAP_OPTS

Heap size allocated to the Schema-Registry process

-Xmx1024M

SCHEMA_REGISTRY_JVM_PERFORMANCE_OPTS

JVM performance tuning options

-server

-XX:+UseG1GC

-XX:MaxGCPauseMillis=20

-XX:InitiatingHeapOccupancyPercent=35

-XX:+ExplicitGCInvokesConcurrent

-Djava.awt.headless=true

SCHEMA_REGISTRY_OPTS

JAVA environment variables for Schema-Registry

-Djava.security.auth.login.config=/etc/schema-registry/jaas_config.conf

Basic Auth properties

 

Parameter Description Default value

authentication.method

Authentication method

BASIC

authentication.roles

Defines a comma-separated list of user roles. To be authorized on the Schema-Registry server, the authenticated user must belong to at least one of these roles. For more information, see Basic authentication

admin

authentication.realm

Corresponds to a section in the jaas_config.conf file that defines how the server authenticates users; the file must be passed as a parameter to the JVM during server startup

SchemaRegistry-Props

schema-registry.properties

 

Parameter Description Default value

kafkastore.topic

The durable single partition topic that acts as the durable log for the data. This topic must be compacted to avoid losing data due to retention policy

_schemas

debug

Boolean indicating whether extra debugging information is generated in some error response entities

false

inter.instance.protocol

The protocol used while making calls between the instances of Schema Registry

 — 

ssl.keystore.location

Used for HTTPS. Location of the keystore file to use for SSL

 — 

ssl.keystore.password

Used for HTTPS. The store password for the keystore file

 — 

ssl.key.password

The password of the key contained in the keystore

 — 

ssl.truststore.location

Used for HTTPS. Location of the truststore. Required only to authenticate HTTPS clients

 — 

ssl.truststore.password

The password to access the truststore

 — 

kafkastore.ssl.keystore.location

The location of the SSL keystore file

 — 

kafkastore.ssl.keystore.password

The password to access the keystore

 — 

kafkastore.ssl.key.password

The password of the key contained in the keystore

 — 

kafkastore.ssl.keystore.type

The file format of the keystore

 — 

kafkastore.ssl.truststore.location

The location of the SSL truststore file

 — 

kafkastore.ssl.truststore.password

The password to access the truststore

 — 

kafkastore.ssl.truststore.type

The file format of the truststore

 — 

kafkastore.ssl.endpoint.identification.algorithm

The endpoint identification algorithm to validate the server hostname using the server certificate

 — 

Add key, value

The parameters and their values entered in this field override the parameters specified in the ADCM user interface. This field also allows you to set values for all user parameters that are not displayed in the interface, but are allowed in the configuration file schema-registry.properties

 — 
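
For example, parameters without dedicated fields can be supplied as key/value pairs. The keys below are standard schema-registry.properties parameters used purely for illustration; check the Schema Registry documentation for the exact set supported by your version:

# Entered via "Add key, value"; rendered into schema-registry.properties by ADCM
kafkastore.topic.replication.factor=3
schema.compatibility.level=backward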

JAAS template file

 

    The user file template jaas.conf is intended for specifying user data for connecting clients of other services to the current service (paths to keytab files, the useTicketCache parameter, and others). For more information, see Configure a custom jaas.conf.

    Default value:

{% if cluster.config.basic_auth_default_config is not none %}
{{ services.schema_registry.config.basic_auth_properties_content['authentication.realm'] }} {
  org.eclipse.jetty.jaas.spi.PropertyFileLoginModule required
  file="{{ schema_registry_home_path }}/config/password.properties"
  debug="false";
};
{% endif %}
{% if cluster.config.kerberos_client and cluster.config.kerberos_client.enable_kerberos %}
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    useTicketCache=false
    serviceName="kafka"
    keyTab="{{ cluster.config.kerberos_client.keytab_dir }}/schema-registry.service.keytab"
    principal="schema-registry/{{ ansible_fqdn }}@{{ cluster.config.kerberos_client.realm }}";
};
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    useTicketCache=false
    serviceName="kafka"
    keyTab="{{ cluster.config.kerberos_client.keytab_dir }}/schema-registry.service.keytab"
    principal="schema-registry/{{ ansible_fqdn }}@{{ cluster.config.kerberos_client.realm }}";
};
{%- elif cluster.config.sasl_plain_auth_default_config is not none %}
    {%- set credential = cluster.config.sasl_plain_auth_default_config.sasl_plain_users_data %}
KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="schema-registry"
  password="{{ credential['schema-registry'] }}";
};
{% endif %}

 
Schema-Registry component configuration parameters:

log4j properties configuration

 

Parameter Description Default value

log4j.rootLogger

Setting the logging level

INFO

log4j.logger.kafka

Change to adjust the general broker logging level (output to server.log and stdout). See also log4j.logger.org.apache.kafka

ERROR

log4j.logger.org.apache.zookeeper

Change to adjust ZooKeeper client logging

ERROR

log4j.logger.org.apache.kafka

Change to adjust the general broker logging level (output to server.log and stdout). See also log4j.logger.kafka

ERROR

log4j.logger.org.I0Itec.zkclient

Change to adjust ZooKeeper client logging level

ERROR

ZooKeeper

Main

 

Parameter Description Default value

connect

The ZooKeeper connection string that is used by other services or clusters. It is generated automatically

 — 

dataDir

The location where ZooKeeper stores the in-memory database snapshots and, unless specified otherwise, the transaction log of updates to the database

/var/lib/zookeeper

admin.serverPort

The port on which the embedded Jetty server listens. The Jetty server provides an HTTP interface to the four-letter-word commands

58080

zoo.cfg

 

Parameter Description Default value

clientPort

The port to listen for client connections, that is, the port that clients attempt to connect to

2181

tickTime

The basic unit of time used by ZooKeeper (in ms). It is used for heartbeats, and the minimum session timeout is twice the tickTime

2000

initLimit

The timeout, in ticks, that ZooKeeper uses to limit the length of time the ZooKeeper servers in the quorum have to connect to the leader

5

syncLimit

Limits how far out of date a server can be from the leader (in ticks)

2

maxClientCnxns

Limits the number of concurrent connections that a single client, identified by IP address, can make to a single ZooKeeper server

0

autopurge.snapRetainCount

The number of most recent snapshots and corresponding transaction logs to retain in dataDir and dataLogDir respectively; the rest are deleted. The minimum value is 3

3

autopurge.purgeInterval

The time interval in hours between runs of the purge task. Set to a positive integer (1 and above) to enable the auto purging

24

Add key, value

The parameters and their values entered in this field override the parameters specified in the ADCM user interface. This field also allows you to set values for all user parameters that are not displayed in the interface, but are allowed in the configuration file zoo.cfg

 — 
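
For example, additional zoo.cfg parameters can be supplied as key/value pairs. The keys below are standard ZooKeeper settings used only to illustrate the mechanism:

# Entered via "Add key, value"; rendered into zoo.cfg by ADCM
4lw.commands.whitelist=stat,ruok,mntr
snapCount=100000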

zookeeper-env.sh

 

Parameter Description Default value

ZOO_LOG_DIR

The directory to store the logs

/var/log/zookeeper

ZOOPIDFILE

The directory to store the ZooKeeper process ID

/var/run/zookeeper/zookeeper_server.pid

SERVER_JVMFLAGS

Used to set various JVM parameters, for example those related to garbage collection

-Xmx1024m

JAVA

A path to Java

$JAVA_HOME/bin/java

ZOO_LOG4J_PROP

Used to set the log4j logging level and specify which log appenders to turn on. Turning on the CONSOLE appender sends logs to stdout; turning on ROLLINGFILE causes the zookeeper.log file to be created, rotated, and expired

INFO, CONSOLE, ROLLINGFILE
