ADS Control configuration parameters

This article describes the parameters that can be configured for the ADS Control service via ADCM. For details on the configuration process, refer to the relevant articles: Online installation and Offline installation.

NOTE
Some parameters become visible in the ADCM UI only after the Show advanced flag is set.

The configuration descriptions use the concepts of the Eclipse Vert.x framework:

  • Verticle — a piece of code that runs on an event loop and can be deployed in the Vert.x environment.

  • Worker verticle — a verticle that runs on a thread from the pool of already created Vert.x worker threads, rather than on an event loop (the sketch after this list illustrates the difference).
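For illustration, below is a minimal sketch of deploying a regular (event loop) verticle and a worker verticle with the Vert.x Java API. This is not ADS Control source code; the class names and the event bus address are hypothetical.

import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class VerticleExample {

    // A regular verticle: its handlers run on an event loop thread
    public static class EventLoopVerticle extends AbstractVerticle {
        @Override
        public void start() {
            vertx.eventBus().consumer("demo.address",
                    msg -> msg.reply("handled on " + Thread.currentThread().getName()));
        }
    }

    // The same kind of code deployed as a worker verticle runs on a worker thread,
    // so it may perform blocking operations without stalling an event loop
    public static class BlockingVerticle extends AbstractVerticle {
        @Override
        public void start() throws Exception {
            Thread.sleep(100); // blocking call, acceptable on a worker thread
        }
    }

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        vertx.deployVerticle(new EventLoopVerticle());
        vertx.deployVerticle(new BlockingVerticle(),
                new DeploymentOptions().setWorker(true)); // deploy on the worker pool
    }
}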

General

 

| Parameter | Description | Default value |
|---|---|---|
| server.http.port | Port used to connect to ADS Control | 8888 |
| server.vertx.pools.eventLoopPoolSize | Maximum number of event loop verticles in the pool | 8 |
| server.vertx.pools.workerPoolSize | Maximum number of worker verticles in the pool | 20 |
| server.vertx.config.clusterResponseTimeoutMs | Time interval for requesting metrics from a Kafka cluster (in ms) | 500 |
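The pool sizes above correspond to the standard Vert.x options of the same name. Below is a minimal sketch, assuming these values are applied through VertxOptions when the Vertx instance is created; it is an illustration, not ADS Control source code.

import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class VertxPoolsExample {
    public static void main(String[] args) {
        // Values mirror the defaults from the table above
        VertxOptions options = new VertxOptions()
                .setEventLoopPoolSize(8)   // server.vertx.pools.eventLoopPoolSize
                .setWorkerPoolSize(20);    // server.vertx.pools.workerPoolSize

        Vertx vertx = Vertx.vertx(options);
        System.out.println("Vert.x started with custom pool sizes: " + vertx);
    }
}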

JMX Workers

 

| Parameter | Description | Default value |
|---|---|---|
| server.vertx.verticle.jmx.instances | Number of JMX verticle instances that receive and process requests from the ADS cluster over JMX | 1 |
| server.vertx.verticle.jmx.workerPoolSize | Maximum number of JMX worker verticles in the pool | 5 |
| server.vertx.verticle.jmx.jmxPoolTimeMilliseconds | Time interval for requesting metrics from the ADS cluster over JMX (in ms) | 60000 |
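The per-verticle instances and workerPoolSize values map onto the standard Vert.x deployment options. A minimal sketch of such a deployment is shown below; the JmxVerticle class and worker pool name are hypothetical stand-ins, not ADS Control source code.

import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class JmxWorkerDeploymentExample {

    // Hypothetical stand-in for a verticle that polls the ADS cluster over JMX
    public static class JmxVerticle extends AbstractVerticle { }

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        DeploymentOptions options = new DeploymentOptions()
                .setWorker(true)                       // run on worker threads
                .setInstances(1)                       // server.vertx.verticle.jmx.instances
                .setWorkerPoolName("jmx-worker-pool")  // hypothetical pool name
                .setWorkerPoolSize(5);                 // server.vertx.verticle.jmx.workerPoolSize

        vertx.deployVerticle(JmxVerticle.class, options);
    }
}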

Kafka Workers

 

| Parameter | Description | Default value |
|---|---|---|
| server.vertx.verticle.kafka-cluster-info-producer.instances | Number of kafka-cluster-info-producer verticle instances that receive data from ADS clusters and write it to kafka-cluster-info-storage verticles | 1 |
| server.vertx.verticle.kafka-cluster-info-producer.workerPoolSize | Number of kafka-cluster-info-producer worker verticles in the pool | 7 |
| server.vertx.verticle.kafka-cluster-info-storage.instances | Number of kafka-cluster-info-storage verticle instances that write and store the latest ADS cluster polling data | 1 |
| server.vertx.verticle.kafka-cluster-info-storage.workerPoolSize | Number of kafka-cluster-info-storage worker verticles in the pool | 1 |
| server.vertx.verticle.kafka-cluster-name-publisher.instances | Number of kafka-cluster-name-publisher verticle instances that pass data about imported Kafka clusters to kafka-cluster-info-producer verticles to start cluster polling | 1 |
| server.vertx.verticle.kafka-cluster-name-publisher.workerPoolSize | Number of kafka-cluster-name-publisher worker verticles in the pool | 1 |
| server.vertx.verticle.kafka-cluster-name-publisher.pushPeriodMilliseconds | Interval between requests for information about imported clusters made by the kafka-cluster-name-publisher verticles (in ms) | 5000 |
| server.vertx.verticle.kafka-cluster-name-publisher.responseTimeoutMilliseconds | Timeout for requests for information about imported clusters made by the kafka-cluster-name-publisher verticles (in ms) | 15000 |
| server.vertx.verticle.kafka-offset-info-producer.instances | Number of kafka-offset-info-producer verticle instances that receive the latest offsets in Kafka topics and write them to the kafka-cluster-info-storage verticle | 1 |
| server.vertx.verticle.kafka-offset-info-producer.workerPoolSize | Number of kafka-offset-info-producer worker verticles in the pool | 2 |
| server.vertx.verticle.kafka-offset-info-producer.updatePeriodMilliseconds | Interval between requests for the latest offsets in Kafka topics of ADS clusters (in ms) | 200 |
| server.vertx.verticle.kafka-commit-offset-info-producer.instances | Number of kafka-commit-offset-info-producer verticle instances that receive data about the last offset of the __consumer_offsets topic (the last message read by a consumer) | 1 |
| server.vertx.verticle.kafka-commit-offset-info-producer.workerPoolSize | Number of kafka-commit-offset-info-producer worker verticles in the pool | 2 |
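For reference, the data these verticles collect, the latest (end) offsets of topic partitions and the committed offsets of consumer groups, can be obtained with the standard Kafka AdminClient. The sketch below is only an illustration of those two queries; the bootstrap servers, topic, and group name are assumptions, and it does not reproduce ADS Control internals.

import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.common.TopicPartition;

public class KafkaOffsetsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Hypothetical bootstrap servers, corresponding to the "bootstrapservers" cluster setting
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-1:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            TopicPartition tp = new TopicPartition("demo-topic", 0);

            // Latest (end) offset of a topic partition, the kind of data kafka-offset-info-producer tracks
            ListOffsetsResult.ListOffsetsResultInfo latest =
                    admin.listOffsets(Map.of(tp, OffsetSpec.latest())).partitionResult(tp).get();
            System.out.println("End offset: " + latest.offset());

            // Committed offsets of a consumer group, the kind of data kafka-commit-offset-info-producer tracks
            admin.listConsumerGroupOffsets("demo-group")
                    .partitionsToOffsetAndMetadata().get()
                    .forEach((partition, meta) ->
                            System.out.println(partition + " committed at " + meta.offset()));
        }
    }
}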

Kafka Consumer

 

| Parameter | Description | Default value |
|---|---|---|
| server.vertx.kafka.lastOffsetPollTimeoutMs | Timeout of the last poll of the __consumer_offsets topic (data about the last message read by a consumer) (in ms) | 200 |
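This timeout plays the role of the duration passed to a Kafka consumer poll call. Below is a minimal hedged sketch of polling the __consumer_offsets topic with a 200 ms timeout; the bootstrap servers and partition assignment are assumptions, not ADS Control source code.

import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class ConsumerOffsetsPollExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-1:9092"); // hypothetical
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            // Read the internal __consumer_offsets topic, which stores committed consumer offsets
            consumer.assign(List.of(new TopicPartition("__consumer_offsets", 0)));

            // The poll duration corresponds to server.vertx.kafka.lastOffsetPollTimeoutMs
            ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofMillis(200));
            System.out.println("Fetched " + records.count() + " records");
        }
    }
}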

Basic authentication

 

| Parameter | Description | Default value |
|---|---|---|
| Admin Password | ADS Control administrator password | — |

SSL

 

| Parameter | Description | Default value |
|---|---|---|
| server.http.ssl.keyStoreFilename | Name of the keystore file | certs/keystore.jks |
| server.http.ssl.keyStorePassword | Keystore password | — |
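In Vert.x, a JKS keystore and its password are typically supplied to the HTTP server through JksOptions. The sketch below shows where the two SSL parameters above would usually end up; the password value and request handler are assumptions, and this is not ADS Control source code.

import io.vertx.core.Vertx;
import io.vertx.core.http.HttpServerOptions;
import io.vertx.core.net.JksOptions;

public class SslServerExample {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        HttpServerOptions options = new HttpServerOptions()
                .setSsl(true)
                .setKeyCertOptions(new JksOptions()
                        .setPath("certs/keystore.jks")  // server.http.ssl.keyStoreFilename
                        .setPassword("changeit"));      // server.http.ssl.keyStorePassword (hypothetical)

        vertx.createHttpServer(options)
                .requestHandler(req -> req.response().end("ok"))
                .listen(8888);                          // server.http.port
    }
}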

Logging

 

The logback_template parameter defines the Logback logging configuration template.

Default value:

 <?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property name="LOG_PATH" value="logs/app.log"/>

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n
            </pattern>
        </encoder>
    </appender>

    <appender name="FILE-ROLLING" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_PATH}</file>

        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>logs/archived/app.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
            <!-- each archived file, size max 10MB -->
            <maxFileSize>10MB</maxFileSize>
            <!-- total size of all archive files, if total size > 20GB, it will delete old archived file -->
            <totalSizeCap>20GB</totalSizeCap>
            <!-- 60 days to keep -->
            <maxHistory>60</maxHistory>
        </rollingPolicy>

        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="STDOUT"/>
    </root>
    <logger name="io.arenadata.adscc" level="DEBUG">
        <appender-ref ref="FILE-ROLLING"/>
    </logger>
    <logger name="io.arenadata.adscc" level="DEBUG">
        <appender-ref ref="STDOUT"/>
    </logger>
    <logger name="org.apache.kafka" level="ERROR">
        <appender-ref ref="STDOUT"/>
    </logger>
</configuration>

Monitoring

 

| Parameter | Description | Default value |
|---|---|---|
| server.monitoring.openTelemetry.enable | Enables the OpenTelemetry monitoring system | ON |
| server.monitoring.prometheus.enable | Enables the Prometheus monitoring system | ON |
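In a Vert.x application, Prometheus metrics and OpenTelemetry tracing are commonly wired in via the vertx-micrometer-metrics and vertx-opentelemetry modules. The sketch below shows how such toggles typically translate into Vert.x options; it is an assumption for illustration only, not necessarily how ADS Control enables them internally.

import io.opentelemetry.api.GlobalOpenTelemetry;
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.micrometer.MicrometerMetricsOptions;
import io.vertx.micrometer.VertxPrometheusOptions;
import io.vertx.tracing.opentelemetry.OpenTelemetryOptions;

public class MonitoringExample {
    public static void main(String[] args) {
        VertxOptions options = new VertxOptions()
                // server.monitoring.prometheus.enable = ON
                .setMetricsOptions(new MicrometerMetricsOptions()
                        .setPrometheusOptions(new VertxPrometheusOptions().setEnabled(true))
                        .setEnabled(true))
                // server.monitoring.openTelemetry.enable = ON
                .setTracingOptions(new OpenTelemetryOptions(GlobalOpenTelemetry.get()));

        Vertx vertx = Vertx.vertx(options);
        System.out.println("Metrics enabled: " + vertx.isMetricsEnabled());
    }
}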

Client

 

| Parameter | Description | Default value |
|---|---|---|
| client.http.webClientOptions.maxPoolSize | Maximum size of the connection pool | 100 |
| client.http.circuitbreaker.maxFailures | Number of failures after which the Circuit Breaker pattern blocks execution of the operation | 3 |
| client.http.circuitbreaker.timeout | Time after which a request is considered failed and increments the failure count (in ms) | 500 |
| client.http.circuitbreaker.fallbackOnFailure | Sets whether the fallback is executed on failure, even when the circuit is closed | ON |
| client.http.circuitbreaker.resetTimeout | Time after which an open circuit breaker tries to close the circuit again (in ms) | 100 |
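These parameters map closely onto the Vert.x WebClient and circuit breaker options. The sketch below uses the defaults from the table; the breaker name, host, port, and endpoint path are hypothetical, and it is an illustration rather than ADS Control source code.

import io.vertx.circuitbreaker.CircuitBreaker;
import io.vertx.circuitbreaker.CircuitBreakerOptions;
import io.vertx.core.Vertx;
import io.vertx.ext.web.client.WebClient;
import io.vertx.ext.web.client.WebClientOptions;

public class HttpClientExample {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        WebClient client = WebClient.create(vertx,
                new WebClientOptions().setMaxPoolSize(100)); // client.http.webClientOptions.maxPoolSize

        CircuitBreaker breaker = CircuitBreaker.create("ads-control-client", vertx,
                new CircuitBreakerOptions()
                        .setMaxFailures(3)            // client.http.circuitbreaker.maxFailures
                        .setTimeout(500)              // client.http.circuitbreaker.timeout
                        .setFallbackOnFailure(true)   // client.http.circuitbreaker.fallbackOnFailure
                        .setResetTimeout(100));       // client.http.circuitbreaker.resetTimeout

        // Wrap an HTTP call in the circuit breaker: failures and timeouts count toward maxFailures
        breaker.<String>execute(promise ->
                client.get(8888, "localhost", "/health") // hypothetical endpoint
                        .send()
                        .onSuccess(resp -> promise.complete(resp.bodyAsString()))
                        .onFailure(promise::fail)
        ).onSuccess(System.out::println);
    }
}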

Kafka Clusters

 

| Group | Parameter | Description |
|---|---|---|
| clusters | name | Kafka cluster name |
| clusters | bootstrapservers | Kafka cluster bootstrap servers |
| clusters | schemaRegistryUrl | URL for connecting to Schema Registry |
| clusters | jmxClientProperties | JMX port |
| kafkaConnectClusters | name | Kafka Connect cluster name |
| kafkaConnectClusters | url | URL for connecting to Kafka Connect |
| kafkaConnectClusters | jmxClientProperties | JMX port |
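The jmxClientProperties setting points at the JMX port exposed by the cluster nodes. For reference, the sketch below shows how a JMX client typically connects to such a port and reads a standard Kafka broker metric; the host, port, and connection URL are assumptions, not values taken from an actual ADS Control deployment.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxClientExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical Kafka broker host and JMX port
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://kafka-1:9999/jmxrmi");

        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();

            // A standard Kafka broker metric exposed over JMX
            ObjectName name = new ObjectName(
                    "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec");
            Object rate = connection.getAttribute(name, "OneMinuteRate");
            System.out.println("MessagesInPerSec (1 min rate): " + rate);
        }
    }
}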
