ADSCC configuration parameters
This article describes the parameters that can be configured for ADSCC services via ADCM. For details on the configuration process, refer to the relevant articles: Online installation, Offline installation.
NOTE
Some of the parameters become visible in the ADCM UI only after the Advanced flag is set.
The configuration descriptions use the following concepts of the Eclipse Vert.x framework:
- Verticle — a piece of code, deployed in the Vert.x environment, that runs on an event loop.
- Worker verticle — a verticle that runs on a thread from the pool of already created Vert.x worker threads, rather than on a new event loop.
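The difference between the two execution models can be sketched in plain Java (an analogy only, not the Vert.x API): a single "event loop" thread that must never block, and a worker pool that absorbs blocking calls on its behalf.

```java
import java.util.concurrent.*;

public class VerticleAnalogy {

    static String pollMetrics() throws Exception {
        // cf. server.vertx.pools.eventLoopPoolSize: Vert.x keeps a small pool
        // of event-loop threads; an ordinary verticle runs on one of them.
        ExecutorService eventLoop = Executors.newSingleThreadExecutor();
        // cf. server.vertx.pools.workerPoolSize: worker verticles run on
        // threads from a shared worker pool instead of an event loop.
        ExecutorService workerPool = Executors.newFixedThreadPool(20);
        try {
            Future<String> result = eventLoop.submit(() -> {
                // An ordinary verticle hands blocking work (e.g. a JMX poll)
                // to the worker pool — the way a worker verticle runs.
                Future<String> blocking = workerPool.submit(() -> {
                    Thread.sleep(50); // simulated blocking I/O
                    return "metrics";
                });
                return blocking.get(); // demo only: real event-loop code must not block
            });
            return result.get();
        } finally {
            eventLoop.shutdown();
            workerPool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(pollMetrics()); // prints "metrics"
    }
}
```

In real Vert.x the framework owns both pools; the pool-size parameters below bound how many such threads it creates.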
Parameter | Description | Default value |
---|---|---|
server.http.port | Port to connect to ADSCC | 8888 |
server.vertx.pools.eventLoopPoolSize | Maximum number of event loop verticles in the pool | 8 |
server.vertx.pools.workerPoolSize | Maximum number of worker verticles in the pool | 20 |
server.vertx.config.clusterResponseTimeoutMs | Timeout for requesting metrics from a Kafka cluster (in ms) | 500 |
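Taken together, these dotted parameter names suggest a nested configuration file. The sketch below is an illustration only — the actual file format and key layout used by ADCM may differ:

```yaml
server:
  http:
    port: 8888                       # port to connect to ADSCC
  vertx:
    pools:
      eventLoopPoolSize: 8           # maximum event loop verticles
      workerPoolSize: 20             # maximum worker verticles
    config:
      clusterResponseTimeoutMs: 500  # Kafka cluster metrics timeout, ms
```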
Parameter | Description | Default value |
---|---|---|
server.vertx.verticle.jmx.instances | Number of instances of the jmx verticle | 1 |
server.vertx.verticle.jmx.workerPoolSize | Maximum number of worker verticles for the jmx verticle | 5 |
server.vertx.verticle.jmx.jmxPoolTimeMilliseconds | Interval between requests for metrics from an ADS cluster over JMX (in ms) | 60000 |
Parameter | Description | Default value |
---|---|---|
server.vertx.verticle.kafka-cluster-info-producer.instances | Number of instances of the kafka-cluster-info-producer verticle | 1 |
server.vertx.verticle.kafka-cluster-info-producer.workerPoolSize | Maximum number of worker verticles for the kafka-cluster-info-producer verticle | 7 |
server.vertx.verticle.kafka-cluster-info-storage.instances | Number of instances of the kafka-cluster-info-storage verticle | 1 |
server.vertx.verticle.kafka-cluster-info-storage.workerPoolSize | Maximum number of worker verticles for the kafka-cluster-info-storage verticle | 1 |
server.vertx.verticle.kafka-cluster-name-publisher.instances | Number of instances of the kafka-cluster-name-publisher verticle | 1 |
server.vertx.verticle.kafka-cluster-name-publisher.workerPoolSize | Maximum number of worker verticles for the kafka-cluster-name-publisher verticle | 1 |
server.vertx.verticle.kafka-cluster-name-publisher.pushPeriodMilliseconds | Interval between requests for information about imported clusters by the kafka-cluster-name-publisher verticle (in ms) | 5000 |
server.vertx.verticle.kafka-cluster-name-publisher.responseTimeoutMilliseconds | Timeout when requesting information about imported clusters by the kafka-cluster-name-publisher verticle (in ms) | 15000 |
server.vertx.verticle.kafka-offset-info-producer.instances | Number of instances of the kafka-offset-info-producer verticle | 1 |
server.vertx.verticle.kafka-offset-info-producer.workerPoolSize | Maximum number of worker verticles for the kafka-offset-info-producer verticle | 2 |
server.vertx.verticle.kafka-offset-info-producer.updatePeriodMilliseconds | Interval between requests for data on the latest offsets in Kafka topics of ADS clusters (in ms) | 200 |
server.vertx.verticle.kafka-commit-offset-info-producer.instances | Number of instances of the kafka-commit-offset-info-producer verticle | 1 |
server.vertx.verticle.kafka-commit-offset-info-producer.workerPoolSize | Maximum number of worker verticles for the kafka-commit-offset-info-producer verticle | 2 |
Parameter | Description | Default value |
---|---|---|
server.vertx.kafka.lastOffsetPollTimeoutMs | Timeout of the last poll of the topic offset (in ms) | 200 |
Parameter | Description | Default value |
---|---|---|
Admin Password | ADSCC administrator password | — |
Parameter | Description | Default value |
---|---|---|
server.http.ssl.keyStoreFilename | Name of the keystore file | certs/keystore.jks |
server.http.ssl.keyStorePassword | Keystore password | — |
The logback_template parameter defines the log template.
Default value:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property name="LOG_PATH" value="logs/app.log"/>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <appender name="FILE-ROLLING" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_PATH}</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>logs/archived/app.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
            <!-- each archived file, size max 10MB -->
            <maxFileSize>10MB</maxFileSize>
            <!-- total size of all archive files, if total size > 20GB, it will delete old archived file -->
            <totalSizeCap>20GB</totalSizeCap>
            <!-- 60 days to keep -->
            <maxHistory>60</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="STDOUT"/>
    </root>
    <logger name="io.arenadata.adscc" level="DEBUG">
        <appender-ref ref="FILE-ROLLING"/>
    </logger>
    <logger name="io.arenadata.adscc" level="DEBUG">
        <appender-ref ref="STDOUT"/>
    </logger>
    <logger name="org.apache.kafka" level="ERROR">
        <appender-ref ref="STDOUT"/>
    </logger>
</configuration>
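The template uses standard logback syntax, so individual loggers can be tuned in place. For example, to reduce log volume, the application file logger could be raised from DEBUG to INFO (a sketch of an edit to the template above, not a documented ADSCC setting):

```xml
<logger name="io.arenadata.adscc" level="INFO">
    <appender-ref ref="FILE-ROLLING"/>
</logger>
```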
Parameter | Description | Default value |
---|---|---|
server.monitoring.openTelemetry.enable | Enables the OpenTelemetry monitoring system | ON |
server.monitoring.prometheus.enable | Enables the Prometheus monitoring system | ON |
Parameter | Description | Default value |
---|---|---|
client.http.webClientOptions.maxPoolSize | Maximum pool size for connections | 100 |
client.http.circuitbreaker.maxFailures | Number of failures at which the circuit breaker disables execution of the operation | 3 |
client.http.circuitbreaker.timeout | Request duration above which the request increments the failure count (in ms) | 500 |
client.http.circuitbreaker.fallbackOnFailure | Whether the fallback is executed on failure, even when the circuit is closed | ON |
client.http.circuitbreaker.resetTimeout | Time after which the circuit breaker tries to close the circuit again (in ms) | 100 |
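The semantics these parameters control can be sketched with a minimal single-threaded circuit breaker (an illustration of the pattern, not the Vert.x implementation; the `timeout` parameter, which turns slow requests into counted failures, is omitted for brevity):

```java
import java.util.function.Supplier;

public class SimpleCircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private final int maxFailures;            // cf. maxFailures (default 3)
    private final long resetTimeoutMs;        // cf. resetTimeout (default 100)
    private final boolean fallbackOnFailure;  // cf. fallbackOnFailure (default ON)
    private State state = State.CLOSED;
    private int failures = 0;
    private long openedAt = 0;

    SimpleCircuitBreaker(int maxFailures, long resetTimeoutMs, boolean fallbackOnFailure) {
        this.maxFailures = maxFailures;
        this.resetTimeoutMs = resetTimeoutMs;
        this.fallbackOnFailure = fallbackOnFailure;
    }

    <T> T execute(Supplier<T> operation, Supplier<T> fallback) {
        if (state == State.OPEN) {
            if (System.currentTimeMillis() - openedAt >= resetTimeoutMs) {
                state = State.HALF_OPEN;  // resetTimeout elapsed: allow one trial call
            } else {
                return fallback.get();    // fail fast while the circuit is open
            }
        }
        try {
            T result = operation.get();
            failures = 0;                 // success closes the circuit again
            state = State.CLOSED;
            return result;
        } catch (RuntimeException e) {
            failures++;
            if (state == State.HALF_OPEN || failures >= maxFailures) {
                state = State.OPEN;       // too many failures: open the circuit
                openedAt = System.currentTimeMillis();
            }
            if (fallbackOnFailure || state == State.OPEN) {
                return fallback.get();
            }
            throw e;
        }
    }

    State state() { return state; }

    public static void main(String[] args) {
        SimpleCircuitBreaker cb = new SimpleCircuitBreaker(3, 100, true);
        for (int i = 0; i < 4; i++) {
            String r = cb.execute(
                () -> { throw new RuntimeException("remote call failed"); },
                () -> "cached response");
            System.out.println(r + " (state: " + cb.state() + ")");
        }
    }
}
```

With the defaults above, three failed requests open the circuit; further requests return the fallback immediately until resetTimeout (100 ms) passes and a trial request is allowed.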
Group | Parameter | Description |
---|---|---|
clusters | name | Kafka cluster name |
 | bootstrapservers | Bootstrap Kafka cluster servers |
 | schemaRegistryUrl | URL to connect to Schema Registry |
 | jmxClientProperties | JMX port |
kafkaConnectClusters | name | Kafka Connect cluster name |
 | url | URL to connect to Kafka Connect |
 | jmxClientProperties | JMX port |
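The Group/Parameter layout above implies a nested structure. A hypothetical YAML sketch is shown below — all names, hosts, and the exact nesting are illustrative only and may differ from the format ADCM actually generates:

```yaml
clusters:
  - name: kafka-sample                       # Kafka cluster name
    bootstrapservers: "host1:9092,host2:9092"
    schemaRegistryUrl: "http://host1:8081"   # Schema Registry URL
    jmxClientProperties:
      port: 9999                             # JMX port
kafkaConnectClusters:
  - name: connect-sample                     # Kafka Connect cluster name
    url: "http://host1:8083"                 # Kafka Connect URL
    jmxClientProperties:
      port: 9998                             # JMX port
```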