【Kafka】Notes on Installing, Configuring, and Starting Kafka 4.1.0, and the Problems Encountered Along the Way
I downloaded the latest Kafka release tarball (kafka_2.13-4.1.0.tgz) from the official site, uploaded it to a virtual machine, extracted it, and renamed the directory to kafka.
Starting the Kafka process for the first time, without any configuration changes, failed with a complaint that the Java runtime was too old and a newer version was required.
2025-09-30 15:52:43.375 | main | INFO | io.prometheus.jmx.JavaAgent | Starting ...
2025-09-30 15:52:43.814 | main | INFO | io.prometheus.jmx.JavaAgent | HTTP enabled [true]
2025-09-30 15:52:43.814 | main | INFO | io.prometheus.jmx.JavaAgent | HTTP host:port [0.0.0.0:9308]
2025-09-30 15:52:43.814 | main | INFO | io.prometheus.jmx.JavaAgent | Starting HTTPServer ...
2025-09-30 15:52:43.959 | main | INFO | io.prometheus.jmx.JavaAgent | HTTPServer started
2025-09-30 15:52:43.960 | main | INFO | io.prometheus.jmx.JavaAgent | OpenTelemetry enabled [false]
2025-09-30 15:52:43.960 | main | INFO | io.prometheus.jmx.JavaAgent | Running ...
Error: LinkageError occurred while loading main class kafka.Kafka
java.lang.UnsupportedClassVersionError: kafka/Kafka has been compiled by a more recent version of the Java Runtime (class file version 61.0), this version of the Java Runtime only recognizes class file versions up to 55.0
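For reference, a JVM class file major version maps to a Java release as (major − 44), so class file 61.0 corresponds to Java 17 and 55.0 to Java 11: the installed JDK was Java 11, while Kafka 4.x is built for Java 17+. Switching to JDK 17 amounts to exporting environment variables along these lines (a sketch; the install path assumes the tarball was extracted under /usr, so adjust to your own layout):

```shell
# Class file major version maps to a Java release as (major - 44):
echo $((61 - 44))   # class file 61.0 -> Java 17
echo $((55 - 44))   # class file 55.0 -> Java 11

# Point the shell at the extracted JDK 17, e.g. in /etc/profile or ~/.bashrc
# (path is an assumption based on the setup described in this post):
export JAVA_HOME=/usr/jdk-17.0.16.0.1
export PATH=$JAVA_HOME/bin:$PATH
```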
So I uploaded the JDK 17 tarball (jdk-17.0.16.0.1_linux-x64.tar.gz) to the VM, configured the JDK 17 environment variables there, and started Kafka again, which produced the error below.
2025-09-30 15:56:47.578 | main | INFO | io.prometheus.jmx.JavaAgent | Starting ...
2025-09-30 15:56:47.868 | main | INFO | io.prometheus.jmx.JavaAgent | HTTP enabled [true]
2025-09-30 15:56:47.869 | main | INFO | io.prometheus.jmx.JavaAgent | HTTP host:port [0.0.0.0:9308]
2025-09-30 15:56:47.869 | main | INFO | io.prometheus.jmx.JavaAgent | Starting HTTPServer ...
2025-09-30 15:56:47.948 | main | INFO | io.prometheus.jmx.JavaAgent | HTTPServer started
2025-09-30 15:56:47.955 | main | INFO | io.prometheus.jmx.JavaAgent | OpenTelemetry enabled [false]
2025-09-30 15:56:47.955 | main | INFO | io.prometheus.jmx.JavaAgent | Running ...
[2025-09-30 15:56:49,337] INFO Registered `kafka:type=kafka.Log4jController` MBean (kafka.utils.Log4jControllerRegistration$)
[2025-09-30 15:56:49,873] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
java.lang.RuntimeException: No readable meta.properties files found.
	at org.apache.kafka.metadata.properties.MetaPropertiesEnsemble.verify(MetaPropertiesEnsemble.java:480) ~[kafka-metadata-4.1.0.jar:?]
	at kafka.server.KafkaRaftServer$.initializeLogDirs(KafkaRaftServer.scala:140) ~[kafka_2.13-4.1.0.jar:?]
	at kafka.server.KafkaRaftServer.<init>(KafkaRaftServer.scala:55) ~[kafka_2.13-4.1.0.jar:?]
	at kafka.Kafka$.buildServer(Kafka.scala:68) ~[kafka_2.13-4.1.0.jar:?]
	at kafka.Kafka$.main(Kafka.scala:75) [kafka_2.13-4.1.0.jar:?]
	at kafka.Kafka.main(Kafka.scala) [kafka_2.13-4.1.0.jar:?]
Some research showed that in KRaft mode (i.e., without ZooKeeper), the storage directories must first be initialized with kafka-storage.sh.
First, generate a random cluster UUID with the following script and argument:
# cd kafka/
# ./bin/kafka-storage.sh random-uuid
DMK-PCK8RXWdjz7klUKnpg
Second, use this UUID to format the storage directory (for a cluster, the command must be run on every node with the same UUID). Since I am running in standalone mode, the --standalone flag must be passed explicitly.
# cd bin/
# ./kafka-storage.sh format -t DMK-PCK8RXWdjz7klUKnpg -c ../config/server.properties --standalone
Formatting dynamic metadata voter directory /root/kafka/logs with metadata.version 4.1-IV1.
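To confirm the format step took effect, you can check that a meta.properties file now exists in the configured log directory (the log.dirs path from server.properties); it records, among other things, the cluster id and node id. A quick inspection, assuming the paths used in this post:

```shell
# Inspect the freshly formatted log dir (path from log.dirs in server.properties)
ls /root/kafka/logs/
cat /root/kafka/logs/meta.properties
```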
Third, start the Kafka instance; the process now comes up successfully.
# ./kafka-server-start.sh -daemon ../config/server.properties
# ps -ef| grep kafka| grep -v grep
root 8358 1 99 16:37 pts/0 00:00:07 /usr/jdk-17.0.16.0.1/bin/java -Xmx1G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:MaxInlineLevel=15 -Djava.awt.headless=true -Xlog:gc*:file=/root/kafka/bin/../logs/kafkaServer-gc.log:time,tags:filecount=10,filesize=100M -Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/root/kafka/bin/../logs -Dlog4j2.configurationFile=./../config/log4j2.yaml -cp .:/usr/jdk-17.0.16.0.1/lib:/usr/jdk-17.0.16.0.1/lib/tools.jar:/root/kafka/bin/../libs/activation-1.1.1.jar:/root/kafka/bin/../libs/aopalliance-repackaged-3.0.6.jar:/root/kafka/bin/../libs/argparse4j-0.7.0.jar:/root/kafka/bin/../libs/caffeine-3.2.0.jar:/root/kafka/bin/../libs/classgraph-4.8.179.jar:/root/kafka/bin/../libs/commons-beanutils-1.11.0.jar:/root/kafka/bin/../libs/commons-collections-3.2.2.jar:/root/kafka/bin/../libs/commons-digester-2.1.jar:/root/kafka/bin/../libs/commons-lang3-3.18.0.jar:/root/kafka/bin/../libs/commons-logging-1.3.5.jar:/root/kafka/bin/../libs/commons-validator-1.9.0.jar:/root/kafka/bin/../libs/connect-api-4.1.0.jar:/root/kafka/bin/../libs/connect-basic-auth-extension-4.1.0.jar:/root/kafka/bin/../libs/connect-json-4.1.0.jar:/root/kafka/bin/../libs/connect-mirror-4.1.0.jar:/root/kafka/bin/../libs/connect-mirror-client-4.1.0.jar:/root/kafka/bin/../libs/connect-runtime-4.1.0.jar:/root/kafka/bin/../libs/connect-transforms-4.1.0.jar:/root/kafka/bin/../libs/hash4j-0.22.0.jar:/root/kafka/bin/../libs/HdrHistogram-2.2.2.jar:/root/kafka/bin/../libs/hk2-api-3.0.6.jar:/root/kafka/bin/../libs/hk2-locator-3.0.6.jar:/root/kafka/bin/../libs/hk2-utils-3.0.6.jar:/root/kafka/bin/../libs/jackson-annotations-2.19.0.jar:/root/kafka/bin/../libs/jackson-core-2.19.0.jar:/root/kafka/bin/../libs/jackson-databind-2.19.0.jar:/root/kafka/bin/../libs/jackson-dataformat-csv-2.19.0.jar:/root/kafka/bin/../libs/jackson-dat
aformat-yaml-2.19.0.jar:/root/kafka/bin/../libs/jackson-datatype-jdk8-2.19.0.jar:/root/kafka/bin/../libs/jackson-jakarta-rs-base-2.19.0.jar:/root/kafka/bin/../libs/jackson-jakarta-rs-json-provider-2.19.0.jar:/root/kafka/bin/../libs/jackson-module-blackbird-2.19.0.jar:/root/kafka/bin/../libs/jackson-module-jakarta-xmlbind-annotations-2.19.0.jar:/root/kafka/bin/../libs/jakarta.activation-2.0.1.jar:/root/kafka/bin/../libs/jakarta.activation-api-2.1.0.jar:/root/kafka/bin/../libs/jakarta.annotation-api-2.1.1.jar:/root/kafka/bin/../libs/jakarta.inject-api-2.0.1.jar:/root/kafka/bin/../libs/jakarta.servlet-api-6.0.0.jar:/root/kafka/bin/../libs/jakarta.validation-api-3.0.2.jar:/root/kafka/bin/../libs/jakarta.ws.rs-api-3.1.0.jar:/root/kafka/bin/../libs/jakarta.xml.bind-api-3.0.1.jar:/root/kafka/bin/../libs/javassist-3.30.2-GA.jar:/root/kafka/bin/../libs/javax.activation-api-1.2.0.jar:/root/kafka/bin/../libs/javax.annotation-api-1.3.2.jar:/root/kafka/bin/../libs/jaxb-api-2.3.1.jar:/root/kafka/bin/../libs/jersey-client-3.1.10.jar:/root/kafka/bin/../libs/jersey-common-3.1.10.jar:/root/kafka/bin/../libs/jersey-container-servlet-3.1.10.jar:/root/kafka/bin/../libs/jersey-container-servlet-core-3.1.10.jar:/root/kafka/bin/../libs/jersey-hk2-3.1.10.jar:/root/kafka/bin/../libs/jersey-server-3.1.10.jar:/root/kafka/bin/../libs/jetty-alpn-client-12.0.22.jar:/root/kafka/bin/../libs/jetty-client-12.0.22.jar:/root/kafka/bin/../libs/jetty-ee10-servlet-12.0.22.jar:/root/kafka/bin/../libs/jetty-ee10-servlets-12.0.22.jar:/root/kafka/bin/../libs/jetty-http-12.0.22.jar:/root/kafka/bin/../libs/jetty-io-12.0.22.jar:/root/kafka/bin/../libs/jetty-security-12.0.22.jar:/root/kafka/bin/../libs/jetty-server-12.0.22.jar:/root/kafka/bin/../libs/jetty-session-12.0.22.jar:/root/kafka/bin/../libs/jetty-util-12.0.22.jar:/root/kafka/bin/../libs/jline-3.30.4.jar:/root/kafka/bin/../libs/jopt-simple-5.0.4.jar:/root/kafka/bin/../libs/jose4j-0.9.6.jar:/root/kafka/bin/../libs/jspecify-1.0.0.jar:/root/kafka/bin/../libs
/kafka_2.13-4.1.0.jar:/root/kafka/bin/../libs/kafka-clients-4.1.0.jar:/root/kafka/bin/../libs/kafka-coordinator-common-4.1.0.jar:/root/kafka/bin/../libs/kafka-group-coordinator-4.1.0.jar:/root/kafka/bin/../libs/kafka-group-coordinator-api-4.1.0.jar:/root/kafka/bin/../libs/kafka-metadata-4.1.0.jar:/root/kafka/bin/../libs/kafka-raft-4.1.0.jar:/root/kafka/bin/../libs/kafka-server-4.1.0.jar:/rootkafka/bin/../libs/kafka-server-common-4.1.0.jar:/root/kafka/bin/../libs/kafka-share-coordinator-4.1.0.jar:/root/kafka/bin/../libs/kafka-shell-4.1.0.jar:/root/kafka/bin/../libs/kafka-storage-4.1.0.jar:/root/kafka/bin/../libs/kafka-storage-api-4.1.0.jar:/root/kafka/bin/../libs/kafka-streams-4.1.0.jar:/root/kafka/bin/../libs/kafka-streams-examples-4.1.0.jar:/root/kafka/bin/../libs/kafka-streams-scala_2.13-4.1.0.jar:/root/kafka/bin/../libs/kafka-streams-test-utils-4.1.0.jar:/root/kafka/bin/../libs/kafka-tools-4.1.0.jar:/root/kafka/bin/../libs/kafka-tools-api-4.1.0.jar:/root/kafka/bin/../libs/kafka-transaction-coordinator-4.1.0.jar:/root/kafka/bin/../libs/log4j-1.2-api-2.24.3.jar:/root/kafka/bin/../libs/log4j-api-2.24.3.jar:/root/kafka/bin/../libs/log4j-core-2.24.3.jar:/root/kafka/bin/../libs/log4j-slf4j-impl-2.24.3.jar:/root/kafka/bin/../libs/lz4-java-1.8.0.jar:/root/kafka/bin/../libs/maven-artifact-3.9.6.jar:/root/kafka/bin/../libs/metrics-core-2.2.0.jar:/root/kafka/bin/../libs/opentelemetry-proto-1.0.0-alpha.jar:/root/kafka/bin/../libs/osgi-resource-locator-1.0.3.jar:/root/kafka/bin/../libs/pcollections-4.0.2.jar:/root/kafka/bin/../libs/plexus-utils-3.5.1.jar:/root/kafka/bin/../libs/protobuf-java-3.25.5.jar:/root/kafka/bin/../libs/re2j-1.8.jar:/root/kafka/bin/../libs/rocksdbjni-9.7.3.jar:/root/kafka/bin/../libs/scala-library-2.13.16.jar:/root/kafka/bin/../libs/scala-logging_2.13-3.9.5.jar:/root/kafka/bin/../libs/scala-reflect-2.13.16.jar:/root/kafka/bin/../libs/slf4j-api-1.7.36.jar:/root/kafka/bin/../libs/snakeyaml-2.4.jar:/root/kafka/bin/../libs/snappy-java-1.1.10.7.jar:/root/kaf
ka/bin/../libs/swagger-annotations-2.2.25.jar:/root/kafka/bin/../libs/trogdor-4.1.0.jar:/root/kafka/bin/../libs/zstd-jni-1.5.6-10.jar kafka.Kafka ../config/server.properties
Below is the server configuration. Since this is a test setup, I left it essentially at the defaults; for production (usually a cluster), adjust it as needed.
# grep -v "# " server.properties
process.roles=broker,controller
node.id=1
controller.quorum.bootstrap.servers=192.168.223.199:9093
listeners=PLAINTEXT://192.168.223.199:9092,CONTROLLER://192.168.223.199:9093
inter.broker.listener.name=PLAINTEXT
advertised.listeners=PLAINTEXT://192.168.223.199:9092,CONTROLLER://192.168.223.199:9093
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/root/kafka/logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
share.coordinator.state.topic.replication.factor=1
share.coordinator.state.topic.min.isr=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
#log.flush.interval.messages=10000
#log.flush.interval.ms=1000
log.retention.hours=168
#log.retention.bytes=1073741824
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
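With the broker up, a quick smoke test against the listener configured above can confirm things work end to end. Run from the kafka/ directory; the topic name test-topic is made up for illustration:

```shell
# Create a single-partition test topic on the broker listener from the config above
./bin/kafka-topics.sh --bootstrap-server 192.168.223.199:9092 \
  --create --topic test-topic --partitions 1 --replication-factor 1

# Produce a message, then read it back
echo "hello kafka" | ./bin/kafka-console-producer.sh \
  --bootstrap-server 192.168.223.199:9092 --topic test-topic
./bin/kafka-console-consumer.sh --bootstrap-server 192.168.223.199:9092 \
  --topic test-topic --from-beginning --max-messages 1
```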