
Handling consumer exceptions in spring-kafka

Default consumer exception handling

By default, if the application does no explicit exception handling, spring-kafka installs a DefaultErrorHandler. It retries with a FixedBackOff: up to 9 consecutive retries with no delay between them, so a message is consumed at most 10 times in total. Once the retries are exhausted, the exception is logged to the console and the offset is committed, which means the message is effectively dropped.
An example:
Sending a message

@GetMapping("send/{msg}")
public String send(@PathVariable("msg") String msg) {
    CompletableFuture future = kafkaTemplate.send("test-topic", msg);
    try {
        future.get();
        log.info("消息发送成功");
    } catch (Exception e) {
        e.printStackTrace();
    }
    return "OK";
}

Receiving the message

@Component
public class DemoListener {

    private static Logger log = LoggerFactory.getLogger(DemoListener.class);

    @KafkaListener(topics = {"test-topic"})
    public void onMessage(ConsumerRecord record) {
        Object value = record.value();
        log.info("收到消息:{}", value);
        throw new RuntimeException("manually throw");
    }
}

Kafka configuration

spring:
  kafka:
    bootstrap-servers: localhost:9092  # Kafka broker address
    consumer:
      group-id: my-group  # default consumer group ID
      auto-offset-reset: earliest  # start from the earliest message when there is no initial offset or the offset is invalid
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer

Now send a test message; the console output is:

2025-09-14T10:26:27.508+08:00  INFO 5912 --- [nio-8080-exec-1] c.g.xjs.kafka.controller.DemoController  : 消息发送成功
2025-09-14T10:26:27.509+08:00  INFO 5912 --- [ntainer#0-0-C-1] c.g.xjs.kafka.listener.DemoListener      : 收到消息:hello
......
2025-09-14T10:26:31.666+08:00  INFO 5912 --- [ntainer#0-0-C-1] c.g.xjs.kafka.listener.DemoListener      : 收到消息:hello
2025-09-14T10:26:31.680+08:00  INFO 5912 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.LegacyKafkaConsumer  : [Consumer clientId=consumer-my-group-1, groupId=my-group] Seeking to offset 6 for partition test-topic-0
2025-09-14T10:26:31.680+08:00  INFO 5912 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : Record in retry and not yet recovered
2025-09-14T10:26:32.174+08:00  INFO 5912 --- [ntainer#0-0-C-1] c.g.xjs.kafka.listener.DemoListener      : 收到消息:hello
2025-09-14T10:26:32.182+08:00 ERROR 5912 --- [ntainer#0-0-C-1] o.s.kafka.listener.DefaultErrorHandler   : Backoff FixedBackOff{interval=0, currentAttempts=10, maxAttempts=9} exhausted for test-topic-0@6
org.springframework.kafka.listener.ListenerExecutionFailedException: Listener method 'public void com.github.xjs.kafka.listener.DemoListener.onMessage(org.apache.kafka.clients.consumer.ConsumerRecord)' threw exception
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.decorateException(KafkaMessageListenerContainer.java:2996) ~[spring-kafka-3.3.3.jar:3.3.3]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeOnMessage(KafkaMessageListenerContainer.java:2903) ~[spring-kafka-3.3.3.jar:3.3.3]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeOnMessage(KafkaMessageListenerContainer.java:2867) ~[spring-kafka-3.3.3.jar:3.3.3]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener(KafkaMessageListenerContainer.java:2779) ~[spring-kafka-3.3.3.jar:3.3.3]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeWithRecords(KafkaMessageListenerContainer.java:2616) ~[spring-kafka-3.3.3.jar:3.3.3]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:2505) ~[spring-kafka-3.3.3.jar:3.3.3]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:2151) ~[spring-kafka-3.3.3.jar:3.3.3]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeIfHaveRecords(KafkaMessageListenerContainer.java:1527) ~[spring-kafka-3.3.3.jar:3.3.3]

Customizing the retry logic

We can declare our own DefaultErrorHandler bean to customize the retry logic, for example:

@Bean
public DefaultErrorHandler errorHandler() {
    ExponentialBackOff backOff = new ExponentialBackOff();
    // maximum retry interval; the default is 30 seconds
    backOff.setMaxInterval(30000);
    // initial retry interval; the default is 2 seconds
    backOff.setInitialInterval(3000);
    // multiplier: next interval = current interval * multiplier; the default is 1.5
    backOff.setMultiplier(3);
    // maximum number of retries; unlimited by default. With the defaults, the first
    // retry comes after 2 seconds, the next after 2*1.5=3 seconds, and so on
    backOff.setMaxAttempts(2);
    return new DefaultErrorHandler(null, backOff);
}

Now send another message; the console output is:

2025-09-14T10:42:32.069+08:00  INFO 1288 --- [nio-8080-exec-1] c.g.xjs.kafka.controller.DemoController  : 消息发送成功
2025-09-14T10:42:32.070+08:00  INFO 1288 --- [ntainer#0-0-C-1] c.g.xjs.kafka.listener.DemoListener      : 收到消息:hello
2025-09-14T10:42:35.128+08:00  INFO 1288 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.LegacyKafkaConsumer  : [Consumer clientId=consumer-my-group-1, groupId=my-group] Seeking to offset 8 for partition test-topic-0
2025-09-14T10:42:35.129+08:00  INFO 1288 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : Record in retry and not yet recovered
2025-09-14T10:42:35.131+08:00  INFO 1288 --- [ntainer#0-0-C-1] c.g.xjs.kafka.listener.DemoListener      : 收到消息:hello
2025-09-14T10:42:44.193+08:00  INFO 1288 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.LegacyKafkaConsumer  : [Consumer clientId=consumer-my-group-1, groupId=my-group] Seeking to offset 8 for partition test-topic-0
2025-09-14T10:42:44.193+08:00  INFO 1288 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : Record in retry and not yet recovered
2025-09-14T10:42:44.195+08:00  INFO 1288 --- [ntainer#0-0-C-1] c.g.xjs.kafka.listener.DemoListener      : 收到消息:hello
2025-09-14T10:42:44.199+08:00 ERROR 1288 --- [ntainer#0-0-C-1] o.s.kafka.listener.DefaultErrorHandler   : Backoff ExponentialBackOffExecution{currentInterval=9000ms, multiplier=3.0, attempts=2} exhausted for test-topic-0@8
org.springframework.kafka.listener.ListenerExecutionFailedException: Listener method 'public void com.github.xjs.kafka.listener.DemoListener.onMessage(org.apache.kafka.clients.consumer.ConsumerRecord)' threw exception
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.decorateException(KafkaMessageListenerContainer.java:2996) ~[spring-kafka-3.3.3.jar:3.3.3]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeOnMessage(KafkaMessageListenerContainer.java:2903) ~[spring-kafka-3.3.3.jar:3.3.3]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeOnMessage(KafkaMessageListenerContainer.java:2867) ~[spring-kafka-3.3.3.jar:3.3.3]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener(KafkaMessageListenerContainer.java:2779) ~[spring-kafka-3.3.3.jar:3.3.3]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeWithRecords(KafkaMessageListenerContainer.java:2616) ~[spring-kafka-3.3.3.jar:3.3.3]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:2505) ~[spring-kafka-3.3.3.jar:3.3.3]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:2151) ~[spring-kafka-3.3.3.jar:3.3.3]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeIfHaveRecords(KafkaMessageListenerContainer.java:1527) ~[spring-kafka-3.3.3.jar:3.3.3]

As you can see, the message was received 3 times in total, including 2 retries: the first after 3 seconds and the second after 9 seconds.
Besides ExponentialBackOff, common implementations include ExponentialBackOffWithMaxRetries and FixedBackOff; you can also write your own.
ExponentialBackOff retries indefinitely by default; the default maximum interval is 30 seconds, and any longer interval is capped at 30 seconds.
ExponentialBackOffWithMaxRetries lets you set a maximum number of retries.
FixedBackOff uses a fixed interval, 5 seconds by default, with no limit on the number of retries by default.
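As a sanity check on these intervals, the delay sequence of an exponential back-off (initial interval, multiplier, capped at a maximum interval) can be computed in plain Java. This is only an illustration of the arithmetic, not spring-kafka's actual ExponentialBackOff implementation:

```java
// Sketch of the delay sequence an exponential back-off produces:
// delay(n) = min(initial * multiplier^n, maxInterval)
public class BackoffDelays {
    public static long[] delays(long initial, double multiplier, long maxInterval, int count) {
        long[] result = new long[count];
        double current = initial;
        for (int i = 0; i < count; i++) {
            result[i] = Math.min((long) current, maxInterval);
            current *= multiplier;
        }
        return result;
    }

    public static void main(String[] args) {
        // initialInterval=3000, multiplier=3, maxInterval=30000, as configured above
        for (long d : delays(3000L, 3.0, 30000L, 4)) {
            System.out.println(d + " ms");
        }
        // prints: 3000 ms, 9000 ms, 27000 ms, 30000 ms
    }
}
```

With setMaxAttempts(2) only the first two delays (3000 ms and 9000 ms) are actually used, which matches the 3-second and 9-second gaps in the log above.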

Head-of-line blocking and message loss

The exception handling shown above has two serious problems: head-of-line blocking and message loss. Head-of-line blocking means that while one message is being retried, the consumer cannot process newer messages in the topic, because the consumer thread retries in a blocking fashion and only moves on to later messages once the retries have finished; if the retries take a long time, the messages behind it go unconsumed for that whole time. Message loss is easy to understand: once the retries are exhausted, all that happens is that an error is logged. A better approach is to publish the failed message to a dead-letter topic (DLT) and hand it over for manual follow-up. Let's deal with message loss first.

Dead-letter topic

The DefaultErrorHandler constructor takes another parameter, a ConsumerRecordRecoverer. If we provide this recoverer, the message is handed to it once the retries are exhausted, and we can use it to republish the failed message to a DLT.
Fortunately, spring-kafka already ships a DeadLetterPublishingRecoverer that does exactly this.
Let's rewrite the DefaultErrorHandler:

@Bean
public DefaultErrorHandler errorHandler(KafkaTemplate kafkaTemplate) {
    var recoverer = new DeadLetterPublishingRecoverer(kafkaTemplate,
            (cr, e) -> new TopicPartition(cr.topic() + ".DLT", cr.partition()));
    ExponentialBackOff backOff = new ExponentialBackOff();
    backOff.setMaxInterval(30000);
    backOff.setInitialInterval(3000);
    backOff.setMultiplier(3);
    backOff.setMaxAttempts(2);
    return new DefaultErrorHandler(recoverer, backOff);
}

Constructing the DeadLetterPublishingRecoverer requires a KafkaTemplate, and we also specify the topic and partition the dead letters go to.
Now send another message; the console log is:

2025-09-14T11:17:48.532+08:00  INFO 9804 --- [nio-8080-exec-4] c.g.xjs.kafka.controller.DemoController  : 消息发送成功
2025-09-14T11:17:48.533+08:00  INFO 9804 --- [ntainer#0-0-C-1] c.g.xjs.kafka.listener.DemoListener      : 收到消息:hello
2025-09-14T11:17:51.609+08:00  INFO 9804 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.LegacyKafkaConsumer  : [Consumer clientId=consumer-my-group-1, groupId=my-group] Seeking to offset 10 for partition test-topic-0
2025-09-14T11:17:51.609+08:00  INFO 9804 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : Record in retry and not yet recovered
2025-09-14T11:17:51.611+08:00  INFO 9804 --- [ntainer#0-0-C-1] c.g.xjs.kafka.listener.DemoListener      : 收到消息:hello
2025-09-14T11:18:00.708+08:00  INFO 9804 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.LegacyKafkaConsumer  : [Consumer clientId=consumer-my-group-1, groupId=my-group] Seeking to offset 10 for partition test-topic-0
2025-09-14T11:18:00.708+08:00  INFO 9804 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : Record in retry and not yet recovered
2025-09-14T11:18:00.710+08:00  INFO 9804 --- [ntainer#0-0-C-1] c.g.xjs.kafka.listener.DemoListener      : 收到消息:hello

This time no exception is thrown, and we can see the message in the DLT:

D:\kafka_2.12-3.9.1> .\bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092  --topic test-topic.DLT 
hello

Non-blocking retries

Still using the code above, send 2 messages in quick succession; the console output is:

2025-09-14T11:24:02.837+08:00  INFO 9804 --- [ntainer#0-0-C-1] c.g.xjs.kafka.listener.DemoListener      : 收到消息:1111
2025-09-14T11:24:03.869+08:00  INFO 9804 --- [nio-8080-exec-8] c.g.xjs.kafka.controller.DemoController  : 消息发送成功
2025-09-14T11:24:05.914+08:00  INFO 9804 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.LegacyKafkaConsumer  : [Consumer clientId=consumer-my-group-1, groupId=my-group] Seeking to offset 11 for partition test-topic-0
2025-09-14T11:24:05.914+08:00  INFO 9804 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : Record in retry and not yet recovered
2025-09-14T11:24:05.915+08:00  INFO 9804 --- [ntainer#0-0-C-1] c.g.xjs.kafka.listener.DemoListener      : 收到消息:1111
2025-09-14T11:24:14.963+08:00  INFO 9804 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.LegacyKafkaConsumer  : [Consumer clientId=consumer-my-group-1, groupId=my-group] Seeking to offset 11 for partition test-topic-0
2025-09-14T11:24:14.963+08:00  INFO 9804 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : Record in retry and not yet recovered
2025-09-14T11:24:14.965+08:00  INFO 9804 --- [ntainer#0-0-C-1] c.g.xjs.kafka.listener.DemoListener      : 收到消息:1111
2025-09-14T11:24:15.470+08:00  INFO 9804 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.LegacyKafkaConsumer  : [Consumer clientId=consumer-my-group-1, groupId=my-group] Seeking to offset 12 for partition test-topic-0
2025-09-14T11:24:15.473+08:00  INFO 9804 --- [ntainer#0-0-C-1] c.g.xjs.kafka.listener.DemoListener      : 收到消息:2222
2025-09-14T11:24:18.553+08:00  INFO 9804 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.LegacyKafkaConsumer  : [Consumer clientId=consumer-my-group-1, groupId=my-group] Seeking to offset 12 for partition test-topic-0
2025-09-14T11:24:18.553+08:00  INFO 9804 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : Record in retry and not yet recovered
2025-09-14T11:24:18.554+08:00  INFO 9804 --- [ntainer#0-0-C-1] c.g.xjs.kafka.listener.DemoListener      : 收到消息:2222
2025-09-14T11:24:27.609+08:00  INFO 9804 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.LegacyKafkaConsumer  : [Consumer clientId=consumer-my-group-1, groupId=my-group] Seeking to offset 12 for partition test-topic-0
2025-09-14T11:24:27.609+08:00  INFO 9804 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : Record in retry and not yet recovered
2025-09-14T11:24:27.611+08:00  INFO 9804 --- [ntainer#0-0-C-1] c.g.xjs.kafka.listener.DemoListener      : 收到消息:2222
2025-09-14T11:24:28.635+08:00  INFO 9804 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient   : [Consumer clientId=consumer-my-group-1, groupId=my-group] Node -1 disconnected.
2025-09-14T11:24:58.128+08:00  INFO 9804 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient   : [Producer clientId=producer-1] Node -1 disconnected.

As you can see, even though the two messages were sent almost simultaneously, the second message could not be consumed while the first was being retried; only after the first message's retries were exhausted did the second one get a chance. If the retry intervals and attempt counts are large, this blocking style of retry is no longer appropriate.
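A rough upper bound on how long a healthy message waits behind a failing one is the sum of the retry intervals. A quick check of that arithmetic (illustrative only; processing time is ignored):

```java
// Worst-case wait for the next message while the one ahead of it
// retries in a blocking fashion: the sum of all retry intervals.
public class HeadOfLineDelay {
    public static long totalBlockMillis(long[] retryIntervals) {
        long total = 0;
        for (long interval : retryIntervals) {
            total += interval;
        }
        return total;
    }

    public static void main(String[] args) {
        // Two retries at 3 s and 9 s, as configured above
        System.out.println(totalBlockMillis(new long[]{3000L, 9000L}) + " ms");
        // prints: 12000 ms
    }
}
```

That 12000 ms matches the roughly 12-second gap in the log above between sending the second message (11:24:03) and its first delivery (11:24:15).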

Now let's see how to use non-blocking retries:

@Configuration
@EnableKafkaRetryTopic // non-blocking: 1
public class KafkaConfiguration {

    // non-blocking: 2
    @Bean
    TaskScheduler scheduler() {
        return new ThreadPoolTaskScheduler();
    }

    // non-blocking: 3
    @Bean
    public RetryTopicConfiguration myRetryConfiguration(KafkaTemplate<String, String> template) {
        return RetryTopicConfigurationBuilder
                .newInstance()
                .exponentialBackoff(3000, 10, Long.MAX_VALUE)
                .maxAttempts(3)
                .dltSuffix(".DLT")
                .create(template);
    }
}
  • First, add the @EnableKafkaRetryTopic annotation
  • Then provide a TaskScheduler instance
  • Finally, provide a RetryTopicConfiguration instance

Now restart the application, send 2 messages in quick succession, and watch the console output again:

2025-09-14T11:44:40.303+08:00  INFO 5380 --- [nio-8080-exec-8] c.g.xjs.kafka.controller.DemoController  : 消息发送成功
2025-09-14T11:44:40.304+08:00  INFO 5380 --- [ntainer#0-0-C-1] c.g.xjs.kafka.listener.DemoListener      : 收到消息:3333
2025-09-14T11:44:40.817+08:00  INFO 5380 --- [etry-3000-0-C-1] o.a.k.c.c.internals.LegacyKafkaConsumer  : [Consumer clientId=consumer-my-group-3, groupId=my-group] Seeking to offset 3 for partition test-topic-retry-3000-0
2025-09-14T11:44:40.817+08:00  INFO 5380 --- [etry-3000-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : Record in retry and not yet recovered
2025-09-14T11:44:41.284+08:00  INFO 5380 --- [nio-8080-exec-5] c.g.xjs.kafka.controller.DemoController  : 消息发送成功
2025-09-14T11:44:41.284+08:00  INFO 5380 --- [ntainer#0-0-C-1] c.g.xjs.kafka.listener.DemoListener      : 收到消息:4444
2025-09-14T11:44:43.316+08:00  INFO 5380 --- [etry-3000-0-C-1] c.g.xjs.kafka.listener.DemoListener      : 收到消息:3333
2025-09-14T11:44:43.826+08:00  INFO 5380 --- [etry-3000-0-C-1] o.a.k.c.c.internals.LegacyKafkaConsumer  : [Consumer clientId=consumer-my-group-3, groupId=my-group] Seeking to offset 4 for partition test-topic-retry-3000-0
2025-09-14T11:44:43.826+08:00  INFO 5380 --- [try-30000-0-C-1] o.a.k.c.c.internals.LegacyKafkaConsumer  : [Consumer clientId=consumer-my-group-1, groupId=my-group] Seeking to offset 3 for partition test-topic-retry-30000-0
2025-09-14T11:44:43.826+08:00  INFO 5380 --- [try-30000-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : Record in retry and not yet recovered
2025-09-14T11:44:43.828+08:00  INFO 5380 --- [etry-3000-0-C-1] o.a.k.c.c.internals.LegacyKafkaConsumer  : [Consumer clientId=consumer-my-group-3, groupId=my-group] Seeking to offset 4 for partition test-topic-retry-3000-0
2025-09-14T11:44:43.828+08:00  INFO 5380 --- [etry-3000-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : Record in retry and not yet recovered
2025-09-14T11:44:44.332+08:00  INFO 5380 --- [etry-3000-0-C-1] c.g.xjs.kafka.listener.DemoListener      : 收到消息:4444
2025-09-14T11:45:13.334+08:00  INFO 5380 --- [try-30000-0-C-1] c.g.xjs.kafka.listener.DemoListener      : 收到消息:3333
2025-09-14T11:45:13.334+08:00 ERROR 5380 --- [try-30000-0-C-1] k.r.DeadLetterPublishingRecovererFactory : Record: topic = test-topic-retry-30000, partition = 0, offset = 3, main topic = test-topic threw an error at topic test-topic-retry-30000 and won't be retried. Sending to DLT with name test-topic.DLT.
org.springframework.kafka.listener.ListenerExecutionFailedException: Listener failed
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.decorateException(KafkaMessageListenerContainer.java:3000) ~[spring-kafka-3.3.3.jar:3.3.3]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeOnMessage(KafkaMessageListenerContainer.java:2903) ~[spring-kafka-3.3.3.jar:3.3.3]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeOnMessage(KafkaMessageListenerContainer.java:2867) ~[spring-kafka-3.3.3.jar:3.3.3]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener(KafkaMessageListenerContainer.java:2779) ~[spring-kafka-3.3.3.jar:3.3.3]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeWithRecords(KafkaMessageListenerContainer.java:2616) ~[spring-kafka-3.3.3.jar:3.3.3]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:2505) ~[spring-kafka-3.3.3.jar:3.3.3]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:2151) ~[spring-kafka-3.3.3.jar:3.3.3]
	at

As you can see, there is no head-of-line blocking any more, and the messages were still successfully delivered to the DLT:

D:\kafka_2.12-3.9.1> .\bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092  --topic test-topic.DLT 
3333
4444

How non-blocking retries work

Let's list the topics in Kafka:

D:\kafka_2.12-3.9.1> .\bin\windows\kafka-topics.bat --list --bootstrap-server localhost:9092
__consumer_offsets
test-topic
test-topic-retry-3000
test-topic-retry-30000
test-topic.DLT

There are now 2 extra retry topics: test-topic-retry-3000 and test-topic-retry-30000.

If message processing fails, the message is forwarded to a retry topic. The consumer checks the record's timestamp and, if the retry time has not yet arrived, pauses consumption of that topic partition. When the retry time arrives, consumption of the partition resumes and the message is consumed again. This is also why we had to configure a TaskScheduler. If processing fails again, the message is forwarded to the next retry topic, and the pattern repeats until processing succeeds or the attempts are exhausted, at which point the message is finally sent to the DLT.
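The "check the timestamp, pause until due" step can be sketched as follows. This is a simplified model of what the retry-topic listener containers do internally, with hypothetical method names, not spring-kafka's actual code:

```java
public class RetryDueCheck {
    // A record on a retry topic carries the time at which it becomes
    // eligible for redelivery; the container pauses the partition until then.
    public static boolean isDue(long dueTimestampMillis, long nowMillis) {
        return nowMillis >= dueTimestampMillis;
    }

    // How long the partition stays paused before consumption resumes.
    public static long pauseMillis(long dueTimestampMillis, long nowMillis) {
        return Math.max(0L, dueTimestampMillis - nowMillis);
    }

    public static void main(String[] args) {
        long due = 10_000L;
        System.out.println(isDue(due, 8_000L));       // false: pause the partition
        System.out.println(pauseMillis(due, 8_000L)); // 2000 ms until resume
        System.out.println(isDue(due, 12_000L));      // true: deliver the record
    }
}
```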

In our case, with an exponential back-off starting at 3 seconds, a multiplier of 10, and at most 3-1=2 retries, the system automatically creates test-topic-retry-3000, test-topic-retry-30000 and test-topic.DLT.
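The topic names follow the default convention: one retry topic per distinct delay, suffixed with the delay in milliseconds, plus the DLT. A small sketch of that naming rule (an illustration of the convention as described above, not spring-kafka's actual code):

```java
import java.util.ArrayList;
import java.util.List;

public class RetryTopicNames {
    // Derive the retry topic names for an exponential back-off:
    // one topic per retry (maxAttempts - 1 of them), suffixed with
    // the delay in milliseconds, followed by the dead-letter topic.
    public static List<String> topics(String mainTopic, long initial, double multiplier,
                                      int maxAttempts, String dltSuffix) {
        List<String> names = new ArrayList<>();
        double delay = initial;
        for (int attempt = 1; attempt < maxAttempts; attempt++) {
            names.add(mainTopic + "-retry-" + (long) delay);
            delay *= multiplier;
        }
        names.add(mainTopic + dltSuffix);
        return names;
    }

    public static void main(String[] args) {
        System.out.println(topics("test-topic", 3000L, 10.0, 3, ".DLT"));
        // prints: [test-topic-retry-3000, test-topic-retry-30000, test-topic.DLT]
    }
}
```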

Reference: https://docs.spring.io/spring-kafka/reference/index.html

