● Inspecting the segments under the partition directories showed that segments older than 10 days were still retained (topic/partition/segment)
● The server logs contained no deletion entries since the day the oldest remaining segment was written
● New segments were still being created
● The log-cleaner thread was still alive
cleanup.policy: delete
Kafka has two log cleanup policies, delete and compact; the default is delete.
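Under the delete policy, a segment becomes eligible for removal once its newest record timestamp is older than retention.ms. A minimal Python sketch of that check (not Kafka's actual code; names are illustrative):

```python
import time

# Simplified sketch of Kafka's time-based "delete" retention check:
# a segment is deletable once now - largestTimestampInSegment > retention.ms
def segment_expired(largest_ts_ms, retention_ms, now_ms=None):
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    return now_ms - largest_ts_ms > retention_ms

now = 1_700_000_000_000                  # pretend "current" broker time (ms)
week_old = now - 7 * 86_400_000          # segment whose newest record is 7 days old
future = 1_859_915_778_000_000           # far-future timestamp like the one in the dump below

print(segment_expired(week_old, 86_400_000, now))   # True: past a 1-day retention
print(segment_expired(future, 86_400_000, now))     # False: "newer" than now, never expires
```

This is exactly the failure mode here: a far-future largest timestamp makes the segment look perpetually fresh.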
bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files 0000000000001190.timeindex --print-data-log
or
bin/kafka-dump-log.sh --files 00000000001895067862.timeindex --print-data-log
timestamp: 1859915778000000 offset: 1996
timestamp: 1859915778000003 offset: 2083

Converted to wall-clock time, these timestamps are far in the future! Because time-based retention compares a segment's largest timestamp with the current time, a segment stamped in the future never looks older than retention.ms and is therefore never deleted.
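A quick sanity check of the dumped value (Python; ts_ms is the value from the dump above):

```python
from datetime import datetime, timezone

ts_ms = 1_859_915_778_000_000   # value taken from the .timeindex dump above

# Kafka timestamps are milliseconds since the Unix epoch. This value is
# tens of thousands of years past 1970, so it can never age out of retention.
years_past_epoch = ts_ms / 1000 / 86400 / 365.25
print(round(years_past_epoch))

# Even if the producer actually meant microseconds, it still lands in the
# future relative to today:
as_seconds = ts_ms / 1_000_000
print(datetime.fromtimestamp(as_seconds, tz=timezone.utc).year)  # 2028
```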
sh bin/kafka-delete-records.sh --bootstrap-server xxxx --offset-json-file config/offset-json-file.json

offset-json-file.json:

{
  "partitions": [
    {"topic": "test1", "partition": 0, "offset": 1024}
  ],
  "version": 1
}

# Deletes messages from the beginning of the partition up to offset 1024
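The JSON payload can also be generated programmatically; a small Python sketch (topic, partition, and offset values match the example above):

```python
import json

# Build the offset-json-file payload consumed by kafka-delete-records.sh.
spec = {
    "partitions": [
        {"topic": "test1", "partition": 0, "offset": 1024},  # delete up to offset 1024
    ],
    "version": 1,
}

payload = json.dumps(spec, indent=2)
print(payload)  # save this as config/offset-json-file.json
```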
The producer-supplied timestamp takes effect only when message.timestamp.type is CreateTime; with LogAppendTime the broker overwrites it, so it does not take effect.

producer.send(new ProducerRecord<String, String>("test.5", null, 1000000000000L, null, value)).get();
// 1000000000000L is the timestamp
message.timestamp.type=CreateTime
or
message.timestamp.type=LogAppendTime

2) Specify it when creating the topic:
kafka-topics.sh --zookeeper 127.0.0.1:2181/kafka \
  --create \
  --topic test.4 \
  --partitions 1 --replication-factor 1 \
  --config message.timestamp.type=CreateTime

(Related setting: message.timestamp.difference.max.ms caps how far a message's CreateTime may differ from the broker's log append time; bounding it prevents far-future timestamps from entering the log.)

or
kafka-topics.sh --zookeeper 127.0.0.1:2181/kafka \
--create \
--topic test.4 \
--partitions 1 --replication-factor 1 \
--config message.timestamp.type=LogAppendTime
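The timestamp type can also be set as a broker-wide default (a config sketch; the topic-level message.timestamp.type overrides it, and availability of the difference cap depends on the broker version):

```properties
# server.properties -- broker-wide default; per-topic config takes precedence
log.message.timestamp.type=CreateTime

# Optional guard (broker-version dependent): reject producer timestamps that
# deviate from broker time by more than one day, blocking far-future values
log.message.timestamp.difference.max.ms=86400000
```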
Why a timestamp of -1 can be returned from ConsumerRecord
Relevant source excerpts:
ConsumerRecord.java
/**
 * Creates a record to be received from a specified topic and partition (provided for
 * compatibility with Kafka 0.9 before the message format supported timestamps and before
 * serialized metadata were exposed).
 *
 * @param topic The topic this record is received from
 * @param partition The partition of the topic this record is received from
 * @param offset The offset of this record in the corresponding Kafka partition
 * @param key The key of the record, if one exists (null is allowed)
 * @param value The record contents
 */
public ConsumerRecord(String topic,
                      int partition,
                      long offset,
                      K key,
                      V value) {
    this(topic, partition, offset, NO_TIMESTAMP, TimestampType.NO_TIMESTAMP_TYPE,
         NULL_CHECKSUM, NULL_SIZE, NULL_SIZE, key, value);
}
public interface RecordBatch extends Iterable<Record> {
    /**
     * The "magic" values
     */
    byte MAGIC_VALUE_V0 = 0;
    byte MAGIC_VALUE_V1 = 1;
    byte MAGIC_VALUE_V2 = 2;

    /**
     * The current "magic" value
     */
    byte CURRENT_MAGIC_VALUE = MAGIC_VALUE_V2;

    /**
     * Timestamp value for records without a timestamp
     */
    long NO_TIMESTAMP = -1L;
    ...
For compatibility with Kafka 0.9 and earlier, whose message format carries no timestamp, the returned timestamp for such messages is -1.

public enum TimestampType {
    NO_TIMESTAMP_TYPE(-1, "NoTimestampType"), CREATE_TIME(0, "CreateTime"), LOG_APPEND_TIME(1, "LogAppendTime");
    ...
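Putting the constants together, a consumer-side interpretation might look like this (an illustrative Python sketch, not Kafka client code):

```python
# Illustrative helper mirroring RecordBatch.NO_TIMESTAMP and TimestampType:
# a timestamp of -1 means the record predates timestamp support (pre-0.10 format).
NO_TIMESTAMP = -1

def describe_timestamp(ts, ts_type):
    if ts == NO_TIMESTAMP:
        return "pre-0.10 message format: no timestamp recorded"
    return f"{ts_type}={ts}"

print(describe_timestamp(-1, "NoTimestampType"))
print(describe_timestamp(1859915778000000, "CreateTime"))
```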
Disclaimer: this article is a repost; copyright belongs to the original author. Please contact us for removal in case of infringement.