●Efficient use of storage: each partition lives on a single broker, so a massive data set can be split into partition-sized chunks spread across many brokers. Controlling how partitions are assigned makes load balancing possible.
●Higher parallelism: producers send data partition by partition, and consumers consume partition by partition, so throughput scales with the partition count.
In short, partitioning provides load balancing; put differently, the main reason to partition data is scalability. Different partitions can be placed on different broker nodes, and reads and writes happen at partition granularity, so every node independently handles the requests for the partitions it hosts. Overall throughput can then be increased simply by adding nodes.
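To make that scalability point concrete, here is a minimal sketch that creates a topic with several partitions using Kafka's Java AdminClient; the broker addresses, topic name, and partition/replica counts are illustrative assumptions, not from the original:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // assumed broker addresses (same as the producer example later in this article)
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.1:9092,192.168.0.2:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // 3 partitions with replication factor 2: partitions (and their replicas)
            // are spread across the brokers, so read/write load is distributed
            admin.createTopics(Collections.singleton(new NewTopic("first", 3, (short) 2))).all().get();
        }
    }
}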
Kafka's default partitioner: DefaultPartitioner
package org.apache.kafka.clients.producer.internals;
import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;
/**
* The default partitioning strategy:
* <ul>
* <li>If a partition is specified in the record, use it
* <li>If no partition is specified but a key is present choose a partition based on a hash of the key
* <li>If no partition or key is present choose the sticky partition that changes when the batch is full.
*
* See KIP-480 for details about sticky partitioning.
*/
public class DefaultPartitioner implements Partitioner {
    // ... (key-hash and sticky-partition logic omitted)
}
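Case 1: a partition is specified in the record, and that value is used as-is. ProducerRecord provides constructor overloads that take an explicit partition index: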
public ProducerRecord(String topic, Integer partition, Long timestamp, K key, V value, Iterable<Header> headers) {}
public ProducerRecord(String topic, Integer partition, Long timestamp, K key, V value) {}
public ProducerRecord(String topic, Integer partition, K key, V value, Iterable<Header> headers) {}
public ProducerRecord(String topic, Integer partition, K key, V value) {}
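For example, sending every record explicitly to partition 1 (with an empty-string key) and printing the partition reported in the send callback: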
kafkaProducer.send(new ProducerRecord<>("first", 1, "", "record" + i),
(recordMetadata, exception) -> {
if (exception == null) {
System.out.println("主题:" + recordMetadata.topic() + ";分区:" + recordMetadata.partition());
}
});
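The Partitioner interface behind DefaultPartitioner can express other strategies as well. For instance, a random strategy that picks a partition uniformly at random on every send; it is shown here as RandomPartitioner to distinguish it from the content-based MyPartitioner later in this article: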
import java.util.List;
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.PartitionInfo;

public class RandomPartitioner implements Partitioner {
    @Override
    public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
        // pick one of the topic's partitions uniformly at random
        List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
        return ThreadLocalRandom.current().nextInt(partitions.size());
    }

    @Override
    public void close() {
    }

    @Override
    public void configure(Map<String, ?> map) {
    }
}
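Random assignment spreads load evenly but gives up per-key ordering, which is exactly what the default key-based rule provides. Case 2 of the default strategy: no partition is specified but a key is present. DefaultPartitioner hashes the serialized key (murmur2) modulo the number of partitions, so every record with the same key lands in the same partition. The matching constructors omit the partition argument: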
public ProducerRecord(String topic, K key, V value) {}
// no partition specified: the key "a" is hashed to pick the partition
kafkaProducer.send(new ProducerRecord<>("first", "a", "record" + i),
        (recordMetadata, exception) -> {
            if (exception == null) {
                System.out.println("topic: " + recordMetadata.topic() + "; partition: " + recordMetadata.partition());
            }
        });
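Case 3: neither a partition nor a key is present. The producer falls back to the sticky partitioner introduced by KIP-480: it picks one partition and keeps writing to it until the current batch is full (or otherwise completed), then switches to another, which fills batches far better than per-record round-robin. The constructor takes only the topic and the value: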
public ProducerRecord(String topic, V value) {}
kafkaProducer.send(new ProducerRecord<>("first","record" + i),
(recordMetadata, exception) -> {
if (exception == null) {
System.out.println("主题:" + recordMetadata.topic() + ";分区:" + recordMetadata.partition());
}
});
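Custom partitioner: when the built-in strategies do not fit (for example, routing records by their content), Kafka lets you plug in your own Partitioner. Two steps are required, illustrated by the full example below: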
●Implement the Partitioner interface and override its partition() method.
●Register the implementation class in the producer configuration via partitioner.class.
MyPartitioner:
package kafka;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import java.util.Map;
/**
* @Title: MyPartitioner.java
* @Package kafka
* @Description: Custom partitioner
* @Author: hongcaixia
* @Date: 2023/1/21 21:24
* @Version V1.0
*/
public class MyPartitioner implements Partitioner {
@Override
public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
// get the message payload as a string
String msgValue = value.toString();
int partition;
// records containing "tracy" go to partition 0, everything else to partition 1
if (msgValue.contains("tracy")) {
partition = 0;
} else {
partition = 1;
}
// return the chosen partition number
return partition;
}
@Override
public void close() {
}
@Override
public void configure(Map<String, ?> map) {
}
}
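The producer then registers this partitioner via the partitioner.class configuration (ProducerConfig.PARTITIONER_CLASS_CONFIG):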
package kafka;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;
/**
* @Title: MyProducerPartition.java
* @Package kafka
* @Description: Producer using the custom partitioner
* @Author: hongcaixia
* @Date: 2023/1/20 21:24
* @Version V1.0
*/
public class MyProducerPartition {
public static void main(String[] args) {
// 1. Create the Kafka producer configuration object
Properties properties = new Properties();
// 2. Add connection configuration: bootstrap.servers (connect to the cluster)
properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.1:9092,192.168.0.2:9092");
// Specify key/value serializers (required): key.serializer, value.serializer
properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
// Register the custom partitioner
properties.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, "kafka.MyPartitioner");
// 3. Create the Kafka producer
KafkaProducer<String, String> kafkaProducer = new KafkaProducer<>(properties);
// 4. Call send() to publish messages
for (int i = 0; i < 5; i++) {
kafkaProducer.send(new ProducerRecord<>("first", "record" + i));
}
// 5. Close the producer
kafkaProducer.close();
}
}
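Partition selection can also encode business rules such as geography. A geo-location-based strategy, for example, keeps only the partitions whose leader broker sits in the south region and picks any of them inside partition(); here isSouth is a user-supplied predicate on the broker host name: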
List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
return partitions.stream()
        .filter(p -> isSouth(p.leader().host()))
        .map(PartitionInfo::partition)
        .findAny()
        .get();
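A minimal self-contained sketch of that idea; the class name GeoPartitioner, the "south-" host-name prefix, and the partition-0 fallback are illustrative assumptions, not from the original:

package kafka;

import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.PartitionInfo;

public class GeoPartitioner implements Partitioner {

    @Override
    public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
        List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
        // route to any partition whose leader broker is in the south;
        // fall back to partition 0 if none matches
        return partitions.stream()
                .filter(p -> p.leader() != null && isSouth(p.leader().host()))
                .map(PartitionInfo::partition)
                .findAny()
                .orElse(0);
    }

    // hypothetical predicate: "south" brokers are assumed to have host names
    // starting with "south-"; adapt this to your own naming scheme
    private boolean isSouth(String host) {
        return host.startsWith("south-");
    }

    @Override
    public void close() {
    }

    @Override
    public void configure(Map<String, ?> configs) {
    }
}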