Introduction: By default, ZooKeeper and Kafka run without any security authentication, so any client can access the ZooKeeper and Kafka nodes without providing credentials, and can even create, modify, and delete nodes. Note that this assumes the client can reach the server's network; if the hosts are physically isolated or protected by a firewall, the above does not necessarily hold. In customer or production environments with strict security-hardening requirements, however, authentication must be enabled. Beyond basic authentication there is also per-node access control (ACLs), but this article does not cover that topic.
Let's get started, beginning with the ZooKeeper configuration. The ZooKeeper project's official documentation provides a reference for authentication setup; see the official site for details. There are two cases to configure:
1. Mutual authentication between client and server
2. Mutual authentication between servers
In standalone mode, only client-server authentication needs to be configured. In cluster mode, both client-server authentication and server-to-server (quorum) authentication between the ZooKeeper servers are required.
The detailed configuration for each case follows.
For example, create a file named server.jaas.conf with the following content:
Server {
    # Use the digest authentication module
    org.apache.zookeeper.server.auth.DigestLoginModule required
    # user: super, password: adminsecret
    user_super="adminsecret"
    # user: bob, password: bobsecret
    user_bob="bobsecret";
};
Then enable SASL enforcement in zoo.cfg:
# Reject connections from clients that do not authenticate via SASL
sessionRequireClientSASLAuth=true
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
Next, point the server JVM at the JAAS file:
# Add the SERVER_JVMFLAGS variable
SERVER_JVMFLAGS="-Djava.security.auth.login.config=/{path}/server.jaas.conf"
If ZooKeeper is started through the scripts bundled with Kafka, $SERVER_JVMFLAGS can be added to the launch command in kafka-run-class.sh:
# Launch mode
if [ "x$DAEMON_MODE" = "xtrue" ]; then
  nohup "$JAVA" $SERVER_JVMFLAGS $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp "$CLASSPATH" $KAFKA_OPTS "$@" > "$CONSOLE_OUTPUT_FILE" 2>&1 < /dev/null &
else
  exec "$JAVA" $SERVER_JVMFLAGS $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp "$CLASSPATH" $KAFKA_OPTS "$@"
fi
On the client side, create client.jaas.conf:
Client {
    # Use the digest authentication module, matching server.jaas.conf
    org.apache.zookeeper.server.auth.DigestLoginModule required
    # user: bob, as defined in server.jaas.conf
    username="bob"
    # password: bobsecret, as defined in server.jaas.conf
    password="bobsecret";
};
and point the client JVM at it:
CLIENT_JVMFLAGS="-Djava.security.auth.login.config=/{path}/client.jaas.conf"
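With both the JAAS file and the variable in place, the stock ZooKeeper CLI should authenticate automatically. A hypothetical invocation as a sketch (the path and server address are placeholders, not from the original article):

```
# zkCli.sh picks up CLIENT_JVMFLAGS from the environment (or from conf/zkEnv.sh)
CLIENT_JVMFLAGS="-Djava.security.auth.login.config=/{path}/client.jaas.conf" \
  bin/zkCli.sh -server 127.0.0.1:2181
```

Since the server is configured with sessionRequireClientSASLAuth=true, unauthenticated clients are rejected, so a successful login here confirms the setup.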
For server-to-server authentication in cluster mode, add the quorum settings to zoo.cfg:
# Enable SASL authentication for the quorum
quorum.auth.enableSasl=true
# Learners must authenticate when connecting to servers
quorum.auth.learnerRequireSasl=true
# Servers require connecting learners to authenticate
quorum.auth.serverRequireSasl=true
quorum.auth.learner.saslLoginContext=QuorumLearner
quorum.auth.server.saslLoginContext=QuorumServer
quorum.cnxn.threads.size=20
and define the two login contexts referenced above in the server's JAAS file:
QuorumServer {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    user_test="test";
};

QuorumLearner {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="test"
    password="test";
};
Now for Kafka. In server.properties, expose a SASL listener and select the mechanism:
# Replace ip with the address of the Kafka host
listeners=SASL_PLAINTEXT://ip:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
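Note that for SASL/PLAIN the broker itself also needs a JAAS login context; the listener settings above are not sufficient on their own. A minimal sketch of a broker JAAS file (the user names and passwords here are placeholders, not from the original article), passed to the broker JVM with -Djava.security.auth.login.config, e.g. via KAFKA_OPTS:

```
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret"
    user_alice="alice-secret";
};
```

Here username/password are the credentials the broker uses for inter-broker connections, and each user_<name>="<password>" entry defines an account that clients can log in with.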
Kafka clients need the matching properties:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
In Java client code this corresponds to:
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
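Setting the protocol and mechanism alone is not enough for a Java client to log in; it also needs JAAS credentials, which the client accepts inline through the sasl.jaas.config property. A minimal, self-contained sketch (the user alice, her password, and the bootstrap address are illustrative placeholders):

```java
import java.util.Properties;

public class SaslClientProps {
    // Build client properties for SASL_PLAINTEXT with the PLAIN mechanism,
    // supplying the JAAS login inline instead of via a separate jaas file.
    static Properties saslProps(String bootstrap, String user, String password) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrap);
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"" + user + "\" password=\"" + password + "\";");
        return props;
    }

    public static void main(String[] args) {
        Properties props = saslProps("127.0.0.1:9092", "alice", "alice-secret");
        // The same Properties object can be passed to KafkaProducer,
        // KafkaConsumer, or Admin.create(props).
        System.out.println(props.getProperty("sasl.jaas.config"));
    }
}
```

The inline sasl.jaas.config form is convenient because it keeps everything in one Properties object; the alternative is a separate JAAS file referenced by -Djava.security.auth.login.config, as done for ZooKeeper above.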
With the broker secured, running a CLI tool without credentials now times out:
./kafka-topics.sh --bootstrap-server 127.0.0.1:9092 --list
Error while executing topic command : Timed out waiting for a node assignment. Call: listTopics
[2022-10-18 10:08:32,274] ERROR org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: listTopics
(kafka.admin.TopicCommand$)
The fix is to put the client SASL properties into a file, e.g. topics.properties:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
(Depending on the setup, the JAAS login may also have to be supplied, e.g. via a sasl.jaas.config entry in the same file.)
How does the tool pick these properties up? Following the call chain in the Kafka source:
kafka.admin.TopicCommand
Admin.create(commandConfig)
...
KafkaAdminClient.createInternal()
...
channelBuilder = ClientUtils.createChannelBuilder(config, time, logContext);
// The SASL-related logic is in this method
public static ChannelBuilder createChannelBuilder(AbstractConfig config, Time time, LogContext logContext) {
    SecurityProtocol securityProtocol = SecurityProtocol.forName(config.getString(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG));
    String clientSaslMechanism = config.getString(SaslConfigs.SASL_MECHANISM);
    return ChannelBuilders.clientChannelBuilder(securityProtocol, JaasContext.Type.CLIENT, config, null,
            clientSaslMechanism, time, true, logContext);
}
The two keys read there, CommonClientConfigs.SECURITY_PROTOCOL_CONFIG and SaslConfigs.SASL_MECHANISM, are defined as:
public static final String SECURITY_PROTOCOL_CONFIG = "security.protocol";
public static final String SASL_MECHANISM = "sasl.mechanism";
exactly the keys used in topics.properties above.
How does the properties file reach the AdminClient? Looking at TopicCommand:
// The entry point receives the command-line args
val opts = new TopicCommandOptions(args)
...
// The first argument to TopicService is opts.commandConfig
val topicService = TopicService(opts.commandConfig, opts.bootstrapServer)
...
// The definition of opts.commandConfig:
def commandConfig: Properties = if (has(commandConfigOpt)) Utils.loadProps(options.valueOf(commandConfigOpt)) else new Properties()
// commandConfig checks commandConfigOpt, which is defined as:
private val commandConfigOpt = parser.accepts("command-config", "Property file containing configs to be passed to Admin Client. " +
    "This is used only with --bootstrap-server option for describing and altering broker configs.")
    .withRequiredArg
    .describedAs("command config property file")
    .ofType(classOf[String])
...
// So the command-line flag is --command-config
Passing the properties file through that flag completes the command:
./kafka-topics.sh --bootstrap-server 127.0.0.1:9092 --list --command-config="xxx/kafka/config/topics.properties"
Disclaimer:
This article is reposted; copyright belongs to the original author. If there is any infringement, please contact us for removal.