HDFS
NameNode
8020 fs.defaultFS RPC port that accepts client connections, used to fetch filesystem metadata
50070 dfs.namenode.http-address HTTP service port
50470 dfs.namenode.https-address HTTPS service port
DataNode
50010 dfs.datanode.address DataNode service port, used for data transfer
50075 dfs.datanode.http.address HTTP service port
50475 dfs.datanode.https.address HTTPS service port
50020 dfs.datanode.ipc.address IPC service port
JournalNode
8485 dfs.journalnode.rpc-address RPC service
8480 dfs.journalnode.http-address HTTP service
ZKFC
8019 dfs.ha.zkfc.port ZooKeeper FailoverController, used for NameNode HA
YARN
ResourceManager
8032 yarn.resourcemanager.address applications manager (ASM) port of the RM
8030 yarn.resourcemanager.scheduler.address IPC port of the scheduler component
8031 yarn.resourcemanager.resource-tracker.address IPC
8033 yarn.resourcemanager.admin.address IPC
8088 yarn.resourcemanager.webapp.address.rm1 HTTP service port
8090 yarn.resourcemanager.webapp.https.address.rm1 HTTPS service port
NodeManager
8040 yarn.nodemanager.localizer.address localizer IPC
8042 yarn.nodemanager.webapp.address HTTP service port
8041 yarn.nodemanager.address port of the container manager inside the NM
JobHistory Server
10020 mapreduce.jobhistory.address IPC
19888 mapreduce.jobhistory.webapp.address HTTP service port; add an "s" (mapreduce.jobhistory.webapp.https.address) for the HTTPS port, e.g. https://192.168.56.43:19888/jobhistory/logs
HBase
Master
60000 hbase.master.port IPC
60010 hbase.master.info.port HTTP service port
RegionServer
60020 hbase.regionserver.port IPC
60030 hbase.regionserver.info.port HTTP service port
Hive
Metastore
9083 default Metastore connection port
HiveServer
10000 default HiveServer2 connection port
10002 default HiveServer2 web UI port
ZooKeeper
Server
2181 /etc/zookeeper/conf/zoo.cfg port that serves clients
2888 /etc/zookeeper/conf/zoo.cfg port followers use to connect to the leader; listened on only by the leader
3888 /etc/zookeeper/conf/zoo.cfg used for leader election; needed only when electionAlg is 1, 2, or 3 (the default)
8080 /etc/zookeeper/conf/zoo.cfg AdminServer web backend; can be disabled by adding the line admin.serverPort=0
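When locking a cluster down, the port table above usually ends up as firewall rules. A minimal sketch that generates them mechanically; the service groupings and port numbers are the defaults listed above (your distribution or config overrides may differ), and `firewalld_rules` is a hypothetical helper, not part of any Hadoop tooling:

```python
# Default service ports from the table above, grouped by component.
DEFAULT_PORTS = {
    "hdfs":      [8019, 8020, 8480, 8485, 50010, 50020, 50070, 50075, 50470, 50475],
    "yarn":      [8030, 8031, 8032, 8033, 8040, 8041, 8042, 8088, 8090],
    "mapreduce": [10020, 19888],
    "hbase":     [60000, 60010, 60020, 60030],
    "hive":      [9083, 10000, 10002],
    "zookeeper": [2181, 2888, 3888, 8080],
}

def firewalld_rules(components):
    """Emit one firewall-cmd line per port for the selected components."""
    ports = sorted({p for c in components for p in DEFAULT_PORTS[c]})
    return [f"firewall-cmd --permanent --add-port={p}/tcp" for p in ports]

for line in firewalld_rules(["hdfs", "zookeeper"]):
    print(line)
```

Run the printed commands on each node (followed by `firewall-cmd --reload`), or restrict the source addresses further with rich rules if only cluster nodes should reach these ports.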
<!-- core-site.xml: enable service-level authorization and Kerberos authentication -->
<property>
<name>hadoop.security.authorization</name>
<value>true</value>
</property>
<property>
<name>hadoop.security.authentication</name>
<value>kerberos</value>
</property>
<!-- hdfs-site.xml: serve the NameNode/DataNode web UIs over HTTPS only -->
<property>
<name>dfs.http.policy</name>
<value>HTTPS_ONLY</value>
</property>
<property>
<name>dfs.namenode.https-address.bigdata.nn1</name>
<value>192.168.56.41:50470</value>
</property>
<property>
<name>dfs.namenode.https-address.bigdata.nn2</name>
<value>192.168.56.42:50470</value>
</property>
<property>
<name>dfs.datanode.https.address</name>
<value>0.0.0.0:50475</value>
</property>
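A quick way to sanity-check fragments like the one above is to parse the Hadoop `<property>` pairs and assert the values you care about. A minimal standard-library sketch; the XML is inlined here for illustration, so point `hadoop_conf` at your real hdfs-site.xml contents instead:

```python
import xml.etree.ElementTree as ET

# Inlined sample mirroring the hdfs-site.xml fragment above.
HDFS_SITE = """<configuration>
  <property><name>dfs.http.policy</name><value>HTTPS_ONLY</value></property>
  <property><name>dfs.datanode.https.address</name><value>0.0.0.0:50475</value></property>
</configuration>"""

def hadoop_conf(xml_text):
    """Parse Hadoop-style <property><name>/<value> pairs into a dict."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value") for p in root.iter("property")}

conf = hadoop_conf(HDFS_SITE)
assert conf["dfs.http.policy"] == "HTTPS_ONLY"
```

The same parser works for core-site.xml and yarn-site.xml, since all `*-site.xml` files share this property layout.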
<!-- configure HTTPS for the ResourceManager -->
<property>
<name>yarn.resourcemanager.webapp.https.address.rm1</name>
<value>192.168.56.41:8090</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.https.address.rm2</name>
<value>192.168.56.42:8090</value>
</property>
<!-- configure HTTPS for the NodeManager -->
<property>
<name>yarn.nodemanager.webapp.https.address</name>
<value>0.0.0.0:8042</value>
</property>
<!-- if jobhistory was configured earlier, just change http to https in the existing URL -->
<property>
<name>yarn.log.server.url</name>
<value>https://192.168.56.43:19888/jobhistory/logs</value>
</property>
<!-- Configures the HTTP endpoint for the YARN daemons. Supported values: HTTP_ONLY (serve only over http), HTTPS_ONLY (serve only over https) -->
<property>
<name>yarn.http.policy</name>
<value>HTTPS_ONLY</value>
</property>
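One easy mistake with this fragment is flipping `yarn.http.policy` to HTTPS_ONLY while `yarn.log.server.url` still says `http://`, which leaves the web UI's links to aggregated logs pointing at the wrong scheme. A small consistency check, sketched here against a plain dict of parsed properties (`check_log_server_scheme` is a hypothetical lint helper, not a YARN API):

```python
from urllib.parse import urlparse

def check_log_server_scheme(conf):
    """conf: dict of property name -> value, as parsed from yarn-site.xml.
    Returns True when the log server URL scheme matches the HTTP policy."""
    policy = conf.get("yarn.http.policy", "HTTP_ONLY")
    url = conf.get("yarn.log.server.url", "")
    expected = "https" if policy == "HTTPS_ONLY" else "http"
    return urlparse(url).scheme == expected

conf = {
    "yarn.http.policy": "HTTPS_ONLY",
    "yarn.log.server.url": "https://192.168.56.43:19888/jobhistory/logs",
}
assert check_log_server_scheme(conf)
```

Running a check like this before restarting the daemons is cheaper than chasing broken log links in the UI afterwards.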
# $connection_upgrade is not a built-in nginx variable; define it with a map
# in the http{} context before the server block below uses it.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 443 ssl;
    server_name yarn-test.com;

    ssl_certificate     /etc/nginx/cert/tls-yarn.crt;
    ssl_certificate_key /etc/nginx/cert/tls-yarn.key;
    ssl_session_timeout 5m;

    location / {
        deny all;
    }

    location /cluster {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host yarn-test.com;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://yarn-test.com;
    }

    location /static {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host yarn-test.com;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://yarn-test.com;
    }

    location /proxy {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host yarn-test.com;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://yarn-test.com;
    }
}

# Upstream is the yarn-resourcemanager server; point it at the active RM node
upstream yarn-test.com {
    server 192.168.56.41:8090;
}
<property>
<name>hadoop.security.authorization</name>
<value>true</value>
</property>
<!-- Defines authentication used for the HTTP web consoles. Supported values: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME# -->
<property>
<name>hadoop.http.authentication.type</name>
<value>simple</value>
</property>
<!-- filter initializer class -->
<property>
<name>hadoop.http.filter.initializers</name>
<value>org.apache.hadoop.security.AuthenticationFilterInitializer</value>
</property>
<!--
The signature secret file used to sign authentication tokens.
The same secret can be shared by every service in the cluster: ResourceManager, NameNode, DataNode and NodeManager.
This file should be readable only by the Unix user running the daemons.
-->
<property>
<name>hadoop.http.authentication.signature.secret.file</name>
<value>/opt/hadoop/secret/hadoop-http-auth-signature-secret</value>
</property>
<!-- Indicates whether anonymous requests are allowed when using "simple" authentication. -->
<property>
<name>hadoop.http.authentication.simple.anonymous.allowed</name>
<value>false</value>
</property>
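The signature secret file referenced above can be generated with any random source. A minimal sketch that also applies the owner-only permissions the comment calls for; the path is the one from the config, and `write_http_auth_secret` is a hypothetical helper, not part of Hadoop:

```python
import os
import secrets

def write_http_auth_secret(path):
    """Write a random signing secret readable only by the owning user."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    # Create with mode 0600 up front so the secret is never world-readable,
    # even briefly.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(secrets.token_hex(32))
    return path
```

Run it as the Unix user the daemons run under, e.g. `write_http_auth_secret("/opt/hadoop/secret/hadoop-http-auth-signature-secret")`, then distribute the same file to every node.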
Restrict the ZooKeeper root znode ACL to the cluster nodes (run in zkCli.sh):
setAcl / ip:192.168.56.41:cdrwa,ip:192.168.56.42:cdrwa,ip:192.168.56.43:cdrwa
<property>
<name>hive.server2.webui.port</name>
<value>0</value>
<description>The port the HiveServer2 WebUI will listen on. This can be set to 0 or a negative integer to disable the web UI.</description>
</property>