Hands-on with the Iceberg Data Lake: Launch Commands for Interacting with Iceberg from Multiple Clients (Common Commands)

数栈君, posted 2023-03-31 16:11

I. Launch Commands
1. spark-sql with Iceberg
Spark on YARN:
[root@hadoop101 spark]# bin/spark-sql --packages org.apache.iceberg:iceberg-spark-runtime-3.2_2.12:0.13.0 --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions --conf spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkSessionCatalog --conf spark.sql.catalog.spark_catalog.type=hive --conf spark.sql.catalog.local=org.apache.iceberg.spark.SparkCatalog --conf spark.sql.catalog.local.type=hadoop --conf spark.sql.catalog.local.warehouse=/tmp/iceberg/warehouse --master yarn

Spark local:

[root@hadoop101 spark]# bin/spark-sql --packages org.apache.iceberg:iceberg-spark-runtime-3.2_2.12:0.13.0 --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions --conf spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkSessionCatalog --conf spark.sql.catalog.spark_catalog.type=hive --conf spark.sql.catalog.local=org.apache.iceberg.spark.SparkCatalog --conf spark.sql.catalog.local.type=hadoop --conf spark.sql.catalog.local.warehouse=/tmp/iceberg/warehouse
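Once the shell is up, the Hadoop catalog registered above is visible as `local`. A minimal sketch of using it, following the usual Iceberg-on-Spark pattern (the database, table, and column names below are hypothetical examples, not from the original post):

```sql
-- Create an Iceberg table in the `local` hadoop catalog configured at launch
-- (database/table names here are hypothetical).
CREATE TABLE local.db.sample (
  id   BIGINT,
  data STRING
) USING iceberg;

-- Write a row and read it back.
INSERT INTO local.db.sample VALUES (1, 'a');
SELECT * FROM local.db.sample;
```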
2. Flink 1.14.3 with Iceberg (Hive + Kafka)
[root@hadoop101 ~]# sql-client.sh embedded -j /opt/software/iceberg0.13/iceberg-flink-runtime-1.14-0.13.0.jar -j /opt/software/iceberg0.13/flink-sql-connector-hive-2.3.6_2.12-1.14.3.jar -j /opt/software/flink-sql-connector-kafka_2.12-1.14.3.jar shell

The related jar packages can be downloaded from https://repo.maven.apache.org/maven2/org/apache/

II. Common Commands
2.1 Catalog commands
Creating a catalog
Flink can reach Hive simply by defining a catalog, with no hive-site.xml required; Spark, on the other hand, needs hive-site.xml placed under conf.

CREATE CATALOG hive_catalog6 WITH (
  'type'='iceberg',
  'catalog-type'='hive',
  'uri'='thrift://hadoop101:9083',
  'clients'='5',
  'property-version'='1',
  'warehouse'='hdfs://user/hive/warehouse/hive_catalog6'
);
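If no Hive Metastore is available, Flink can alternatively use a hadoop-type Iceberg catalog, which keeps all table metadata under the warehouse directory. A sketch, assuming a NameNode address of hadoop101:8020 (the catalog name and path are illustrative; adjust to your cluster):

```sql
-- A hadoop-type Iceberg catalog: no Metastore needed, metadata lives in HDFS.
CREATE CATALOG hadoop_catalog WITH (
  'type'='iceberg',
  'catalog-type'='hadoop',
  'warehouse'='hdfs://hadoop101:8020/tmp/iceberg/warehouse',
  'property-version'='1'
);
```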

List the available catalogs:

Flink SQL> show catalogs;
+-----------------+
| catalog name    |
+-----------------+
| default_catalog |
| hive_catalog6   |
+-----------------+
2 rows in set

Show the current catalog:

Flink SQL> show current catalog;
+----------------------+
| current catalog name |
+----------------------+
| default_catalog      |
+----------------------+
1 row in set

Show the current database:

Flink SQL> show current database;
+-----------------------+
| current database name |
+-----------------------+
| default_database      |
+-----------------------+
1 row in set

Switch catalogs:

Flink SQL> use catalog hive_catalog6;
[INFO] Execute statement succeed.
List the databases in the current catalog:

Flink SQL> show databases;
+---------------+
| database name |
+---------------+
| default       |
| iceberg_db    |
| iceberg_db6   |
| source        |
+---------------+
4 rows in set
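From here, switching into one of these databases and working with its tables follows the same pattern. A minimal sketch (the table name is hypothetical, not from the original post):

```sql
-- Switch into a database and list its tables.
USE iceberg_db;
SHOW TABLES;

-- In an Iceberg catalog, a plain CREATE TABLE produces an Iceberg table
-- (the table name below is hypothetical).
CREATE TABLE sample_tbl (
  id   BIGINT,
  data STRING
);
```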

Summary
A record of frequently used commands, kept here for quick reference.
