一. Startup Commands
1. spark-sql with Iceberg
Spark on YARN:
[root@hadoop101 spark]# bin/spark-sql \
  --packages org.apache.iceberg:iceberg-spark-runtime-3.2_2.12:0.13.0 \
  --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
  --conf spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkSessionCatalog \
  --conf spark.sql.catalog.spark_catalog.type=hive \
  --conf spark.sql.catalog.local=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.local.type=hadoop \
  --conf spark.sql.catalog.local.warehouse=/tmp/iceberg/warehouse \
  --master yarn
Spark local:
[root@hadoop101 spark]# bin/spark-sql \
  --packages org.apache.iceberg:iceberg-spark-runtime-3.2_2.12:0.13.0 \
  --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
  --conf spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkSessionCatalog \
  --conf spark.sql.catalog.spark_catalog.type=hive \
  --conf spark.sql.catalog.local=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.local.type=hadoop \
  --conf spark.sql.catalog.local.warehouse=/tmp/iceberg/warehouse
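As a quick smoke test after the shell starts, one can create and query an Iceberg table through the `local` Hadoop catalog configured above. This is a minimal sketch; the database name `db` and table name `sample` are illustrative, not part of the original setup:

```sql
-- Create a database and an Iceberg table in the `local` Hadoop catalog
-- (identifiers `db` and `sample` are illustrative)
CREATE DATABASE IF NOT EXISTS local.db;

CREATE TABLE local.db.sample (
    id   BIGINT,
    data STRING
) USING iceberg;

INSERT INTO local.db.sample VALUES (1, 'a'), (2, 'b');

SELECT * FROM local.db.sample;
```

The table files and metadata land under the configured warehouse path, /tmp/iceberg/warehouse.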
2. Flink 1.14.3 with Iceberg (Hive + Kafka)
[root@hadoop101 ~]# sql-client.sh embedded \
  -j /opt/software/iceberg0.13/iceberg-flink-runtime-1.14-0.13.0.jar \
  -j /opt/software/iceberg0.13/flink-sql-connector-hive-2.3.6_2.12-1.14.3.jar \
  -j /opt/software/flink-sql-connector-kafka_2.12-1.14.3.jar \
  shell
The required jars can be downloaded from https://repo.maven.apache.org/maven2/org/apache/
二. Common Commands
2.1 Catalog commands
Creating a catalog
Flink can locate Hive simply by defining a catalog with the Metastore URI, without needing hive-site.xml; Spark, by contrast, requires hive-site.xml under its conf directory.
CREATE CATALOG hive_catalog6 WITH (
  'type'='iceberg',
  'catalog-type'='hive',
  'uri'='thrift://hadoop101:9083',
  'clients'='5',
  'property-version'='1',
  'warehouse'='hdfs://user/hive/warehouse/hive_catalog6'
);
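For comparison, a Hadoop-type catalog needs no Hive Metastore at all; it keeps metadata directly on the filesystem. A minimal sketch, where the catalog name, NameNode address, and warehouse path are all assumptions to adapt to your cluster:

```sql
-- Hadoop catalog: metadata stored on HDFS, no Metastore required
-- (catalog name, host:port, and path below are illustrative)
CREATE CATALOG hadoop_catalog WITH (
  'type'='iceberg',
  'catalog-type'='hadoop',
  'warehouse'='hdfs://hadoop101:8020/warehouse/iceberg',
  'property-version'='1'
);
```

This is the Flink-side counterpart of the `spark.sql.catalog.local.type=hadoop` configuration used in the spark-sql startup command above.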
List catalogs:
Flink SQL> show catalogs;
+-----------------+
| catalog name |
+-----------------+
| default_catalog |
| hive_catalog6 |
+-----------------+
2 rows in set
Show the current catalog:
Flink SQL> show current catalog;
+----------------------+
| current catalog name |
+----------------------+
| default_catalog |
+----------------------+
1 row in set
Show the current database:
Flink SQL> show current database;
+-----------------------+
| current database name |
+-----------------------+
| default_database |
+-----------------------+
1 row in set
Switch catalog:
Flink SQL> use catalog hive_catalog6;
[INFO] Execute statement succeed.
List databases in the current catalog:
Flink SQL> show databases;
+---------------+
| database name |
+---------------+
| default |
| iceberg_db |
| iceberg_db6 |
| source |
+---------------+
4 rows in set
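With the catalog selected, the typical next steps are creating a database and an Iceberg table, then writing and reading it. A minimal sketch in the Flink SQL client; all identifiers below are illustrative, not from the original session:

```sql
-- Assumes `use catalog hive_catalog6;` has already been run
-- (database and table names are illustrative)
CREATE DATABASE IF NOT EXISTS iceberg_demo;
USE iceberg_demo;

CREATE TABLE t1 (
  id   BIGINT,
  name STRING
);

INSERT INTO t1 VALUES (1, 'alice'), (2, 'bob');

SELECT * FROM t1;
```

Because the table lives in an Iceberg Hive catalog, no connector properties are needed in the DDL; the catalog determines the table format and storage location.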
Summary
A record of commonly used commands, kept for quick personal reference.