Spark SQL programming these days is usually done through the Scala or Java API rather than by writing raw Spark SQL statements. The API is far more flexible, since you can work against both Datasets and RDDs, but Spark SQL comes with quite a few pitfalls.
1. getClass.getResourceAsStream: the usual advice online is that a path without a leading "/" is resolved relative to the current package, while a path with a leading "/" is resolved from the root of the classpath. That root, however, is not the directory layout you see in IDEA or on disk (such as src/main/resources/); it is the layout of the jar produced at packaging time, where the contents of the resources folder are unpacked into the jar root. So to load a resource placed under resources, "/resourceName" is all you need.
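A minimal sketch of loading such a resource, assuming a hypothetical file conf.properties placed under src/main/resources:

import java.io.InputStream
import scala.io.Source

object ResourceLoadExample {
  def main(args: Array[String]): Unit = {
    // conf.properties is a hypothetical file under src/main/resources; after
    // packaging it sits at the root of the jar, so "/conf.properties" is enough.
    val in: InputStream = getClass.getResourceAsStream("/conf.properties")
    val content = Source.fromInputStream(in).mkString
    println(content)
    in.close()
  }
}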
2. The following query has a bug:

select crossInfo, split(crossInfo, '|') as jda from tem_test_yy lateral view explode(split(jdaList, '#')) tmpTable as crossInfo

It returns results like:

jda1|1|time1 ["j","d","a","1","|","1","|","t","i","m","e","1",""]
jda1|1|time1 ["j","d","a","1","|","1","|","t","i","m","e","1",""]
jda2|1|time2 ["j","d","a","2","|","1","|","t","i","m","e","2",""]
jda3|0|time3 ["|","j","d","a","3","|","0","|","t","i","m","e","3",""]

The cause is that split's separator is a regular expression, so in Hive/Spark SQL the | character must be escaped. The correct usage is split(crossInfo, '\\|').
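A self-contained sketch that reproduces the problem and the fix; the sample data is made up to mirror the jdaList column used in the query above:

import org.apache.spark.sql.SparkSession

object SplitEscapeExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("split-escape").master("local[*]").getOrCreate()
    import spark.implicits._

    // Fake data shaped like the jdaList column in the query above
    Seq("jda1|1|time1#jda2|1|time2#jda3|0|time3")
      .toDF("jdaList")
      .createOrReplaceTempView("tem_test_yy")

    // Buggy: '|' is a regex metacharacter, so the string is split between every character
    spark.sql("""select crossInfo, split(crossInfo, '|') as jda from tem_test_yy lateral view explode(split(jdaList, '#')) tmpTable as crossInfo""").show(false)

    // Fixed: escape the pipe so split treats it as a literal separator
    spark.sql("""select crossInfo, split(crossInfo, '\\|') as jda from tem_test_yy lateral view explode(split(jdaList, '#')) tmpTable as crossInfo""").show(false)

    spark.stop()
  }
}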
3. Spark's persist should not be used carelessly, especially at the MEMORY_AND_DISK_SER level. For large tables, persisting can be far less efficient than simply running the computation again; on tables with billions of rows it can slow things down by several times.
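As a rough illustration of the trade-off (big_table and the filter below are placeholders), the pattern worth benchmarking against plain recomputation looks like this:

import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

object PersistTradeoff {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("persist-tradeoff").getOrCreate()

    // big_table is a hypothetical multi-billion-row table
    val df = spark.table("big_table").filter("dt = '2019-01-01'")

    // MEMORY_AND_DISK_SER caches serialized blocks and spills to disk; for very
    // large data the serialization and disk I/O can cost more than simply
    // recomputing the lineage, so measure both before keeping the persist.
    df.persist(StorageLevel.MEMORY_AND_DISK_SER)
    df.count()      // first action materializes the cache
    // ... further actions that reuse df ...
    df.unpersist()  // release executor memory/disk when done

    spark.stop()
  }
}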
4. DataFrame = Dataset[Row], but the anonymous function inside a Spark map must not return Row (i.e. map cannot produce a Dataset[Row] directly), or you will get a serialization error; map only supports the Dataset[SomeClass] form, e.g. a case class. After the map, convert the resulting Dataset[SomeClass] back to a DataFrame, i.e. Dataset[Row], on the driver side with .toDF(). A Dataset[Row] can, however, be used as the input of a map.
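A minimal sketch of that workaround, with a made-up case class UserRecord as the map output type:

import org.apache.spark.sql.{DataFrame, Dataset, SparkSession}

// Hypothetical case class used as the map output type instead of Row
case class UserRecord(jda: String, flag: Int)

object MapReturnType {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("map-return-type").master("local[*]").getOrCreate()
    import spark.implicits._

    val input: Dataset[String] = Seq("jda1|1", "jda2|0").toDS()

    // Map into Dataset[UserRecord] (a concrete class with an Encoder), not
    // Dataset[Row]; returning Row from the lambda triggers the encoder error.
    val ds: Dataset[UserRecord] = input.map { line =>
      val parts = line.split("\\|")
      UserRecord(parts(0), parts(1).toInt)
    }

    // Back on the driver, convert to a DataFrame (Dataset[Row]) if needed.
    val df: DataFrame = ds.toDF()
    df.show(false)

    spark.stop()
  }
}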
5. select("_1.*") vs. select($"_1") in Scala: when the Dataset being processed holds tuples (_1, _2),

the latter produces a schema like

|-- _1: struct (nullable = true)
|    |-- all_jda: string (nullable = true)
|    |-- user_visit_ip: string (nullable = true)
|    |-- sequence_num: integer (nullable = true)

while the former produces

|-- all_jda: string (nullable = true)
|-- user_visit_ip: string (nullable = true)
|-- sequence_num: integer (nullable = true)

This is a real trap: select($"_1") keeps an extra level of nesting in the schema, whereas in a Spark SQL statement you can reference non-top-level columns directly.
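A short sketch to print both schemas side by side; Visit is a hypothetical struct type standing in for the tuple's first element:

import org.apache.spark.sql.SparkSession

// Hypothetical struct type backing the tuple's _1 field
case class Visit(all_jda: String, user_visit_ip: String, sequence_num: Int)

object TupleSelect {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("tuple-select").master("local[*]").getOrCreate()
    import spark.implicits._

    val ds = Seq((Visit("jda1", "1.1.1.1", 1), 10L)).toDS()

    // select($"_1") keeps the extra struct layer around _1's fields
    ds.select($"_1").printSchema()

    // select("_1.*") flattens _1's fields to the top level
    ds.select("_1.*").printSchema()

    spark.stop()
  }
}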
6. A few other Spark SQL issues:
(1) Hive supports regular-expression syntax; Spark SQL does not.
(2) Joins must be written as left outer join ... on A.column = B.column, not left outer join ... on column.
(3) select * from A left outer join B on column can cause an "ambiguous column" error, so be careful.
(4) concat_ws does not support concatenating arrays of any element type other than String; you have to implement a UDF yourself (see the sketch below).
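For point (4), one possible UDF sketch; concatIntArray is a made-up name that joins an array<int> column with a separator, which concat_ws refuses to do for non-string arrays:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{lit, udf}

object ConcatIntArrayUdf {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("concat-int-array").master("local[*]").getOrCreate()
    import spark.implicits._

    // concat_ws only accepts string / array<string> arguments, so for an
    // array<int> column we join the elements ourselves in a UDF.
    val concatIntArray = udf((xs: Seq[Int], sep: String) =>
      if (xs == null) null else xs.mkString(sep))

    val df = Seq((1, Seq(10, 20, 30)), (2, Seq(7))).toDF("id", "nums")
    df.select($"id", concatIntArray($"nums", lit("#")).as("joined")).show(false)

    spark.stop()
  }
}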