Hive内置运算函数,自定义函数(UDF)和Transform

  • 时间:
  • 浏览:0
  • 来源:大发5分6合_大发5分6合官网

  AS (movieid, rating, weekday,userid)

hive> load data local inpath'/home/tuzq/software/hivedata/dual.txt' into table dual;

rate

tuzuoquan

  line = line.strip()

INSERT OVERWRITE TABLE u_data_new

使用示例1:下面这句sql而是借用了weekday_mapper.py对数据进行了除理.

{"movie":"1193","rate":"5","timeStamp":"97880780","uid":"1"}

Added [/home/tuzq/software/hivedata/udf.jar] to class path

{"movie":"1197","rate":"3","timeStamp":"978802268","uid":"1"}

当Hive提供的内置函数无法满足你的业务除理需要时,此时就还能能 考虑使用用户自定义函数(UDF:user-defined function)。

其中weekday_mapper.py内容如下

hive>

  rating INT,

}

   public Text evaluate(final Text s) {

  movieid, rating, unixtime,userid = line.split('\t')

{"movie":"1287","rate":"5","timeStamp":"978802039","uid":"1"}

5、hive>创建临时函数与开发好的java class关联

1197

978802268

{"movie":"3408","rate":"4","timeStamp":"97880275","uid":"1"}

4、hive> addJAR /home/tuzq/software/hivedata/udf.jar> ;

CREATE TABLE u_data_new (

hive>

      returnnew Text(s.toString().toLowerCase());

#!/bin/python

Added resources: [/home/tuzq/software/hivedata/udf.jar]

UDF  作用于单个数据行,产生两个多数据行作为输出。(数学函数,字符串函数)

OK

{"movie":"661","rate":"3","timeStamp":"978802109","uid":"1"}

hive> create temporary function toLowercase as 'hiveudf.ToLowerCase';

测试各种内置函数的快捷方法:

create table dual(id string);

hive> select toLowercase("TUZUOQUAN") from dual;

1、先开发两个多java类,继承UDF,并重载evaluate方法

      if(s == null) {returnnull;}

import org.apache.hadoop.io.Text;

ROW FORMAT DELIMITED

1、创建两个多dual表

5、即可在hql中使用自定义的函数tolowercase ip 

2、打成jar包上传到服务器

  weekday = datetime.datetime.fromtimestamp(float(unixtime)).isoweekday()

3

public class ToLowerCase extends UDF {

内容较多,见《Hive官方文档》

uid

适合实现Hive中这样 的功能又不用写UDF的具体情况

Time taken: 0.122 seconds, Fetched: 1 row(s)

FROM t_rating;

{"movie":"914","rate":"3","timeStamp":"978801968","uid":"1"}

https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF

  movieid INT,

import org.apache.hadoop.hive.ql.exec.UDF;

UDAF(用户定义聚集函数):接收多个输入数据行,并产生两个多输出数据行。(count,max)

  userid INT)

3、将jar包加在到hive的classpath

1

  print '\t'.join([movieid, rating, str(weekday),userid])

Hive的 TRANSFORM 关键字提供了在SQL中调用自写脚本的功能

内容较多,见《Hive官方文档》

需要将数据导入到hive数据仓库中

   }

import datetime

OK

3、select substr('angelababy',2,3) from dual;

  USING 'python weekday_mapper.py'

Time taken: 0.039 seconds

l  Json数据解析UDF开发

作业:

其中dual.txt后边而是两个多空格

我不管你后边用几只表,最终我能得到两个多结果表:

l  简单UDF示例

add FILE weekday_mapper.py;

  TRANSFORM (movieid , rate, timestring,uid)

for line in sys.stdin:

前期准备,要把hive的lib包导入到工程中,其中UDF依赖的是hive-exec-1.2.1.jar。也而是说要把apache-hive-1.2.1-bin\lib中内容都引入到工程中。若用到hadoop中的许多api,请把hadoop的api也引入进去。

movie

SELECT

注:全在hive中完成,还能能 用自定义函数

import sys

{"movie":"2355","rate":"5","timeStamp":"978824291","uid":"1"}

  weekday INT,

有原始json数据如下:

FIELDS TERMINATED BY '\t';

package hiveudf;

timestamp

2、load两个多文件(一行,两个多空格)到dual表