Hive is a data warehouse infrastructure tool for processing structured data in Hadoop. It sits on top of Hadoop, falls squarely under the big data umbrella, and makes querying and analysis convenient: it offers a simple SQL query interface and translates SQL statements into MapReduce jobs for execution. Below is a walkthrough of some Hive usage examples.

Data preparation: first prepare a comma-delimited text file locally (later we will also upload a copy to the /test directory on HDFS), then create the corresponding table in Hive. Note that the table name does not have to match the file name.
$ cat /tmp/table_test.csv
1,user1,1000
2,user2,2000
3,user3,3000
4,user4,4000
5,user5,5000
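The examples below place tables in a database named db_test. Its creation isn't shown in the original session; assuming it doesn't exist yet, a minimal sketch would be:
hive> CREATE DATABASE IF NOT EXISTS db_test;
hive> USE db_test;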
hive> CREATE TABLE table_test (
    id INT,
    name STRING,
    value INT
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
The first half is much like ordinary SQL syntax; the trailing clause tells Hive that fields in the data file are delimited by ','.
$ hive -e "load data local inpath '/tmp/table_test.csv' into table db_test.table_test"
Loading data to table db_test.table_test
OK
Time taken: 0.148 seconds
The same file can be loaded multiple times (appending the data), and each load adds another file under the table's HDFS data directory. Also note the LOCAL keyword, which means the data is loaded from the local filesystem; without LOCAL, the path is interpreted as an HDFS location.
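If you want a load to replace the table's existing contents rather than append to them, add the OVERWRITE keyword (the same keyword used with the HDFS load later in this article); a minimal sketch:
$ hive -e "load data local inpath '/tmp/table_test.csv' overwrite into table db_test.table_test"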
hive> select * from table_test;
OK
1 user1 1000
2 user2 2000
3 user3 3000
4 user4 4000
5 user5 5000
Time taken: 0.058 seconds, Fetched: 5 row(s)
You can also run select id from table_test, but note that in Hive, aside from select * from table (which can be answered with a plain full-table scan), virtually any other query has to go through MapReduce.
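Strictly speaking, whether such a simple projection launches MapReduce depends on the hive.fetch.task.conversion setting (a real Hive parameter; the exact default varies by version). Set to more, simple SELECT/filter/LIMIT queries are served as direct fetch tasks without MapReduce:
hive> set hive.fetch.task.conversion=more;
hive> select id from table_test;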
[hadoop@hadoop-nn ~]$ hdfs dfs -ls /user/hive/warehouse/db_test.db/table_test/
Found 1 items
-rwxrwxrwx 2 root supergroup 65 2017-06-15 22:27 /user/hive/warehouse/db_test.db/table_test/table_test.csv
Note that the file's owner is root. That is because I entered the Hive CLI as root; normally you should enter the Hive CLI and create tables as the hadoop user.
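If you have already ended up with root-owned files and want to hand them back to the hadoop user, a chown on the table directory should do it (run it as a user with HDFS superuser rights; the path is the one listed above):
[hadoop@hadoop-nn ~]$ hdfs dfs -chown -R hadoop:supergroup /user/hive/warehouse/db_test.db/table_test
Next, let's load data from HDFS instead of the local filesystem: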
[hadoop@hadoop-nn ~]$ hdfs dfs -mkdir /test
[hadoop@hadoop-nn ~]$ hdfs dfs -put /tmp/table_test.csv /test/table_test.csv
[hadoop@hadoop-nn ~]$ hive
hive> CREATE TABLE hdfs_table (
    id INT,
    name STRING,
    value INT
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
hive> LOAD DATA INPATH '/test/table_test.csv' OVERWRITE INTO TABLE db_test.hdfs_table;
Loading data to table db_test.hdfs_table
OK
Time taken: 0.343 seconds
hive> select * from db_test.hdfs_table;
OK
1 user1 1000
2 user2 2000
3 user3 3000
4 user4 4000
5 user5 5000
Time taken: 0.757 seconds, Fetched: 5 row(s)
Note that after loading data from HDFS into Hive, the original HDFS data file no longer exists at its source path: the load moves it into the table's warehouse directory.
[hadoop@hadoop-nn ~]$ hdfs dfs -ls /test/table_test.csv
ls: `/test/table_test.csv': No such file or directory
[hadoop@hadoop-nn ~]$ hdfs dfs -ls /user/hive/warehouse/db_test.db/hdfs_table/
Found 1 items
-rwxrwxrwx 2 hadoop supergroup 65 2017-06-15 22:54 /user/hive/warehouse/db_test.db/hdfs_table/table_test.csv
Now upload another file directly into the table's directory (/user/hive/warehouse/db_test.db/hdfs_table):
[hadoop@hadoop-nn ~]$ cat /tmp/table_test.csv
6,user6,6000
[hadoop@hadoop-nn ~]$ hdfs dfs -put /tmp/table_test.csv /user/hive/warehouse/db_test.db/hdfs_table/table_test_20170616.csv
hive> select * from db_test.hdfs_table;
OK
1 user1 1000
2 user2 2000
3 user3 3000
4 user4 4000
5 user5 5000
6 user6 6000
Time taken: 0.053 seconds, Fetched: 6 row(s)
As you can see, the record from the appended file shows up in the query results as well.
When creating a partitioned table, you must specify a partition column. The partition column must not be one of the table's regular columns; Hive automatically appends it to the schema as a pseudo-column. Conceptually, Hive partitions are similar to MySQL partitions. Below we create a table partitioned by day.
hive> CREATE TABLE par_table (
    id INT,
    name STRING,
    value INT
) PARTITIONED BY (day INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
hive> desc par_table;
OK
id int
name string
value int
day int
# Partition Information
# col_name data_type comment
day int
Time taken: 0.023 seconds, Fetched: 9 row(s)
hive> LOAD DATA LOCAL INPATH '/tmp/table_test.csv' OVERWRITE INTO TABLE db_test.par_table PARTITION (day='22');
Loading data to table db_test.par_table partition (day=22)
OK
Time taken: 0.267 seconds
hive> LOAD DATA LOCAL INPATH '/tmp/table_test.csv' OVERWRITE INTO TABLE db_test.par_table PARTITION (day='23');
Loading data to table db_test.par_table partition (day=23)
OK
Time taken: 0.216 seconds
[hadoop@hadoop-nn ~]$ hdfs dfs -ls /user/hive/warehouse/db_test.db/par_table/
Found 2 items
drwxrwxrwx - hadoop supergroup 0 2017-06-16 01:12 /user/hive/warehouse/db_test.db/par_table/day=22
drwxrwxrwx - hadoop supergroup 0 2017-06-16 01:12 /user/hive/warehouse/db_test.db/par_table/day=23
You can see the corresponding partition directories have appeared.
Querying is a little different: if you give a WHERE condition on the partition column, only that partition's data is read, with no need to scan the whole table. If the filter column is not a partition column, every partition has to be scanned. Two examples:
hive> select * from db_test.par_table;
OK
6 user6 6000 22
6 user6 6000 23
Time taken: 0.054 seconds, Fetched: 2 row(s)
hive> select * from db_test.par_table where day=22;
OK
6 user6 6000 22
Time taken: 0.068 seconds, Fetched: 1 row(s)
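To list a table's current partitions directly, use SHOW PARTITIONS; for the table above it should print something like:
hive> show partitions db_test.par_table;
OK
day=22
day=23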
Hive also supports external tables, which differ from managed (internal) and partitioned tables: as long as the corresponding files already exist in HDFS, you can create a table in Hive pointing at that directory and query the data directly, with no data-loading step. Let's test it:
First create a directory in HDFS and upload the file:
[hadoop@hadoop-nn ~]$ hdfs dfs -mkdir -p /hive/external
[hadoop@hadoop-nn ~]$ hdfs dfs -put /tmp/table_test.csv /hive/external/ext_table.csv
Then create the table directly in Hive:
hive> CREATE EXTERNAL TABLE ext_table (
    id INT,
    name STRING,
    value INT
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LOCATION '/hive/external';
At this point the table can be queried directly, without loading any data:
hive> select * from ext_table;
OK
6 user6 6000
Time taken: 0.042 seconds, Fetched: 1 row(s)
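You can confirm the table's type without touching its data by using DESCRIBE FORMATTED; its output should include a Table Type line reading EXTERNAL_TABLE:
hive> describe formatted ext_table;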
Hive also supports bucketed tables. They are rarely used, so we won't cover them in depth; a minimal definition is sketched below for reference, and the documentation has the rest.
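For reference only, a bucketed-table definition would look roughly like this (bucket_table and the bucket count of 4 are made up for illustration):
hive> CREATE TABLE bucket_table (
    id INT,
    name STRING,
    value INT
) CLUSTERED BY (id) INTO 4 BUCKETS
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';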
Finally, let's look at how a Hive query is processed through MapReduce:
hive> select count(*) from table_test;
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = hadoop_20170616021047_9c0dc1bf-383f-49ad-83e2-e2e5dfdcb20c
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1497424827481_0004, Tracking URL = http://master:8088/proxy/application_1497424827481_0004/
Kill Command = /usr/local/hadoop/bin/hadoop job -kill job_1497424827481_0004
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2017-06-16 02:10:52,914 Stage-1 map = 0%, reduce = 0%
2017-06-16 02:10:57,062 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.11 sec
2017-06-16 02:11:02,204 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 2.53 sec
MapReduce Total cumulative CPU time: 2 seconds 530 msec
Ended Job = job_1497424827481_0004
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 2.53 sec HDFS Read: 7980 HDFS Write: 102 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 530 msec
OK
10
Time taken: 15.254 seconds, Fetched: 1 row(s)
It's worth studying the processing steps above; since this is a test environment, the MapReduce job takes a long time.
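The WARNING line at the top of that output is also worth heeding: Hive-on-MR is deprecated in Hive 2. If an alternative engine such as Tez is installed (an assumption; this test environment is evidently still running MR), switching is a single setting:
hive> set hive.execution.engine=tez;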
Hive also supports views, and they are very simple to use:
hive> create view view_test as select * from table_test;
OK
Time taken: 0.054 seconds
hive> select * from view_test;
OK
1 user1 1000
2 user2 2000
3 user3 3000
4 user4 4000
5 user5 5000
Time taken: 0.057 seconds, Fetched: 5 row(s)
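Views become more useful once they encapsulate a filter; a hypothetical example (view_big is made up for illustration):
hive> create view view_big as select * from table_test where value >= 4000;
hive> select * from view_big;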
Next, let's look at Hive's metadata tables. The DBS table in MySQL's hive database stores information about the databases created in Hive:
mysql> select * from DBS;
+-------+-----------------------+---------------------------------------------------+---------+------------+------------+
| DB_ID | DESC | DB_LOCATION_URI | NAME | OWNER_NAME | OWNER_TYPE |
+-------+-----------------------+---------------------------------------------------+---------+------------+------------+
| 1 | Default Hive database | hdfs://master:8020/user/hive/warehouse | default | public | ROLE |
| 6 | NULL | hdfs://master:8020/user/hive/warehouse/db_test.db | db_test | hadoop | USER |
+-------+-----------------------+---------------------------------------------------+---------+------------+------------+
2 rows in set (0.00 sec)
DB_ID: database ID, unique.
DESC: database description.
DB_LOCATION_URI: the database's location URI in HDFS.
NAME: database name.
OWNER_NAME: the database's owner; whichever system user you logged in to Hive as when creating it becomes the owner, which is why you should generally log in to Hive as the hadoop user.
OWNER_TYPE: the owner's type.
The TBLS table in the hive database stores the metadata for the tables we created:
mysql> select * from TBLS;
+--------+-------------+-------+------------------+--------+-----------+-------+------------+----------------+--------------------+--------------------+
| TBL_ID | CREATE_TIME | DB_ID | LAST_ACCESS_TIME | OWNER | RETENTION | SD_ID | TBL_NAME | TBL_TYPE | VIEW_EXPANDED_TEXT | VIEW_ORIGINAL_TEXT |
+--------+-------------+-------+------------------+--------+-----------+-------+------------+----------------+--------------------+--------------------+
| 11 | 1497579800 | 6 | 0 | root | 0 | 11 | table_test | MANAGED_TABLE | NULL | NULL |
| 16 | 1497581548 | 6 | 0 | hadoop | 0 | 16 | hdfs_table | MANAGED_TABLE | NULL | NULL |
| 26 | 1497584489 | 6 | 0 | hadoop | 0 | 26 | par_table | MANAGED_TABLE | NULL | NULL |
| 28 | 1497591914 | 6 | 0 | hadoop | 0 | 31 | ext_table | EXTERNAL_TABLE | NULL | NULL |
+--------+-------------+-------+------------------+--------+-----------+-------+------------+----------------+--------------------+--------------------+
4 rows in set (0.00 sec)
A few of the important columns explained:
TBL_ID: table ID, unique.
CREATE_TIME: table creation time.
DB_ID: ID of the database the table belongs to.
LAST_ACCESS_TIME: time of last access.
OWNER: the table's owner; whichever system user you logged in to Hive as when creating it becomes the owner, which is why you should generally log in to Hive as the hadoop user.
TBL_NAME: table name.
TBL_TYPE: table type. MANAGED_TABLE means a managed table (internal, partitioned, and bucketed tables all fall into this category); EXTERNAL_TABLE means an external table. The key difference between the two: when you DROP a managed table, Hive deletes the metadata together with the data on HDFS, whereas dropping an external table deletes only the metadata and leaves the HDFS data in place.
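Since DBS and TBLS share the DB_ID column, you can join them in MySQL for a quick per-database listing of tables; a sketch using only the columns shown above:
mysql> SELECT d.NAME, t.TBL_NAME, t.TBL_TYPE FROM DBS d JOIN TBLS t ON t.DB_ID = d.DB_ID;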