Moving data from one part of your storage infrastructure to another is a painful process. At least, it used to be. With the right tools and infrastructure, many of the pain points of a traditional data migration can be eliminated. Using Sqoop greatly simplifies migrating data from MySQL into Hive, and lowers the barrier to running analytics on Hadoop.
Prerequisites: a Hadoop environment with Sqoop and Hive installed and running. To speed things up, we will use the Cloudera Quickstart VM (requires at least 4 GB of RAM), though you could also use the Hortonworks Data Platform (requires at least 8 GB of RAM). Since my laptop has only 8 GB of RAM, I am using the Cloudera VM image here.
If you run the Cloudera/HDP VM in VirtualBox, you also get easy access to many other pre-installed packages from the Hadoop ecosystem (including MySQL, Oozie, Hadoop, Hive, Zookeeper, Storm, Kafka, Spark, and so on).
Creating a table in MySQL
In the Cloudera VM, open a terminal and make sure MySQL is installed:
- shell> mysql --version
- mysql Ver 14.14 Distrib 5.1.66, for redhat-linux-gnu (x86_64) using readline 5.1
We need a database of our own for this example, so create one in MySQL with the following command:
- mysql> create database sqoop;
Next:
- mysql> use sqoop;
- mysql> create table customer(id varchar(3), name varchar(20), age varchar(3), salary integer(10));
- Query OK, 0 rows affected (0.09 sec)
- mysql> desc customer;
- +--------+-------------+------+-----+---------+-------+
- | Field  | Type        | Null | Key | Default | Extra |
- +--------+-------------+------+-----+---------+-------+
- | id     | varchar(3)  | YES  |     | NULL    |       |
- | name   | varchar(20) | YES  |     | NULL    |       |
- | age    | varchar(3)  | YES  |     | NULL    |       |
- | salary | int(10)     | YES  |     | NULL    |       |
- +--------+-------------+------+-----+---------+-------+
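For completeness: the sample rows shown in the query below were added with plain INSERT statements along these lines (the values are taken from the output that follows; the remaining rows are analogous):

- mysql> insert into customer values ('1', 'John', '30', 80000);
- mysql> insert into customer values ('2', 'Kevin', '33', 84000);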
- mysql> select * from customer;
- +------+--------+------+--------+
- | id   | name   | age  | salary |
- +------+--------+------+--------+
- | 1    | John   | 30   |  80000 |
- | 2    | Kevin  | 33   |  84000 |
- | 3    | Mark   | 28   |  90000 |
- | 4    | Jenna  | 34   |  93000 |
- | 5    | Robert | 32   | 100000 |
- | 6    | Zoya   | 40   |  60000 |
- | 7    | Sam    | 37   |  75000 |
- | 8    | George | 31   |  67000 |
- | 9    | Peter  | 23   |  70000 |
- | 19   | Alex   | 26   |  74000 |
- +------+--------+------+--------+
Getting started with Sqoop
As you can see, the customer table does not have a primary key, and I have only added a handful of records to it. By default, Sqoop will identify a table's primary-key column (if present) and use it as the splitting column. The low and high values of the splitting column are retrieved from the database, and the map tasks operate on evenly sized components of the total range.
If the primary key is not uniformly distributed across its range, the tasks will be unbalanced. In that case, you should explicitly choose a different column with the --split-by argument (for example, --split-by id).
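Under the hood, Sqoop derives those split boundaries with a min/max bounding query against the split column; for this table it issues the statement below (you can see it verbatim in the job log later on):

- SELECT MIN(`id`), MAX(`id`) FROM `customer`;

Because id is a varchar here, Sqoop falls back to its text splitter and warns that an integral split column would be better, which is exactly the TextSplitter warning that shows up in the log.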
Since we want to import this table directly into Hive, we need to add --hive-import to our Sqoop command:
- sqoop import --connect jdbc:mysql://localhost:3306/sqoop \
- --username root \
- -P \
- --split-by id \
- --columns id,name \
- --table customer \
- --target-dir /user/cloudera/ingest/raw/customers \
- --fields-terminated-by "," \
- --hive-import \
- --create-hive-table \
- --hive-table sqoop_workspace.customers
Here's what each of the options in the Sqoop command does:
- --connect – supplies the JDBC connection string.
- --username – the database user name.
- -P – prompts for the password on the console. Alternatively you can use --password, but this is not recommended because the value shows up in job execution logs and can cause problems. One workaround is to store the database password in a file on HDFS and supply it at runtime (see the sketch after this list).
- --table – tells Sqoop which MySQL table to import. Here, the table name is customer.
- --split-by – specifies the splitting column. Here we specify the id column.
- --target-dir – the target directory in HDFS.
- --fields-terminated-by – I have specified a comma as the field delimiter (by default, data imported into HDFS is comma-delimited).
- --hive-import – import the table into Hive (Hive's default delimiters are used if none are set).
- --create-hive-table – the job will fail if a Hive table with the same name already exists.
- --hive-table – specifies <database name>.<table name>. Here it is sqoop_workspace.customers, where sqoop_workspace is the database name and customers is the table name.
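As a sketch of that password-file workaround (the file name and HDFS path below are placeholders, not part of the original run), Sqoop's --password-file option reads the password from a permission-restricted file instead of prompting:

- # Hypothetical setup: write the password without a trailing newline and lock the file down
- echo -n "MySecretPassword" > mysql.password
- hdfs dfs -put mysql.password /user/cloudera/mysql.password
- hdfs dfs -chmod 400 /user/cloudera/mysql.password
- rm mysql.password
- # Then swap -P for --password-file in the import command:
- sqoop import --connect jdbc:mysql://localhost:3306/sqoop \
-   --username root \
-   --password-file /user/cloudera/mysql.password \
-   --table customer ...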
As you can see below, the Sqoop import runs as a map-reduce job. Note that I am using -P as the password option here; alternatively, this could be parameterized with --password and the value read from a file.
- sqoop import --connect jdbc:mysql://localhost:3306/sqoop --username root -P --split-by id --columns id,name --table customer --target-dir /user/cloudera/ingest/raw/customers --fields-terminated-by "," --hive-import --create-hive-table --hive-table sqoop_workspace.customers
- Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
- Please set $ACCUMULO_HOME to the root of your Accumulo installation.
- 16/03/01 12:59:44 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.5.0
- Enter password:
- 16/03/01 12:59:54 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
- 16/03/01 12:59:54 INFO tool.CodeGenTool: Beginning code generation
- 16/03/01 12:59:55 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `customer` AS t LIMIT 1
- 16/03/01 12:59:56 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `customer` AS t LIMIT 1
- 16/03/01 12:59:56 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
- Note: /tmp/sqoop-cloudera/compile/6471c43b5c867834458d3bf5a67eade2/customer.java uses or overrides a deprecated API.
- Note: Recompile with -Xlint:deprecation for details.
- 16/03/01 13:00:01 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/6471c43b5c867834458d3bf5a67eade2/customer.jar
- 16/03/01 13:00:01 WARN manager.MySQLManager: It looks like you are importing from mysql.
- 16/03/01 13:00:01 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
- 16/03/01 13:00:01 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
- 16/03/01 13:00:01 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
- 16/03/01 13:00:01 INFO mapreduce.ImportJobBase: Beginning import of customer
- 16/03/01 13:00:01 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
- 16/03/01 13:00:02 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
- 16/03/01 13:00:04 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
- 16/03/01 13:00:05 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
- 16/03/01 13:00:11 INFO db.DBInputFormat: Using read commited transaction isolation
- 16/03/01 13:00:11 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(`id`), MAX(`id`) FROM `customer`
- 16/03/01 13:00:11 WARN db.TextSplitter: Generating splits for a textual index column.
- 16/03/01 13:00:11 WARN db.TextSplitter: If your database sorts in a case-insensitive order, this may result in a partial import or duplicate records.
- 16/03/01 13:00:11 WARN db.TextSplitter: You are strongly encouraged to choose an integral split column.
- 16/03/01 13:00:11 INFO mapreduce.JobSubmitter: number of splits:4
- 16/03/01 13:00:12 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1456782715090_0004
- 16/03/01 13:00:13 INFO impl.YarnClientImpl: Submitted application application_1456782715090_0004
- 16/03/01 13:00:13 INFO mapreduce.Job: The url to track the job: http://quickstart.cloudera:8088/proxy/application_1456782715090_0004/
- 16/03/01 13:00:13 INFO mapreduce.Job: Running job: job_1456782715090_0004
- 16/03/01 13:00:47 INFO mapreduce.Job: Job job_1456782715090_0004 running in uber mode : false
- 16/03/01 13:00:48 INFO mapreduce.Job: map 0% reduce 0%
- 16/03/01 13:01:43 INFO mapreduce.Job: map 25% reduce 0%
- 16/03/01 13:01:46 INFO mapreduce.Job: map 50% reduce 0%
- 16/03/01 13:01:48 INFO mapreduce.Job: map 100% reduce 0%
- 16/03/01 13:01:48 INFO mapreduce.Job: Job job_1456782715090_0004 completed successfully
- 16/03/01 13:01:48 INFO mapreduce.Job: Counters: 30
- File System Counters
- FILE: Number of bytes read=0
- FILE: Number of bytes written=548096
- FILE: Number of read operations=0
- FILE: Number of large read operations=0
- FILE: Number of write operations=0
- HDFS: Number of bytes read=409
- HDFS: Number of bytes written=77
- HDFS: Number of read operations=16
- HDFS: Number of large read operations=0
- HDFS: Number of write operations=8
- Job Counters
- Launched map tasks=4
- Other local map tasks=5
- Total time spent by all maps in occupied slots (ms)=216810
- Total time spent by all reduces in occupied slots (ms)=0
- Total time spent by all map tasks (ms)=216810
- Total vcore-seconds taken by all map tasks=216810
- Total megabyte-seconds taken by all map tasks=222013440
- Map-Reduce Framework
- Map input records=10
- Map output records=10
- Input split bytes=409
- Spilled Records=0
- Failed Shuffles=0
- Merged Map outputs=0
- GC time elapsed (ms)=2400
- CPU time spent (ms)=5200
- Physical memory (bytes) snapshot=418557952
- Virtual memory (bytes) snapshot=6027804672
- Total committed heap usage (bytes)=243007488
- File Input Format Counters
- Bytes Read=0
- File Output Format Counters
- Bytes Written=77
- 16/03/01 13:01:48 INFO mapreduce.ImportJobBase: Transferred 77 bytes in 104.1093 seconds (0.7396 bytes/sec)
- 16/03/01 13:01:48 INFO mapreduce.ImportJobBase: Retrieved 10 records.
- 16/03/01 13:01:49 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `customer` AS t LIMIT 1
- 16/03/01 13:01:49 INFO hive.HiveImport: Loading uploaded data into Hive
- Logging initialized using configuration in jar:file:/usr/jars/hive-common-1.1.0-cdh5.5.0.jar!/hive-log4j.properties
- OK
- Time taken: 2.163 seconds
- Loading data to table sqoop_workspace.customers
- chgrp: changing ownership of 'hdfs://quickstart.cloudera:8020/user/hive/warehouse/sqoop_workspace.db/customers/part-m-00000': User does not belong to supergroup
- chgrp: changing ownership of 'hdfs://quickstart.cloudera:8020/user/hive/warehouse/sqoop_workspace.db/customers/part-m-00001': User does not belong to supergroup
- chgrp: changing ownership of 'hdfs://quickstart.cloudera:8020/user/hive/warehouse/sqoop_workspace.db/customers/part-m-00002': User does not belong to supergroup
- chgrp: changing ownership of 'hdfs://quickstart.cloudera:8020/user/hive/warehouse/sqoop_workspace.db/customers/part-m-00003': User does not belong to supergroup
- Table sqoop_workspace.customers stats: [numFiles=4, totalSize=77]
- OK
- Time taken: 1.399 seconds
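Before checking Hive itself, you can optionally confirm that the four part files reported above (numFiles=4) landed in the warehouse directory; the path is the one from the chgrp messages in the log:

- hdfs dfs -ls /user/hive/warehouse/sqoop_workspace.db/customers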
Finally, let's verify the output in Hive:
- hive> show databases;
- OK
- default
- sqoop_workspace
- Time taken: 0.034 seconds, Fetched: 2 row(s)
- hive> use sqoop_workspace;
- OK
- Time taken: 0.063 seconds
- hive> show tables;
- OK
- customers
- Time taken: 0.036 seconds, Fetched: 1 row(s)
- hive> show create table customers;
- OK
- CREATE TABLE `customers`(
-   `id` string,
-   `name` string)
- COMMENT 'Imported by sqoop on 2016/03/01 13:01:49'
- ROW FORMAT DELIMITED
-   FIELDS TERMINATED BY ','
-   LINES TERMINATED BY '\n'
- STORED AS INPUTFORMAT
-   'org.apache.hadoop.mapred.TextInputFormat'
- OUTPUTFORMAT
-   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
- LOCATION
-   'hdfs://quickstart.cloudera:8020/user/hive/warehouse/sqoop_workspace.db/customers'
- TBLPROPERTIES (
-   'COLUMN_STATS_ACCURATE'='true',
-   'numFiles'='4',
-   'totalSize'='77',
-   'transient_lastDdlTime'='1456866115')
- Time taken: 0.26 seconds, Fetched: 18 row(s)
hive> select * from customers;
OK
1 John
2 Kevin
19 Alex
3 Mark
4 Jenna
5 Robert
6 Zoya
7 Sam
8 George
9 Peter
Time taken: 1.123 seconds, Fetched: 10 row(s)
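As one last sanity check (this query is not part of the original output), a simple count should confirm that all rows arrived; it launches a small map-reduce job and should return 10, matching the "Retrieved 10 records" line in the Sqoop log:

hive> select count(*) from customers;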
That's it! Migrating data from MySQL to Hive really is that simple.