
Hive with a MySQL metastore: fixing Hive startup errors, loading data, creating tables, and other basic commands

First, make sure the MySQL server is running:

[root@cdh1 local]# service mysqld status
mysqld is stopped
[root@cdh1 local]# service mysqld start
Starting mysqld:                                           [  OK  ]
[root@cdh1 local]# service mysqld status
mysqld (pid  16760) is running...
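Since Hive will depend on this MySQL instance every time it starts, it may be convenient to have mysqld come up automatically after a reboot. On a CentOS 6 style system with SysV init (which the service commands above suggest), a sketch of that would be:

chkconfig mysqld on        # enable autostart for the default runlevels
chkconfig --list mysqld    # verify which runlevels it is enabled for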
The freshly installed root account refuses the password, so run mysql_secure_installation to set one:

[root@cdh1 local]# mysql -u root -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
[root@cdh1 local]# mysql -u root -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
[root@cdh1 local]# /usr/bin/mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MySQL
SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MySQL to secure it, we'll need the current
password for the root user. If you've just installed MySQL, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
Enter current password for root (enter for none): [just press Enter]
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MySQL
root user without the proper authorisation.

Set root password? [Y/n] y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!

By default, a MySQL installation has an anonymous user, allowing anyone
to log into MySQL without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] n
 ... skipping.

Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] n
 ... skipping.

By default, MySQL comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] n
 ... skipping.

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MySQL
installation should now be secure.

Thanks for using MySQL!
Now log in as root, create the metastore database and a dedicated Hive user, and grant it privileges:

[root@cdh1 local]# mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 5.1.47 Source distribution
Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.
This software comes with ABSOLUTELY NO WARRANTY. This is free software,
and you are welcome to modify and redistribute it under the GPL v2 license
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show tables;
ERROR 1046 (3D000): No database selected
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| test               |
+--------------------+
3 rows in set (0.00 sec)

mysql> create database hivedb;
Query OK, 1 row affected (0.00 sec)

mysql> create user 'hiveuser' identified by 'hivepwd';
Query OK, 0 rows affected (0.00 sec)

mysql> select user();
+----------------+
| user()         |
+----------------+
| root@localhost |
+----------------+
1 row in set (0.00 sec)

mysql> grant all privileges on *.* to hiveuser@"localhost" identified by "hivepwd" with grant option;
Query OK, 0 rows affected (0.00 sec)

mysql> exit;
Bye

Verify the new account works and can see the empty hivedb database:

[root@cdh1 local]# mysql -u hiveuser -p
Enter password:
Welcome to the MySQL monitor. ...
mysql> select user();
+--------------------+
| user()             |
+--------------------+
| hiveuser@localhost |
+--------------------+
1 row in set (0.00 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hivedb             |
| mysql              |
| test               |
+--------------------+
4 rows in set (0.00 sec)

mysql> use hivedb;
Database changed
mysql> show tables;
Empty set (0.00 sec)

mysql> exit;
Bye
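With the hivedb database and hiveuser account in place, Hive has to be pointed at this MySQL instance. The post never shows the relevant part of hive-site.xml, so here is a minimal sketch of the four JDBC metastore properties, assuming the database name, user, and password created above; the MySQL JDBC driver jar (mysql-connector-java-*.jar) must also be copied into $HIVE_HOME/lib, or Hive will fail with "The specified datastore driver ... was not found in the CLASSPATH":

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hivedb?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hiveuser</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hivepwd</value>
</property>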
Next, try to start Hive. It fails even though HADOOP_HOME appears to be set:

[root@cdh1 local]# hive
Cannot find hadoop installation: $HADOOP_HOME or $HADOOP_PREFIX must be set or hadoop must be in the path
[root@cdh1 local]# echo $HADOOP_HOME
/user/local/hadoop-2.6.0
[root@cdh1 local]# pwd
/user/local
[root@cdh1 local]# cat /etc/profile
# /etc/profile

# System wide environment and startup programs, for login setup
# Functions and aliases go in /etc/bashrc

# It's NOT a good idea to change this file unless you know what you
# are doing. A much better way is to create a custom.sh shell script in
# /etc/profile.d/ to make custom changes to the environment. This will
# prevent the need for merging in future updates.

pathmunge () {
    case ":${PATH}:" in
    *:"$1":*)
        ;;
    *)
        if [ "$2" = "after" ] ; then
            PATH=$PATH:$1
        else
            PATH=$1:$PATH
        fi
    esac
}

if [ -x /usr/bin/id ]; then
    if [ -z "$EUID" ]; then
        # ksh workaround
        EUID=`id -u`
        UID=`id -ru`
    fi
    USER="`id -un`"
    LOGNAME=$USER
    MAIL="/var/spool/mail/$USER"
fi

# Path manipulation
if [ "$EUID" = "0" ]; then
    pathmunge /sbin
    pathmunge /usr/sbin
    pathmunge /usr/local/sbin
else
    pathmunge /usr/local/sbin after
    pathmunge /usr/sbin after
    pathmunge /sbin after
fi

HOSTNAME=`/bin/hostname 2>/dev/null`
HISTSIZE=1000
if [ "$HISTCONTROL" = "ignorespace" ] ; then
    export HISTCONTROL=ignoreboth
else
    export HISTCONTROL=ignoredups
fi
export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL

for i in /etc/profile.d/*.sh ; do
    if [ -r "$i" ]; then
        if [ "$PS1" ]; then
            . $i
        else
            . $i >/dev/null 2>&1
        fi
    fi
done
unset i
unset pathmunge

#Hadoop Env
export HADOOP_HOME_WARN_SUPPRESS=1
export JAVA_HOME=/user/local/jdk
#export HADOOP_HOME=/user/local/hadoop
#export PATH=$JAVA_HOME/bin:$HADOOP_HOME:/bin:$PATH
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

#Hadoop Env
export HADOOP_HOME_WARN_SUPPRESS=1
export JAVA_HOME=/user/local/jdk
export HADOOP_HOME=/user/local/hadoop-2.6.0
export HIVE_HOME=/usr/local/hive
export PATH=$JAVA_HOME/bin:$HADOOP_HOME:/bin:$PATH
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
#export TOMCAT_HOME=/root/solr/apache-tomcat-6.0.37
#export JRE_HOME=$JAVA_HOME/jre
#export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$PATH
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$PATH

#FLUME
#export FLUME_HOME=/usr/local/hadoop/flume/apache-flume-1.5.0-bin
#export FLUME_CONF_DIR=$FLUME_HOME/conf
#export PATH=$PATH:$FLUME_HOME/bin

#mvn
export MAVEN_HOME=/user/local/apache-maven-3.3.9
export PATH=$PATH:$MAVEN_HOME/bin

#scala
export SCALA_HOME=/user/local/scala-2.9.3
export PATH=$PATH:$SCALA_HOME/bin

#spark
export SPARK_HOME=/user/local/spark-1.4.0-bin-hadoop2.6
export PATH=$PATH:$SPARK_HOME/bin

#hbase
export HBASE_HOME=/user/local/hbase-0.98.20-hadoop2
export PATH=$PATH:$HBASE_HOME/bin

#zk
export ZOOKEEPER_HOME=/user/local/zookeeper-3.4.6
export PATH=$PATH:$ZOOKEEPER_HOME/bin

#storm
export STORM_HOME=/user/local/apache-storm-0.9.2-incubating
export PATH=$PATH:$STORM_HOME/bin

#kafka
export KAFKA_HOME=/user/local/kafka_2.9.2-0.8.1.1
export PATH=$PATH:$KAFKA_HOME/bin

[root@cdh1 local]# source /etc/profile

Even after re-sourcing the profile, Hive still fails to start:

[root@cdh1 local]# hive
Cannot find hadoop installation: $HADOOP_HOME or $HADOOP_PREFIX must be set or hadoop must be in the path
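The exact cause is hard to pin down from the transcript alone, but the profile above is suspect: the Hadoop block appears twice, one copy expands $HADOOP_HOME before it is ever set, and the active copy exports PATH=$JAVA_HOME/bin:$HADOOP_HOME:/bin:$PATH, where the stray colon puts the Hadoop install directory and /bin on PATH instead of $HADOOP_HOME/bin. Depending on which lines win, the hadoop launcher may never be reachable from Hive's startup script. A single de-duplicated block, sketched here with the same install paths as above, avoids the ambiguity (the author's eventual fix, setting HADOOP_HOME in hive-env.sh, sidesteps the login-shell environment entirely):

#Hadoop Env -- one clean block instead of two conflicting ones
export HADOOP_HOME_WARN_SUPPRESS=1
export JAVA_HOME=/user/local/jdk
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
export HADOOP_HOME=/user/local/hadoop-2.6.0
export HIVE_HOME=/usr/local/hive
# $HADOOP_HOME/bin (not "$HADOOP_HOME:/bin") is what puts the hadoop binary on PATH
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$PATH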
Inspect the Hive installation and its configuration directory:

[root@cdh1 local]# cd $HIVE_HOME
[root@cdh1 hive]# pwd
/usr/local/hive
[root@cdh1 hive]# ll
total 232
drwxr-xr-x. 4 1106 592   4096 Mar 10 18:27 bin
drwxr-xr-x. 2 1106 592   4096 Jun 11 09:30 conf
drwxr-xr-x. 6 1106 592   4096 Jul 12  2014 docs
drwxr-xr-x. 4 1106 592   4096 Jul 12  2014 examples
drwxr-xr-x. 7 1106 592   4096 Jul 12  2014 hcatalog
drwxr-xr-x. 4 1106 592   4096 Mar 10 17:57 lib
-rw-r--r--. 1 1106 592  23828 Jul 12  2014 LICENSE
-rw-r--r--. 1 1106 592   1559 Jul 12  2014 NOTICE
-rw-r--r--. 1 1106 592   3838 Jul 12  2014 README.txt
-rw-r--r--. 1 1106 592 166452 Jul 12  2014 RELEASE_NOTES.txt
drwxr-xr-x. 3 1106 592   4096 Jul 12  2014 scripts
drwxr-xr-x. 2 root root  4096 Jul  4 10:32 yc_test
[root@cdh1 hive]# cd conf/
[root@cdh1 conf]# ll
total 200
-rw-r--r--. 1 1106 592 83116 Jul 12  2014 hive-default.xml
-rw-r--r--. 1 1106 592  2378 Jul 12  2014 hive-env.sh
-rw-r--r--. 1 1106 592  2651 Jul 12  2014 hive-exec-log4j.properties
-rw-r--r--. 1 1106 592  3494 Jul 12  2014 hive-log4j.properties
-rw-r--r--. 1 yc   yc  82038 Mar 10 18:13 hive-site.xml
-rw-r--r--. 1 root root 11287 Jun 11 09:30 tore.local
[root@cdh1 conf]# less hive-env.sh
[root@cdh1 conf]# cat hive-env.sh
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Set Hive and Hadoop environment variables here. These variables can be used
# to control the execution of Hive. It should be used by admins to configure
# the Hive installation (so that users do not have to set environment variables
# or set command line parameters to get correct behavior).
#
# The hive service being invoked (CLI/HWI etc.) is available via the environment
# variable SERVICE

# Hive Client memory usage can be an issue if a large number of clients
# are running at the same time. The flags below have been useful in
# reducing memory usage:
#
# if [ "$SERVICE" = "cli" ]; then
#   if [ -z "$DEBUG" ]; then
#     export HADOOP_OPTS="$HADOOP_OPTS -XX:NewRatio=12 -Xms10m -XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:+UseParNewGC -XX:-UseGCOverheadLimit"
#   else
#     export HADOOP_OPTS="$HADOOP_OPTS -XX:NewRatio=12 -Xms10m -XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:-UseGCOverheadLimit"
#   fi
# fi

# The heap size of the jvm started by the hive shell script can be controlled via:
#
# export HADOOP_HEAPSIZE=1024
#
# Larger heap size may be required when running queries over large number of files or partitions.
# By default hive shell scripts use a heap size of 256 (MB). Larger heap size would also be
# appropriate for hive server (hwi etc).

# Set HADOOP_HOME to point to a specific hadoop install directory
# HADOOP_HOME=${bin}/../../hadoop

# Hive Configuration Directory can be controlled by:
# export HIVE_CONF_DIR=

# Folder containing extra libraries required for hive compilation/execution can be controlled by:
# export HIVE_AUX_JARS_PATH=

Edit hive-env.sh and set HADOOP_HOME explicitly. After the edit, the relevant part of the file (the license header and the other comments are unchanged) reads:

[root@cdh1 conf]# vim hive-env.sh

# Set HADOOP_HOME to point to a specific hadoop install directory
# HADOOP_HOME=${bin}/../../hadoop
# HADOOP_HOME must be configured here:
export HADOOP_HOME=/user/local/hadoop-2.6.0

With HADOOP_HOME set in hive-env.sh, Hive now starts:

[root@cdh1 conf]# hive
16/07/19 21:11:21 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
16/07/19 21:11:21 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
16/07/19 21:11:21 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
16/07/19 21:11:21 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
16/07/19 21:11:21 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
16/07/19 21:11:21 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
16/07/19 21:11:21 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
16/07/19 21:11:22 WARN conf.HiveConf: DEPRECATED: hive.metastore.ds.retry.* no longer has any effect.  Use hive.hmshandler.retry.* instead

Logging initialized using configuration in file:/usr/local/hive/conf/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/user/local/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hive/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
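At this point the metastore schema should have been created inside hivedb. A quick way to confirm that Hive is really talking to MySQL (a hedged check; the exact table list varies by Hive version) is to look for the metastore tables that were absent earlier:

[root@cdh1 conf]# mysql -u hiveuser -p
mysql> use hivedb;
mysql> show tables;    -- expect metastore tables such as DBS, TBLS, COLUMNS_V2, SDS, SERDES
mysql> select NAME, DB_LOCATION_URI from DBS;    -- the 'default' database and its HDFS location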
Now the basic commands: create tables, load data, and query. Note the comma delimiter declared for ljh_emp:

hive> create table hivetest(id int, name string);
OK
Time taken: 8.987 seconds
hive> show tables;
OK
hivetest
Time taken: 0.224 seconds, Fetched: 1 row(s)
hive> set hive.cli.print.current.db=true;
hive (default)> show tables;
OK
hivetest
Time taken: 0.047 seconds, Fetched: 1 row(s)
hive (default)> select * from hivetest;
OK
Time taken: 0.664 seconds
hive (default)> create table if not exists ljh_emp(
              > name string,
              > salary float,
              > gender string)
              > comment 'basic information of a employee'
              > row format delimited fields terminated by ',';
OK
Time taken: 0.096 seconds
hive (default)> show tables;
OK
hivetest
ljh_emp
Time taken: 0.024 seconds, Fetched: 2 row(s)
hive (default)> select * from ljh_emp;
OK
Time taken: 0.062 seconds
hive (default)> load data local inpath '/usr/local/hive/yc_test/test' into table ljh_emp;
Copying data from file:/usr/local/hive/yc_test/test
Copying file: file:/usr/local/hive/yc_test/test
Loading data to table default.ljh_emp
Table default.ljh_emp stats: [num_partitions: 0, num_files: 1, num_rows: 0, total_size: 51, raw_data_size: 0]
OK
Time taken: 13.199 seconds
hive (default)> select * from ljh_emp;
OK
ljh     25000.0 male
jediael 25000.0 male
llq     15000.0 female
Time taken: 0.144 seconds, Fetched: 3 row(s)
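For the load to line up with the table definition, /usr/local/hive/yc_test/test has to be a plain comma-delimited text file. Its exact contents are not shown in the post, but from the query output (and the 51-byte total_size in the load stats) it is presumably:

ljh,25000,male
jediael,25000,male
llq,15000,female

LOAD DATA LOCAL INPATH simply copies the file into the table's warehouse directory on HDFS; nothing is parsed at load time, so a delimiter mismatch only shows up later as NULL columns in query results.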
hive (default)> create table if not exists ljh_emp_test(
              > name string,
              > salary float,
              > gender string)
              > comment 'basic information of a employee';
OK
Time taken: 0.181 seconds
hive (default)> show tables;
OK
hivetest
ljh_emp
ljh_emp_test
Time taken: 0.025 seconds, Fetched: 3 row(s)

While typing the next statement the CLI crashed with a jline exception. The byte -95 is 0xA1, the first byte of full-width GBK punctuation, so this was almost certainly caused by accidentally typing a Chinese full-width comma or quote instead of the ASCII character:

hive (default)> create table if not exists ljh_emp(
              > name string,
              > salary float,
              > gender string)
              > comment 'basic information of a employee'
              > row format delimited fields terminated by ',
Exception in thread "main" java.io.IOException: invalid UTF-8 first byte: -95
        at jline.UnixTerminal$ReplayPrefixOneCharInputStream.setInputUTF8(UnixTerminal.java:403)
        at jline.UnixTerminal$ReplayPrefixOneCharInputStream.setInput(UnixTerminal.java:384)
        at jline.UnixTerminal.readVirtualKey(UnixTerminal.java:172)
        at jline.ConsoleReader.readVirtualKey(ConsoleReader.java:1453)
        at jline.ConsoleReader.readBinding(ConsoleReader.java:654)
        at jline.ConsoleReader.readLine(ConsoleReader.java:494)
        at jline.ConsoleReader.readLine(ConsoleReader.java:448)
        at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:784)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:623)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Restarting the CLI shows the tables survived the crash, since the metastore state lives in MySQL rather than in the CLI process:

[root@cdh1 conf]# hive
16/07/19 21:28:23 INFO Configuration.deprecation: ... (same deprecation warnings as above)
Logging initialized using configuration in file:/usr/local/hive/conf/hive-log4j.properties
SLF4J: ... (same multiple-bindings warnings as above)
hive> show tables;
OK
hivetest
ljh_emp
ljh_emp_test
Time taken: 3.827 seconds, Fetched: 3 row(s)
hive> set hive.cli.print.current.db=true;
hive (default)> create table if not exists ljh_emp( ... );   -- no-op, the table already exists
OK
Time taken: 0.139 seconds
hive (default)> create table if not exists ljh_emp2(
              > name string,
              > salary float,
              > gender string)
              > comment 'basic information of a employee'
              > row format delimited fields terminated by ',';
OK
Time taken: 0.777 seconds
hive (default)> show tables;
OK
hivetest
ljh_emp
ljh_emp2
ljh_emp_test
Time taken: 0.041 seconds, Fetched: 4 row(s)
hive (default)> select * from ljh_emp;
OK
ljh     25000.0 male
jediael 25000.0 male
llq     15000.0 female
Time taken: 0.436 seconds, Fetched: 3 row(s)
hive (default)> select * from ljh_emp where 1=1;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1468930192790_0001, Tracking URL = http://cdh1:8088/proxy/application_1468930192790_0001/
Kill Command = /user/local/hadoop-2.6.0/bin/hadoop job -kill job_1468930192790_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2016-07-19 21:32:07,053 Stage-1 map = 0%, reduce = 0%
2016-07-19 21:32:15,673 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.74 sec
2016-07-19 21:32:16,728 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.74 sec
MapReduce Total cumulative CPU time: 1 seconds 740 msec
Ended Job = job_1468930192790_0001
MapReduce Jobs Launched:
Job 0: Map: 1   Cumulative CPU: 1.74 sec   HDFS Read: 254 HDFS Write: 57 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 740 msec
OK
ljh     25000.0 male
jediael 25000.0 male
llq     15000.0 female
Time taken: 24.427 seconds, Fetched: 3 row(s)
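The contrast above is worth noting: a bare select * is answered by a simple fetch task (Hive just reads the table's files directly), while adding any predicate, even where 1=1, makes this Hive version plan a full MapReduce job. If the round trip through YARN is too slow for small lookups, the hive.fetch.task.conversion setting can be raised so that simple filtered scans are also fetched directly; a sketch, not tuning advice, and behavior varies by Hive version:

set hive.fetch.task.conversion=more;
select * from ljh_emp where gender = 'male';   -- with 'more', this can run as a fetch task, no MapReduce job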
hive (default)> select * from ljh_emp;
OK
ljh     25000.0 male
jediael 25000.0 male
llq     15000.0 female
Time taken: 0.077 seconds, Fetched: 3 row(s)
hive (default)> select * from ljh_emp where 1=1;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1468930192790_0002, Tracking URL = http://cdh1:8088/proxy/application_1468930192790_0002/
Kill Command = /user/local/hadoop-2.6.0/bin/hadoop job -kill job_1468930192790_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2016-07-19 21:51:35,688 Stage-1 map = 0%, reduce = 0%
2016-07-19 21:51:43,365 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 2.03 sec
2016-07-19 21:51:44,412 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 2.03 sec
MapReduce Total cumulative CPU time: 2 seconds 30 msec
Ended Job = job_1468930192790_0002
MapReduce Jobs Launched:
Job 0: Map: 1   Cumulative CPU: 2.03 sec   HDFS Read: 254 HDFS Write: 57 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 30 msec
OK
ljh     25000.0 male
jediael 25000.0 male
llq     15000.0 female
Time taken: 18.777 seconds, Fetched: 3 row(s)

Running the same query a third time launched yet another job (job_1468930192790_0003) and returned the same three rows in about 19 seconds; each run pays the full job-submission cost, since nothing is cached between runs.
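As a final check that the loaded data really lives on HDFS: with the default hive.metastore.warehouse.dir of /user/hive/warehouse (assuming it was not overridden in hive-site.xml), the file loaded into ljh_emp should be visible as:

hadoop fs -ls /user/hive/warehouse/ljh_emp
hadoop fs -cat /user/hive/warehouse/ljh_emp/test    # the raw comma-delimited file, copied as-is by LOAD DATA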
