HBase + HDFS Standalone Setup
Environment: Ubuntu 13.04, hadoop-1.2.1 + hbase-0.94.11. Modify Ubuntu's /etc/hosts file as follows, to keep the hostname from being mapped to 127.0.1.1:
127.0.0.1 localhost
127.0.0.1 shallon-ThinkPad-X230
127.0.0.1 ubuntu.ubuntu-domain ubuntu
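Getting this mapping wrong is a classic cause of HBase daemons binding to the loopback alias 127.0.1.1 and then failing to find each other. As a quick sanity check, here is a small Python sketch (not part of the original setup) that mimics an /etc/hosts lookup against the entries above:

```python
# Sanity-check an /etc/hosts mapping: the machine's hostname should resolve
# to 127.0.0.1, not the Ubuntu default 127.0.1.1.
hosts_content = """\
127.0.0.1 localhost
127.0.0.1 shallon-ThinkPad-X230
127.0.0.1 ubuntu.ubuntu-domain ubuntu
"""

def resolve(hostname, hosts_text):
    """Return the first IP mapped to hostname, mimicking /etc/hosts lookup."""
    for line in hosts_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and hostname in fields[1:]:
            return fields[0]
    return None

assert resolve("shallon-ThinkPad-X230", hosts_content) == "127.0.0.1"
```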
1. Hadoop configuration
hadoop@shallon-ThinkPad-X230:~/hadoop-1.2.1/conf$ more core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://shallon-ThinkPad-X230:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>
</configuration>
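Note that fs.default.name must be the bare filesystem URI (no path component), and that Hadoop expands ${user.name} in hadoop.tmp.dir at runtime. To illustrate what the daemons will actually use, here is a small Python sketch (not Hadoop's own code) that parses the file and performs the substitution:

```python
import xml.etree.ElementTree as ET

# The core-site.xml from this article (fs.default.name without a path).
core_site = """<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://shallon-ThinkPad-X230:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadoop-${user.name}</value>
  </property>
</configuration>"""

def read_conf(xml_text, user="hadoop"):
    """Parse a Hadoop *-site.xml and expand the ${user.name} variable."""
    conf = {}
    for prop in ET.fromstring(xml_text).iter("property"):
        name = prop.findtext("name")
        value = prop.findtext("value", "")
        conf[name] = value.replace("${user.name}", user)
    return conf

conf = read_conf(core_site)
assert conf["hadoop.tmp.dir"] == "/home/hadoop/hadoop-hadoop"
```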
hadoop@shallon-ThinkPad-X230:~/hadoop-1.2.1/conf$ more masters
shallon-ThinkPad-X230
hadoop@shallon-ThinkPad-X230:~/hadoop-1.2.1/conf$ more slaves
shallon-ThinkPad-X230
hadoop@shallon-ThinkPad-X230:~/hadoop-1.2.1/conf$ more hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
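The article jumps straight to the running daemons; on a fresh install, the NameNode must be formatted once before it can start. A typical command sequence would look like the following (directory path assumed from the prompts above; this is a setup fragment, not from the original article):

```shell
cd ~/hadoop-1.2.1

# One-time format of the NameNode metadata (destroys any existing HDFS data!)
bin/hadoop namenode -format

# Start NameNode, DataNode and SecondaryNameNode on this single machine
bin/start-dfs.sh
```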
Start Hadoop's HDFS; jps shows the running daemons:
1493 NameNode
1780 DataNode
2226 SecondaryNameNode
Try accessing DFS:
hadoop@shallon-ThinkPad-X230:~/hadoop-1.2.1$ bin/hadoop dfs -ls /
Found 3 items
drwxr-xr-x - hadoop supergroup 0 2013-09-22 22:05 /hbase
drwxr-xr-x - hadoop supergroup 0 2013-09-22 15:54 /home
drwxr-xr-x - hadoop supergroup 0 2013-08-30 15:18 /user
2. HBase configuration
hadoop@shallon-ThinkPad-X230:~/hbase-0.94.11/conf$ vi hbase-site.xml
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://shallon-ThinkPad-X230:9000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
</configuration>
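If the scheme, host, or port in hbase.rootdir differs from fs.default.name, the HMaster fails on startup with a "Wrong FS" error. This consistency check can be sketched in Python (an illustration, not HBase's own validation code):

```python
from urllib.parse import urlparse

# Values taken from the two config files in this article.
fs_default_name = "hdfs://shallon-ThinkPad-X230:9000"
hbase_rootdir = "hdfs://shallon-ThinkPad-X230:9000/hbase"

def same_namenode(fs_uri, rootdir):
    """hbase.rootdir must use the exact scheme://host:port of fs.default.name."""
    a, b = urlparse(fs_uri), urlparse(rootdir)
    return (a.scheme, a.hostname, a.port) == (b.scheme, b.hostname, b.port)

assert same_namenode(fs_default_name, hbase_rootdir)
```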
The hdfs://shallon-ThinkPad-X230:9000 prefix of hbase.rootdir must match the HDFS root (fs.default.name) configured above; it points HBase at the locally configured HDFS.
hadoop@shallon-ThinkPad-X230:~/hbase-0.94.11/bin$ ./start-hbase.sh
hadoop@localhost's password:
localhost: starting zookeeper, logging to /home/hadoop/hbase-0.94.11/bin/../logs/hbase-hadoop-zookeeper-shallon-ThinkPad-X230.out
starting master, logging to /home/hadoop/hbase-0.94.11/bin/../logs/hbase-hadoop-master-shallon-ThinkPad-X230.out
hadoop@shallon-ThinkPad-X230's password:
shallon-ThinkPad-X230: starting regionserver, logging to /home/hadoop/hbase-0.94.11/bin/../logs/hbase-hadoop-regionserver-shallon-ThinkPad-X230.out
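The password prompts above indicate that passwordless SSH to the local machine is not set up, so start-hbase.sh asks for the password once per daemon it launches. The usual fix is the standard OpenSSH key setup (a setup fragment, not from the original article):

```shell
# Generate a key pair with an empty passphrase, if one does not exist yet
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Authorize the key for logins to this same machine
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```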
Check the HBase processes:
hadoop@shallon-ThinkPad-X230:~/hbase-0.94.11$ jps
1493 NameNode
1780 DataNode
2226 SecondaryNameNode
20273 Jps
14163 HMaster
14081 HQuorumPeer
14655 HRegionServer
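In a healthy single-node setup, all three HDFS daemons and all three HBase daemons should appear in the jps output. A small Python sketch (an illustration, not a standard tool) can check a captured listing for missing daemons:

```python
# jps output captured from the single-node setup above.
jps_output = """\
1493 NameNode
1780 DataNode
2226 SecondaryNameNode
20273 Jps
14163 HMaster
14081 HQuorumPeer
14655 HRegionServer
"""

def missing_daemons(jps_text):
    """Return the set of expected daemons absent from jps output."""
    expected = {"NameNode", "DataNode", "SecondaryNameNode",
                "HMaster", "HQuorumPeer", "HRegionServer"}
    running = {line.split()[1]
               for line in jps_text.splitlines() if line.split()}
    return expected - running

assert missing_daemons(jps_output) == set()
```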
Check the status of the HBase master (from the master web UI):
Master: localhost:60000
Attributes
Attribute Name | Value | Description
---|---|---
HBase Version | 0.94.11, r1513697 | HBase version and revision
HBase Compiled | Wed Aug 14 04:54:46 UTC 2013, jenkins | When HBase version was compiled and by whom
Hadoop Version | 1.0.4, r1393290 | Hadoop version and revision
Hadoop Compiled | Thu Oct 4 20:40:32 UTC 2012, hortonfo | When Hadoop version was compiled and by whom
HBase Root Directory | hdfs://shallon-ThinkPad-X230:9000/hbase | Location of HBase home directory
Zookeeper Quorum | localhost:2181 | Addresses of all registered ZK servers. For more, see zk dump.
HMaster Start Time | Mon Sep 23 10:06:53 CST 2013 | Date stamp of when this HMaster was started
HMaster Active Time | Mon Sep 23 10:06:53 CST 2013 | Date stamp of when this HMaster became active
Load average | 3 | Average number of regions per regionserver. Naive computation.
HBase Cluster ID | 4d409e24-108f-41bb-ad32-a49977445601 | Unique identifier generated for each HBase cluster
Coprocessors | [] | Coprocessors currently loaded by the master
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.94.11, r1513697, Wed Aug 14 04:54:46 UTC 2013
hbase(main):003:0> create 'test', 'cf'
0 row(s) in 1.2200 seconds
hbase(main):003:0> list 'test'
..
1 row(s) in 0.0550 seconds
hbase(main):004:0> put 'test', 'row1', 'cf:a', 'value1'
0 row(s) in 0.0560 seconds
hbase(main):005:0> put 'test', 'row2', 'cf:b', 'value2'
0 row(s) in 0.0370 seconds
hbase(main):006:0> put 'test', 'row3', 'cf:c', 'value3'
0 row(s) in 0.0450 seconds
hbase(main):001:0> scan 'test'
ROW COLUMN+CELL
row1 column=cf:a, timestamp=1379858801692, value=value1
row2 column=cf:b, timestamp=1379858810975, value=value2
row3 column=cf:c, timestamp=1379858822233, value=value3
3 row(s) in 1.3100 seconds
View the contents of the HBase root directory (from the NameNode web UI):
Contents of directory /hbase (some entry names did not survive extraction):
Name | Type | Size | Replication | Block Size | Modification Time | Permission | Owner | Group
---|---|---|---|---|---|---|---|---
| dir | | | | 2013-09-22 22:02 | rwxr-xr-x | hadoop | supergroup
| dir | | | | 2013-09-22 16:32 | rwxr-xr-x | hadoop | supergroup
| dir | | | | 2013-09-22 22:02 | rwxr-xr-x | hadoop | supergroup
.logs | dir | | | | 2013-09-23 10:06 | rwxr-xr-x | hadoop | supergroup
| dir | | | | 2013-09-23 10:07 | rwxr-xr-x | hadoop | supergroup
.tmp | dir | | | | 2013-09-23 10:06 | rwxr-xr-x | hadoop | supergroup
| file | 0.04 KB | 3 | 64 MB | 2013-09-22 16:32 | rw-r--r-- | hadoop | supergroup
| file | 0 KB | 3 | 64 MB | 2013-09-22 16:32 | rw-r--r-- | hadoop | supergroup
test | dir | | | | 2013-09-22 22:05 | rwxr-xr-x | hadoop | supergroup