HBase 2.2.4 Deployment and Client Usage
1. Download hbase-2.2.4-bin.tar.gz and extract it to /home/hbase-2.2.4
2. Configure /home/hbase-2.2.4/conf/hbase-env.sh
# The java implementation to use. Java 1.8+ required.
export JAVA_HOME=/usr/java/jdk-11.0.4/

# Extra Java CLASSPATH elements. Optional.
export HBASE_CLASSPATH=/home/hbase-2.2.4/conf
3. Configure /home/hbase-2.2.4/conf/hbase-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///home/hbase-2.2.4/hdfs</value>
    <description>
      Create the hdfs directory manually. hbase.rootdir is the directory shared
      by the RegionServers, where HBase persists its data; by default it is
      written under /tmp, so if this setting is left unchanged, the data is lost
      whenever HBase restarts. In production this is normally an HDFS path: if
      the NameNode runs on host namenode.example.org on port 9090, set it to
      hdfs://namenode.example.org:9090/hbase
    </description>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>false</value>
    <description>
      Deployment mode of HBase: false means standalone or pseudo-distributed
      mode, true means fully distributed mode.
    </description>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hbase-2.2.4/zookeeperData</value>
    <description>
      Create the zookeeperData directory manually. This is where ZooKeeper
      stores its metadata; if unset it defaults to /tmp, and the data is lost
      on restart.
    </description>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <!--
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>39.108.182.20</value>
    <description>
      Host addresses of the ZooKeeper ensemble. example1, example2, example3
      would be the hosts running the data nodes; the default ZooKeeper client
      port is 2181.
      Distributed mode: edit conf/regionservers, which lists the hostnames of
      all region servers; after configuring, sync these files to the other
      nodes in the cluster.
    </description>
  </property>
  -->
  <property>
    <name>hbase.master.info.port</name>
    <value>60010</value>
  </property>
</configuration>
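As an aside (not part of the original setup), the file above follows the standard Hadoop configuration format: a flat list of <property> name/value pairs. The stdlib-only sketch below parses such a file and looks up a property, which can be handy for sanity-checking the config before starting HBase; the class name and the sample fragment are mine:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class HbaseSiteReader {

    /** Parse a Hadoop-style configuration XML into a name -> value map. */
    public static Map<String, String> parse(InputStream in) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(in);
        Map<String, String> props = new LinkedHashMap<>();
        NodeList list = doc.getElementsByTagName("property");
        for (int i = 0; i < list.getLength(); i++) {
            Element p = (Element) list.item(i);
            String name = p.getElementsByTagName("name").item(0).getTextContent().trim();
            String value = p.getElementsByTagName("value").item(0).getTextContent().trim();
            props.put(name, value);
        }
        return props;
    }

    public static void main(String[] args) throws Exception {
        // A minimal fragment of the hbase-site.xml shown above.
        String xml = "<configuration>"
                + "<property><name>hbase.rootdir</name>"
                + "<value>file:///home/hbase-2.2.4/hdfs</value></property>"
                + "<property><name>hbase.zookeeper.property.clientPort</name>"
                + "<value>2181</value></property>"
                + "</configuration>";
        Map<String, String> props = parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        System.out.println(props.get("hbase.rootdir"));  // file:///home/hbase-2.2.4/hdfs
    }
}
```

XML comments (like the disabled block above) are skipped automatically, since only real <property> elements are visited.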
4. Configure /home/hbase-2.2.4/conf/regionservers
localhost
39.108.***.***   (this machine's IP)
5. Starting and stopping the HBase service
/home/hbase-2.2.4/bin# ./start-hbase.sh
Check whether it started: # jps
If HMaster appears in the output, startup succeeded.
/home/hbase-2.2.4/bin# ./stop-hbase.sh
6. Open the relevant ports: 2181 (ZooKeeper), 60010 (web UI), 16000 (HBase Master RPC), 16020 (RegionServer RPC)
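Not part of the original text, but a quick way to verify from the client machine that these ports are actually reachable is a plain TCP connect with a timeout. The class name is mine; pass the server IP as the first argument (it defaults to localhost):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {

    /** Return true if a TCP connection to host:port succeeds within timeoutMs. */
    public static boolean isOpen(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            // Connection refused, timed out, or host unresolvable.
            return false;
        }
    }

    public static void main(String[] args) {
        String host = args.length > 0 ? args[0] : "localhost";
        // The ports opened in step 6 above.
        int[] ports = { 2181, 60010, 16000, 16020 };
        for (int p : ports) {
            System.out.println(host + ":" + p + " open=" + isOpen(host, p, 2000));
        }
    }
}
```

If 2181 is closed, the Java client's connection attempt in step 7 will hang on ZooKeeper rather than fail immediately, so this check can save some debugging time.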
Visit http://39.108.***.***:60010/master-status
7. Connecting from Windows with the hbase-client-2.2.4 Java client
7.1 Download hadoop-2.10.0.tar.gz and extract it to D:\server\hadoop-2.10.0
If WinRAR reports an error while extracting, open a DOS window and run: start winrar x -y hadoop-2.10.0.tar.gz
7.2 Configure the system environment variables
Create a new variable, name: HADOOP_HOME, value: D:\server\hadoop-2.10.0
Append %HADOOP_HOME%\bin to Path
7.3 Download winutils.exe and hadoop.dll and place them in D:\server\hadoop-2.10.0\bin
Download from: https://github.com/cdarlint/winutils/tree/master/hadoop-2.9.2/bin
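On Windows, a missing HADOOP_HOME or winutils.exe is a very common cause of client startup failures. As a convenience (a sketch of my own, not part of the original steps), the snippet below checks both prerequisites from 7.2 and 7.3:

```java
import java.io.File;

public class HadoopHomeCheck {

    /** Returns a description of the problem found, or null if things look usable. */
    public static String check(String hadoopHome) {
        if (hadoopHome == null || hadoopHome.isEmpty()) {
            return "HADOOP_HOME is not set";
        }
        // winutils.exe must sit in %HADOOP_HOME%\bin (step 7.3 above).
        File winutils = new File(new File(hadoopHome, "bin"), "winutils.exe");
        if (!winutils.isFile()) {
            return winutils.getPath() + " not found";
        }
        return null;
    }

    public static void main(String[] args) {
        String problem = check(System.getenv("HADOOP_HOME"));
        System.out.println(problem == null ? "HADOOP_HOME looks OK" : problem);
    }
}
```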
7.4 Configure the C:\Windows\System32\drivers\etc\hosts file
Add a mapping from each Region Server's ServerName (shown at http://39.108.***.***:60010/master-status) to the server's IP address.
Example: 39.108.***.*** iZwz974yt1dail4ihlqh6fZ
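This mapping matters because the master hands the client the RegionServer's hostname (the ServerName), which the client must then resolve itself; if resolution fails, reads and scans fail or hang. A small stdlib-only sketch (mine, not from the original) to verify the mapping; the default hostname is just the example above and should be replaced with your own ServerName:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostsCheck {

    /** Resolve a hostname; returns the IP string, or null if resolution fails. */
    public static String resolve(String hostname) {
        try {
            return InetAddress.getByName(hostname).getHostAddress();
        } catch (UnknownHostException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        // The ServerName below is the example from the text; replace it with
        // the one shown on your master-status page.
        String name = args.length > 0 ? args[0] : "iZwz974yt1dail4ihlqh6fZ";
        String ip = resolve(name);
        System.out.println(ip == null
                ? name + " does not resolve - check the hosts file"
                : name + " -> " + ip);
    }
}
```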
7.5 Jars for the Java project
All jars under /home/hbase-2.2.4/lib, plus htrace-core4-4.2.0-incubating.jar (download from https://www.mvnjar.com/org.apache.htrace/htrace-core4/4.2.0-incubating/detail.html)
7.6 Java code example:
package test;

import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.logging.Logger;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.CompareOperator;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.ColumnPaginationFilter;
import org.apache.hadoop.hbase.filter.ColumnPrefixFilter;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.MultipleColumnPrefixFilter;
import org.apache.hadoop.hbase.filter.PageFilter;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.filter.RandomRowFilter;
import org.apache.hadoop.hbase.filter.RowFilter;
import org.apache.hadoop.hbase.filter.SkipFilter;
import org.apache.hadoop.hbase.filter.SubstringComparator;
import org.apache.hadoop.hbase.filter.TimestampsFilter;
import org.apache.hadoop.hbase.filter.ValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

/**
 * @author 李小家
 */
public class HbaseClient {

    private static Logger logger = Logger.getLogger(HbaseClient.class.getName());
    private static Connection conn = null;

    /** Create the connection once, when the class is loaded. */
    static {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        conf.set("hbase.zookeeper.quorum", "39.108.***.***");
        try {
            conn = ConnectionFactory.createConnection(conf);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /**
     * Create a table.
     *
     * @param tableName table name
     * @param familys   column families
     */
    public void createTable(String tableName, String... familys) {
        try {
            Admin admin = conn.getAdmin();
            TableName tname = TableName.valueOf(tableName);
            if (admin.tableExists(tname)) {
                logger.warning("Table " + tableName + " already exists and cannot be created again.");
            } else {
                TableDescriptorBuilder tdesc = TableDescriptorBuilder.newBuilder(tname);
                for (String family : familys) {
                    ColumnFamilyDescriptor cfd = ColumnFamilyDescriptorBuilder.of(family);
                    tdesc.setColumnFamily(cfd);
                }
                TableDescriptor desc = tdesc.build();
                admin.createTable(desc);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /** Create several tables, with one list of column families per table. */
    public void createTables(String[] tableNames, List<List<String>> familys) {
        try {
            Admin admin = conn.getAdmin();
            if (tableNames.length == familys.size()) {
                for (int i = 0; i < tableNames.length; i++) {
                    TableName tname = TableName.valueOf(tableNames[i]);
                    if (admin.tableExists(tname)) {
                        logger.warning("Table " + tableNames[i] + " already exists and cannot be created again.");
                    } else {
                        TableDescriptorBuilder tdesc = TableDescriptorBuilder.newBuilder(tname);
                        for (String family : familys.get(i)) {
                            ColumnFamilyDescriptor cfd = ColumnFamilyDescriptorBuilder.of(family);
                            tdesc.setColumnFamily(cfd);
                        }
                        TableDescriptor desc = tdesc.build();
                        admin.createTable(desc);
                    }
                }
            } else {
                logger.warning("Every table must have at least one column family.");
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /** Delete a table, disabling it first. */
    public void deleteTable(String tableName) {
        try {
            Admin admin = conn.getAdmin();
            TableName tName = TableName.valueOf(tableName);
            if (admin.tableExists(tName)) {
                admin.disableTable(tName);
                admin.deleteTable(tName);
                logger.info("Table " + tableName + " deleted.");
            } else {
                logger.warning("Table " + tableName + " to delete does not exist.");
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public void deleteTables(String... tableNames) {
        try {
            Admin admin = conn.getAdmin();
            for (String tableName : tableNames) {
                TableName tName = TableName.valueOf(tableName);
                admin.disableTable(tName);
                admin.deleteTable(tName);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public void deleteFamily(String tableName, String family) {
        try {
            Admin admin = conn.getAdmin();
            admin.deleteColumnFamily(TableName.valueOf(tableName), Bytes.toBytes(family));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public void addFamily(String tableName, String family) {
        try {
            Admin admin = conn.getAdmin();
            ColumnFamilyDescriptor columnFamily =
                    ColumnFamilyDescriptorBuilder.newBuilder(family.getBytes()).build();
            admin.addColumnFamily(TableName.valueOf(tableName), columnFamily);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /** Insert a single cell. */
    public void addRow(String tableName, String rowKey, String family, String qualifier, String value) {
        try {
            Table table = conn.getTable(TableName.valueOf(tableName));
            // Create a Put for the row key.
            Put put = new Put(Bytes.toBytes(rowKey));
            // Set column family, qualifier and value on the Put.
            put.addColumn(Bytes.toBytes(family), Bytes.toBytes(qualifier), Bytes.toBytes(value));
            // Insert the data; put(List<Put>) can be used for batch inserts.
            table.put(put);
            table.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /** Insert one row; params maps qualifier to value, with "row" holding the row key. */
    public void addrow(String tname, String family, Map<String, Object> params) {
        try {
            Table table = conn.getTable(TableName.valueOf(tname));
            Put put = new Put(params.get("row").toString().getBytes());
            for (Map.Entry<String, Object> m : params.entrySet()) {
                if (m.getKey().equals("row")) {
                    continue;
                }
                put.addColumn(family.getBytes(), m.getKey().getBytes(),
                        m.getValue().toString().getBytes());
            }
            table.put(put);
            table.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /** Batch insert; the outer map key is the row key, and "family" names the column family. */
    public void addrows(String tname, Map<String, Map<String, Object>> params) {
        try {
            Table table = conn.getTable(TableName.valueOf(tname));
            List<Put> listput = new ArrayList<Put>();
            for (Map.Entry<String, Map<String, Object>> map : params.entrySet()) {
                Put put = new Put(map.getKey().getBytes());
                String family = map.getValue().get("family").toString();
                for (Map.Entry<String, Object> m : map.getValue().entrySet()) {
                    // Skip the metadata keys so they are not written as columns.
                    if (m.getKey().equals("row") || m.getKey().equals("family")) {
                        continue;
                    }
                    put.addColumn(family.getBytes(), m.getKey().getBytes(),
                            m.getValue().toString().getBytes());
                }
                listput.add(put);
            }
            table.put(listput);
            table.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /** Delete a row, or only specific cells when params (family -> qualifier) is given. */
    public boolean deleteRow(String tname, String row, Map<String, Object> params) {
        TableName tableName = TableName.valueOf(tname);
        try {
            Table table = conn.getTable(tableName);
            Delete delete = new Delete(row.getBytes());
            if (params != null) {
                for (Map.Entry<String, Object> m : params.entrySet()) {
                    delete.addColumn(m.getKey().getBytes(), m.getValue().toString().getBytes());
                }
            }
            table.delete(delete);
            table.close();
            return true;
        } catch (IOException e) {
            e.printStackTrace();
        }
        return false;
    }

    public void deleteRows(String tableName, String[] rows) {
        try {
            Table table = conn.getTable(TableName.valueOf(tableName));
            List<Delete> list = new ArrayList<Delete>();
            for (String row : rows) {
                Delete delete = new Delete(Bytes.toBytes(row));
                list.add(delete);
            }
            table.delete(list);
            table.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public void deleteRows(String tname, Map<String, Object> params, String... rows) {
        try {
            Table table = conn.getTable(TableName.valueOf(tname));
            List<Delete> deletes = new ArrayList<Delete>();
            for (String row : rows) {
                Delete delete = new Delete(row.getBytes());
                if (params != null) {
                    for (Map.Entry<String, Object> m : params.entrySet()) {
                        delete.addColumn(m.getKey().getBytes(), m.getValue().toString().getBytes());
                    }
                }
                deletes.add(delete);
            }
            table.delete(deletes);
            table.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /** Get a single row; note only the last cell survives in the returned map. */
    public Map<String, Object> getRow(String tableName, String rowKey) {
        Map<String, Object> data = new HashMap<String, Object>();
        try {
            Table table = conn.getTable(TableName.valueOf(tableName));
            // Create a Get for the row key.
            Get get = new Get(Bytes.toBytes(rowKey));
            Result result = table.get(get);
            if (!get.isCheckExistenceOnly()) {
                for (Cell cell : result.rawCells()) {
                    data.put("row", new String(CellUtil.cloneRow(cell)));
                    data.put("family", new String(CellUtil.cloneFamily(cell)));
                    data.put("qualifier", new String(CellUtil.cloneQualifier(cell)));
                    data.put("value", new String(CellUtil.cloneValue(cell)));
                }
            }
            table.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return data;
    }

    /** Scan every column family of a table and return all cells. */
    public List<Map<String, Object>> getAllData(String tname) {
        List<Map<String, Object>> list = new ArrayList<Map<String, Object>>();
        TableName tableName = TableName.valueOf(tname);
        try {
            Table table = conn.getTable(tableName);
            Set<byte[]> familyNames = table.getDescriptor().getColumnFamilyNames();
            for (byte[] familyName : familyNames) {
                ResultScanner rs = table.getScanner(familyName);
                Iterator<Result> iterator = rs.iterator();
                while (iterator.hasNext()) {
                    Result r = iterator.next();
                    for (Cell cell : r.rawCells()) {
                        String family = Bytes.toString(cell.getFamilyArray(),
                                cell.getFamilyOffset(), cell.getFamilyLength());
                        String qualifier = Bytes.toString(cell.getQualifierArray(),
                                cell.getQualifierOffset(), cell.getQualifierLength());
                        String row = Bytes.toString(cell.getRowArray(),
                                cell.getRowOffset(), cell.getRowLength());
                        String value = Bytes.toString(cell.getValueArray(),
                                cell.getValueOffset(), cell.getValueLength());
                        Map<String, Object> map = new HashMap<String, Object>();
                        map.put("row", row);
                        map.put("family", family);
                        map.put("qualifier", qualifier);
                        map.put("value", value);
                        list.add(map);
                        logger.info("row=" + row + ",family=" + family
                                + ",qualifier=" + qualifier + ",value=" + value);
                    }
                }
            }
            table.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return list;
    }

    /** Scan with filters; only pageFilter and valueFilter are actually applied below. */
    public void queryData(String tableName) {
        Table table = null;
        try {
            table = conn.getTable(TableName.valueOf(tableName));
        } catch (IOException e1) {
            e1.printStackTrace();
        }
        Scan scan = new Scan();
        // Caps the amount of data the scan may return.
        scan.setMaxResultSize(1000);
        // scan.setBatch(1000); // caps the number of cells returned per row, to
        //                      // avoid OutOfMemory errors on very wide rows
        scan.withStartRow(Bytes.toBytes("row001"));
        scan.withStopRow(Bytes.toBytes("row010"));
        scan.addFamily(Bytes.toBytes("cf01"));
        // scan.addColumn(Bytes.toBytes("cf01"), Bytes.toBytes("name"));

        // Selects rows at random; chance is a float between 0 and 1, so 0.5f
        // returns roughly half of the rows.
        RandomRowFilter randomRowFilter = new RandomRowFilter(0.5f);
        // ColumnPrefixFilter: matches by qualifier prefix, i.e. all columns
        // whose qualifier starts with the given prefix.
        ColumnPrefixFilter columnPrefixFilter = new ColumnPrefixFilter("bir".getBytes());
        byte[][] prefixes = new byte[][] { "author".getBytes(), "bookname".getBytes() };
        MultipleColumnPrefixFilter multipleColumnPrefixFilter =
                new MultipleColumnPrefixFilter(prefixes);
        // PageFilter: pagination by rows; here 3 rows per page.
        PageFilter pageFilter = new PageFilter(3);
        // SkipFilter is a wrapper filter, typically combined with ValueFilter:
        // if any column in a row fails the wrapped condition, the whole row is
        // filtered out.
        Filter skipFilter = new SkipFilter(columnPrefixFilter);
        PrefixFilter prefixFilter = new PrefixFilter(Bytes.toBytes("李"));
        Filter columnPaginationFilter = new ColumnPaginationFilter(5, 15);
        Filter valueFilter = new ValueFilter(CompareOperator.EQUAL,
                new SubstringComparator("test"));
        Filter rowFilter1 = new RowFilter(CompareOperator.GREATER_OR_EQUAL,
                new BinaryComparator(Bytes.toBytes("row-3")));
        // Timestamp filter.
        List<Long> timestamp = new ArrayList<>();
        timestamp.add(1571438854697L);
        timestamp.add(1571438854543L);
        TimestampsFilter timestampsFilter = new TimestampsFilter(timestamp);

        List<Filter> filters = new ArrayList<Filter>();
        filters.add(pageFilter);
        filters.add(valueFilter);
        FilterList filter = new FilterList(FilterList.Operator.MUST_PASS_ALL, filters);
        scan.setFilter(filter);

        ResultScanner rs = null;
        try {
            rs = table.getScanner(scan);
        } catch (IOException e) {
            e.printStackTrace();
        }
        if (rs != null) {
            for (Result r : rs) {
                for (Cell cell : r.rawCells()) {
                    System.out.println(String.format(
                            "row:%s, family:%s, qualifier:%s, value:%s, timestamp:%s.",
                            Bytes.toString(cell.getRowArray(), cell.getRowOffset(), cell.getRowLength()),
                            Bytes.toString(cell.getFamilyArray(), cell.getFamilyOffset(), cell.getFamilyLength()),
                            Bytes.toString(cell.getQualifierArray(), cell.getQualifierOffset(), cell.getQualifierLength()),
                            Bytes.toString(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength()),
                            cell.getTimestamp()));
                }
            }
            rs.close();
        }
    }

    /** Return every value stored under a given family:qualifier column. */
    public List<String> getQualifierValue(String tableName, String family, String qualifier) {
        List<String> list = new ArrayList<String>();
        TableName tName = TableName.valueOf(tableName);
        try {
            Table table = conn.getTable(tName);
            ResultScanner rs = table.getScanner(family.getBytes(), qualifier.getBytes());
            Iterator<Result> iterator = rs.iterator();
            while (iterator.hasNext()) {
                Result r = iterator.next();
                for (Cell cell : r.rawCells()) {
                    String value = Bytes.toString(cell.getValueArray(),
                            cell.getValueOffset(), cell.getValueLength());
                    list.add(value);
                }
            }
            rs.close();
            table.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return list;
    }

    public static void main(String[] args) {
        HbaseClient client = new HbaseClient();
        if (client.conn != null) {
            try {
                logger.info("Connected successfully, client.conn=" + client.conn);
                // client.deleteTable("table1");
                // client.createTable("table1", "cf01", "cf02");
                // Map data = new HashMap();
                // data.put("row", "row001");
                // data.put("name", "李小家(cf01)");
                // data.put("sex", 2 + "(cf01)");
                // data.put("birthday", new Date());
                // data.put("describe", "test(cf01)");
                // client.addrow("table1", "cf01", data);
                // client.addFamily("table1", "cf02");
                // Map data = new HashMap();
                // data.put("row", "row001");
                // data.put("name", "李小家(cf02)");
                // data.put("sex", 2 + "(cf02)");
                // data.put("birthday", new Date());
                // data.put("describe", "test(cf02)");
                // client.addrow("table1", "cf02", data);
                // Map params = new HashMap();
                // params.put("cf01", "sex");
                // client.deleteRow("table1", "row001", params);
                // logger.info(client.getAllData("table1").toString());
                client.queryData("table1");
                // logger.info(client.getQualifierValue("table1", "cf01", "name").toString());
                // logger.info(client.getRow("table1", "row001").toString());
                // client.deleteFamily("table1", "cf02");
                client.conn.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}