Phoenix transactions, installation, and deployment: Phoenix 4.9 on CDH 5.12.1
Notes from deploying Phoenix on a CDH 5.12 cluster and getting transaction support configured.
A few fairly tricky problems came up along the way; hopefully these notes save you some time.
1) Prepare the package:
Pre-built package: phoenix-4.9.0-cdh5.9.1.tar.gz
2) Deploy:
Unpack: tar -zxvf phoenix-4.9.0-cdh5.9.1.tar.gz
Copy the server jar into HBase's lib directory, and distribute it to every node in the cluster:
cp phoenix-4.9.0-cdh5.9.1-server.jar /opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/lib/hbase/lib/
Then remember to restart the HBase cluster service!
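The copy-and-distribute step can be sketched as a small loop. Host names and paths below are placeholders, and the loop writes into local stand-in directories so the shape is clear; on a real cluster you would scp the jar to each region server's parcel lib directory instead.

```shell
# Placeholder jar and per-host lib dirs. On a real cluster, replace the cp with:
#   scp "$JAR" "$host:/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/lib/hbase/lib/"
JAR=/tmp/dist/phoenix-4.9.0-cdh5.9.1-server.jar
mkdir -p /tmp/dist && touch "$JAR"
for host in node01 node02 node03; do        # hypothetical host list
  LIB=/tmp/dist/$host/hbase/lib             # stand-in for the HBase lib dir
  mkdir -p "$LIB"
  cp "$JAR" "$LIB/"
done
```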
Edit the configuration files (this step is critical):
cd /home/test/phoenix-4.9.0-cdh5.9.1/bin
vi hadoop-metrics2-hbase.properties
Append one line at the end: phoenix.schema.isNamespaceMappingEnabled=true
vi hadoop-metrics2-phoenix.properties
Append one line at the end: phoenix.schema.isNamespaceMappingEnabled=true
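The two edits above can be scripted so they are safe to rerun. A sketch, with /tmp/phoenix-bin standing in for the real phoenix .../bin directory (the touch just creates empty stand-in files so the sketch is self-contained):

```shell
PHOENIX_BIN=/tmp/phoenix-bin   # placeholder for /home/test/phoenix-4.9.0-cdh5.9.1/bin
mkdir -p "$PHOENIX_BIN"
for f in hadoop-metrics2-hbase.properties hadoop-metrics2-phoenix.properties; do
  touch "$PHOENIX_BIN/$f"
  # append the flag only if it is not already present, so reruns do not duplicate it
  grep -q '^phoenix.schema.isNamespaceMappingEnabled=' "$PHOENIX_BIN/$f" ||
    echo 'phoenix.schema.isNamespaceMappingEnabled=true' >> "$PHOENIX_BIN/$f"
done
```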
Replace hbase-site.xml (optional, but recommended):
cp /etc/hbase/conf/hbase-site.xml /home/test/phoenix-4.9.0-cdh5.9.1/bin
Add the transaction-support settings: vi hbase-site.xml
<property>
<name>phoenix.schema.isNamespaceMappingEnabled</name>
<value>true</value>
</property>
<property>
<name>phoenix.transactions.enabled</name>
<value>true</value>
</property>
<property>
<name>data.tx.snapshot.dir</name>
<value>/tmp/tephra/snapshots</value>
</property>
<property>
<name>data.tx.timeout</name>
<value>60</value>
</property>
Edit the startup script so that it reads your modified configuration file; otherwise it errors out. Make sure /etc/hbase/conf is placed before $CLASSPATH:
vi tephra
CLASSPATH=/etc/hbase/conf:$CLASSPATH
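The reason the order matters: the JVM reads the first hbase-site.xml it finds on the classpath, so the fix only works if /etc/hbase/conf is prepended, not appended. A minimal illustration (the initial CLASSPATH value is a made-up example of what the script may already have assembled):

```shell
CLASSPATH="/opt/other/conf:/opt/jars/*"   # hypothetical value built by the script
CLASSPATH="/etc/hbase/conf:$CLASSPATH"    # the fix: prepend so it is searched first
first=${CLASSPATH%%:*}                    # the entry the JVM will consult first
echo "$first"
```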
Start the transaction service: ./bin/tephra restart
Watch the logs: tail -f /tmp/tephra-bdmp_test/*.log
Phoenix startup script:
vi startpnew49
cd ~
kinit -kt bdmp_test.keytab bdmp_test   # Kerberos authentication
cd /home/test/phoenix-4.9.0-cdh5.9.1/bin
./sqlline.py zk01,zk02,zk03:2181:/hbase
3) Verification:
Enter the Phoenix shell: ./startpnew49
Disable auto-commit: !autocommit off
Commit manually: !commit
Create the table:
!autocommit off
drop table pdbname.my_table2;
CREATE TABLE pdbname.my_table2 (k BIGINT PRIMARY KEY, v VARCHAR) TRANSACTIONAL=true;
Insert data:
UPSERT INTO pdbname.my_table2 VALUES (1,'A');
SELECT count(*) FROM pdbname.my_table2 WHERE k=1; -- Will see uncommitted row
Result: 1
Without committing, open a second shell session and query:
SELECT count(*) FROM pdbname.my_table2 WHERE k=1;
Result: 0
Switch back to the original shell and commit:
!commit
Query again from the newly opened shell:
SELECT count(*) FROM pdbname.my_table2 WHERE k=1;
Result: 1
4) Problems encountered and solutions:
Problem 1:
Exception in thread "HDFSTransactionStateStorage STARTING" java.lang.IllegalStateException: Snapshot directory is not configured. Please set data.tx.snapshot.dir in configuration.
	at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
	at org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:93)
	at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
	at java.lang.Thread.run(Thread.java:745)
0 [ThriftRPCServer] ERROR org.apache.tephra.distributed.TransactionService - Transaction manager aborted, stopping transaction service
Exception in thread "ThriftRPCServer" com.google.common.util.concurrent.UncheckedExecutionException: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.IllegalStateException: Snapshot directory is not configured. Please set data.tx.snapshot.dir in configuration.
	at com.google.common.util.concurrent.Futures.wrapAndThrowUnchecked(Futures.java:1015)
	at com.google.common.util.concurrent.Futures.getUnchecked(Futures.java:1001)
	at com.google.common.util.concurrent.AbstractService.startAndWait(AbstractService.java:220)
	at org.apache.tephra.distributed.TransactionServiceThriftHandler.init(TransactionServiceThriftHandler.java:177)
	at org.apache.tephra.rpc.ThriftRPCServer.startUp(ThriftRPCServer.java:177)
	at com.google.common.util.concurrent.AbstractExecutionThreadService$1$1.run(AbstractExecutionThreadService.java:47)
	at java.lang.Thread.run(Thread.java:745)
Caused by: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.IllegalStateException: Snapshot directory is not configured. Please set data.tx.snapshot.dir in configuration.
	at com.google.common.util.concurrent.Futures.wrapAndThrowUnchecked(Futures.java:1015)
	at com.google.common.util.concurrent.Futures.getUnchecked(Futures.java:1001)
	at com.google.common.util.concurrent.AbstractService.startAndWait(AbstractService.java:220)
	at com.google.common.util.concurrent.AbstractIdleService.startAndWait(AbstractIdleService.java:106)
	at org.apache.tephra.TransactionManager.doStart(TransactionManager.java:216)
	at com.google.common.util.concurrent.AbstractService.start(AbstractService.java:170)
	... 5 more
Caused by: java.lang.IllegalStateException: Snapshot directory is not configured. Please set data.tx.snapshot.dir in configuration.
at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
at org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:93)
at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
... 1 more
Analysis: the property was configured, so why the error? Because the configuration file actually being read was still the unmodified one, without transaction support.
Solution: force the service to read your modified configuration file via the tephra script change described above:
vi tephra
CLASSPATH=/etc/hbase/conf:$CLASSPATH
Problem 2:
Error: java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.OperationWithAttributes.setAttribute(Ljava/lang/String;[B)Lorg/apache/hadoop/hbase/client/OperationWithAttributes; (state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.OperationWithAttributes.setAttribute(Ljava/lang/String;[B)Lorg/apache/hadoop/hbase/client/OperationWithAttributes;
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:774)
at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:720)
at org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
at org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:778)
at sqlline.BufferedRows.<init>(BufferedRows.java:37)
at sqlline.SqlLine.print(SqlLine.java:1649)
at sqlline.Commands.execute(Commands.java:833)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:807)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: java.util.concurrent.ExecutionException: java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.OperationWithAttributes.setAttribute(Ljava/lang/String;[B)Lorg/apache/hadoop/hbase/client/OperationWithAttributes;
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:769)
... 12 more
Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.OperationWithAttributes.setAttribute(Ljava/lang/String;[B)Lorg/apache/hadoop/hbase/client/OperationWithAttributes;
at org.apache.tephra.hbase.TransactionAwareHTable.addToOperation(TransactionAwareHTable.java:672)
at org.apache.tephra.hbase.TransactionAwareHTable.transactionalizeAction(TransactionAwareHTable.java:561)
at org.apache.tephra.hbase.TransactionAwareHTable.getScanner(TransactionAwareHTable.java:289)
at org.apache.phoenix.iterate.TableResultIterator.initScanner(TableResultIterator.java:170)
at org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:124)
at org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
at org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)
at org.apache.phoenix.iterate.LookAheadResultIterator.peek(LookAheadResultIterator.java:73)
at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:121)
at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:106)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Analysis: caused by a build/packaging problem in phoenix-4.8.0-cdh5.8.0. Switching to phoenix-4.9.0-cdh5.9.1 resolved it; alternatively, rebuild phoenix-4.8.0-cdh5.8.0 from source (not yet tried).
Problem 3:
18/05/29 10:27:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Error: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Tue May 29 10:28:24 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68459: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=redhat214.life.com,60020,1527517854293, seqNum=0 (state=,code=0)
java.sql.SQLException: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Tue May 29 10:28:24 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68459: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=redhat214.life.com,60020,1527517854293, seqNum=0
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2492)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2384)
at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2384)
at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
at sqlline.Commands.connect(Commands.java:1064)
at sqlline.Commands.connect(Commands.java:996)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
at sqlline.SqlLine.dispatch(SqlLine.java:809)
at sqlline.SqlLine.initArgs(SqlLine.java:588)
at sqlline.SqlLine.begin(SqlLine.java:661)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Tue May 29 10:28:24 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68459: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=redhat214.life.com,60020,1527517854293, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:286)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:231)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:61)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:302)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:167)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:162)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:862)
at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:421)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2412)
... 20 more
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68459: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=redhat214.life.com,60020,1527517854293, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:169)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:80)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to redhat214.life.com/10.31.20.214:60020 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to redhat214.life.com/10.31.20.214:60020 is closing. Call id=9, waitTime=37
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.wrapException(AbstractRpcClient.java:289)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1273)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:400)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:204)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:65)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:381)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:355)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
... 4 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to redhat214.life.com/10.31.20.214:60020 is closing. Call id=9, waitTime=37
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1085)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:864)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:581)
sqlline version 1.2.0
Solution: CATALOG is a built-in system table. Tables that already exist in HBase are not mapped automatically; the phoenix.schema.isNamespaceMappingEnabled setting has to be changed.
The misconfiguration was found by diffing the two distributions:
C:\Users\dell\Desktop\wMyWork\bigData\phoenix\phoenix-4.8.0-cdh5.8.0
C:\Users\dell\Desktop\wMyWork\bigData\phoenix\phoenix-4.9.0-cdh5.9.1
The cause was an unmodified configuration file. Edit the Phoenix configuration files as described in the steps above, in particular the key setting phoenix.schema.isNamespaceMappingEnabled=true.
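A quick way to catch this class of problem before launching sqlline is to grep the hbase-site.xml that Phoenix will actually read. A sketch, with a throwaway file standing in for phoenix .../bin/hbase-site.xml:

```shell
CONF=/tmp/check/hbase-site.xml            # stand-in for phoenix .../bin/hbase-site.xml
mkdir -p /tmp/check
printf '%s\n' '<configuration>' \
  '  <property><name>phoenix.schema.isNamespaceMappingEnabled</name><value>true</value></property>' \
  '</configuration>' > "$CONF"
# fail loudly if the flag is missing from the config the client will read
grep -q 'phoenix.schema.isNamespaceMappingEnabled' "$CONF" && echo 'namespace mapping flag present'
```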
Problem 4:
ERROR: not found class org.apache.tephra.TransactionServiceMain
Solution: copy the server jar into Phoenix's own lib directory:
cp phoenix-4.9.0-cdh5.9.1-server.jar /home/test/phoenix-4.9.0-cdh5.9.1/lib
Alternatively (not yet tried), modify the build script:
phoenix-4.9.0-cdh5.9.1\dev\make_rc.sh
phx_jars=$(find -iwholename "./*/target/phoenix-*.jar")
Change it to: phx_jars=$(find -iname 'phoenix-*.jar')
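To see what the change does, compare the two find patterns on a throwaway tree: -iwholename only matches jars under some */target/ directory, while -iname matches a phoenix jar anywhere. Directory and jar names below are made up for the demonstration.

```shell
mkdir -p /tmp/mrc/phoenix-core/target /tmp/mrc/lib
touch /tmp/mrc/phoenix-core/target/phoenix-core-4.9.0.jar
touch /tmp/mrc/lib/phoenix-server-4.9.0.jar
cd /tmp/mrc
old=$(find . -iwholename "./*/target/phoenix-*.jar" | wc -l)  # only the jar under target/
new=$(find . -iname 'phoenix-*.jar' | wc -l)                  # every phoenix jar in the tree
echo "old=$old new=$new"
```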