
A Summary of Problems Encountered While Building Cubes in Kylin

1. A single non-empty record shows up as all NULLs and fails the build

Vertex failed, vertexName=Map 1, vertexId=vertex_1494251465823_0017_1_01, diagnostics=[Task failed, taskId=task_1494251465823_0017_1_01_000021, diagnostics=[TaskAttempt 0 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"weibo_id":"3959784874733088","content":null,"json_file":null,"geohash":null,"user_id":"3190257607","time_id":null,"city_id":null,"province_id":null,"country_id":null,"unix_time":null,"pic_url":null,"lat":null,"lon":null}
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:173)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:139)
    at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:347)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:194)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:185)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:185)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:181)
    at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"weibo_id":"3959784874733088","content":null,"json_file":null,"geohash":null,"user_id":"3190257607","time_id":null,"city_id":null,"province_id":null,"country_id":null,"unix_time":null,"pic_url":null,"lat":null,"lon":null}
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:91)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:325)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:150)
    ... 14 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"weibo_id":"3959784874733088","content":null,"json_file":null,"geohash":null,"user_id":"3190257607","time_id":null,"city_id":null,"province_id":null,"country_id":null,"unix_time":null,"pic_url":null,"lat":null,"lon":null}
    at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:565)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:83)
    ... 17 more

Yet querying Hive directly shows that this weibo record is not empty:

select * from check_in_table where weibo_id = '3959784874733088';

result:
3959784874733088    #清明祭英烈#今天的和平安定是先烈們用生命換來的,我們要珍惜今天的和平生活,努力學習,早日實現中國夢。 http://t.cn/R2dLEhU {...}   wtn901f5q32n    3190257607  1459526400000   1901    19    00    2016-04-02 12:02:26 0   28.31096    121.64364

Since the record is intact in Hive, the NULLs are evidently introduced while the intermediate flat table is generated from the star schema; modifying the star model (the fact-to-lookup joins) resolved the error.
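Before changing the model, it can help to confirm which join is producing the NULLs. Below is a minimal HiveQL sketch, assuming a hypothetical lookup table time_lookup joined on time_id (substitute the actual lookup tables and join keys of your model):

-- Fact rows whose join key has no match in the lookup table; after the
-- flat-table join these rows come out with NULL dimension columns.
SELECT f.weibo_id, f.time_id
FROM check_in_table f
LEFT JOIN time_lookup t ON f.time_id = t.time_id
WHERE t.time_id IS NULL
LIMIT 10;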

2. The build fails with an HBase scanner timeout error

The error is as follows:

Vertex re-running, vertexName=Map 2, vertexId=vertex_1494251465823_0016_1_00
Vertex failed, vertexName=Map 1, vertexId=vertex_1494251465823_0016_1_01, diagnostics=[Task failed, taskId=task_1494251465823_0016_1_01_000015, diagnostics=[TaskAttempt 0 failed, info=[Error: Failure while running task:java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: org.apache.hadoop.hbase.client.ScannerTimeoutException: 425752ms passed since the last invocation, timeout is currently set to 60000
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:173)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:139)
    at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:347)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:194)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:185)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:185)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:181)
    at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: org.apache.hadoop.hbase.client.ScannerTimeoutException: 425752ms passed since the last invocation, timeout is currently set to 60000
    at ...

Solutions:

Method 1: modify the client-side Configuration in code

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;

// Raise the scanner lease period from the 60 s default to 120 s.
Configuration conf = HBaseConfiguration.create();
conf.setLong(HConstants.HBASE_REGIONSERVER_LEASE_PERIOD_KEY, 120000L);

This sets the timeout programmatically, but the value is configured in the client application; in my tests it was not propagated to the remote region servers, so the change had no effect. I do not know whether anyone has made this approach work.

Method 2: modify the configuration file (hbase-site.xml) directly

<property>
    <name>hbase.regionserver.lease.period</name>    
    <value>900000</value> 
    <!-- 900 000, 15 minutes -->  
</property>  
<property>    
    <name>hbase.rpc.timeout</name>    
    <value>900000</value> 
    <!-- 15 minutes -->  
</property>
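Note that these are server-side settings: they belong in hbase-site.xml on the HBase region servers, and the region servers must be restarted before the new lease period takes effect. (In newer HBase releases the client-side scanner timeout is controlled by hbase.client.scanner.timeout.period.)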

3. "#4 Step Name: Build Dimension Dictionary" fails with "Too high cardinality"

java.lang.RuntimeException: Failed to create dictionary on WEIBODATA.CHECK_IN_TABLE.USER_ID
    at org.apache.kylin.dict.DictionaryManager.buildDictionary(DictionaryManager.java:325)
    at org.apache.kylin.cube.CubeManager.buildDictionary(CubeManager.java:222)
    at org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:50)
    at org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:41)
    at org.apache.kylin.engine.mr.steps.CreateDictionaryJob.run(CreateDictionaryJob.java:54)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
    at org.apache.kylin.engine.mr.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:63)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
    at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:57)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
    at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:136)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: Too high cardinality is not suitable for dictionary -- cardinality: 11824431
    at org.apache.kylin.dict.DictionaryGenerator.buildDictionary(DictionaryGenerator.java:96)
    at org.apache.kylin.dict.DictionaryGenerator.buildDictionary(DictionaryGenerator.java:73)
    at org.apache.kylin.dict.DictionaryManager.buildDictionary(DictionaryManager.java:321)
    ... 14 more

result code:2

Solution: manually modify the cube definition (JSON).

If nothing is changed, the exact Count Distinct measure uses the default dictionary to store the encoded user_id values. The default dictionary holds at most 5 million entries, and a separate one is built for every segment, so UV analysis across days produces incorrect results; and once the number of distinct user_id values in a single day exceeds 5 million, the build fails outright (the article referenced below hit this limit at cardinality 43,377,845).

The limit is controlled by the parameter kylin.dictionary.max.cardinality. You could raise it to, say, 100 million, but the build may then exhaust memory and take the Kylin server down with it.

For a worked example, see "Apache Kylin中對上億字串的精確Count_Distinct示例" (exact Count Distinct over hundreds of millions of strings in Apache Kylin).
The Global Dictionary is backed by a bitmap, so its maximum capacity is Integer.MAX_VALUE, a little over 2.1 billion; if the cumulative number of values in the global dictionary exceeds that, the build will fail.

Configure everything else according to your actual business needs.

For this requirement, therefore, we need to use the Global Dictionary explicitly. As the name suggests, it is global: it is not built per segment, and a given user_id maps to exactly one ID across the entire cube.

Add the JSON field:
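A minimal sketch of the field to add to the cube descriptor (edit the cube and switch to the JSON view); the column name follows this example's USER_ID, so adjust it to your own schema:

"dictionaries": [
  {
    "column": "USER_ID",
    "builder": "org.apache.kylin.dict.GlobalDictionaryBuilder"
  }
]

With this field present, the exact Count Distinct measure is encoded by the Global Dictionary instead of the per-segment default dictionary, subject only to the Integer.MAX_VALUE capacity noted above.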

4. The "Dup key found" problem

java.lang.IllegalStateException: Dup key found, key=[24], value1=[1456243200000,2016,02,24,0,1456272000000], value2=[1458748800000,2016,03,24,0,1458777600000]
    at org.apache.kylin.dict.lookup.LookupTable.initRow(LookupTable.java:85)
    at org.apache.kylin.dict.lookup.LookupTable.init(LookupTable.java:68)
    at org.apache.kylin.dict.lookup.LookupStringTable.init(LookupStringTable.java:79)
    at org.apache.kylin.dict.lookup.LookupTable.<init>(LookupTable.java:56)
    at org.apache.kylin.dict.lookup.LookupStringTable.<init>(LookupStringTable.java:65)
    at org.apache.kylin.cube.CubeManager.getLookupTable(CubeManager.java:674)
    at org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:60)
    at org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:41)
    at org.apache.kylin.engine.mr.steps.CreateDictionaryJob.run(CreateDictionaryJob.java:54)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
    at org.apache.kylin.engine.mr.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:63)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
    at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:57)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
    at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:136)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

result code:2

Solution: modify the model structure so that the lookup table's join key is unique. The error shows the key [24] (a day of month) mapping to two different rows, one in February and one in March; Kylin snapshots each lookup table and requires its key column to be unique, so the table must go from containing duplicate keys to containing none. A quick check for offending keys is sketched below.
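A minimal HiveQL sketch for finding the duplicates, assuming a hypothetical lookup table date_lookup keyed on day_id (substitute the real lookup table and key column):

-- List join-key values that appear on more than one row of the lookup table;
-- every value reported here must be deduplicated before the build can succeed.
SELECT day_id, COUNT(*) AS cnt
FROM date_lookup
GROUP BY day_id
HAVING COUNT(*) > 1;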

5. Building the global dictionary for USER_ID fails with "/dict/WEIBODATA.USER_TABLE/USER_ID should have 0 or 1 append dict but 2"

java.lang.RuntimeException: Failed to create dictionary on WEIBODATA.CHECK_IN_TABLE.USER_ID
    at org.apache.kylin.dict.DictionaryManager.buildDictionary(DictionaryManager.java:325)
    at org.apache.kylin.cube.CubeManager.buildDictionary(CubeManager.java:222)
    at org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:50)
    at org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:41)
    at org.apache.kylin.engine.mr.steps.CreateDictionaryJob.run(CreateDictionaryJob.java:54)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
    at org.apache.kylin.engine.mr.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:63)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
    at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:57)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
    at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:136)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: GlobalDict /dict/WEIBODATA.USER_TABLE/USER_ID should have 0 or 1 append dict but 2
    at org.apache.kylin.dict.GlobalDictionaryBuilder.build(GlobalDictionaryBuilder.java:68)
    at org.apache.kylin.dict.DictionaryGenerator.buildDictionary(DictionaryGenerator.java:81)
    at org.apache.kylin.dict.DictionaryManager.buildDictionary(DictionaryManager.java:323)
    ... 14 more

result code:2

A website I found mentions this problem, and one person who solved it there used the following approach:

1. Check the metadata

scan 'kylin_metadata', {STARTROW=>'/dict/WEIBODATA.USER_TABLE/USER_ID', ENDROW=> '/dict/WEIBODATA.USER_TABLE/USER_ID', FILTER=>"KeyOnlyFilter()"} 

Checking the dict metadata this way, that author's scan returned two metadata entries, and after cleaning them up the error disappeared. When I scanned, however, the result was:

hbase(main):011:0> scan 'kylin_metadata', {STARTROW=>'/dict/WEIBODATA.USER_TABLE/USER_ID', ENDROW=> '/dict/WEIBODATA.USER_TABLE/USER_ID', FILTER=>"KeyOnlyFilter()"} 
ROW                                                                  COLUMN+CELL                                                                                                                                                                                              
0 row(s) in 0.0220 seconds

hbase(main):010:0> scan 'kylin_metadata', {STARTROW=>'/dict/WEIBODATA.USER_TABLE/USER_ID', ENDROW=> '/dict/WEIBODATA.USER_TABLE/USER_ID'}
ROW                                                                  COLUMN+CELL                                                                                                                                                                                              
0 row(s) in 0.0190 seconds

The row did not even show up. (Note, though, that the end row of an HBase scan is exclusive, so a scan with STARTROW equal to ENDROW can return zero rows even when the key exists; re-scanning with a slightly larger ENDROW or a prefix filter would rule that out.)
Next I tried Kylin's built-in storage cleanup tool:

${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.storage.hbase.util.StorageCleanupJob --delete false
${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.storage.hbase.util.StorageCleanupJob --delete true
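(With --delete false the job only lists the resources it would clean up; --delete true performs the actual deletion.)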

Rebuilding had no effect.

That page also mentioned setting a count-distinct measure on USER_ID, which matched my setup, so I removed that count field and rebuilt.

Still no effect, which shows that was not the cause.

For now this problem remains unsolved.

6. OOM during the Reduce phase

2017-05-24 06:31:27,282 ERROR [main] org.apache.kylin.engine.mr.KylinReducer: 
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at org.apache.kylin.dict.TrieDictionaryBuilder$Node.reset(TrieDictionaryBuilder.java:60)
    at org.apache.kylin.dict.TrieDictionaryBuilder.addValueR(TrieDictionaryBuilder.java:125)
    at org.apache.kylin.dict.TrieDictionaryBuilder.addValueR(TrieDictionaryBuilder.java:155)
    at org.apache.kylin.dict.TrieDictionaryBuilder.addValueR(TrieDictionaryBuilder.java:155)
    at org.apache.kylin.dict.TrieDictionaryBuilder.addValueR(TrieDictionaryBuilder.java:155)
    at org.apache.kylin.dict.TrieDictionaryBuilder.addValueR(TrieDictionaryBuilder.java:155)
    at org.apache.kylin.dict.TrieDictionaryBuilder.addValueR(TrieDictionaryBuilder.java:155)
    at org.apache.kylin.dict.TrieDictionaryBuilder.addValueR(TrieDictionaryBuilder.java:155)
    at org.apache.kylin.dict.TrieDictionaryBuilder.addValue(TrieDictionaryBuilder.java:92)
    at org.apache.kylin.dict.TrieDictionaryForestBuilder.addValue(TrieDictionaryForestBuilder.java:97)
    at org.apache.kylin.dict.TrieDictionaryForestBuilder.addValue(TrieDictionaryForestBuilder.java:78)
    at org.apache.kylin.dict.DictionaryGenerator$StringTrieDictForestBuilder.addValue(DictionaryGenerator.java:212)
    at org.apache.kylin.engine.mr.steps.FactDistinctColumnsReducer.doReduce(FactDistinctColumnsReducer.java:197)
    at org.apache.kylin.engine.mr.steps.FactDistinctColumnsReducer.doReduce(FactDistinctColumnsReducer.java:60)
    at org.apache.kylin.engine.mr.KylinReducer.reduce(KylinReducer.java:48)
    at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
2017-05-24 06:31:44,672 ERROR [main] org.apache.kylin.engine.mr.KylinReducer: 
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.Arrays.copyOf(Arrays.java:3210)
    at java.util.Arrays.copyOf(Arrays.java:3181)
    at java.util.ArrayList.toArray(ArrayList.java:376)
    at java.util.LinkedList.addAll(LinkedList.java:408)
    at java.util.LinkedList.addAll(LinkedList.java:387)
    at java.util.LinkedList.<init>(LinkedList.java:119)
    at org.apache.kylin.dict.TrieDictionaryBuilder.checkOverflowParts(TrieDictionaryBuilder.java:384)
    at org.apache.kylin.dict.TrieDictionaryBuilder.checkOverflowParts(TrieDictionaryBuilder.java:400)
    at org.apache.kylin.dict.TrieDictionaryBuilder.checkOverflowParts(TrieDictionaryBuilder.java:400)
    at org.apache.kylin.dict.TrieDictionaryBuilder.checkOverflowParts(TrieDictionaryBuilder.java:400)
    at org.apache.kylin.dict.TrieDictionaryBuilder.checkOverflowParts(TrieDictionaryBuilder.java:400)
    at org.apache.kylin.dict.TrieDictionaryBuilder.checkOverflowParts(TrieDictionaryBuilder.java:400)
    at org.apache.kylin.dict.TrieDictionaryBuilder.checkOverflowParts(TrieDictionaryBuilder.java:400)
    at org.apache.kylin.dict.TrieDictionaryBuilder.checkOverflowParts(TrieDictionaryBuilder.java:400)
    at org.apache.kylin.dict.TrieDictionaryBuilder.buildTrieBytes(TrieDictionaryBuilder.java:424)
    at org.apache