
Ambari Notes 13: Ambari and Hadoop Problems Encountered While Installing Ambari

5. Problems encountered during installation

5.1 Running ambari-server start fails with ERROR: Exiting with exit code -1.

5.1.1 REASON: Ambari Server java process died with exitcode 255. Check /var/log/ambari-server/ambari-server.out for more information

Solution:

Because this was a reinstall, this error appears when initializing the database with /etc/init.d/postgresql initdb. First uninstall postgresql with:

yum -y remove postgresql*

Then delete all files under /var/lib/pgsql/data,

then configure the postgresql database again (follow section 1.6),

and then run the installation again (section 3).
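The recovery steps above can be condensed into one short script (a sketch only; it assumes the SysV-init PostgreSQL packaging used elsewhere in this guide, and it destroys any existing database files):

```shell
# Reset a broken PostgreSQL installation before re-running Ambari setup.
# WARNING: this removes the packages and deletes all existing database data.
service postgresql stop || true   # ignore the error if it is not running
yum -y remove "postgresql*"       # uninstall the packages
rm -rf /var/lib/pgsql/data/*      # clear the stale data directory
# Now reconfigure PostgreSQL (section 1.6) and reinstall Ambari (section 3).
```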

5.1.2 The log contains the error: ERROR [main] AmbariServer:820 - Failed to run the Ambari Server

com.google.inject.ProvisionException: Guice provision errors:

1) Error injecting method, java.lang.NullPointerException

  at org.apache.ambari.server.api.services.AmbariMetaInfo.init(AmbariMetaInfo.java:243)

  at org.apache.ambari.server.api.services.AmbariMetaInfo.class(AmbariMetaInfo.java:125)

  while locating org.apache.ambari.server.api.services.AmbariMetaInfo

    for field at org.apache.ambari.server.controller.AmbariServer.ambariMetaInfo(AmbariServer.java:145)

  at org.apache.ambari.server.controller.AmbariServer.class(AmbariServer.java:145)

  while locating org.apache.ambari.server.controller.AmbariServer

1 error

        at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:987)

        at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1013)

        at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:813)

Caused by: java.lang.NullPointerException

        at org.apache.ambari.server.stack.StackModule.processRepositories(StackModule.java:665)

        at org.apache.ambari.server.stack.StackModule.resolve(StackModule.java:158)

        at org.apache.ambari.server.stack.StackManager.fullyResolveStacks(StackManager.java:201)

        at org.apache.ambari.server.stack.StackManager.<init>(StackManager.java:119)

        at org.apache.ambari.server.stack.StackManager$$FastClassByGuice$$33e4ffe0.newInstance()

        at com.google.inject.internal.cglib.reflect.$FastConstructor.newInstance(FastConstructor.java:40)

        at com.google.inject.internal.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:60)

        at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:85)

        at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)

        at com.google.inject.internal.InjectorImpl$4$1.call(InjectorImpl.java:978)

        at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)

        at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:974)

        at com.google.inject.assistedinject.FactoryProvider2.invoke(FactoryProvider2.java:632)

        at com.sun.proxy.$Proxy26.create(Unknown Source)

        at org.apache.ambari.server.api.services.AmbariMetaInfo.init(AmbariMetaInfo.java:247)

5.2 Installing HDFS and HBase fails with /usr/hdp/current/hadoop-client/conf doesn't exist

5.2.1 The /etc/hadoop/conf symlink exists

The cause is that /etc/hadoop/conf and /usr/hdp/current/hadoop-client/conf are symlinked to each other, forming a loop, so one of the links has to be changed:

cd /etc/hadoop

rm -rf conf

ln -s /etc/hadoop/conf.backup /etc/hadoop/conf

HBase hits the same problem; the fix is the same:

cd /etc/hbase

rm -rf conf

ln -s /etc/hbase/conf.backup /etc/hbase/conf

ZooKeeper hits the same problem too; the fix is the same:

cd /etc/zookeeper

rm -rf conf

ln -s /etc/zookeeper/conf.backup /etc/zookeeper/conf
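Since the fix is identical for HDFS, HBase, and ZooKeeper, it can be applied in a single loop (a sketch; it assumes each service still has its conf.backup directory as described above):

```shell
# Break the symlink loop by repointing each service's conf link
# at its backup configuration directory.
for svc in hadoop hbase zookeeper; do
  rm -rf "/etc/${svc}/conf"
  ln -s "/etc/${svc}/conf.backup" "/etc/${svc}/conf"
done
```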

5.2.2 The /etc/hadoop/conf symlink does not exist

Comparing with a correct installation showed that two directories were missing, config.backup and 2.4.0.0-169; copy those folders into the /etc/hadoop directory.

Then recreate the conf link under /etc/hadoop:

cd /etc/hadoop

rm -rf conf

ln -s /usr/hdp/current/hadoop-client/conf conf

Problem solved.
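To verify the repaired link is no longer circular, readlink can be used; readlink -f errors out on a symlink cycle, so a printed path means the chain now terminates (an optional check, not part of the original fix):

```shell
# Prints the final target only if the symlink chain resolves;
# on a circular link it prints nothing and returns an error.
readlink -f /etc/hadoop/conf
```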

5.3 Host registration (Confirm Hosts) fails with the error Ambari agent machine hostname (localhost) does not match expected ambari server hostname

During the Confirm Hosts step of the Ambari setup, a very strange problem kept appearing; the step always failed with:

Ambari agent machine hostname (localhost.localdomain) does not match expected ambari server hostname (xxx).

The fix was to modify the /etc/hosts file.

Before:

127.0.0.1   localhost dsj-kj1
::1         localhost dsj-kj1

10.13.39.32     dsj-kj1

10.13.39.33     dsj-kj2

10.13.39.34     dsj-kj3

10.13.39.35     dsj-kj4

10.13.39.36     dsj-kj5

After:

127.0.0.1  localhost localhost.localdomain localhost4 localhost4.localdomain4
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6

10.13.39.32     dsj-kj1

10.13.39.33     dsj-kj2

10.13.39.34     dsj-kj3

10.13.39.35     dsj-kj4

10.13.39.36     dsj-kj5

My guess is that name resolution was going over IPv6, which is odd, but after this change registration succeeded.
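A quick way to see the name the agent will register with, before re-running Confirm Hosts (an optional check; the agent derives its hostname from the system's FQDN lookup):

```shell
# The reported name should match what the Ambari server expects,
# not localhost.localdomain.
hostname -f                     # this machine's fully qualified name
getent hosts "$(hostname -f)"   # how that name resolves via /etc/hosts or DNS
```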

5.4 Reinstalling ambari-server

Remove it using the removal script.

Note that after removal, the following system packages must be reinstalled:

yum -y install ruby*

yum -y install redhat-lsb*

yum -y install snappy*

For the installation itself, refer to section 3.

5.5 Configuring the Ambari connection to MySQL

On the master node, copy the MySQL JDBC connector jar into the /var/lib/ambari-server/resources directory and rename it mysql-jdbc-driver.jar:

cp /usr/share/java/mysql-connector-java-5.1.17.jar /var/lib/ambari-server/resources/mysql-jdbc-driver.jar

Then start Hive from the web UI.
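On newer Ambari releases the driver can also be registered through ambari-server setup instead of the manual copy; if that flag is available it keeps Ambari's own bookkeeping consistent (the jar path below is the same one used above and may differ on your system):

```shell
# Register the MySQL JDBC driver with Ambari.
ambari-server setup --jdbc-db=mysql \
    --jdbc-driver=/usr/share/java/mysql-connector-java-5.1.17.jar
```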

5.6 Host registration (Confirm Hosts) fails with the error Failed to start ping port listener of: [Errno 98] Address already in use

The agent's ping port was being held by a lingering process; in this case a df command had been running forever without completing:

# netstat -lanp|grep 8670
tcp        0      0 0.0.0.0:8670                0.0.0.0:*                   LISTEN      2587/df

# kill -9 2587

After killing it, restart ambari-agent and the problem is solved:

# service ambari-agent restart
Verifying Python version compatibility...
Using python  /usr/bin/python2.6
ambari-agent is not running. No PID found at /var/run/ambari-agent/ambari-agent.pid
Verifying Python version compatibility...
Using python  /usr/bin/python2.6
Checking for previously running Ambari Agent...
Starting ambari-agent
Verifying ambari-agent process status...
Ambari Agent successfully started
Agent PID at: /var/run/ambari-agent/ambari-agent.pid
Agent out at: /var/log/ambari-agent/ambari-agent.out
Agent log at: /var/log/ambari-agent/ambari-agent.log

5.7 Host registration (Confirm Hosts) fails with the error The following hosts have Transparent HugePages (THP) enabled. THP should be disabled to avoid potential Hadoop performance issues

Solution: run the following on Linux:

echo never >/sys/kernel/mm/redhat_transparent_hugepage/defrag

echo never >/sys/kernel/mm/redhat_transparent_hugepage/enabled

echo never >/sys/kernel/mm/transparent_hugepage/enabled

echo never >/sys/kernel/mm/transparent_hugepage/defrag
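The echo commands take effect immediately but are lost on reboot; one common approach is to put a guarded loop like the following into /etc/rc.local (the redhat_transparent_hugepage paths exist only on RHEL 6-era kernels, hence the writability check):

```shell
# Disable THP on whichever sysfs paths this kernel actually exposes.
for f in /sys/kernel/mm/transparent_hugepage/enabled \
         /sys/kernel/mm/transparent_hugepage/defrag \
         /sys/kernel/mm/redhat_transparent_hugepage/enabled \
         /sys/kernel/mm/redhat_transparent_hugepage/defrag; do
  if [ -w "$f" ]; then
    echo never > "$f"
  fi
done
```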

5.8 Starting Hive fails with unicodedecodeerror ambari in position 117

Inspecting the /etc/sysconfig/i18n file showed the following content:

LANG="zh_CN.UTF8"

The system character set had been set to Chinese. Changing it to the following solved the problem:

LANG="en_US.UTF-8"
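The active setting can be checked with the locale command before and after editing the file (an optional check; exporting LANG only affects the current shell, while the file change is permanent):

```shell
locale | grep '^LANG='      # show the character set currently in effect
export LANG=en_US.UTF-8     # switch the current shell immediately
```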

5.9 Installing Metrics fails because install packages cannot be found

1. failure: Updates-ambari-2.2.1.0/ambari/ambari-metrics-monitor-2.2.1.0-161.x86_64.rpm from HDP-UTILS-1.1.0.20: [Errno 256] No more mirrors to try.

Run the following commands on the ftp repository server:

cd /var/www/html/ambari/HDP-UTILS-1.1.0.20/repos/centos6

mkdir Updates-ambari-2.2.1.0

cp -r /var/www/html/ambari/Updates-ambari-2.2.1.0/ambari /var/www/html/ambari/HDP-UTILS-1.1.0.20/repos/centos6/Updates-ambari-2.2.1.0

Then regenerate the repodata:

cd /var/www/html/ambari

rm -rf repodata

createrepo ./

2. failure: HDP-UTILS-1.1.0.20/repos/centos6/Updates-ambari-2.2.1.0/ambari/ambari-metrics-monitor-2.2.1.0-161.x86_64.rpm from HDP-UTILS-1.1.0.20: [Errno 256] No more mirrors to try.

Delete mnt.repo from the /etc/yum.repos.d directory, then run yum clean all to clear yum's cache:

cd /etc/yum.repos.d

rm -rf mnt.repo

yum clean all

5.11 Fixing jps reporting process information unavailable

4791 -- process information unavailable

Solution:

Go into the /tmp directory:

cd /tmp

and delete the directories named hsperfdata_{username}. Run jps again and the output is clean.

Script (note: the glob is used directly; piping ls -l output into rm is unreliable because the permission and date columns become arguments):

cd /tmp

rm -rf hsperfdata_*

ls | grep hsperf    # verify nothing is left
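The cleanup can be rehearsed safely in a scratch directory before touching /tmp (the usernames here are made up for the demonstration):

```shell
# Demonstrate the hsperfdata cleanup against a throwaway directory.
scratch=$(mktemp -d)
mkdir "$scratch/hsperfdata_root" "$scratch/hsperfdata_hdfs"
rm -rf "$scratch"/hsperfdata_*    # glob matching beats parsing ls output
ls -A "$scratch"                  # prints nothing: both directories are gone
rmdir "$scratch"
```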

5.12 The namenode fails to start; the log file contains ERROR namenode.NameNode (NameNode.java:main(1712)) - Failed to start namenode

The log also contains java.net.BindException: Port in use: gmaster:50070

Caused by: java.net.BindException: Address already in use

The conclusion is that port 50070 was never released by the previous run and is still occupied.

About TCP connections in the time_wait state shown by netstat:
1. This is a state a connection passes through just before it is fully closed.
2. It usually takes about 4 minutes (on Windows Server) before such a connection closes completely.
3. Connections in this state still hold handles, ports, and other resources, and the server spends further resources maintaining them.
4. The remedy is to let the server recycle and reuse TIME_WAIT resources quickly. On Windows, edit the registry key [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\Tcpip\Parameters] and add the DWORD values TcpTimedWaitDelay=30 (30 is the value Microsoft recommends; the default is 2 minutes) and MaxUserPort=65534 (valid range 5000-65534).
5. Additional TCP/IP tuning parameters are described here: http://technet.microsoft.com/zh-tw/library/cc776295%28v=ws.10%29.aspx
6. On Linux:
vi /etc/sysctl.conf
and add the following:
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_syncookies=1

net.ipv4.tcp_fin_timeout=30

net.ipv4.tcp_keepalive_time=1800

net.ipv4.tcp_max_syn_backlog=8192


Make the kernel parameters take effect:
# sysctl -p
readme:
net.ipv4.tcp_syncookies=1 enables SYN cookies, which protect the server against SYN flood attacks.
net.ipv4.tcp_tw_reuse=1 and net.ipv4.tcp_tw_recycle=1 enable reuse and fast recycling of TIME-WAIT sockets, which is very effective on web servers with large numbers of connections.
net.ipv4.tcp_fin_timeout=30 shortens the time a connection spends in the FIN-WAIT-2 state, so the system can handle more connections.
net.ipv4.tcp_keepalive_time=1800 shortens the TCP keepalive probe interval, so dead connections are detected and released sooner.
net.ipv4.tcp_max_syn_backlog=8192 increases the SYN queue length, so the system can handle more concurrent half-open connections.
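After sysctl -p, the runtime values can be read back to confirm they took effect (an optional check, not part of the original fix; procfs is used here so no extra tools are needed):

```shell
# Read back the tuned values through procfs to confirm sysctl -p took effect.
cat /proc/sys/net/ipv4/tcp_fin_timeout
cat /proc/sys/net/ipv4/tcp_keepalive_time
cat /proc/sys/net/ipv4/tcp_max_syn_backlog
```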

5.13 Startup fails with the error resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh  -H -E /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh

The log contains the following:

2016-03-31 13:55:28,090 INFO  security.ShellBasedIdMapping (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static UID/GID mapping because '/etc/nfs.map' does not exist.

2016-03-31 13:55:28,096 INFO  nfs3.WriteManager (WriteManager.java:<init>(92)) - Stream timeout is 600000ms.

2016-03-31 13:55:28,096 INFO  nfs3.WriteManager (WriteManager.java:<init>(100)) - Maximum open streams is 256

2016-03-31 13:55:28,096 INFO  nfs3.OpenFileCtxCache (OpenFileCtxCache.java:<init>(54)) - Maximum open streams is 256

2016-03-31 13:55:28,259 INFO  nfs3.RpcProgramNfs3 (RpcProgramNfs3.java:<init>(205)) - Configured HDFS superuser is

2016-03-31 13:55:28,261 INFO  nfs3.RpcProgramNfs3 (RpcProgramNfs3.java:clearDirectory(231)) - Delete current dump directory /tmp/.hdfs-nfs

2016-03-31 13:55:28,269 WARN  fs.FileUtil (FileUtil.java:deleteImpl(187)) - Failed to delete file or dir [/tmp/.hdfs-nfs]: it still exists.

This shows that the hdfs user does not have permission on /tmp.

Grant it to the hdfs user:

chown  hdfs:hadoop /tmp

Start it again and the problem is resolved.
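Ownership and mode can be verified after the change; /tmp normally carries the sticky bit (mode 1777), which is worth confirming at the same time (an optional check):

```shell
# Show the owner, group, and permission bits of /tmp.
stat -c '%U:%G %a' /tmp
```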

5.14 Installing the Ranger component fails: the rangeradmin user cannot connect to the MySQL database and grants cannot be applied

First delete all rangeradmin users from the database; note the use of the drop user command:

drop user 'rangeradmin'@'%';

drop user 'rangeradmin'@'localhost';

drop user 'rangeradmin'@'gmaster';

drop user 'rangeradmin'@'gslave1';

drop user 'rangeradmin'@'gslave2';

FLUSH PRIVILEGES;

Then recreate the users (note: gmaster is the hostname of the server where Ranger is installed):

CREATE USER 'rangeradmin'@'%' IDENTIFIED BY 'rangeradmin';

GRANT ALL PRIVILEGES ON *.* TO 'rangeradmin'@'%'  with grant option;

CREATE USER 'rangeradmin'@'localhost' IDENTIFIED BY 'rangeradmin';

GRANT ALL PRIVILEGES ON *.* TO 'rangeradmin'@'localhost'  with grant option;

CREATE USER 'rangeradmin'@'gmaster' IDENTIFIED BY 'rangeradmin';

GRANT ALL PRIVILEGES ON *.* TO 'rangeradmin'@'gmaster'  with grant option;

FLUSH PRIVILEGES;

Then check the privileges:

SELECT DISTINCT CONCAT('User: ''',user,'''@''',host,''';') AS query FROM mysql.user

select * from mysql.user where user='rangeradmin' \G;

Problem solved.

5.15 Ambari fails to start with the error: AmbariServer:820 - Failed to run the Ambari Server

This problem bothered me for a long time; I finally tracked it down by reading the source code:

The /var/log/ambari-server/ambari-server.log file contains the error:

13 Apr 2016 14:16:01,723  INFO [main] StackDirectory:458 - Stack '/var/lib/ambari-server/resources/stacks/HDP/2.1.GlusterFS' doesn't contain an upgrade directory

13 Apr 2016 14:16:01,723  INFO [main] StackDirectory:468 - Stack '/var/lib/ambari-server/resources/stacks/HDP/2.1.GlusterFS' doesn't contain config upgrade pack file

13 Apr 2016 14:16:01,744  INFO [main] StackDirectory:484 - Role command order info was loaded from file: /var/lib/ambari-server/resources/stacks/HDP/2.1.GlusterFS/role_command_order.json

13 Apr 2016 14:16:01,840  INFO [main] StackDirectory:484 - Role command order info was loaded from file: /var/lib/ambari-server/resources/stacks/HDP/2.4/role_command_order.json

13 Apr 2016 14:16:01,927 ERROR [main] AmbariServer:820 - Failed to run the Ambari Server

com.google.inject.ProvisionException: Guice provision errors:

1) Error injecting method, java.lang.NullPointerException

  at org.apache.ambari.server.api.services.AmbariMetaInfo.init(AmbariMetaInfo.java:243)

  at org.apache.ambari.server.api.services.AmbariMetaInfo.class(AmbariMetaInfo.java:125)

  while locating org.apache.ambari.server.api.services.AmbariMetaInfo

    for field at org.apache.ambari.server.controller.AmbariServer.ambariMetaInfo(AmbariServer.java:145)

  at org.apache.ambari.server.controller.AmbariServer.class(AmbariServer.java:145)

  while locating org.apache.ambari.server.controller.AmbariServer

1 error

         at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:987)

         at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1013)

         at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:813)

Caused by: java.lang.NullPointerException

         at org.apache.ambari.server.stack.StackModule.processRepositories(StackModule.java:665)

         at org.apache.ambari.server.stack.StackModule.resolve(StackModule.java:158)

         at org.apache.ambari.server.stack.StackManager.fullyResolveStacks(StackManager.java:201)

         at org.apache.ambari.server.stack.StackManager.<init>(StackManager.java:119)

         at org.apache.ambari.server.stack.StackManager$$FastClassByGuice$$33e4ffe0.newInstance()

         at com.google.inject.internal.cglib.reflect.$FastConstructor.newInstance(FastConstructor.java:40)

         at com.google.inject.internal.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:60)

         at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:85)

         at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)

         at com.google.inject.internal.InjectorImpl$4$1.call(InjectorImpl.java:978)

         at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)

         at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:974)

         at com.google.inject.assistedinject.FactoryProvider2.invoke(FactoryProvider2.java:632)

         at com.sun.proxy.$Proxy26.create(Unknown Source)

         at org.apache.ambari.server.api.services.AmbariMetaInfo.init(AmbariMetaInfo.java:247)

         at org.apache.ambari.server.api.services.AmbariMetaInfo$$FastClassByGuice$$202844bc.invoke()

         at com.google.inject.internal.cglib.reflect.$FastMethod.invoke(FastMethod.java:53)

         at com.google.inject.internal.SingleMethodInjector$1.invoke(SingleMethodInjector.java:56)

         at com.google.inject.internal.SingleMethodInjector.inject(SingleMethodInjector.java:90)

         at com.google.inject.internal.MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:110)

         at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:94)
