
Trafodion server --- Server-Side Installation

If you have any questions, feel free to contact me. QQ: 327398329

Prerequisites:

1. We are installing Trafodion 2.0.1; if you use the CDH platform, it must be CDH 5.4. (CDH 5.4 installation was covered in the previous post.)

2. sudo privileges for the installing user. Grant them by editing the configuration file /etc/sudoers (edit it with visudo).
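A minimal sketch of the kind of entry this means, assuming the installing user is named trafodion and passwordless sudo is acceptable in your environment (adjust to your own security policy):

   # edit /etc/sudoers with visudo; "trafodion" is an assumed user name
   trafodion ALL=(ALL) NOPASSWD: ALL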

Setting up Trafodion.

1. Download the Trafodion server and installer from http://trafodion.incubator.apache.org/download.html.

2. Place both files in a directory on the Linux machine, e.g. /root/trafodion-installer.

3. Extract the installer:   tar xvfz installer.tar.gz

4. Change into the extracted directory, i.e. installer.

5. trafodion_install is the installation script; don't run it just yet. If your Linux system has Internet access, you can simply run ./trafodion_install.

If you have no Internet access, read through the traf_package_setup script before running ./trafodion_install; it installs a set of packages that it would normally download from the network and install via rpm or yum (see the command sketch below).
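As a quick reference, steps 1-5 as a command sketch (the tarball name installer.tar.gz follows the text above; substitute the exact filenames you downloaded):

   mkdir -p /root/trafodion-installer
   cd /root/trafodion-installer
   # place the downloaded server and installer tarballs here first
   tar xvfz installer.tar.gz
   cd installer
   # only run this now if the machine has Internet access:
   # ./trafodion_install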

First, on each system, check every package with:   rpm -qa | grep package_name | wc -l   where package_name is each of the packages listed below; check every one of them. Any result other than 0 means the package is already installed. (rpm -qa lists all installed packages, grep filters the list by package name, and wc -l counts the matching lines; 0 means the package is not installed.)

Here is the list of packages (if the machine cannot reach the Internet, download the RPMs on another machine first, then install each one with:   rpm -ivh <package>):

 1. epel   2. pdsh   3. apr   4. apr-util   5. sqlite   6. expect   7. perl-DBD-SQLite*   8. protobuf

 9. xerces-c   10. perl-Params-Validate   11. perl-Time-HiRes   12. gzip   13. lzo   14. lzop   15. unzip

All of the packages above must be installed on every node of the cluster; a check loop is sketched after the note below.

Note: if any of these packages is missing, the Trafodion installation will later fail in all sorts of ways. Make sure every one of them is installed.
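A minimal check-loop sketch based on the list above; run it on every node, and install anything reported as MISSING before continuing:

   #!/bin/bash
   # Report which required packages are missing on this node.
   for pkg in epel pdsh apr apr-util sqlite expect perl-DBD-SQLite \
              protobuf xerces-c perl-Params-Validate perl-Time-HiRes \
              gzip lzo lzop unzip; do
     count=$(rpm -qa | grep -c "$pkg")
     if [ "$count" -eq 0 ]; then
       echo "MISSING: $pkg"
     else
       echo "OK ($count matches): $pkg"
     fi
   done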

6. The traf_getHadoopNodes script retrieves the list of Hadoop nodes. If you are installing on CDH, the HADOOP_PATH and HADOOP_BIN_PATH values below must be changed to the actual Hadoop installation location.

(I mention this because when I first opened the script, the value of HADOOP_PATH was wrong, so it is worth double-checking that it is correct.)

 if [ -d /opt/cloudera/parcels/CDH ]; then
      export HADOOP_PATH="/opt/cloudera/parcels/CDH/lib/hadoop"
      export HADOOP_BIN_PATH="/opt/cloudera/parcels/CDH/bin"
 fi

7. Once all of the above is done, proceed with the installation: run ./trafodion_install.

Below is the output I got when running the command, for reference.

(If the installation succeeds, you can switch to the trafodion user and run sqlci; it opens an interactive command shell, much like the mysql command line.)
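A minimal smoke-test sketch of such a session (get schemas is just one quick sanity statement; any simple SQL will do):

   su - trafodion
   sqlci
   >> get schemas;
   >> exit;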

[root@hadoop.master installer]# ./trafodion_install 


******************************
 TRAFODION INSTALLATION START
******************************


***INFO: testing sudo access
***INFO: Log file located at /var/log/trafodion/trafodion_install_2016-09-23-20-01-26.log
***INFO: Config directory: /etc/trafodion
***INFO: Working directory: /usr/lib/trafodion


************************************
 Trafodion Configuration File Setup
************************************


***INFO: Please press [Enter] to select defaults.


Is this a cloud environment (Y/N), default is [N]: 
Enter trafodion password, default is [traf123]: 
Enter list of data nodes (blank separated), default [ hadoop.master hadoop.slave1 hadoop.slave2 hadoop.slave3]: 
Do you have a set of management nodes (Y/N), default is N: 
Specify location of Java 1.7.0_65 or higher (JDK), default is [/usr/java/jdk1.7.0_79]: 
Enter full path (including .tar or .tar.gz) of trafodion tar file [/root/trafodion-instarller/apache-trafodion_server-2.0.1-incubating.tar.gz]: 
Enter Backup/Restore username (can be Trafodion), default is [trafodion]: 
Specify the Hadoop distribution installed (1: Cloudera, 2: Hortonworks, 3: Other): 1
Enter Hadoop admin username, default is [admin]: 
Enter Hadoop admin password, default is [admin]: 
Enter full Hadoop external network URL:port (include 'http://' or 'https://), default is [http://192.168.226.17:7180]: 
Enter HDFS username or username running HDFS, default is [hdfs]: 
Enter HBase username or username running HBase, default is [hbase]: 
Enter HBase group, default is [hbase]: 
Enter Zookeeper username or username running Zookeeper, default is [zookeeper]: 
Enter directory to install trafodion to, default is [/home/trafodion/apache-trafodion_server-2.0.1-incubating]: 
Start Trafodion after install (Y/N), default is Y: 
Total number of client connections per cluster, default [32]: 
Enter the node of primary DcsMaster, default [hadoop.master]: 
Enable High Availability (Y/N), default is N: 
Enable simple LDAP security (Y/N), default is N: 
***INFO: Trafodion configuration setup complete
***INFO: Trafodion Configuration File Check


***INFO: Testing sudo access on node hadoop.master
***INFO: Testing sudo access on node hadoop.slave1
***INFO: Testing sudo access on node hadoop.slave2
***INFO: Testing sudo access on node hadoop.slave3
***INFO: Testing ssh on hadoop.master
***INFO: Testing ssh on hadoop.slave1
***INFO: Testing ssh on hadoop.slave2
***INFO: Testing ssh on hadoop.slave3
#!/bin/bash
#
# @@@ START COPYRIGHT @@@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
#
# @@@ END COPYRIGHT @@@
#


# Install feature file ($MY_SQROOT/conf/install_features)
#
# This file allows specific Trafodion core builds to signal to
# the installer the presence of a new feature that requires
# configuration work during install time.
#
# This file allows a single installer to install many different
# versions of Trafodion core as opposed to having many versions
# of the installer.  This allows the installer to get additional
# features in ahead of time before the Trafodion core code 
# is available.
#
# The installer will source this file and perform additional
# configuration work based upon the mutually agreed settings
# of the various environment variables in this file.  
#
# It must be coordinated between the Trafodion core feature developer
# and installer developers as to the specifics (i.e. name & value)
# of the environment variable used.
#
# ===========================================================
# Example:
# A new feature requires installer to modify HBase settings in a 
# different way that are not compatible with previous versions of
# Trafodion core. The following is added to this file:
#
#         # support for setting blah-blah in HBase
#         export NEW_HBASE_FEATURE="1"
#
# Logic is added to the installer to test for this env var and if
# there then do the new HBase settings and if not, set the settings
# to whatever they were previously.
# ===========================================================
#


# Trafodion core only works with CDH 5.4 [and HDP 2.3 not yet]
# This env var will signal that to the installer which will
# verify the hadoop distro versions are correct as well as 
# perform some additional support for this.
export CDH_5_3_HDP_2_2_SUPPORT="N"
export HDP_2_3_SUPPORT="Y"
export CDH_5_4_SUPPORT="Y"
export APACHE_1_0_X_SUPPORT="Y"
***INFO: Getting list of all cloudera nodes
***INFO: HADOOP_PATH=/opt/cloudera/parcels/CDH/lib/hadoop
***INFO: HADOOP_BIN_PATH=/opt/cloudera/parcels/CDH/bin
***INFO: cloudera list of nodes:  hadoop.master hadoop.slave1 hadoop.slave2 hadoop.slave3
***INFO: cloudera list of HDFS nodes:  hadoop.master hadoop.slave1 hadoop.slave2 hadoop.slave3
***INFO: cloudera list of HBASE nodes:  hadoop.master hadoop.slave1 hadoop.slave2 hadoop.slave3
***INFO: Testing ssh on hadoop.master
***INFO: Testing ssh on hadoop.slave1
***INFO: Testing ssh on hadoop.slave2
***INFO: Testing ssh on hadoop.slave3
***INFO: Testing sudo access on hadoop.master
***INFO: Testing sudo access on hadoop.slave1
***INFO: Testing sudo access on hadoop.slave2
***INFO: Testing sudo access on hadoop.slave3
***INFO: Checking cloudera Version
***INFO: nameOfVersion=cdh5.4.3


******************************
 TRAFODION SETUP
******************************


***INFO: Installing required RPM packages
***INFO: Starting Trafodion Package Setup (2016-09-23-20-02-52)
***INFO: Installing required packages
***INFO: Log file located in /var/log/trafodion
***INFO: ... pdsh on node hadoop.master
***INFO: ... pdsh on node hadoop.slave1
***INFO: ... pdsh on node hadoop.slave2
***INFO: ... pdsh on node hadoop.slave3
***INFO: Checking if apr is installed ...
***INFO: Checking if apr-util is installed ...
***INFO: Checking if sqlite is installed ...
***INFO: Checking if expect is installed ...
***INFO: Checking if perl-DBD-SQLite* is installed ...
***INFO: Checking if protobuf is installed ...
***INFO: Checking if xerces-c is installed ...
***INFO: Checking if perl-Params-Validate is installed ...
***INFO: Checking if perl-Time-HiRes is installed ...
***INFO: Checking if gzip is installed ...
***INFO: Checking if lzo is installed ...
***INFO: Checking if lzop is installed ...
***INFO: Checking if unzip is installed ...
***INFO: creating sqconfig file
***INFO: Reserving DCS ports
error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key
error: "net.bridge.bridge-nf-call-arptables" is an unknown key
error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key
error: "net.bridge.bridge-nf-call-arptables" is an unknown key
error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key
error: "net.bridge.bridge-nf-call-arptables" is an unknown key
error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key
error: "net.bridge.bridge-nf-call-arptables" is an unknown key


***INFO: Creating trafodion sudo access file




******************************
 TRAFODION MODS
******************************


***INFO: Cloudera installed will run traf_cloudera_mods
***INFO: copying hbase-trx-cdh5_4-*.jar to all nodes
***INFO: hbase-trx-cdh5_4-*.jar copied correctly! Huzzah.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
152  1187    0  1187    0   487  14170   5813 --:--:-- --:--:-- --:--:--  8433
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  3521    0  1887  102  1634  42154  36502 --:--:-- --:--:-- --:--:-- 42886
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
199   227    0   227    0   171   3058   2304 --:--:-- --:--:-- --:--:--   767
***INFO: restarting Hadoop to pickup Trafodion transaction jar
***INFO: ...polling every 30 seconds until restart is completed.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
101   101    0   101    0     0   1259      0 --:--:-- --:--:-- --:--:--  1278
{ "id" : 190, "name" : "Restart", "startTime" : "2016-09-23T12:04:48.530Z", "active" : true }
***DEBUG: Cloudera command_id=190
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
130   260    0   260    0     0   8670      0 --:--:-- --:--:-- --:--:--  8965
{
  "id" : 190,
  "name" : "Restart",
  "startTime" : "2016-09-23T12:04:48.530Z",
  "active" : true,
  "children" : {
    "items" : [ {
      "id" : 191,
      "name" : "Stop",
      "startTime" : "2016-09-23T12:04:48.602Z",
      "active" : true
    } ]
  }
}***INFO: ...polling every 30 seconds until restart is completed.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
102   514    0   514    0     0  24445      0 --:--:-- --:--:-- --:--:-- 25700
{
  "id" : 190,
  "name" : "Restart",
  "startTime" : "2016-09-23T12:04:48.530Z",
  "active" : true,
  "children" : {
    "items" : [ {
      "id" : 191,
      "name" : "Stop",
      "startTime" : "2016-09-23T12:04:48.602Z",
      "endTime" : "2016-09-23T12:05:25.375Z",
      "active" : false,
      "success" : true,
      "resultMessage" : "All services successfully stopped."
    }, {
      "id" : 221,
      "name" : "Start",
      "startTime" : "2016-09-23T12:05:25.394Z",
      "active" : true
    } ]
  }
}***INFO: ...polling every 30 seconds until restart is completed.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
102   514    0   514    0     0  24200      0 --:--:-- --:--:-- --:--:-- 25700
{
  "id" : 190,
  "name" : "Restart",
  "startTime" : "2016-09-23T12:04:48.530Z",
  "active" : true,
  "children" : {
    "items" : [ {
      "id" : 191,
      "name" : "Stop",
      "startTime" : "2016-09-23T12:04:48.602Z",
      "endTime" : "2016-09-23T12:05:25.375Z",
      "active" : false,
      "success" : true,
      "resultMessage" : "All services successfully stopped."
    }, {
      "id" : 221,
      "name" : "Start",
      "startTime" : "2016-09-23T12:05:25.394Z",
      "active" : true
    } ]
  }
}***INFO: ...polling every 30 seconds until restart is completed.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
102   514    0   514    0     0  35193      0 --:--:-- --:--:-- --:--:-- 36714
{
  "id" : 190,
  "name" : "Restart",
  "startTime" : "2016-09-23T12:04:48.530Z",
  "active" : true,
  "children" : {
    "items" : [ {
      "id" : 191,
      "name" : "Stop",
      "startTime" : "2016-09-23T12:04:48.602Z",
      "endTime" : "2016-09-23T12:05:25.375Z",
      "active" : false,
      "success" : true,
      "resultMessage" : "All services successfully stopped."
    }, {
      "id" : 221,
      "name" : "Start",
      "startTime" : "2016-09-23T12:05:25.394Z",
      "active" : true
    } ]
  }
}***INFO: ...polling every 30 seconds until restart is completed.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
102   514    0   514    0     0  24778      0 --:--:-- --:--:-- --:--:-- 25700
{
  "id" : 190,
  "name" : "Restart",
  "startTime" : "2016-09-23T12:04:48.530Z",
  "active" : true,
  "children" : {
    "items" : [ {
      "id" : 191,
      "name" : "Stop",
      "startTime" : "2016-09-23T12:04:48.602Z",
      "endTime" : "2016-09-23T12:05:25.375Z",
      "active" : false,
      "success" : true,
      "resultMessage" : "All services successfully stopped."
    }, {
      "id" : 221,
      "name" : "Start",
      "startTime" : "2016-09-23T12:05:25.394Z",
      "active" : true
    } ]
  }
}***INFO: ...polling every 30 seconds until restart is completed.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
110   770    0   770    0     0  17686      0 --:--:-- --:--:-- --:--:-- 17906
{
  "id" : 190,
  "name" : "Restart",
  "startTime" : "2016-09-23T12:04:48.530Z",
  "endTime" : "2016-09-23T12:07:19.484Z",
  "active" : false,
  "success" : true,
  "resultMessage" : "All services successfully restarted.",
  "children" : {
    "items" : [ {
      "id" : 191,
      "name" : "Stop",
      "startTime" : "2016-09-23T12:04:48.602Z",
      "endTime" : "2016-09-23T12:05:25.375Z",
      "active" : false,
      "success" : true,
      "resultMessage" : "All services successfully stopped."
    }, {
      "id" : 221,
      "name" : "Start",
      "startTime" : "2016-09-23T12:05:25.394Z",
      "endTime" : "2016-09-23T12:07:19.484Z",
      "active" : false,
      "success" : true,
      "resultMessage" : "All services successfully started."
    } ]
  }
}***INFO: ...polling every 30 seconds until restart is completed.
***INFO: Hadoop restart completed successfully
***INFO: waiting for HDFS to exit safemode
Safe mode is OFF
***INFO: Setting HDFS ACLs for snapshot scan support
***INFO: Trafodion Mods ran successfully.


******************************
 TRAFODION CONFIGURATION
******************************


/usr/lib/trafodion/installer/..
/home/trafodion/apache-trafodion_server-2.0.1-incubating
***INFO: untarring file  to /home/trafodion/apache-trafodion_server-2.0.1-incubating
***INFO: modifying .bashrc to set Trafodion environment variables
***INFO: copying .bashrc file to all nodes
***INFO: copying sqconfig file (/home/trafodion/sqconfig) to /home/trafodion/apache-trafodion_server-2.0.1-incubating/sql/scripts/sqconfig
***INFO: Creating /home/trafodion/apache-trafodion_server-2.0.1-incubating directory on all nodes
***INFO: Start of DCS install
***INFO: DCS Install Directory: /home/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1
***INFO: modifying /home/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/conf/dcs-env.sh
***INFO: modifying /home/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/conf/dcs-site.xml
***INFO: creating /home/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/conf/servers file
***INFO: End of DCS install.
***INFO: Start of REST Server install
***INFO: Rest Install Directory: /home/trafodion/apache-trafodion_server-2.0.1-incubating/rest-2.0.1
***INFO: modifying /home/trafodion/apache-trafodion_server-2.0.1-incubating/rest-2.0.1/conf/rest-site.xml
***INFO: End of REST Server install.
***INFO: starting sqgen
hadoop.master,hadoop.slave1,hadoop.slave2,hadoop.slave3


Creating directories on cluster nodes
/usr/bin/pdsh -R exec -w hadoop.master,hadoop.slave1,hadoop.slave2,hadoop.slave3 -x hadoop.master ssh -q -n %h mkdir -p /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc 
/usr/bin/pdsh -R exec -w hadoop.master,hadoop.slave1,hadoop.slave2,hadoop.slave3 -x hadoop.master ssh -q -n %h mkdir -p /home/trafodion/apache-trafodion_server-2.0.1-incubating/logs 
/usr/bin/pdsh -R exec -w hadoop.master,hadoop.slave1,hadoop.slave2,hadoop.slave3 -x hadoop.master ssh -q -n %h mkdir -p /home/trafodion/apache-trafodion_server-2.0.1-incubating/tmp 
/usr/bin/pdsh -R exec -w hadoop.master,hadoop.slave1,hadoop.slave2,hadoop.slave3 -x hadoop.master ssh -q -n %h mkdir -p /home/trafodion/apache-trafodion_server-2.0.1-incubating/sql/scripts 


Generating SQ environment variable file: /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc/ms.env


Note: Using cluster.conf format type 2.


Generating SeaMonster environment variable file: /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc/seamonster.env




Generated SQ startup script file: ./gomon.cold
Generated SQ startup script file: ./gomon.warm
Generated SQ cluster config file: /home/trafodion/apache-trafodion_server-2.0.1-incubating/tmp/cluster.conf
Generated SQ Shell          file: sqshell
Generated RMS Startup       file: rmsstart
Generated RMS Stop          file: rmsstop
Generated RMS Check         file: rmscheck.sql
Generated SSMP Startup      file: ssmpstart
Generated SSMP Stop         file: ssmpstop
Generated SSCP Startup      file: sscpstart
Generated SSCP Stop         file: sscpstop




Copying the generated files to all the nodes in the cluster


Copying /home/trafodion/apache-trafodion_server-2.0.1-incubating/tmp/cluster.conf to /home/trafodion/apache-trafodion_server-2.0.1-incubating/tmp of all the nodes
/usr/bin/pdcp -R ssh -w hadoop.master,hadoop.slave1,hadoop.slave2,hadoop.slave3 -x hadoop.master /home/trafodion/apache-trafodion_server-2.0.1-incubating/tmp/cluster.conf /home/trafodion/apache-trafodion_server-2.0.1-incubating/tmp


Copying /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc/ms.env to /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc of all the nodes
/usr/bin/pdcp -R ssh -w hadoop.master,hadoop.slave1,hadoop.slave2,hadoop.slave3 -x hadoop.master /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc/ms.env   /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc 
Copying /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc/traf_coprocessor.properties to /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc of all the nodes
/usr/bin/pdcp -R ssh -w hadoop.master,hadoop.slave1,hadoop.slave2,hadoop.slave3 -x hadoop.master /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc/traf_coprocessor.properties   /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc 


Copying /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc/seamonster.env to /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc of all the nodes
/usr/bin/pdcp -R ssh -w hadoop.master,hadoop.slave1,hadoop.slave2,hadoop.slave3 -x hadoop.master /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc/seamonster.env   /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc 


Copying rest of the generated files to /home/trafodion/apache-trafodion_server-2.0.1-incubating/sql/scripts
/usr/bin/pdcp -R ssh -w hadoop.master,hadoop.slave1,hadoop.slave2,hadoop.slave3 -x hadoop.master sqconfig sqshell gomon.cold gomon.warm rmsstart rmsstop rmscheck.sql ssmpstart ssmpstop sscpstart sscpstop /home/trafodion/apache-trafodion_server-2.0.1-incubating/sql/scripts
/usr/bin/pdcp -R ssh -w hadoop.master,hadoop.slave1,hadoop.slave2,hadoop.slave3 -x hadoop.master sqconfig sqconfig.db /home/trafodion/apache-trafodion_server-2.0.1-incubating/sql/scripts 




******* Generate public/private certificates *******


 Cluster Name : hadoop
Generating Self Signed Certificate....
***********************************************************
 Certificate file :server.crt
 Private key file :server.key
 Certificate/Private key created in directory :/home/trafodion/sqcert
***********************************************************


***********************************************************
 Updating Authentication Configuration
***********************************************************
Creating folders for storing certificates


***INFO: copying /home/trafodion/sqcert directory to all nodes
***INFO: copying install to all nodes
***INFO: starting Trafodion instance
Checking orphan processes.
Removing old mpijob* files from /home/trafodion/apache-trafodion_server-2.0.1-incubating/tmp


Removing old monitor.port* files from /home/trafodion/apache-trafodion_server-2.0.1-incubating/tmp


Executing sqipcrm (output to sqipcrm.out)
Starting the SQ Environment (Executing /home/trafodion/apache-trafodion_server-2.0.1-incubating/sql/scripts/gomon.cold)
Background SQ Startup job (pid: 48930)


# of SQ processes: 23 .
SQ Startup script (/home/trafodion/apache-trafodion_server-2.0.1-incubating/sql/scripts/gomon.cold) ran successfully. Performing further checks...
Checking if processes are up.
Checking attempt: 1; user specified max: 2. Execution time in seconds: 0.


The SQ environment is up!




Process         Configured      Actual          Down
-------         ----------      ------          ----
DTM             4               4
RMS             8               8
DcsMaster       1               0               1
DcsServer       4               0               4
mxosrvr         32              0               32


Fri Sep 23 20:10:07 CST 2016
Checking if processes are up.
Checking attempt: 1; user specified max: 1. Execution time in seconds: 0.


The SQ environment is up!




Process         Configured      Actual          Down
-------         ----------      ------          ----
DTM             4               4
RMS             8               8
DcsMaster       1               0               1
DcsServer       4               0               4
mxosrvr         32              0               32


Starting the DCS environment now
starting master, logging to /home/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/bin/../logs/dcs-trafodion-1-master-hadoop.master.out
hadoop.master: starting server, logging to /home/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/bin/../logs/dcs-trafodion-1-server-hadoop.master.out
hadoop.slave3: starting server, logging to /home/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/bin/../logs/dcs-trafodion-4-server-hadoop.slave3.out
hadoop.slave1: starting server, logging to /home/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/bin/../logs/dcs-trafodion-2-server-hadoop.slave1.out
hadoop.slave2: starting server, logging to /home/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/bin/../logs/dcs-trafodion-3-server-hadoop.slave2.out
Checking if processes are up.
Checking attempt: 1; user specified max: 2. Execution time in seconds: 0.


The SQ environment is up!




Process         Configured      Actual          Down
-------         ----------      ------          ----
DTM             4               4
RMS             8               8
DcsMaster       1               1
DcsServer       4               4
mxosrvr         32              0               32


Checking if processes are up.
Checking attempt: 1; user specified max: 1. Execution time in seconds: 0.


The SQ environment is up!




Process         Configured      Actual          Down
-------         ----------      ------          ----
DTM             4               4
RMS             8               8
DcsMaster       1               1
DcsServer       4               4
mxosrvr         32              0               32


Starting the REST environment now
starting rest, logging to /home/trafodion/apache-trafodion_server-2.0.1-incubating/rest-2.0.1/bin/../logs/rest-trafodion-1-rest-hadoop.master.out






Zookeeper listen port: 2181
DcsMaster listen port: 23400


Configured Primary DcsMaster: "hadoop.master"
Active DcsMaster            : "hadoop.master"


Process         Configured      Actual          Down
-------         ----------      ------          ----
DcsMaster       1               1
DcsServer       4               4
mxosrvr         32              27              5




You can monitor the SQ shell log file : /home/trafodion/apache-trafodion_server-2.0.1-incubating/logs/sqmon.log




Startup time  0 hour(s) 2 minute(s) 29 second(s)
Apache Trafodion Conversational Interface 2.0.1
Copyright (c) 2015-2016 Apache Software Foundation
>>Metadata Upgrade: started


Version Check: started


Metadata Upgrade: done




*** ERROR[1393] Trafodion is not initialized on this system. Do 'initialize trafodion' to initialize it.


--- SQL operation failed with errors.
>>


End of MXCI Session


Apache Trafodion Conversational Interface 2.0.1
Copyright (c) 2015-2016 Apache Software Foundation
>>initialize trafodion


;


--- SQL operation complete.
>>


End of MXCI Session


***INFO: Installation setup completed successfully.


******************************
 TRAFODION INSTALLATION END
******************************


[root@hadoop.master installer]# su trafodion
[trafodion@hadoop.master ~]$ sqcheck
Checking if processes are up.
Checking attempt: 1; user specified max: 2. Execution time in seconds: 0.


The SQ environment is up!




Process         Configured      Actual          Down
-------         ----------      ------          ----
DTM             4               4
RMS             8               8
DcsMaster       1               1
DcsServer       4               4
mxosrvr         32              32

Summary: after a machine reboot, the services are started as follows:

1. Start CDH: make sure both the cloudera-scm-server and cloudera-scm-agent are running. Start MySQL.

2. Start the Trafodion server side.

The commands are as follows (a combined script is sketched after the list):

1) /opt/cm-5.4.3/etc/init.d/cloudera-scm-server start

 /opt/cm-5.4.3/etc/init.d/cloudera-scm-agent start

service mysql start

2) su trafodion

cds

sqstart
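A minimal sketch that wraps both steps into one root-run script, assuming the CM 5.4.3 paths above and that the trafodion user's login environment sets $MY_SQROOT (the installer's .bashrc modification does this):

   #!/bin/bash
   # 1. Bring up the Cloudera Manager server/agent and MySQL (run as root).
   /opt/cm-5.4.3/etc/init.d/cloudera-scm-server start
   /opt/cm-5.4.3/etc/init.d/cloudera-scm-agent start
   service mysql start

   # 2. Start Trafodion as the trafodion user. "cds" is an interactive
   #    alias for cd'ing into $MY_SQROOT/sql/scripts, so use the explicit
   #    path in a non-interactive shell.
   su - trafodion -c 'cd $MY_SQROOT/sql/scripts && sqstart'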
