
Installing the Nvidia Tesla T4 Driver, CUDA, and cuDNN on CentOS 7

The GPU is an Nvidia Tesla T4.

Prerequisites

Install the gcc build environment and the kernel-related packages.

# Add the Aliyun yum repositories

curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
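After switching repositories, it is usually worth rebuilding the yum cache so the new mirrors take effect immediately; these two standard yum commands are optional and not part of the original steps:

yum clean all
yum makecache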


# Install the base environment

yum -y install apr autoconf automake bash bash-completion bind-utils bzip2 bzip2-devel chrony cmake coreutils curl curl-devel dbus dbus-libs dhcp-common dos2unix e2fsprogs e2fsprogs-devel file file-libs freetype freetype-devel gcc gcc-c++ gdb glib2 glib2-devel glibc glibc-devel gmp gmp-devel gnupg iotop kernel kernel-devel kernel-doc kernel-firmware kernel-headers krb5-devel libaio-devel libcurl libcurl-devel libevent libevent-devel libffi-devel libidn libidn-devel libjpeg libjpeg-devel libmcrypt libmcrypt-devel libpng libpng-devel libxml2 libxml2-devel libxslt libxslt-devel libzip libzip-devel lrzsz lsof make microcode_ctl mysql mysql-devel ncurses ncurses-devel net-snmp net-snmp-libs net-snmp-utils net-tools nfs-utils nss nss-sysinit nss-tools openldap-clients openldap-devel openssh openssh-clients openssh-server openssl openssl-devel patch policycoreutils polkit procps readline-devel rpm rpm-build rpm-libs rsync sos sshpass strace sysstat tar tmux tree unzip uuid uuid-devel vim wget yum-utils zip zlib* jq



# Time synchronization

systemctl start chronyd && systemctl enable chronyd 
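To confirm that time synchronization is actually working, you can optionally query chrony with a standard command (not part of the original steps):

# Shows the current synchronization source and offset
chronyc tracking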



# Reboot

reboot


# Full system update

yum update -y



# Reboot again

reboot


Check

Note: before installing the kernel packages, check whether the current kernel version matches the version of kernel-devel/kernel-doc/kernel-headers you are about to install. The two versions must be identical, otherwise the compilation steps later on will fail.

[root@localhost opt]# uname -a
Linux localhost 3.10.0-1160.31.1.el7.x86_64 #1 SMP Thu Jun 10 13:32:12 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost opt]# yum list | grep kernel-
kernel-devel.x86_64                         3.10.0-1160.31.1.el7       @updates
kernel-doc.noarch                           3.10.0-1160.31.1.el7       @updates
kernel-headers.x86_64                       3.10.0-1160.31.1.el7       @updates
kernel-tools.x86_64                         3.10.0-1160.31.1.el7       @updates
kernel-tools-libs.x86_64                    3.10.0-1160.31.1.el7       @updates
kernel-abi-whitelists.noarch                3.10.0-1160.31.1.el7       updates
kernel-debug.x86_64                         3.10.0-1160.31.1.el7       updates
kernel-debug-devel.x86_64                   3.10.0-1160.31.1.el7       updates
kernel-tools-libs-devel.x86_64              3.10.0-1160.31.1.el7       updates

There are two ways to resolve a version mismatch:

Method 1: upgrade the kernel (look up the exact steps online); it does not need to be set as the default boot kernel.

Method 2: install the kernel-devel/kernel-doc/kernel-headers packages matching the running kernel, for example:

yum install "kernel-devel-uname-r == $(uname -r)"
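As a quick sanity check, the sketch below prints the running kernel next to the newest installed kernel-devel package; the two strings must be identical. The rpm query format is standard, but tail -1 assumes the most recently installed kernel-devel is the relevant one:

# The two lines printed must match exactly
uname -r
rpm -q --qf '%{VERSION}-%{RELEASE}.%{ARCH}\n' kernel-devel | tail -1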

Installing the GPU driver

Download

https://www.nvidia.cn/Download/index.aspx?lang=cn

Look up the latest driver version that supports your GPU and download it; the file has a .run suffix. Then upload it to any location on the server.

Preparation

Disable the nouveau driver that the system installs by default.

# Modify the configuration to blacklist nouveau
echo -e "blacklist nouveau\noptions nouveau modeset=0" > /etc/modprobe.d/blacklist.conf

# Back up the original initramfs image
cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak

# Rebuild the initramfs image
sudo dracut --force

# Reboot
reboot

# Check whether nouveau is loaded; empty output means it was disabled successfully
lsmod | grep nouveau
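Optionally, also confirm that the rebuilt initramfs and its backup both exist (a simple extra check, not in the original steps):

ls -lh /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak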

Install the DKMS module

DKMS stands for Dynamic Kernel Module Support. It maintains driver modules that live outside the kernel tree and automatically rebuilds them when the kernel version changes.

yum -y install dkms
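Once the NVIDIA module has been registered with DKMS (see the --dkms note in the installation step below), dkms status lists the registered modules; shown here as an optional check:

dkms status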


Install

Run the following command to install, replacing the file name with your own.

sudo sh NVIDIA-Linux-x86_64-410.129-diagnostic.run -no-x-check -no-nouveau-check -no-opengl-files
# -no-x-check        skip the check for a running X server during installation
# -no-nouveau-check  skip the nouveau check during installation
# -no-opengl-files   install only the driver files, not the OpenGL files
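If DKMS was installed above, the NVIDIA installer also accepts the --dkms option, which registers the kernel module sources with DKMS so the module is rebuilt automatically after kernel upgrades. This variant is optional and not part of the original command:

sudo sh NVIDIA-Linux-x86_64-410.129-diagnostic.run --dkms -no-x-check -no-nouveau-check -no-opengl-files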

Follow the installer prompts, answering yes/OK throughout.

After the installation completes, run the following command; the output looks like this:

[root@localhost opt]# nvidia-smi
Wed Jul  7 11:11:33 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.129      Driver Version: 410.129      CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:41:00.0 Off |                    0 |
| N/A   94C    P0    36W /  70W |      0MiB / 15079MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+


Installing CUDA

Pre-installation checks

1. Confirm that an NVIDIA GPU is installed

[root@localhost opt]# lspci | grep -i nvidia
41:00.0 3D controller: NVIDIA Corporation TU104GL [Tesla T4] (rev a1)

2. Confirm that gcc is installed; install it if it is missing

[root@localhost opt]# gcc --version
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

# yum -y install gcc  gcc-c++

3. Disable nouveau

# No output means nouveau has already been disabled
# If it has not been disabled, follow the nouveau-disabling steps earlier in this document
[root@localhost opt]# lsmod | grep nouveau
[root@localhost opt]#

4. Set the default boot target

The CUDA driver cannot be installed while the nouveau driver is loaded or a graphical interface is active.

[root@localhost opt]# systemctl set-default multi-user.target
Removed symlink /etc/systemd/system/default.target.
Created symlink from /etc/systemd/system/default.target to /usr/lib/systemd/system/multi-user.target.
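You can verify the new default target with a standard systemctl query (optional):

# Should print multi-user.target
systemctl get-default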

Install

The installation environment here is offline, so the CUDA installer file needs to be downloaded first. Download the version matching your system from the official archive: https://developer.nvidia.com/cuda-toolkit-archive

Choose whichever CUDA version suits your needs; here the installer type chosen is runfile (local).

wget https://developer.download.nvidia.com/compute/cuda/10.1/Prod/local_installers/cuda_10.1.243_418.87.00_linux.run
sudo sh cuda_10.1.243_418.87.00_linux.run

Next, the installer screen appears; type accept.

On the second screen, simply select Install.

The installer prints a summary at the end; keep a copy of it, as it will be needed later:

===========
= Summary =
===========

Driver:   Installed
Toolkit:  Installed in /usr/local/cuda-10.1/
Samples:  Installed in /root/, but missing recommended libraries

Please make sure that
 -   PATH includes /usr/local/cuda-10.1/bin
 -   LD_LIBRARY_PATH includes /usr/local/cuda-10.1/lib64, or, add /usr/local/cuda-10.1/lib64 to /etc/ld.so.conf and run ldconfig as root

To uninstall the CUDA Toolkit, run cuda-uninstaller in /usr/local/cuda-10.1/bin
To uninstall the NVIDIA Driver, run nvidia-uninstall

Please see CUDA_Installation_Guide_Linux.pdf in /usr/local/cuda-10.1/doc/pdf for detailed information on setting up CUDA.
Logfile is /var/log/cuda-installer.log

Add CUDA to the environment variables

# Adjust these paths according to your own CUDA installer output
[root@localhost cuda-10.1]# tail -5 /etc/profile
PATH=$PATH:/usr/local/cuda-10.1/bin/
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-10.1/lib64/
export PATH
export LD_LIBRARY_PATH
[root@localhost cuda-10.1]# source /etc/profile
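Alternatively, as the installer summary above suggests, the library path can be registered system-wide with ldconfig instead of exporting LD_LIBRARY_PATH; the file name cuda-10-1.conf below is an arbitrary choice:

echo "/usr/local/cuda-10.1/lib64" > /etc/ld.so.conf.d/cuda-10-1.conf
ldconfig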

Verification

[root@localhost cuda-10.1]# nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243

[root@localhost cuda-10.1]# cd /root/NVIDIA_CUDA-10.1_Samples
[root@localhost NVIDIA_CUDA-10.1_Samples]# make
[root@localhost NVIDIA_CUDA-10.1_Samples]# cd 1_Utilities/deviceQuery
[root@localhost deviceQuery]# ./deviceQuery
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "Tesla T4"
  CUDA Driver Version / Runtime Version          10.1 / 10.1
  CUDA Capability Major/Minor version number:    7.5
  Total amount of global memory:                 15080 MBytes (15812263936 bytes)
  (40) Multiprocessors, ( 64) CUDA Cores/MP:     2560 CUDA Cores
  GPU Max Clock rate:                            1590 MHz (1.59 GHz)
  Memory Clock rate:                             5001 Mhz
  Memory Bus Width:                              256-bit
  L2 Cache Size:                                 4194304 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1024
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 3 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 65 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.1, CUDA Runtime Version = 10.1, NumDevs = 1
Result = PASS

Focus mainly on Result = PASS, which means the test passed.
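As an additional optional check (not in the original article), the samples also include bandwidthTest, which NVIDIA's installation guide recommends running alongside deviceQuery:

cd /root/NVIDIA_CUDA-10.1_Samples/1_Utilities/bandwidthTest
make
./bandwidthTest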

Installing cuDNN

Download

Download the matching cuDNN version from the official site (https://developer.nvidia.com/rdp/cudnn-archive); you need to register an account before you can download:

Note: choose the build that matches your CUDA version.

Install

Upload and extract

[root@localhost opt]# cd /opt/
[root@localhost opt]# tar xzvf cudnn-10.1-linux-x64-v7.6.5.32.tgz
cuda/include/cudnn.h
cuda/NVIDIA_SLA_cuDNN_Support.txt
cuda/lib64/libcudnn.so
cuda/lib64/libcudnn.so.7
cuda/lib64/libcudnn.so.7.6.5
cuda/lib64/libcudnn_static.a
[root@localhost opt]# cp cuda/include/cudnn.h /usr/local/cuda/include
[root@localhost opt]# cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
[root@localhost opt]# chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
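To confirm that the headers were copied correctly, you can read the version macros from the installed header; this works for cuDNN 7.x, where the version is defined directly in cudnn.h:

# Should print CUDNN_MAJOR 7, CUDNN_MINOR 6, CUDNN_PATCHLEVEL 5 for this release
grep -A 2 "#define CUDNN_MAJOR" /usr/local/cuda/include/cudnn.h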