
Working with Caffe framework using PowerAI 1.5.3

Overview

PowerAI 1.5.3 supports Caffe as one of its deep learning frameworks, and Caffe is the system default framework in PowerAI. The distribution actually contains two variants:

  • Caffe BVLC – the upstream Caffe 1.0.0 release developed by the Berkeley Vision and Learning Center (BVLC), since renamed BAIR (Berkeley Artificial Intelligence Research), and other community contributors.
  • Caffe IBM – built on top of Caffe BVLC with enhancements by IBM. By default, Caffe points to the Caffe-IBM variant. If you need to select the other Caffe variant, you can do so with the command: source /opt/DL/caffe/bin/caffe-activate

To verify which Caffe variant is activated, check the system PATH variable:
echo $PATH


/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/ibutils/bin:/opt/anaconda2/bin/:/root/bin:/opt/DL/protobuf/bin:/opt/DL/mldl-spectrum/bin:/opt/DL/ddl/bin:/opt/DL/caffe-ibm/bin

PowerAI doesn’t support activating multiple frameworks in the same login session, as that results in unpredictable behavior. If you want to activate another variant of Caffe, log out of the current session and activate it in a new one. The upstream Caffe documentation provides steps on how to train an ImageNet model, but you can also use the Caffe framework shipped in the PowerAI software bundle. The next sections discuss how to start training using the ImageNet example.
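If you script environment checks, the PATH inspection above can be wrapped in a small helper. This is only a sketch: `caffe_variant` is a hypothetical function name, and the `/opt/DL/caffe-bvlc/bin` path is an assumption based on PowerAI's `/opt/DL` layout.

```shell
# Report which Caffe variant is on a given PATH value (sketch).
# The /opt/DL/caffe-bvlc/bin path is assumed from PowerAI's /opt/DL layout.
caffe_variant() {
  case ":$1:" in
    *:/opt/DL/caffe-ibm/bin:*)  echo "caffe-ibm" ;;
    *:/opt/DL/caffe-bvlc/bin:*) echo "caffe-bvlc" ;;
    *)                          echo "none" ;;
  esac
}

caffe_variant "$PATH"
```

With the PATH value shown earlier (which ends in /opt/DL/caffe-ibm/bin), this reports caffe-ibm.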

Before starting the training, you need to download the dataset that you will use for training and evaluation.

Download Imagenet Dataset

  1. Sign up for access to the ImageNet dataset on the ImageNet website
  2. After getting access permissions, you can download two tar files:
    • ILSVRC2012_img_train.tar
    • ILSVRC2012_img_val.tar
  3. Extract the two tar files into train and val folders:
    mkdir -p train val
    tar -C train/ -xvf ILSVRC2012_img_train.tar
    tar -C val/ -xvf ILSVRC2012_img_val.tar
  4. Extract the per-class tar files inside the train folder into their respective class folders. The train folder will then have 1000 sub-folders of different image categories, each containing roughly 1300 JPEG images.
  5. Extracting the val tar file produces 50,000 images, which need to be organised into the same per-class folder structure. For this, download valprep.sh from https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh, copy it into the val folder, and execute it. The script creates the class sub-folders under the “val” directory and moves each JPEG image into the right one, so the “val” folder ends up with 1000 sub-folders of 50 JPEG images each.
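The per-class extraction in steps 3–4 can be sketched as a small POSIX-shell loop. `unpack_class_tars` is a hypothetical helper name, and the sketch assumes ILSVRC2012_img_train.tar has already been extracted so that train/ holds one tar per class:

```shell
# Unpack each per-class tar (e.g. n01440764.tar) into a folder of the
# same name, then delete the tar to save disk space (sketch).
unpack_class_tars() {
  dir="$1"                      # directory holding the per-class tars
  for f in "$dir"/*.tar; do
    [ -e "$f" ] || continue     # no tars present -> nothing to do
    d="${f%.tar}"               # n01440764.tar -> n01440764
    mkdir -p "$d"
    tar -C "$d" -xf "$f" && rm "$f"
  done
}

unpack_class_tars train
```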

Converting the Images to LMDB format

  1. PowerAI provides a script that copies the example scripts and models into a directory. The example below copies all the examples and models into a user-defined directory, test-dir.

    [testuser@host ~]$ caffe-install-samples test-dir
    Creating directory test-dir
    Copying data/ into test-dir...
    Copying examples/ into test-dir...
    Copying models/ into test-dir...
    Copying scripts/ into test-dir...
    Copying python/ into test-dir...
    Success
    [testuser@host ~]$ cd test-dir/
    [testuser@host test-dir]$ ls -al
    total 12
    drwxrwxr-x. 7 testuser testuser 77 Sep 26 01:59 .
    drwx------. 3 testuser testuser 128 Sep 26 01:59 ..
    drwxr-xr-x. 6 testuser testuser 62 Sep 26 01:59 data
    drwxr-xr-x. 18 testuser testuser 4096 Sep 26 01:59 examples
    drwxr-xr-x. 7 testuser testuser 144 Sep 26 01:59 models
    drwxr-xr-x. 3 testuser testuser 4096 Sep 26 01:59 python
    drwxr-xr-x. 3 testuser testuser 4096 Sep 26 01:59 scripts
    [testuser@host test-dir]$

  2. Navigate to test-dir/data/ilsvrc12 and run the script get_ilsvrc_aux.sh, which downloads the ImageNet image mean (binaryproto) and the training and validation dataset labels as text files.
    $ sh get_ilsvrc_aux.sh
    Downloading...
    --2018-09-28 07:43:16-- http://dl.caffe.berkeleyvision.org/caffe_ilsvrc12.tar.gz
    Resolving dl.caffe.berkeleyvision.org (dl.caffe.berkeleyvision.org)... 169.229.222.251
    Connecting to dl.caffe.berkeleyvision.org (dl.caffe.berkeleyvision.org)|169.229.222.251|:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 17858008 (17M) [application/octet-stream]
    Saving to: 'caffe_ilsvrc12.tar.gz'

    100%[======================================>] 1,78,58,008 11.3MB/s in 1.5s

    2018-09-28 07:43:18 (11.3 MB/s) - ‘caffe_ilsvrc12.tar.gz’ saved [17858008/17858008]

    Unzipping...
    Done.
    $ ls
    det_synset_words.txt imagenet_mean.binaryproto test.txt
    get_ilsvrc_aux.sh synsets.txt train.txt
    imagenet.bet.pickle synset_words.txt val.txt

  3. Navigate to test-dir/examples/imagenet and edit the create_imagenet.sh file as follows:
    1. Set EXAMPLE to the absolute path of the imagenet example directory
    2. Set DATA to the location of data/ilsvrc12
    3. Set TRAIN_DATA_ROOT to the absolute path of the ImageNet train folder
    4. Set VAL_DATA_ROOT to the absolute path of the validation val folder

    #!/usr/bin/env sh
    # Create the imagenet lmdb inputs
    # N.B. set the path to the imagenet train + val data dirs
    set -e

    EXAMPLE=/gpfs/gpfs_gl4_16mb/b8p226/b8p226zd/test-dir/examples/imagenet
    DATA=/gpfs/gpfs_gl4_16mb/b8p226/b8p226zd/test-dir/data/ilsvrc12

    # Check if CAFFE_BIN is unset
    if [ -z "$CAFFE_BIN" ]; then
      TOOLS=./build/tools
    else
      TOOLS=$CAFFE_BIN
    fi

    TRAIN_DATA_ROOT=/gpfs/gpfs_gl4_16mb/b8p226/b8p226zd/pytorch/train/
    VAL_DATA_ROOT=/gpfs/gpfs_gl4_16mb/b8p226/b8p226zd/pytorch/val/

    # Set RESIZE=true to resize the images to 256x256. Leave as false if images have
    # already been resized using another tool.
    RESIZE=false
    if $RESIZE; then
      RESIZE_HEIGHT=256

  4. Execute the create_imagenet.sh file; the LMDB generation output looks like this:
    $ sh ./create_imagenet.sh
    Creating train lmdb...
    I0928 07:45:58.968338 34452 convert_imageset.cpp:86] Shuffling data
    I0928 07:45:59.885766 34452 convert_imageset.cpp:89] A total of 1281167 images.
    I0928 07:45:59.887583 34452 db_lmdb.cpp:35] Opened lmdb /gpfs/gpfs_gl4_16mb/b8p226/b8p226zd/test-dir/examples/imagenet/ilsvrc12_train_lmdb
    I0928 07:46:46.930565 34452 convert_imageset.cpp:147] Processed 1000 files.
    I0928 07:47:23.977712 34452 convert_imageset.cpp:147] Processed 2000 files.
    I0928 07:47:58.416146 34452 convert_imageset.cpp:147] Processed 3000 files.
    I0928 07:48:31.446862 34452 convert_imageset.cpp:147] Processed 4000 files.
    I0928 07:49:03.587481 34452 convert_imageset.cpp:147] Processed 5000 files.
    I0928 07:49:36.566186 34452 convert_imageset.cpp:147] Processed 6000 files.
    I0928 07:50:08.293210 34452 convert_imageset.cpp:147] Processed 7000 files.
    I0928 07:50:40.633654 34452 convert_imageset.cpp:147] Processed 8000 files.
    I0928 07:51:12.149935 34452 convert_imageset.cpp:147] Processed 9000 files.
    I0928 07:51:44.529917 34452 convert_imageset.cpp:147] Processed 10000 files.

At the end of execution, the ilsvrc12_train_lmdb and ilsvrc12_val_lmdb folders are generated, each containing two files: data.mdb and lock.mdb. Note that the imagenet_mean.binaryproto file required for training the AlexNet model is located in test-dir/data/ilsvrc12/.
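A quick sanity check on the generated folders can be scripted; `check_lmdb` below is a hypothetical helper, sketched on the assumption that each LMDB directory must contain exactly these two files:

```shell
# Verify an LMDB output folder contains the two expected files (sketch).
check_lmdb() {
  if [ -f "$1/data.mdb" ] && [ -f "$1/lock.mdb" ]; then
    echo "ok"
  else
    echo "missing"
  fi
}

check_lmdb ilsvrc12_train_lmdb
check_lmdb ilsvrc12_val_lmdb
```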

Training Alexnet model using Caffe-IBM

Navigate to the models/bvlc_alexnet folder, where you can find the solver and trainer files for the AlexNet model.

[testuser@host models]$ cd bvlc_alexnet/
[testuser@host bvlc_alexnet]$ pwd
/home/testuser/test-dir/models/bvlc_alexnet
[testuser@host bvlc_alexnet]$ ls -al
total 20
drwxr-xr-x. 2 testuser testuser 95 Sep 26 01:59 .
drwxr-xr-x. 7 testuser testuser 144 Sep 26 01:59 ..
-rw-r--r--. 1 testuser testuser 3629 Sep 26 01:59 deploy.prototxt
-rw-r--r--. 1 testuser testuser 1146 Sep 26 01:59 readme.md
-rw-r--r--. 1 testuser testuser 297 Sep 26 01:59 solver.prototxt
-rw-r--r--. 1 testuser testuser 5351 Sep 26 01:59 train_val.prototxt
[testuser@host bvlc_alexnet]$

Before training the model, you need to set the parameters in solver.prototxt and train_val.prototxt. In solver.prototxt, a few parameter values determine how long the model trains:

  • Number of iterations
  • Training batch size
  • Total number of images
  • Number of GPUs used for training

Number of epochs = (Number of iterations × Batch size × Number of GPUs) / Total number of images

Suppose you want to train a model on the ImageNet data, which has about 1.2 million images. You can set the parameter values as:

  1. Number of GPUs = 4
  2. Number of iterations = 25000
  3. Training batch size = 256
  4. Total number of images = 1200000

By the formula above, these values give 25000 × 256 × 4 / 1,200,000 ≈ 21 epochs; if you want exactly 10 epochs at this batch size and GPU count, set the number of iterations to roughly 11,719 instead.
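The epoch formula can be evaluated with plain shell arithmetic. This sketch plugs in the values above and also computes the iteration count a 10-epoch run would need:

```shell
# Epochs = iterations * batch * gpus / images (integer arithmetic)
iters=25000; batch=256; gpus=4; images=1200000
epochs=$(( iters * batch * gpus / images ))
echo "epochs: $epochs"                 # 25,600,000 / 1,200,000 = 21 (truncated)

# Iterations needed for a 10-epoch run at the same batch size and GPU count
need=$(( 10 * images / (batch * gpus) ))
echo "iterations for 10 epochs: $need" # 12,000,000 / 1024 = 11718 (truncated)
```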

The solver.prototxt file then looks like this:

net: "/home/testuser/test-dir/models/bvlc_alexnet/train_val.prototxt"
test_iter: 1000
test_interval: 1000
base_lr: 0.01
lr_policy: "step"
gamma: 0.1
stepsize: 20000
display: 20
max_iter: 25000
momentum: 0.9
weight_decay: 0.0005
snapshot: 10000
snapshot_prefix: "/home/testuser/test-dir/models/bvlc_alexnet/caffe_alexnet_train"
solver_mode: GPU

Here you need to set the “net” value to the absolute path of the train_val.prototxt file, set “snapshot_prefix”, and set “max_iter” to 25000. In train_val.prototxt, located in the same directory, set the absolute paths of the train and validation LMDB directories in the “TRAIN” and “TEST” phase data layers. Also set the training batch size to 256 to match the epoch calculation above.

name: "AlexNet"
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    mirror: true
    crop_size: 227
    mean_file: "/gpfs/gpfs_gl4_16mb/b8p226/b8p226zd/test-dir/data/ilsvrc12/imagenet_mean.binaryproto"
  }
  data_param {
    source: "/gpfs/gpfs_gl4_16mb/b8p226/b8p226zd/test-dir/examples/imagenet/ilsvrc12_train_lmdb"
    batch_size: 256
    backend: LMDB
  }
}
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    mirror: false
    crop_size: 227
    mean_file: "/gpfs/gpfs_gl4_16mb/b8p226/b8p226zd/test-dir/data/ilsvrc12/imagenet_mean.binaryproto"
  }
  data_param {
    source: "/gpfs/gpfs_gl4_16mb/b8p226/b8p226zd/test-dir/examples/imagenet/ilsvrc12_val_lmdb"
    batch_size: 50
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1

Caffe accepts train and validation data only in LMDB format, which is why the absolute LMDB paths are given under the source value in the data_param sections. You also need to give the location of the mean file used for training under mean_file. Now you are all set and can train the model using the command:

caffe train --gpu=all --solver=solver.prototxt

You can see output like the one below (this particular run was launched through a wrapper script using 6 GPUs and 8000 iterations):

Start Of Execution At: Tue Sep 18 05:13:02 EDT 2018
Using Configs:
LMDB_TRAIN_DIR = /gpfs/gpfs_gl4_16mb/b8p226/b8p226zd/test-dir/examples/imagenet/ilsvrc12_train_lmdb
LMDB_VAL_DIR = /gpfs/gpfs_gl4_16mb/b8p226/b8p226zd/test-dir/examples/imagenet/ilsvrc12_val_lmdb
LMDB_MEAN_FILE = /gpfs/gpfs_gl4_16mb/b8p226/b8p226zd/test-dir/data/ilsvrc12/imagenet_mean.binaryproto
ITERATIONS = 8000
TRAIN_BATCH_SIZE = 240
TEST_BATCH_SIZE = 64
GPUs = 0,1,2,3,4,5
RUN_MODE = non-lms
lms_size_threshold=
lms_exclude =
Caffe Training Started At: Tue Sep 18 05:13:02 EDT 2018
Running Caffe as : time numactl /opt/DL/caffe/bin/caffe train -gpu 0,1,2,3,4,5 --solver=/tmp/b8p226zd/alexnet-2018-09-18-05-13-02/solver_2018-09-18-05-13-02.prototxt --iterations 8000
I0918 05:13:02.492660 5945 caffe.cpp:335] Using GPUs 0, 1, 2, 3, 4, 5
I0918 05:13:04.695749 5945 caffe.cpp:340] GPU 0: Tesla V100-SXM2-16GB
I0918 05:13:04.697422 5945 caffe.cpp:340] GPU 1: Tesla V100-SXM2-16GB
I0918 05:13:04.699075 5945 caffe.cpp:340] GPU 2: Tesla V100-SXM2-16GB
I0918 05:13:04.700800 5945 caffe.cpp:340] GPU 3: Tesla V100-SXM2-16GB
I0918 05:13:04.702517 5945 caffe.cpp:340] GPU 4: Tesla V100-SXM2-16GB
I0918 05:13:04.704231 5945 caffe.cpp:340] GPU 5: Tesla V100-SXM2-16GB
I0918 05:13:04.712694 5945 common.cpp:226] NVidia Management Library loaded successfully
I0918 05:13:05.501705 5945 solver.cpp:45] Initializing solver from parameters:
test_iter: 1000
test_interval: 1000
base_lr: 0.01
display: 20
max_iter: 8000
lr_policy: "step"
gamma: 0.1
momentum: 0.9
weight_decay: 0.0005
stepsize: 20000
snapshot: 800
snapshot_prefix: "/tmp/b8p226zd/alexnet-2018-09-18-05-13-02/caffe_alexnet_gpu_train"
solver_mode: GPU
device_id: 0
net: "/tmp/b8p226zd/alexnet-2018-09-18-05-13-02/train_val_2018-09-18-05-13-02.prototxt"
train_state {
level: 0
stage: ""
}
I0918 05:13:05.502215 5945 solver.cpp:103] Creating training net from net file: /tmp/b8p226zd/alexnet-2018-09-18-05-13-02/train_val_2018-09-18-05-13-02.prototxt
I0918 05:13:05.503700 5945 net.cpp:531] The NetState phase (0) differed from the phase (1) specified by a rule in layer data
I0918 05:13:05.503762 5945 net.cpp:531] The NetState phase (0) differed from the phase (1) specified by a rule in layer accuracy
I0918 05:13:05.503785 5945 net.cpp:57] Initializing net from parameters:
name: "Alexnet"
state {

At the end you will see the output:

I0918 05:38:51.565030 5945 sgd_solver.cpp:128] Iteration 7920, lr = 0.01
I0918 05:38:54.061976 5945 solver.cpp:244] Iteration 7940 (7.9922 iter/s, 2.50244s/20 iters), loss = 2.87838
I0918 05:38:54.062108 5945 solver.cpp:263] Train net output #0: loss = 2.87838 (* 1 = 2.87838 loss)
I0918 05:38:54.066254 5945 sgd_solver.cpp:128] Iteration 7940, lr = 0.01
I0918 05:38:56.531842 5945 solver.cpp:244] Iteration 7960 (8.09814 iter/s, 2.4697s/20 iters), loss = 2.75436
I0918 05:38:56.537919 5945 solver.cpp:263] Train net output #0: loss = 2.75436 (* 1 = 2.75436 loss)
I0918 05:38:56.538363 5945 sgd_solver.cpp:128] Iteration 7960, lr = 0.01
I0918 05:38:59.019044 5945 solver.cpp:244] Iteration 7980 (8.06125 iter/s, 2.48101s/20 iters), loss = 3.00122
I0918 05:38:59.019153 5945 solver.cpp:263] Train net output #0: loss = 3.00122 (* 1 = 3.00122 loss)
I0918 05:38:59.024619 5945 sgd_solver.cpp:128] Iteration 7980, lr = 0.01
I0918 05:39:01.430420 5945 solver.cpp:483] Snapshotting to binary proto file /tmp/b8p226zd/alexnet-2018-09-18-05-13-02/caffe_alexnet_gpu_train_iter_8000.caffemodel
I0918 05:39:02.009801 5945 sgd_solver.cpp:367] Snapshotting solver state to binary proto file /tmp/b8p226zd/alexnet-2018-09-18-05-13-02caffe_alexnet_gpu_train_iter_8000.solverstate
I0918 05:39:02.231539 5945 solver.cpp:332] Iteration 8000, loss = 2.65919
I0918 05:39:02.231578 5945 solver.cpp:352] Iteration 8000, Testing net (#0)
I0918 05:39:07.122040 5945 blocking_queue.cpp:49] Waiting for data
I0918 05:39:11.766402 6121 data_layer.cpp:86] Restarting data prefetching from start.
I0918 05:39:18.347111 5945 solver.cpp:431] Test net output #0: accuracy = 0.408531
I0918 05:39:18.347160 5945 solver.cpp:431] Test net output #1: loss = 2.72732 (* 1 = 2.72732 loss)
I0918 05:39:18.347172 5945 solver.cpp:337] Optimization Done.
I0918 05:39:24.157742 5945 caffe.cpp:421] Optimization Done.
Caffe Training Completed At: Tue Sep 18 05:39:26 EDT 2018
Generating Data for outdir-2018-09-18-05-13-02/caffe-run-alexnet.log
Running parse_log.sh to generate the data for train & test
End Of Execution At: Tue Sep 18 05:40:08 EDT 2018

Now you should be able to easily train any model using Caffe-IBM. If you have any questions, feel free to add them below. We’d love to hear from you!
