
CARLA (an open urban driving simulator)

Introduction:

CARLA supports three approaches to autonomous driving: ① a classical, rule-based driving pipeline, ② end-to-end imitation learning, and ③ end-to-end reinforcement learning.

CARLA covers both perception and control. The simulated world contains urban roads with cars, buildings, pedestrians and traffic signs, and CARLA provides an interface between this world and the agent. The client API is driven from Python and connects the agent to the server through a socket-style connection. The client sends commands and meta-commands: the direct commands are steering, acceleration and braking, while the meta-commands control the server's behaviour, such as resetting the simulation, changing the environment, and modifying sensor parameters. The quality of the rendered visual information can be traded against simulation speed. CARLA ships with two towns: Town01 is used for training and Town02 for testing. Many sensors are available: RGB cameras, a camera that provides depth information, ready-made semantic segmentation (with 12 classes: road, lane marking, traffic sign, pedestrian and so on), GPS-style positioning, speed and acceleration measurements, collision sensors, and more.

In CARLA, the state observed by the agent comes from the sensor inputs, and the actions consist of steering, throttle and brake.
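To make the client/server interface above concrete, below is a minimal sketch of a client loop, assuming the 0.8.x-era Python API (make_carla_client, CarlaSettings); the host, port, episode index and numeric values are placeholders, not recommendations:

from carla.client import make_carla_client
from carla.settings import CarlaSettings

# Minimal sketch of a CARLA 0.8.x client loop; all values are illustrative only.
with make_carla_client('localhost', 2000) as client:
    settings = CarlaSettings()
    settings.set(
        SynchronousMode=True,        # server waits for the client every frame
        NumberOfVehicles=20,
        NumberOfPedestrians=40,
        WeatherId=1)
    client.load_settings(settings)   # meta-command: configure the episode
    client.start_episode(0)          # meta-command: reset at player start 0

    for _ in range(300):
        # measurements: agent state; sensor_data: dict of camera/lidar frames
        measurements, sensor_data = client.read_data()
        # direct commands: steering, throttle and brake
        client.send_control(steer=0.0, throttle=0.6, brake=0.0,
                            hand_brake=False, reverse=False)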

Getting started:

Install the dependencies on Linux:

sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt-get update
sudo apt-get install build-essential clang-5.0 lld-5.0 g++-7 ninja-build python python-pip python-dev tzdata sed curl wget unzip autoconf libtool
pip install --user setuptools nose2

sudo update-alternatives --install /usr/bin/clang++ clang++ /usr/lib/llvm-5.0/bin/clang++ 101
sudo update-alternatives --install /usr/bin/clang clang /usr/lib/llvm-5.0/bin/clang 101

Install Unreal Engine:

git clone --depth=1 -b 4.19 https://github.com/EpicGames/UnrealEngine.git ~/UnrealEngine_4.19
cd ~/UnrealEngine_4.19
./Setup.sh && ./GenerateProjectFiles.sh && make

Install CARLA:

git clone https://github.com/carla-simulator/carla

export UE4_ROOT=~/UnrealEngine_4.19

Then, from the directory where CARLA was downloaded (the CARLA root), launch the simulator:

./CarlaUE4.sh

By default CARLA uses TCP ports 2000 and 2001; the port can be changed with the following flag:

-carla-port=N
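For example, using the flag above to move the server to port 3000 instead of the default 2000:

./CarlaUE4.sh -carla-port=3000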

To run one of the example scripts:

python example.py

Changing the map:

./CarlaUE4.sh /Game/Carla/Maps/Town02

Configuration:

Fast-forward training time by running the simulator with a fixed time step:

./CarlaUE4.sh -benchmark -fps=5

Camera and sensor parameters:

These can be modified in the Example.CarlaSettings.ini file. Images are sent by the server as BGRA arrays; users can convert them to other formats themselves.
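Since the images arrive as BGRA byte buffers, a common first step is to reshape them into an image array. Below is a minimal numpy sketch; the raw_data, height and width attribute names follow the 0.8.x Python client and may differ in other versions:

import numpy as np

def to_rgb_array(image):
    """Convert a CARLA BGRA image to an RGB numpy array (sketch)."""
    array = np.frombuffer(image.raw_data, dtype=np.uint8)
    array = array.reshape((image.height, image.width, 4))  # BGRA planes
    return array[:, :, 2::-1]                              # drop alpha, reverse BGR -> RGB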

Scene-final camera (normally this is configured in Python; the .ini modification is shown only for this first sensor): the 'SceneFinal' post-processing makes the whole scene look more realistic. In Python:

camera = carla.sensor.Camera('MyCamera', PostProcessing='SceneFinal')
camera.set(FOV=90.0)
camera.set_image_size(800, 600)
camera.set_position(x=0.30, y=0, z=1.30)
camera.set_rotation(pitch=0, yaw=0, roll=0)

carla_settings.add_sensor(camera)

The same camera in CarlaSettings.ini:

[CARLA/Sensor/MyCamera]
SensorType=CAMERA
PostProcessing=SceneFinal
ImageSizeX=800
ImageSizeY=600
FOV=90
PositionX=0.30
PositionY=0
PositionZ=1.30
RotationPitch=0
RotationRoll=0
RotationYaw=0

Depth-map camera:

camera = carla.sensor.Camera('MyCamera', PostProcessing='Depth')
camera.set(FOV=90.0)
camera.set_image_size(800, 600)
camera.set_position(x=0.30, y=0, z=1.30)
camera.set_rotation(pitch=0, yaw=0, roll=0)

carla_settings.add_sensor(camera)
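The depth camera encodes the distance in the R, G and B channels; per the CARLA documentation the normalized depth is (R + G*256 + B*256^2) / (256^3 - 1), with a far plane of 1000 m. A hedged decoding sketch (assumes the image has already been converted to an H x W x 4 BGRA array as above):

import numpy as np

def depth_to_meters(bgra_array):
    """Decode a CARLA depth image (H x W x 4 BGRA array) into meters (sketch)."""
    bgra = bgra_array.astype(np.float64)
    # In BGRA order: index 2 = R, index 1 = G, index 0 = B
    normalized = (bgra[:, :, 2] + bgra[:, :, 1] * 256.0 +
                  bgra[:, :, 0] * 256.0 * 256.0) / (256.0 ** 3 - 1)
    return 1000.0 * normalized  # far plane is 1 km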

Semantic segmentation camera: labels every object in the image with its class:

camera = carla.sensor.Camera('MyCamera', PostProcessing='SemanticSegmentation')
camera.set(FOV=90.0)
camera.set_image_size(800, 600)
camera.set_position(x=0.30, y=0, z=1.30)
camera.set_rotation(pitch=0, yaw=0, roll=0)

carla_settings.add_sensor(camera)
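The per-pixel class label (one of the 12 categories mentioned above) is stored in the red channel of the image, so extracting the label map is just a channel slice. A minimal sketch, again assuming an H x W x 4 BGRA array:

import numpy as np

def labels_from_segmentation(bgra_array):
    """Extract the per-pixel class ids from a semantic segmentation image (sketch)."""
    # In BGRA order the red channel is index 2; each value is a class id.
    return bgra_array[:, :, 2].astype(np.uint8)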

Lidar: a rotating lidar that returns a 3D point cloud of the surroundings:

lidar = carla.sensor.Lidar('MyLidar')
lidar.set(
    Channels=32,
    Range=50,
    PointsPerSecond=100000,
    RotationFrequency=10,
    UpperFovLimit=10,
    LowerFovLimit=-30)
lidar.set_position(x=0, y=0, z=1.40)
lidar.set_rotation(pitch=0, yaw=0, roll=0)

carla_settings.add_sensor(lidar)

Benchmark agent:

Both the agent and the experiment suite need to be defined by the user:

# We instantiate a forward agent, a simple policy that just sets
# the throttle to 0.9 and leaves the steering at zero
agent = ForwardAgent()

# We instantiate an experiment suite, basically a set of experiments
# that are going to be evaluated on this benchmark.
experiment_suite = BasicExperimentSuite(city_name)

# Now actually run the driving benchmark.
# Besides the agent and the experiment suite we pass the city name
# (Town01 or Town02), a name for the log, whether to continue a
# previous experiment, and the host/port of the server.
run_driving_benchmark(agent, experiment_suite, city_name,
                      log_name, continue_experiment,
                      host, port)

Defining the agent: measurements contains the agent's position, orientation and dynamics; sensor_data contains the camera and lidar readings; directions contains the high-level commands from the planner (go straight, turn right, turn left, and so on); target contains the goal position and orientation. From this information the function returns a control command: steering angle, throttle opening, brake force, etc.

from carla.agent.agent import Agent
from carla.client import VehicleControl

class ForwardAgent(Agent):

    def run_step(self, measurements, sensor_data, directions, target):
        """
        Function to run a control step in the CARLA vehicle.
        """
        control = VehicleControl()
        control.throttle = 0.9
        return control
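A slightly richer sketch that also uses the measurements described above; the player_measurements.forward_speed field follows the 0.8.x client (its unit depends on the CARLA version), and the speed threshold is an arbitrary example:

class SpeedLimitedForwardAgent(Agent):
    """Drives forward but releases the throttle above a target speed (sketch)."""

    def run_step(self, measurements, sensor_data, directions, target):
        control = VehicleControl()
        speed = measurements.player_measurements.forward_speed  # current forward speed
        control.throttle = 0.0 if speed > 8.0 else 0.9
        control.steer = 0.0
        control.brake = 0.0
        return control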

Defining the experiment suite:

from carla.agent_benchmark.experiment import Experiment
from carla.sensor import Camera
from carla.settings import CarlaSettings

from .experiment_suite import ExperimentSuite


class BasicExperimentSuite(ExperimentSuite):

    @property
    def train_weathers(self):
        return [1]

    @property
    def test_weathers(self):
        return [1]

Viewing the start positions:

python view_start_positions.py

Then define the tasks (start/end poses) and the dynamic objects for each task:

# Define the start/end position below as tasks
poses_task0 = [[7, 3]]
poses_task1 = [[138, 17]]
poses_task2 = [[140, 134]]
poses_task3 = [[140, 134]]
# Concatenate all the tasks
poses_tasks = [poses_task0, poses_task1, poses_task2, poses_task3]
# Add dynamic objects to tasks
vehicles_tasks = [0, 0, 0, 20]
pedestrians_tasks = [0, 0, 0, 50]

Building the experiments vector:

experiments_vector = []
# The used weathers are the union of the test and train weathers
for weather in used_weathers:
    for iteration in range(len(poses_tasks)):
        poses = poses_tasks[iteration]
        vehicles = vehicles_tasks[iteration]
        pedestrians = pedestrians_tasks[iteration]

        conditions = CarlaSettings()
        conditions.set(
            SendNonPlayerAgentsInfo=True,
            NumberOfVehicles=vehicles,
            NumberOfPedestrians=pedestrians,
            WeatherId=weather
        )
        # Add all the cameras that were set for these experiments
        conditions.add_sensor(camera)
        experiment = Experiment()
        experiment.set(
            Conditions=conditions,
            Poses=poses,
            Task=iteration,
            Repetitions=1
        )
        experiments_vector.append(experiment)
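The loop above assumes that a camera and the list of used_weathers were created earlier inside the suite's build_experiments method. A hedged sketch of that surrounding context; the camera settings and the final return are illustrative, loosely following the CARLA driving-benchmark examples:

def build_experiments(self):
    # Camera shared by every experiment in the suite
    camera = Camera('CameraRGB')
    camera.set(FOV=100)
    camera.set_image_size(800, 600)
    camera.set_position(2.0, 0.0, 1.4)
    camera.set_rotation(-15.0, 0, 0)

    # Evaluate on both the training and the testing weathers
    used_weathers = self.train_weathers + self.test_weathers

    # ... the poses/vehicles/pedestrians tasks and the experiments_vector
    #     loop shown above go here ...

    return experiments_vector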

Defining the evaluation metrics:

@property
def metrics_parameters(self):
    """
    Property to return the parameters for the metrics module.
    Could be redefined depending on the needs of the user.
    """
    return {
        'intersection_offroad': {'frames_skip': 10,
                                 'frames_recount': 20,
                                 'threshold': 0.3
                                 },
        'intersection_otherlane': {'frames_skip': 10,
                                   'frames_recount': 20,
                                   'threshold': 0.4
                                   },
        'collision_other': {'frames_skip': 10,
                            'frames_recount': 20,
                            'threshold': 400
                            },
        'collision_vehicles': {'frames_skip': 10,
                               'frames_recount': 30,
                               'threshold': 400
                               },
        'collision_pedestrians': {'frames_skip': 5,
                                  'frames_recount': 100,
                                  'threshold': 300
                                  },
    }