
An analysis of the EpisodicLifeEnv environment wrapper in baselines

 

As the title says, here is the wrapper in question:

import gym


class EpisodicLifeEnv(gym.Wrapper):
    def __init__(self, env):
        """Make end-of-life == end-of-episode, but only reset on true game over.
        Done by DeepMind for the DQN and co. since it helps value estimation.
        """
        gym.Wrapper.__init__(self, env)
        self.lives = 0
        self.was_real_done = True

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self.was_real_done = done
        # check current lives, make loss of life terminal,
        # then update lives to handle bonus lives
        lives = self.env.unwrapped.ale.lives()
        if lives < self.lives and lives > 0:
            # for Qbert sometimes we stay in lives == 0 condition for a few frames
            # so it's important to keep lives > 0, so that we only reset once
            # the environment advertises done.
            done = True
        self.lives = lives
        return obs, reward, done, info

    def reset(self, **kwargs):
        """Reset only when lives are exhausted.
        This way all states are still reachable even though lives are episodic,
        and the learner need not know about any of this behind-the-scenes.
        """
        if self.was_real_done:
            obs = self.env.reset(**kwargs)
        else:
            # no-op step to advance from terminal/lost life state
            obs, _, _, _ = self.env.step(0)
        self.lives = self.env.unwrapped.ale.lives()
        return obs
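For context, this wrapper lives in baselines/common/atari_wrappers.py and is normally applied as part of the DeepMind preprocessing stack rather than used on its own. A minimal sketch using the make_atari and wrap_deepmind helpers from that module:

from baselines.common.atari_wrappers import make_atari, wrap_deepmind

# make_atari builds a NoFrameskip ALE env; wrap_deepmind then adds the
# DeepMind preprocessing stack, including EpisodicLifeEnv when
# episode_life=True (the default)
env = make_atari('QbertNoFrameskip-v4')
env = wrap_deepmind(env, episode_life=True, clip_rewards=True)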

 

 

The EpisodicLifeEnv wrapper targets environments in which the player has multiple lives; the number of lives remaining in the game is obtained via lives = self.env.unwrapped.ale.lives().
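As a quick illustration, the lives counter can be read directly off a raw ALE env. A minimal sketch, assuming gym with the Atari dependencies installed and the pre-0.26 gym API used throughout this post (QbertNoFrameskip-v4 is just an example id):

import gym

env = gym.make('QbertNoFrameskip-v4')
env.reset()
# ale is the underlying Arcade Learning Environment interface
print(env.unwrapped.ale.lives())   # e.g. 4 at the start of a Qbert game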

The key piece of code that needs explaining is:

        if lives < self.lives and lives > 0:
            # for Qbert sometimes we stay in lives == 0 condition for a few frames
            # so it's important to keep lives > 0, so that we only reset once
            # the environment advertises done.
            done = True

From the comment we can tell that in Qbert, when the remaining lives reach 0 the environment may still return done=False; it can take a few more frames before done=True is actually reported. If we changed the condition:

        if lives < self.lives and lives > 0:

to:

        if lives < self.lives and lives >= 0:

then the obs, reward, done, info returned by step would be treated as the final frame of an episode, and the following branch inside reset would be taken:

        else:
            # no-op step to advance from terminal/lost life state
            obs, _, _, _ = self.env.step(0)

In the frames that follow, since self.was_real_done = False while lives = self.env.unwrapped.ale.lives() is still 0, the reset operation would end up being invoked over and over in a loop.
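To observe the Qbert behaviour that the comment warns about, one can step a raw env until game over and count how many frames pass with lives == 0 before done is actually reported. A small probe sketch, under the same assumptions as above (example env id, random actions, pre-0.26 gym step API):

import gym

env = gym.make('QbertNoFrameskip-v4')
env.reset()
frames_at_zero_lives = 0
done = False
while not done:
    _, _, done, _ = env.step(env.action_space.sample())
    # count frames where the lives counter has already hit 0
    # but the env has not yet advertised done
    if env.unwrapped.ale.lives() == 0 and not done:
        frames_at_zero_lives += 1
# a non-zero count confirms that done lags behind the lives counter
print('frames at lives == 0 before done:', frames_at_zero_lives)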

 

 

Of course, this Qbert quirk can also be handled with a different modification:

class EpisodicLifeEnv(gym.Wrapper):
    def __init__(self, env):
        """Make end-of-life == end-of-episode, but only reset on true game over.
        Done by DeepMind for the DQN and co. since it helps value estimation.
        """
        gym.Wrapper.__init__(self, env)
        self.lives = 0
        self.was_real_done = True

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        # self.was_real_done = done
        # check current lives, make loss of life terminal,
        # then update lives to handle bonus lives
        lives = self.env.unwrapped.ale.lives()
        if lives < self.lives:
            # with was_real_done disabled, any drop in lives is treated as
            # terminal here; reset() below only restarts the underlying env
            # once self.lives == 0, so the lives > 0 guard is no longer needed
            done = True
        self.lives = lives
        return obs, reward, done, info

    def reset(self, **kwargs):
        """Reset only when lives are exhausted.
        This way all states are still reachable even though lives are episodic,
        and the learner need not know about any of this behind-the-scenes.
        """
        # if self.was_real_done:
        if self.lives == 0:
            obs = self.env.reset(**kwargs)
        else:
            # no-op step to advance from terminal/lost life state
            obs, _, _, _ = self.env.step(0)
        self.lives = self.env.unwrapped.ale.lives()
        return obs
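Either version is used the same way: wrapping the env makes each lost life look like the end of an episode to the agent, while a real game restart only happens on true game over. A short usage sketch (random actions, example env id):

import gym

env = EpisodicLifeEnv(gym.make('QbertNoFrameskip-v4'))
obs = env.reset()
done = False
while not done:
    # done=True here may only mean a lost life, not game over
    obs, reward, done, info = env.step(env.action_space.sample())
obs = env.reset()  # no-op step if lives remain, real reset otherwise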
