I’m using the TF-Agents library for reinforcement learning,
and I would like to take into account that, for a given state,
some actions are invalid.
How can this be implemented?
Should I define an "observation_and_action_constraint_splitter" function when
creating the DqnAgent?
If yes: do you know any tutorial on this?
Yes, you need to define the function, pass it to the agent, and also change the environment output appropriately so that the function can work with it. I am not aware of any tutorials on this, but you can look at this repo I have been working on.
Note that it is very messy: a lot of the files in there are not actually used, and the docstrings are terrible and often wrong (I forked this and didn’t bother to sort everything out). However, it is definitely working correctly. The parts that are relevant to your question are:
- `_observation_spec` is defined as a dictionary of `ArraySpec`s (here). You can ignore `knowledge_obs`, which is only used to run the environment verbosely; it is not fed to the agent.
- `HanabiEnv._reset` at line 110 gives an idea of how the timestep observations are constructed and returned from the environment.
- `legal_moves` are passed through `np.logical_not`, since my specific environment marks legal moves with 0 and illegal ones with -inf, whereas TF-Agents expects a 1/True for a legal move. My vector, when cast to bool, would therefore be the exact opposite of what TF-Agents expects.
- These observations are then fed to the splitter in `utility.py` (here), where a tuple containing the observations and the action constraints is returned. Note that `knowledge_obs` is implicitly thrown away (and not fed to the agent, as previously mentioned).
- `observation_and_action_constraint_splitter` is fed to the agent in the `create_agent` function, at line 198 for example.
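To make the environment side concrete, here is a minimal sketch of how a dict observation like the one described above could be assembled in a `_reset`-style method. The key names and shapes are illustrative assumptions, not taken from the linked repo; the `np.logical_not` step is the 0/-inf flip described above.

```python
import numpy as np

def build_observation(board_state, raw_legal_moves):
    """Assemble a dict observation (key names are assumptions).

    raw_legal_moves follows the environment's convention described
    above: 0.0 for a legal move, -inf for an illegal one.
    """
    return {
        'observations': np.asarray(board_state, dtype=np.float32),
        # Casting 0/-inf to bool gives False for legal and True for
        # illegal moves; np.logical_not flips this into the
        # 1/True-for-legal convention that TF-Agents expects.
        'legal_moves': np.logical_not(
            np.asarray(raw_legal_moves).astype(bool)).astype(np.int32),
    }
```

Each entry of this dict would be described by a matching spec in `_observation_spec`, so that the agent knows the shapes and dtypes to expect.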
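The splitter itself can be very small. A sketch, assuming the environment emits a dict observation with keys `'observations'` and `'legal_moves'` (illustrative names, not taken from the repo):

```python
def observation_and_action_constraint_splitter(obs):
    # Return (network_input, action_mask). Any auxiliary keys,
    # e.g. a hypothetical 'knowledge_obs', are simply dropped
    # here and never reach the agent.
    return obs['observations'], obs['legal_moves']
```

This function is then passed via the `observation_and_action_constraint_splitter` keyword argument when constructing the `DqnAgent`, exactly as the question suggests.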
Answered By – Federico Malerba