Programmatically interpretable reinforcement learning
Programmatically interpretable reinforcement learning, Verma et al., ICML 2018

Being able to trust (interpret, verify) a controller learned through reinforcement learning (RL) is one of the key challenges for real-world deployments of RL that we looked at earlier this week. It's also an essential requirement for agents in human-machine collaborations (i.e., all deployments at some …
More like this
- Synthesizing data structure transformations from input-output examples, Feser et al., PLDI'15
- Challenges of real-world reinforcement learning, Dulac-Arnold et al., ICML'19
- Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead