RL-Glue In Practice

From RL-Glue

Revision as of 02:43, 28 November 2010 by Bradknox (Talk | contribs)

This page lists the projects, papers, and classes that have used RL-Glue. Please contact us if you have an example of RL-Glue in practice that we can add to this page!

Papers
  • Lihong Li, Michael L. Littman, and Christopher R. Mansley: Online exploration in least-squares policy iteration. In AAMAS-09, Budapest, Hungary, May 2009.
  • John Asmuth, Lihong Li, Michael L. Littman, Ali Nouri, and David Wingate: A Bayesian Sampling Approach to Exploration in Reinforcement Learning. In UAI-09, Montreal, Canada, June 2009.


  • John Asmuth, Michael L. Littman and Robert Zinkov: Potential-based Shaping in Model-based Reinforcement Learning. AAAI 2008.
  • W. Bradley Knox and Peter Stone: TAMER: Training an Agent Manually via Evaluative Reinforcement. In IEEE 7th International Conference on Development and Learning (ICDL-08), August 2008.


  • Balázs Csanád Csáji and László Monostori: Value Function Based Reinforcement Learning in Changing Markovian Environments. Journal of Machine Learning Research, 9:1679-1709, 2008.
  • Steffen Nissen: Large Scale Reinforcement Learning using Q-SARSA(λ) and Cascading Neural Networks. MSc Thesis, Department of Computer Science, University of Copenhagen, Denmark, 2007.
  • Thomas J. Walsh, Ali Nouri, Lihong Li, and Michael L. Littman: Planning and learning in environments with delayed feedback. Journal of Autonomous Agents and Multi-Agent Systems, 18(1):83-105, 2009. A preliminary version appeared in ECML-07.
  • Thomas J. Walsh, Ali Nouri, Lihong Li, and Michael L. Littman: Planning and learning in environments with delayed feedback. In ECML-07, Warsaw, Poland, September 2007. Also appears in LNCS 4701.


RL-Library
The mission of the Reinforcement-Learning Library (RL-Library) is to create a centralized place for the reinforcement-learning community to share their RL-Glue compatible software projects.

The RL-Library serves two distinct needs. First, it provides standardized, trusted implementations of agents and environments from the reinforcement-learning literature. Second, it acts as a repository for other RL-Glue compatible software and tools, including "up and coming" agents and environments.
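An RL-Glue compatible agent implements a small set of callbacks that the glue invokes over the course of an experiment. The sketch below is plain Python rather than the actual RL-Glue codec classes (which wrap observations and actions in RL-Glue types); the method names follow the RL-Glue agent contract, but the random agent, the plain-int observations, and the hard-coded action count are illustrative assumptions only.

```python
import random

class RandomAgent:
    """Minimal sketch of the RL-Glue agent callback interface.

    The real codec passes RL-Glue observation/action structures and a
    task-spec string; this sketch substitutes plain ints for clarity.
    """

    def agent_init(self, task_spec):
        # task_spec normally describes the problem; here we simply
        # assume (hypothetically) a fixed two-action problem.
        self.num_actions = 2

    def agent_start(self, observation):
        # Called at the start of each episode; returns the first action.
        return random.randrange(self.num_actions)

    def agent_step(self, reward, observation):
        # Called every subsequent step with the reward for the previous
        # action and the new observation; returns the next action.
        return random.randrange(self.num_actions)

    def agent_end(self, reward):
        # Called when the episode terminates.
        pass

    def agent_cleanup(self):
        pass

agent = RandomAgent()
agent.agent_init("dummy-task-spec")
first_action = agent.agent_start(0)
next_action = agent.agent_step(1.0, 0)
agent.agent_end(0.0)
agent.agent_cleanup()
```

Environments mirror this shape with their own callbacks (initialize, start an episode, take a step), which is what lets any compatible agent run against any compatible environment.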

TAMER
The TAMER project seeks to create agents that can be effectively taught behaviors by lay people using positive and negative feedback signals (akin to reward and punishment in animal training). This training scenario is formally defined as the Shaping Problem. The TAMER framework uses established supervised learning techniques to model a human's reinforcement function and bases its action selection on the learned model. Videos of TAMER agents being trained can be found here.
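The core loop described above can be sketched as follows. This is an illustration of the idea, not the authors' implementation: a model H(s, a) of the human's reinforcement is fit by supervised updates, and the agent acts greedily with respect to it. The tabular model, learning rate, and the simulated trainer are all assumptions made for the example.

```python
class TamerSketch:
    """Illustrative sketch of the TAMER idea: regress a model H(s, a)
    of the human trainer's reinforcement signal and act greedily on it."""

    def __init__(self, num_states, num_actions, lr=0.1):
        self.lr = lr
        self.num_actions = num_actions
        # Tabular stand-in for the supervised learner: one predicted
        # human-reinforcement value per (state, action) pair.
        self.h = [[0.0] * num_actions for _ in range(num_states)]

    def select_action(self, state):
        # Greedy with respect to the learned model (ties -> lowest index).
        values = self.h[state]
        return max(range(self.num_actions), key=lambda a: values[a])

    def update(self, state, action, human_feedback):
        # Supervised update: move H(s, a) toward the observed signal.
        err = human_feedback - self.h[state][action]
        self.h[state][action] += self.lr * err

agent = TamerSketch(num_states=1, num_actions=2)
for _ in range(50):
    a = agent.select_action(0)
    # Hypothetical trainer: rewards action 1, punishes action 0.
    agent.update(0, a, 1.0 if a == 1 else -1.0)
print(agent.select_action(0))  # converges to the rewarded action: 1
```

Note the contrast with standard RL: the human signal is treated as a direct supervised-learning target rather than a delayed reward to be maximized over time.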

rosglue
rosglue is a framework that allows robots running ROS, Willow Garage's robot middleware system, to serve as environments for RL-Glue agents. The goal is to foster communication between the robotics and reinforcement-learning communities and to open up further collaboration.

Other Projects

Classes and Course Projects
