Wednesday, September 16, 2020

Learning-based Actor-Interpreter State Representation

While in previous posts we have discussed machine learning (ML) with respect to social media analysis, we have also been exploring how one can use it in agent-based modeling. One of the first examples of this is a new paper with Paul Cummings, which has been accepted at the upcoming 2020 International Conference on Social Computing, Behavioral-Cultural Modeling and Prediction and Behavior Representation in Modeling and Simulation (or SBP-BRiMS 2020 for short).

In the paper, entitled "Development of a Hybrid Machine Learning Agent Based Model for Optimization and Interpretability," we discuss the growth of ML within agent-based models and present the design of a hybrid agent-based/ML model called the Learning-based Actor-Interpreter State Representation (LAISR) Model. LAISR attempts to: "a) generate an optimal decision-making strategy through its training process, including a more constrained parameter space, and b) describe its behaviors in a human-readable and interpretable approach." To demonstrate the LAISR model, we use a simple wargaming example, that of tactical air and ground warfare, as an experiment and discuss areas for further research and application.

If this is of interest to you, below we provide the abstract of the paper, along with a high-level overview of the LAISR model, its tactical experiment diagram and some results from the wargaming experiment. This is followed by a short movie of a representative model run that Paul has created. Finally, at the bottom of the post you can see the full reference and a link to the paper itself.

Abstract.
The use of agent-based models (ABMs) has become more widespread over the last two decades, allowing researchers to explore complex systems composed of heterogeneous and locally interacting entities. However, there are several challenges that the agent-based modeling community faces. These relate to developing accurate measurements, minimizing a large complex parameter space and developing parsimonious yet accurate models. Machine Learning (ML), specifically deep reinforcement learning, has the potential to generate new ways to explore complex models, which can enhance traditional computational paradigms such as agent-based modeling. Recently, ML algorithms have made an important contribution to the determination of semi-optimal agent behavior strategies in complex environments. What is less clear is how these advances can be used to enhance existing ABMs. This paper presents Learning-based Actor-Interpreter State Representation (LAISR), a research effort that is designed to bridge ML agents with more traditional ABMs in order to generate semi-optimal multi-agent learning strategies. The resultant model, explored within a tactical game scenario, lies at the intersection of human and automated model design. The model can be decomposed into a format that automates aspects of the agent creation process, producing a resultant agent that creates its own optimal strategy and is interpretable to the designer. Our paper, therefore, acts as a bridge between traditional agent-based modeling and machine learning practices, designed purposefully to enhance the inclusion of ML-based agents in the agent-based modeling community.
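To give a flavor of the general idea in the abstract (this is a hypothetical illustration, not the paper's actual implementation or scenario), the sketch below has a single agent learn a semi-optimal movement strategy with tabular Q-learning on a toy one-dimensional "patrol line," and then reads the learned policy back out as plain, human-readable state-to-action rules. The environment, states, and reward here are all invented for illustration.

```python
import random

# Hypothetical toy example (not from the paper): an agent on a line of
# positions 0..4 must reach the target at position 4. It learns via tabular
# Q-learning, then the greedy policy is reported as readable rules.

N_STATES = 5
ACTIONS = ["left", "right"]

def step(state, action):
    """Move one position; reaching the target yields reward 1 and ends the episode."""
    nxt = max(0, state - 1) if action == "left" else min(N_STATES - 1, state + 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.3, seed=0):
    """Standard epsilon-greedy tabular Q-learning."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)            # explore
            else:
                a = max(ACTIONS, key=lambda a_: q[(s, a_)])  # exploit
            nxt, r, done = step(s, a)
            best_next = max(q[(nxt, a_)] for a_ in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = nxt
    return q

def readable_policy(q):
    """The 'interpreter' step: express the learned strategy as plain rules."""
    return {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}

policy = readable_policy(train())
for s, a in policy.items():
    print(f"in state {s}: move {a}")
```

The point of the second function is the part LAISR emphasizes: rather than leaving the strategy buried in a table of Q-values (or network weights), the trained behavior is decomposed into a form a designer can inspect, here a one-rule-per-state listing.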

Keywords: Agent-Based Modeling, Machine Learning, Explainable Artificial Intelligence.

LAISR model

LAISR tactical experiment diagram (A). Actor finite state machine (B).

Heat map representations of actor agents.



Full Reference:
Cummings, P. and Crooks, A.T. (2020), Development of a Hybrid Machine Learning Agent Based Model for Optimization and Interpretability, in Thomson, R., Bisgin, H., Dancy, C., Hyder, A. and Hussain, M. (eds), 2020 International Conference on Social Computing, Behavioral-Cultural Modeling and Prediction and Behavior Representation in Modeling and Simulation, Washington DC., pp 151-160. (pdf)
