Thursday, October 15, 2020

The Impact of Mandatory Remote Work during the COVID-19 Pandemic

In the past we have written about using agent-based modeling to study human resources management issues and how workplace layout might impact subordinates' interactions with managers. With growing amounts of data, we can now also explore how employees communicate with each other. To this end, Talha Oz and I have a new paper entitled "Exploring the Impact of Mandatory Remote Work during the COVID-19 Pandemic," which will be presented in a special session on COVID-19 at the 2020 International Conference on Social Computing, Behavioral-Cultural Modeling & Prediction and Behavior Representation in Modeling and Simulation (or SBP-BRiMS 2020 for short).

In this study we exploit metadata (and not content) emitted from commonplace workplace technologies, such as calendar and messaging apps, collected from a tech company in order to see how mandatory remote work changed communication patterns and how such data can be used to measure organizational health. If this is of interest to you, below we provide the abstract to the paper along with some of the results with respect to how meetings and communication patterns changed from business as usual (BAU) before the pandemic to when people were forced to work from home (WFH). Finally, at the bottom of the post we provide the full reference and a link to the paper.

Abstract. During the early months of the COVID-19 pandemic, millions of people had to work from home. We examine the ways in which COVID-19 affected organizational communication by analyzing five months of calendar and messaging metadata from a technology company. We found that: (i) cross-level communication increased more than that of same-level; (ii) while within-team messaging increased considerably, meetings stayed the same; (iii) off-hours messaging became much more frequent, and this effect was stronger for women; (iv) employees responded to non-managers faster than to managers; and finally, (v) the number of short meetings increased while long meetings decreased. These findings contribute to theories on organizational communication, remote work, management, and flexibility stigma. In addition, this study exemplifies a strategy to measure organizational health using an objective (not self-report based) method. To the best of our knowledge, this is the first study using workplace communication metadata to examine the heterogeneous effects of mandatory remote work.

Keywords: Work from Home, Communication, COVID-19, Organization.




Full Reference:

Oz, T. and Crooks, A.T. (2020), Exploring the Impact of Mandatory Remote Work during the COVID-19 Pandemic, 2020 International Conference on Social Computing, Behavioral-Cultural Modeling & Prediction and Behavior Representation in Modeling and Simulation, Washington DC. (pdf)

If you would like a pre-print of the paper, just let us know and we can email you one.

Utilizing Python for Agent-based Modeling: The Mesa Framework

In the past I have mentioned Mesa, an agent-based modeling framework in Python, in several posts but have not really discussed it in detail. This is about to change with this post. The reason is that we have a paper at the forthcoming International Conference on Social Computing, Behavioral-Cultural Modeling & Prediction and Behavior Representation in Modeling and Simulation (or SBP-BRiMS for short) entitled "Utilizing Python for Agent-based Modeling: The Mesa Framework".

While Mesa started off with two students from the CSS program at George Mason University, Jackie Kazil and David Masad, it has now grown to include over 70 contributors. In this new paper we discuss the rationale for developing Mesa (see https://github.com/projectmesa/mesa), which arose because there was no framework for easily building agent-based models in Python. Furthermore, we discuss Mesa's design goals, architecture, and usage, along with who is using Mesa and extensions to it (e.g., Mesa-Geo and Multi-Level Mesa). Finally, we conclude the paper with future development directions. Below we provide the abstract to the paper and a selection of figures which highlight Mesa's model components (model, analysis and visualization), how various activation schedules are incorporated within Mesa (with an illustration of how these different schemes impact a model), and some examples of Mesa's visualization functionality. At the bottom of the post we have the full reference and a link to the paper.
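To give a flavor of how those components fit together in practice, here is a minimal sketch of a Mesa model, the standard wealth-transfer tutorial example rather than anything from the paper, written against Mesa's pre-1.0 API as of 2020 (the MoneyAgent and MoneyModel names are purely illustrative):

```python
from mesa import Agent, Model
from mesa.time import RandomActivation
from mesa.datacollection import DataCollector


class MoneyAgent(Agent):
    """An agent holding one unit of wealth that it passes to a random other agent."""
    def __init__(self, unique_id, model):
        super().__init__(unique_id, model)
        self.wealth = 1

    def step(self):
        if self.wealth > 0:
            other = self.model.random.choice(self.model.schedule.agents)
            other.wealth += 1
            self.wealth -= 1


class MoneyModel(Model):
    """The Model wires together the Schedule (when agents act) and the DataCollector."""
    def __init__(self, n_agents=50):
        super().__init__()
        self.schedule = RandomActivation(self)
        for i in range(n_agents):
            self.schedule.add(MoneyAgent(i, self))
        self.datacollector = DataCollector(agent_reporters={"Wealth": "wealth"})

    def step(self):
        self.datacollector.collect(self)
        self.schedule.step()


model = MoneyModel()
for _ in range(100):
    model.step()
wealth = model.datacollector.get_agent_vars_dataframe()  # results as a pandas DataFrame
```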

Abstract.
Mesa is an agent-based modeling framework written in Python. Originally started in 2013, it was created to be the go-to tool for researchers wishing to build agent-based models with Python. Within this paper we present Mesa's design goals, along with its underlying architecture. This includes its core components: 1) the model (Model, Agent, Schedule, and Space), 2) analysis (Data Collector and Batch Runner), and 3) visualization (Visualization Server and Visualization Browser Page). We then discuss how agent-based models can be created in Mesa. This is followed by a discussion of applications and extensions by other researchers to demonstrate how Mesa's design is decoupled and extensible, thus creating the opportunity for a larger decentralized ecosystem of packages that people can share and reuse for their own needs. Finally, the paper concludes with a summary and discussion of future development areas for Mesa.

Keywords: Agent-based Modeling, Python, Framework, Complex Systems. 
Mesa model components: model, analysis and visualization.
Activation schedules within Mesa and an illustration of how these different schemes impact a model. In this case the Prisoner’s Dilemma. Defecting agents are in red and cooperating agents are in blue. Each image is from the same step, but different activation schemes are used.
Model visualization of two Mesa applications within a web browser: (A) Wolf-sheep predation Model. (B) Virus on a network (Source: https://github.com/projectmesa).
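Relating to the activation-schedule figure above, switching activation regimes is essentially a one-line change in Mesa. Below is a small hypothetical sketch (not from the paper) that runs the same toy model under three of Mesa's mesa.time schedulers; only the scheduler classes are Mesa's, while TracingAgent and TracingModel are made up for illustration:

```python
from mesa import Agent, Model
from mesa.time import BaseScheduler, RandomActivation, SimultaneousActivation


class TracingAgent(Agent):
    """Toy agent that simply records the order in which it is activated."""
    def step(self):
        self.model.order.append(self.unique_id)

    def advance(self):
        pass  # SimultaneousActivation applies deferred state changes here


class TracingModel(Model):
    def __init__(self, scheduler_cls):
        super().__init__()
        self.order = []
        self.schedule = scheduler_cls(self)  # the only line that varies
        for i in range(5):
            self.schedule.add(TracingAgent(i, self))
        self.schedule.step()


for scheduler_cls in (BaseScheduler, RandomActivation, SimultaneousActivation):
    print(scheduler_cls.__name__, TracingModel(scheduler_cls).order)
# BaseScheduler activates agents in insertion order, RandomActivation shuffles that
# order every step, and SimultaneousActivation steps every agent before any deferred
# changes are applied via advance().
```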

Full Reference: 
Kazil, J., Masad, D. and Crooks, A.T. (2020), Utilizing Python for Agent-based Modeling: The Mesa Framework, in Thomson, R., Bisgin, H., Dancy, C., Hyder, A. and Hussain, M. (eds), 2020 International Conference on Social Computing, Behavioral-Cultural Modeling & Prediction and Behavior Representation in Modeling and Simulation, Washington DC., pp. 308-317. (pdf)

Wednesday, October 07, 2020

Creating Intelligent Agents

Continuing our work on machine learning and agent-based modeling, Dale Brearcliffe and I have a paper at the upcoming Computational Social Science (CSS 2020) annual conference entitled "Creating Intelligent Agents: Combining Agent-Based Modeling with Machine Learning." In the paper we discuss how advances in computational availability and power have permitted a rapid increase in the development and use of machine learning (ML) solutions in a wide variety of applications (some of which we have already shown on this website), including within agent-based models.

One thing to note, however, is that while it is common within the ML community at large to compare different approaches and take the one that gives the best result (e.g., as we did in the Communities, Bots and Vaccinations paper), this is not the case within the social simulation community. Little has been written with respect to why one ML method was chosen over another, or how the simulation results might differ if different ML methods were used. To address this gap, we demonstrate the integration of three machine learning methods (i.e., Evolutionary Computing, Q Learning, and State→Action→Reward→State→Action (SARSA)) into the well-known agent-based model Sugarscape (in this instance we modified NetLogo's "Sugarscape 2 Constant Growback"). Our rationale for choosing the Sugarscape model was that it is well known within the social sciences, and the purpose of this paper was not to solve or explore a specific social issue, but to show how different ML methods can be used within the same agent-based model and how different methods impact the results of a model.
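For readers unfamiliar with the two reinforcement learning methods, the sketch below (an illustration only, not the implementation used in the paper, with hypothetical hyper-parameters and action names) shows the core difference: Q Learning bootstraps off the best possible next action (off-policy), whereas SARSA bootstraps off the action the agent will actually take next (on-policy).

```python
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # illustrative hyper-parameters, not the paper's
ACTIONS = ["move_north", "move_south", "move_east", "move_west"]  # hypothetical actions


def epsilon_greedy(q, state):
    """With probability EPSILON explore, otherwise exploit the best-known action."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))


def q_learning_update(q, state, action, reward, next_state):
    # Off-policy: uses the value of the *best* next action, whatever is actually taken.
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    q[(state, action)] = q.get((state, action), 0.0) + ALPHA * (
        reward + GAMMA * best_next - q.get((state, action), 0.0))


def sarsa_update(q, state, action, reward, next_state, next_action):
    # On-policy: uses the value of the next action the agent will actually take.
    q[(state, action)] = q.get((state, action), 0.0) + ALPHA * (
        reward + GAMMA * q.get((next_state, next_action), 0.0)
        - q.get((state, action), 0.0))
```

In an agent-based setting, each learning agent would call something like epsilon_greedy to pick a move and then apply one of the update functions after observing its reward.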

If this type of research is of interest, below we provide the abstract to the paper and a flow chart of the model execution, along with some results. At the bottom of the post you can find the full reference and a link to the paper. Supplementary material can also be found at https://tinyurl.com/ML-Agents. At this link you can find the model presented in the paper along with a full description of it following the Overview, Design concepts, and Details (ODD) protocol. We provide this to allow others to replicate the results and adapt the ML methods for their own applications if they so desire.

Graphical User Interface of the “Creating Intelligent Agents” model. From left to right: input parameters, agents within their artificial world, and aggregate model outputs
 

Abstract. 

Over the last two decades, with advances in computational availability and power, we have seen a rapid increase in the development and use of Machine Learning (ML) solutions applied to a wide range of applications, including their use within agent-based models. However, little attention has been given to how different ML methods alter the simulation results. Within this paper, we discuss how ML methods have been utilized within agent-based models and explore how different methods affect the results. We do this by extending the Sugarscape model to include three ML methods: evolutionary computing and two reinforcement learning algorithms (i.e., Q Learning and State→Action→Reward→State→Action (SARSA)). We pit these ML methods against each other and the normal functioning of the rule-based method (Rule M) in pairwise combat. Our results demonstrate that ML methods can be integrated into agent-based models, that learning does not always mean better results, and that agent attributes considered important to the modeler might not be to the agent. Our paper's contribution to the field of agent-based modeling is not only to show how previous researchers have used ML but also to directly compare and contrast how different ML methods used in the same model impact the simulation outcome, which is rarely discussed, thus helping bring awareness to researchers who are considering using intelligent agents to improve their models.

Keywords: Agent-based Modeling, Evolutionary Computing, Machine Learning, Reinforcement Learning, Sugarscape.

 

Model execution flowchart.


Mean result for vision for all rule combinations (50 model runs).
 
Full reference:
Brearcliffe, D.K. and Crooks, A.T. (2020), Creating Intelligent Agents: Combining Agent-Based Modeling with Machine Learning, The 2020 Computational Social Science Society of the Americas Conference, Online. (pdf)