Friday, April 09, 2021

Agent-Based Modeling and the City

Turning our attention back to agent-based modeling: in the recent open-access edited volume Urban Informatics, edited by Wenzhong Shi, Michael Goodchild, Michael Batty, Mei-Po Kwan and Anshu Zhang, Alison Heppenstall, Nick Malleson, Ed Manley and I have a chapter entitled "Agent-Based Modeling and the City: A Gallery of Applications."

In the chapter we discuss cities through the lens of complex systems composed of people, places, flows, and activities. Moreover, we argue that because cities contain large numbers of discrete actors interacting within space and with other systems from nature, predicting what might happen in the future is a challenge. We base this argument on the fact that human behavior cannot be understood or predicted in the same way as phenomena in the physical sciences, such as physics or chemistry. The actions and interactions of the inhabitants of a city, for example, cannot be easily described by a physical theory such as Newton’s Laws of Motion. This notion is captured quite aptly by a quote from Nobel laureate Murray Gell-Mann: “Think how hard physics would be if particles could think.” Building on these arguments, we introduce readers to agent-based modeling, as it offers a way to explore, from the bottom up, the processes that lead to the patterns we see in cities, while also allowing us to incorporate ideas from complex systems (e.g., feedbacks, path dependency, emergence). We also provide a gallery of applications of geographically explicit agent-based models.
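To make the bottom-up idea concrete, here is a minimal, hypothetical sketch (not taken from the chapter) in the spirit of Schelling's classic segregation model: each agent follows one simple local rule, yet a macro pattern of clustering emerges that no individual intends.

```python
import random

def run_schelling(size=20, density=0.8, threshold=0.5, steps=50, seed=42):
    """Toy Schelling-style model: two agent types on a torus grid; an agent
    moves to a random empty cell if fewer than `threshold` of its occupied
    neighbours share its type. Returns the mean share of like neighbours."""
    rng = random.Random(seed)
    grid = {(x, y): rng.choice("AB")
            for x in range(size) for y in range(size)
            if rng.random() < density}

    def like_share(cell, kind):
        x, y = cell
        occ = [grid.get(((x + dx) % size, (y + dy) % size))
               for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
        occ = [n for n in occ if n is not None]
        return sum(n == kind for n in occ) / len(occ) if occ else 1.0

    for _ in range(steps):
        # unhappy agents relocate to randomly chosen empty cells
        movers = [c for c in list(grid) if like_share(c, grid[c]) < threshold]
        empties = [(x, y) for x in range(size) for y in range(size)
                   if (x, y) not in grid]
        rng.shuffle(empties)
        for cell, dest in zip(movers, empties):
            grid[dest] = grid.pop(cell)

    return sum(like_share(c, t) for c, t in grid.items()) / len(grid)
```

Running this, the mean share of like neighbours rises well above its initial random-mixing level: segregation emerges from a mild individual preference, which is exactly the kind of bottom-up process the chapter is about.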

We then discuss how agent-based models can incorporate various decision-making processes and how we can integrate data within such models, with a specific emphasis on geographical and social information. This leads us to a discussion of how agent-based modelers are utilizing machine learning (such as genetic algorithms, artificial neural networks, Bayesian classifiers, decision trees, and reinforcement learning, to name but a few) and data mining (i.e., finding patterns in the data) within their models: from the design of the model, through its execution, to its evaluation. Finally, we conclude the chapter with a summary and discuss new opportunities with respect to agent-based modeling and the city. One such opportunity is dynamic data assimilation, which could be transformative for the ways that some systems, for example “smart” cities, are modeled. Our argument is that agent-based models are often used to simulate the behavior of complex systems, and these systems often diverge rapidly from their initial starting conditions. One way to prevent a simulation from diverging from reality is to occasionally incorporate more up-to-date data and adjust the model accordingly (i.e., data assimilation). Data, especially streaming data produced through near-real-time observational datasets (e.g., social media, vehicle routing counters), could be utilized in such a case. If what we have written above is of interest, below we provide the abstract of the chapter along with some figures which we use to illustrate key points and concepts (such as dynamic data assimilation). Finally, at the bottom of the post, we provide the full reference and a link to the chapter.
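To illustrate the data assimilation idea in the simplest possible terms, the toy sketch below (our hypothetical illustration, not the chapter's implementation) compares a model that drifts freely away from a "real" process against one that is periodically nudged toward noisy streaming observations.

```python
import random

def simulate(assimilate, steps=200, every=20, alpha=0.5, seed=1):
    """Toy data assimilation demo: the model shares the true system's shocks
    but accumulates its own drift; assimilation nudges its state toward a
    noisy observation of the truth every `every` steps."""
    rng = random.Random(seed)
    truth = model = 0.0
    errors = []
    for t in range(steps):
        shock = rng.gauss(0, 1)
        truth += shock                       # the real system evolves
        model += shock + rng.gauss(0, 0.3)   # imperfect model: extra drift
        if assimilate and t % every == 0:
            obs = truth + rng.gauss(0, 0.1)  # near-real-time noisy observation
            model += alpha * (obs - model)   # pull the state toward the data
        errors.append(abs(model - truth))
    return sum(errors) / len(errors)
```

Averaged over a few random seeds, the assimilated run tracks the "truth" far more closely than the free-running one, which is the intuition behind keeping a simulation of a "smart" city anchored to streaming data.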

Abstract:
Agent-based modeling is a powerful simulation technique that allows one to build artificial worlds and populate them with individual agents. Each agent or actor has unique behaviors and rules that govern its interactions with other agents and with its environment. It is through these interactions that macro phenomena emerge: for example, how the movements of individual pedestrians lead to the emergence of crowds. Over the last two decades, with the growth of computational power and data, agent-based models have evolved into one of the main modeling paradigms for urban modeling and for understanding the various processes which shape our cities. Agent-based models have been developed to explore a vast range of urban phenomena, from the micro-movement of pedestrians over seconds to urban growth over decades, and many other issues in between. In this chapter we introduce readers to agent-based modeling through a series of example applications, from simple abstract models to those representing space using geographical data, not only for the creation of the artificial worlds but also for the validation and calibration of such models. We then discuss how big data, data mining, and machine learning techniques are advancing the field of agent-based modeling and demonstrate how such data and techniques can be leveraged in these models, giving us a new way to explore cities.

Key Words: Agent-based Modeling, Geographical Information Systems, Machine Learning, Urban Simulation.
Using geographical information as a foundation for artificial worlds.
A selection of GeoMason models across various spatial and temporal scales.
Dynamic data assimilation and agent-based modeling.

Full Reference:
Crooks, A.T., Heppenstall, A., Malleson, N. and Manley, E. (accepted), Agent-Based Modeling and the City: A Gallery of Applications, in Shi, W., Goodchild, M., Batty, M., Kwan, M.-P., Zhang, A. (eds.), Urban Informatics, Springer, New York, NY, pp. 885-910. (pdf)

 

Tuesday, February 23, 2021

Simulating Urban Shrinkage in Detroit via Agent-Based Modeling

While we are witnessing a growth in the world-wide urban population, not all cities are growing equally and some are actually shrinking (e.g., Leipzig in Germany; Urumqi in China; and Detroit in the United States). Such shrinking cities pose a significant challenge to urban sustainability from the urban planning, development and management point of view, due to declining populations and changes in land use. To explore such a phenomenon from the bottom up, Na (Richard) Jiang, Wenjing Wang, Yichun Xie and I have a new paper entitled "Simulating Urban Shrinkage in Detroit via Agent-Based Modeling" published in Sustainability.

This paper builds on our initial efforts in this area, which were presented in a previous post. In that post we showed how a stylized model could simulate not only housing transactions but also the aggregate market conditions relating to urban shrinkage (i.e., the contraction of housing markets). In this new paper, we significantly extend our previous work by: 1) enlarging the study area; 2) introducing another type of agent, specifically a bank agent; 3) enhancing the trade functions by incorporating agents' preferences when it comes to buying a house; and 4) adding additional household dynamics, such as employment status changes. These changes are discussed extensively in the methodology section of the paper.
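As a rough illustration of what the second and third extensions might look like (hypothetical code, not the functions used in the paper), a buyer agent can score affordable listings against its preferences while a bank agent screens the purchase:

```python
def choose_house(budget, houses, weights=(0.6, 0.4)):
    """Hypothetical preference-based trade function: a buyer scores the
    affordable listings on price and neighbourhood quality (both 0..1)."""
    w_price, w_quality = weights
    affordable = [h for h in houses if h["price"] <= budget]
    if not affordable:
        return None

    def score(h):
        # cheaper is better, higher quality is better
        return w_price * (1 - h["price"] / budget) + w_quality * h["quality"]

    return max(affordable, key=score)

def bank_approves(income, price, max_ratio=4.0):
    """Toy bank agent: approve a mortgage if the price-to-income ratio
    stays within a cap (a stand-in for real lending criteria)."""
    return price <= income * max_ratio

# Illustrative listings and a buyer with a 200,000 budget
houses = [
    {"price": 150_000, "quality": 0.9},
    {"price": 190_000, "quality": 0.95},
    {"price": 100_000, "quality": 0.2},
]
pick = choose_house(200_000, houses)
```

With these weights the buyer picks the mid-priced, high-quality house; shifting the weights toward price would steer it to the cheapest one, which is the kind of behavioral lever such trade functions expose.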

If this is of interest to you, below we provide the abstract of the paper along with some figures of the study area, graphical user interface, model logic and results. At the bottom of the post you can see the full reference to the paper along with a link to it. The model itself was created in NetLogo and, similar to our other works, we provide a more detailed description of the model following the Overview, Design concepts, and Details (ODD) protocol, along with the source code and data needed to run the model, at: http://bit.ly/ExploreUrbanShrinkage.

Abstract

While the world’s total urban population continues to grow, not all cities are witnessing such growth; some are actually shrinking. This shrinkage causes several problems to emerge, including population loss, economic depression, vacant properties and the contraction of housing markets. Such issues challenge efforts to make cities sustainable. While there is a growing body of work studying shrinking cities, few studies explore the phenomenon from the bottom up using dynamic computational models. To fill this gap, this paper presents a spatially explicit agent-based model stylized on the Detroit Tri-County area, an area witnessing shrinkage. Specifically, the model demonstrates how the buying and selling of houses can lead to urban shrinkage through a bottom-up approach. The results of the model indicate that, along with capturing lower-level housing transactions, the model also captures the aggregate market conditions relating to urban shrinkage (i.e., the contraction of housing markets). As such, the paper demonstrates the potential of simulation to explore urban shrinkage and potentially offers a means to test policies to achieve urban sustainability.

Keywords: agent-based modeling; housing markets; urban shrinkage; cities; Detroit; GIS

Study Area. 

Model graphical user interface, including input parameters, monitors (left) and the study area (middle) and charts recording key model properties.

Unified modeling language (UML) Diagram of the Model.

Household Decision-Making Process for Stay or Leave Current Location.

Heat Maps of Median (A) and Average (B) House Prices at the End of the Simulation where Demand equals Supply.

Full Reference: 

Jiang, N., Crooks, A.T., Wang, W. and Xie, Y. (2021), Simulating Urban Shrinkage in Detroit via Agent-Based Modeling, Sustainability, 13, 2283. Available at https://doi.org/10.3390/su13042283. (pdf)

 

Thursday, January 28, 2021

Call for Papers: Humans, Societies and Artificial Agents (HSAA)

 

As part of the Annual Modeling & Simulation Conference (ANNSIM 2021), Philippe Giabbanelli and I are organizing a track entitled "Humans, Societies and Artificial Agents (HSAA)", which now has a call for papers out.

Track description: Artificial societies have typically relied on agent-based models, Geographical Information Systems (GIS), or cellular automata to capture the decision-making processes of individuals in relation to places and/or social interactions. This has supported a wide range of applications (e.g., in archaeology, economics, geography, psychology, political science, or health) and research tasks (e.g., what-if scenarios or predictive models, models to guide data collection). Several opportunities have recently emerged that augment the capacity of artificial societies to capture complex human and social behavior. Mixed-methods and hybrid approaches now enable the use of ‘big data’, for instance by combining machine learning with artificial societies to explore the model’s output (i.e., artificial societies as input to machine learning), define the model structure (i.e., machine learning as a preliminary to designing artificial societies), or run a model efficiently (i.e., machine learning as a proxy or surrogate to artificial societies). Datasets are also broader in type, since artificial societies can now be built from text or generate textual as well as visual outputs to better engage end-users.
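As one concrete reading of "machine learning as a proxy or surrogate to artificial societies", the sketch below (purely illustrative: the toy contagion model and function names are our invention) trains a cheap regression on a handful of expensive simulation runs and then queries the regression instead of the simulator:

```python
import random

def toy_society(contact_rate, steps=20, seed=0):
    """Stand-in for an expensive artificial society: a crude contagion
    process returning the final infected share of a population of 100."""
    rng = random.Random(seed)
    infected, pop = 1, 100
    for _ in range(steps):
        new = sum(1 for _ in range(infected)
                  if rng.random() < contact_rate * (pop - infected) / pop)
        infected = min(pop, infected + new)
    return infected / pop

def fit_surrogate(samples):
    """'Machine learning' surrogate (here the simplest possible choice:
    linear least squares) trained on (parameter, model output) pairs."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return lambda x: a + b * x

# Train the cheap surrogate on a handful of expensive runs,
# then query it instead of re-running the society.
samples = [(r / 10, toy_society(r / 10)) for r in range(1, 10)]
surrogate = fit_surrogate(samples)
```

In practice the surrogate would be a richer learner (e.g., a neural network or Gaussian process) over many parameters, but the workflow is the same: a few real simulation runs buy an approximation that is essentially free to evaluate.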

Authors are encouraged to submit papers in the following areas: 

  • Applications of artificial societies (e.g., modeling group decisions and collective behaviors, emergence of social structures and norms, dynamics of social networks). 
  • Data collection for artificial societies (e.g., using simulations to identify data gaps, population simulations with multiple data sources, use of the Internet-of-Things). 
  • Design and implementation of artificial agents and societies (e.g., case studies, analyses of moral and ethical considerations). 
  • Participatory modeling and simulation. 
  • Policy development and evaluation through simulations. 
  • Predictive models of social behavior. 
  • Simulations of societies as public educational tools.
  • Mixed-methods (e.g., analyzing or generating text data with artificial societies, combining machine learning and artificial societies). 
  • Models of individual decision-making, mobility patterns, or socio-environmental interactions. 
  • Testbeds and environments to facilitate artificial society development. 
  • Tools and methods (e.g., agent-based models, case-based modeling, soft systems).

Key dates:

  • Papers due: March 22, 2021 (extended from March 1). 
    • Accepted papers will be published in the conference proceedings and archived in the ACM Digital Library and IEEE Xplore. 
  • Conference (hybrid format), July 19 – 22, 2021.

Further information including paper guidelines can be found at: https://scs.org/annsim/

Wednesday, January 06, 2021

Elections and Bots

Continuing our work on bots, Ross Schuchard and I have a new paper in PLOS ONE entitled "Insights into elections: An ensemble bot detection coverage framework applied to the 2018 U.S. midterm elections." Our motivation for the work came from the fact that during elections, internet-based technological platforms (e.g., online social networks (OSNs), online political blogs, etc.) are gaining more power compared to mainstream media sources (e.g., print, television and radio). While such technologies reduce the barrier for individuals to actively participate in political dialogue, the relatively unsupervised nature of OSNs increases susceptibility to misinformation campaigns, especially with respect to political and election dialogue. This is especially the case for social bots: automated software agents designed to mimic or impersonate humans, which are prevalent actors on OSN platforms and have proven to amplify misinformation.

The issue, however, is that no single detection algorithm can account for the myriad of social bots operating in OSNs. To overcome this, our research incorporates multiple social bot detection services to determine the prevalence and relative importance of social bots within an OSN conversation. Through the lens of the 2018 U.S. midterm elections, 43.5 million tweets capturing the election conversation were harvested and then analyzed for evidence of bots using three bot detection platform services: Botometer, DeBot and Bot-hunter.

We found that bot and human accounts contributed temporally to our election tweet corpus at relatively similar cumulative rates. The multi-detection-platform comparative analysis of intra-group and cross-group interactions showed that bots detected by DeBot and Bot-hunter persistently engaged humans at much higher rates than bots detected by Botometer. Furthermore, while bots accounted for less than 8% of all unique accounts in the election conversation retweet network, they accounted for more than 20% of the top-100 and top-25 out-degree centrality rankings, suggesting persistent activity to engage with human accounts. Finally, the bot coverage overlap analysis showed that minimal overlap existed among the bots detected by the three platforms, with only eight bot accounts detected by all three (out of 254,492 unique bots in the overall tweet corpus).
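The union-and-overlap logic behind such an ensemble can be sketched in a few lines (hypothetical account IDs; this is the idea, not the paper's code): an account counts as a bot if any platform flags it, while the intersection reveals how little the platforms agree.

```python
def ensemble_coverage(detections):
    """Combine per-platform bot sets: the ensemble flags an account if ANY
    platform flags it; the consensus set holds accounts flagged by every
    platform; coverage is each platform's share of the ensemble set."""
    union = set().union(*detections.values())          # ensemble bot set
    consensus = set.intersection(*detections.values()) # flagged by all
    return {
        "union": union,
        "consensus": consensus,
        "coverage": {name: len(bots) / len(union) if union else 0.0
                     for name, bots in detections.items()},
    }

# Hypothetical account IDs, illustrating minimal cross-platform overlap
detections = {
    "DeBot":      {"u1", "u2", "u3"},
    "Botometer":  {"u3", "u4"},
    "Bot-hunter": {"u5", "u6"},
}
result = ensemble_coverage(detections)
```

Here no account is flagged by all three platforms even though six are flagged overall, mirroring (in miniature) the paper's finding that each detector sees a largely different slice of the bot population.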

If this research sounds interesting to you, below we provide the abstract of the paper along with some figures outlining our methodology and some of the results. At the bottom of the post you can find the full reference and a link to the paper, where you can read more.

Abstract:

The participation of automated software agents known as social bots within online social network (OSN) engagements continues to grow at an immense pace. Choruses of concern speculate as to the impact social bots have within online communications as evidence shows that an increasing number of individuals are turning to OSNs as a primary source for information. This automated interaction proliferation within OSNs has led to the emergence of social bot detection efforts to better understand the extent and behavior of social bots. While rapidly evolving and continually improving, current social bot detection efforts are quite varied in their design and performance characteristics. Therefore, social bot research efforts that rely upon only a single bot detection source will produce very limited results. Our study expands beyond the limitation of current social bot detection research by introducing an ensemble bot detection coverage framework that harnesses the power of multiple detection sources to detect a wider variety of bots within a given OSN corpus of Twitter data. To test this framework, we focused on identifying social bot activity within OSN interactions taking place on Twitter related to the 2018 U.S. Midterm Election by using three available bot detection sources. This approach clearly showed that minimal overlap existed between the bot accounts detected within the same tweet corpus. Our findings suggest that social bot research efforts must incorporate multiple detection sources to account for the variety of social bots operating in OSNs, while incorporating improved or new detection methods to keep pace with the constant evolution of bot complexity.

 

Fig 1. Social bot analysis framework employing multiple bot detection platforms. The framework enables the application of ensemble analysis methods to determine the prevalence and relative importance of social bots within Twitter conversations discussing the 2018 U.S. midterm elections.
 

Fig 3. Cumulative tweet contribution rates for the 2018 U.S. midterm OSN conversation (October 10 – November 6, 2018) from the (a) human (blue) / bot (red) and (b) DeBot (green) / Botometer (pink) / Bot-hunter (orange) account classification perspectives.

Fig 4. Intra-group and cross-group retweet communication patterns of human (blue) and social bot (red) users within the 2018 U.S. midterm election Twitter conversation according to each bot detection classification platform: (a) Combined Bot Sources (b) DeBot (c) Botometer (d) Bot-hunter. The combined bot sources results (shown in gray) classified an account as a bot in aggregate fashion if any of the three detection platforms classified the account as a bot.

Fig 5. Social bot account evidence within the top-N (where N = 1000, 500, 100, or 25) centrality rankings [(a) eigenvector (b) in-degree (c) out-degree (d) PageRank] according to bot classification results from Bot-hunter (orange), Botometer (pink) and DeBot (green).
 
Fig 7. Bot detection coverage analysis for bots detected within the 2018 U.S. midterm election Twitter conversation using the Botometer, Bot-hunter and DeBot bot detection platforms.

 

Full reference:

Schuchard, R.J. and Crooks, A.T. (2021), Insights into Elections: An Ensemble Bot Detection Coverage Framework Applied to the 2018 U.S. Midterm Elections, PLoS ONE, 16(1): e0244309. Available at  https://doi.org/10.1371/journal.pone.0244309. (pdf).