Tuesday, February 23, 2021

Simulating Urban Shrinkage in Detroit via Agent-Based Modeling

While we are witnessing growth in the world-wide urban population, not all cities are growing equally and some are actually shrinking (e.g., Leipzig in Germany; Urumqi in China; and Detroit in the United States). Such shrinking cities pose a significant challenge to urban sustainability from the urban planning, development and management point of view due to declining populations and changes in land use. To explore such a phenomenon from the bottom up, Na (Richard) Jiang, Wenjing Wang, Yichun Xie and myself have a new paper entitled "Simulating Urban Shrinkage in Detroit via Agent-Based Modeling" published in Sustainability.

This paper builds on our initial efforts in this area, which were presented in a previous post. In that post we showed how a stylized model could simulate not only housing transactions but also the aggregate market conditions relating to urban shrinkage (i.e., the contraction of housing markets). In this new paper, we significantly extend our previous work by: 1) enlarging the study area; 2) introducing another type of agent, specifically, a bank agent; 3) enhancing the trade functions by incorporating agents' preferences when it comes to buying a house; and 4) adding additional household dynamics, such as employment status change. These changes are discussed extensively in the methodology section of the paper.
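To give a flavor of these mechanics, here is a minimal Python sketch of a single trading step, in which a buyer scores listed houses against a price preference and a bank agent approves or rejects the purchase using an income-based rule. This is an illustrative toy, not the NetLogo implementation from the paper; the class names, weights and thresholds are assumptions.

```python
import random

# Illustrative sketch only: the published model is written in NetLogo and follows
# the ODD description linked below; names and thresholds here are assumptions.

class Household:
    def __init__(self, income, employed=True):
        self.income = income
        self.employed = employed

    def score(self, house, preferred_price):
        """Simple preference: houses close to the preferred price score higher."""
        return 1.0 / (1.0 + abs(house["price"] - preferred_price))

class Bank:
    def approves(self, buyer, house, max_price_to_income=4.0):
        """Assumed mortgage rule: reject if the price is too many times the buyer's income."""
        return buyer.employed and house["price"] <= max_price_to_income * buyer.income

def trade_step(buyer, listings, bank, preferred_price):
    """One buying attempt: pick the best-scoring house, if the bank approves the purchase."""
    if not listings:
        return None
    best = max(listings, key=lambda h: buyer.score(h, preferred_price))
    if bank.approves(buyer, best):
        listings.remove(best)   # house leaves the market
        return best
    return None                 # transaction fails; house stays on the market

if __name__ == "__main__":
    random.seed(1)
    listings = [{"id": i, "price": random.randint(40_000, 200_000)} for i in range(10)]
    buyer = Household(income=35_000)
    print("bought:", trade_step(buyer, listings, Bank(), preferred_price=120_000))
```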

If this is of interest to you, below we provide the abstract of the paper along with some figures of the study area, graphical user interface, model logic and results. At the bottom of the post you can see the full reference to the paper along with a link to it. The model itself was created in NetLogo and, similar to our other work, we provide a more detailed description of the model following the Overview, Design concepts, and Details (ODD) protocol, along with the source code and data needed to run the model, at: http://bit.ly/ExploreUrbanShrinkage.

Abstract

While the world’s total urban population continues to grow, not all cities are witnessing such growth, some are actually shrinking. This shrinkage causes several problems to emerge, including population loss, economic depression, vacant properties and the contraction of housing markets. Such issues challenge efforts to make cities sustainable. While there is a growing body of work on studying shrinking cities, few explore such a phenomenon from the bottom-up using dynamic computational models. To fill this gap, this paper presents a spatially explicit agent-based model stylized on the Detroit Tri-County area, an area witnessing shrinkage. Specifically, the model demonstrates how the buying and selling of houses can lead to urban shrinkage through a bottom-up approach. The results of the model indicate that along with the lower level housing transactions being captured, the aggregated level market conditions relating to urban shrinkage are also denoted (i.e., the contraction of housing markets). As such, the paper demonstrates the potential of simulation to explore urban shrinkage and potentially offers a means to test policies to achieve urban sustainability.

Keywords: agent-based modeling; housing markets; urban shrinkage; cities; Detroit; GIS

Study Area. 

Model graphical user interface, including input parameters, monitors (left) and the study area (middle) and charts recording key model properties.

Unified modeling language (UML) Diagram of the Model.

Household Decision-Making Process for Staying at or Leaving the Current Location.

Heat Maps of Median (A) and Average (B) House Prices at the End of the Simulation where Demand equals Supply.

Full Reference: 

Jiang, N., Crooks, A.T., Wang, W. and Xie, Y. (2021), Simulating Urban Shrinkage in Detroit via Agent-Based Modeling, Sustainability, 13, 2283. Available at https://doi.org/10.3390/su13042283. (pdf)

 

Thursday, January 28, 2021

Call for Papers: Humans, Societies and Artificial Agents (HSAA)

 

As part of the Annual Modeling & Simulation Conference (ANNSIM 2021), Philippe Giabbanelli and myself are organizing a track entitled "Humans, Societies and Artificial Agents (HSAA)", which now has a call for papers out. 

Track description: Artificial societies have typically relied on agent-based models, Geographical Information Systems (GIS), or cellular automata to capture the decision-making processes of individuals in relation to places and/or social interactions. This has supported a wide range of applications (e.g., in archaeology, economics, geography, psychology, political science, or health) and research tasks (e.g., what-if scenarios or predictive models, models to guide data collection). Several opportunities have recently emerged that augment the capacity of artificial societies to capture complex human and social behavior. Mixed-methods and hybrid approaches now enable the use of ‘big data’, for instance by combining machine learning with artificial societies to explore the model’s output (i.e., artificial societies as input to machine learning), define the model structure (i.e., machine learning as a preliminary to designing artificial societies), or run a model efficiently (i.e., machine learning as a proxy or surrogate to artificial societies). Datasets are also broader in type since artificial societies can now be built from text or generate textual as well as visual outputs to better engage end-users. 
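To give one concrete (and entirely hypothetical) illustration of the surrogate idea mentioned above, the Python sketch below fits a scikit-learn regressor to input/output pairs from a toy simulator standing in for an artificial society, so that further parameter settings can be screened cheaply before running the full model. The simulator, parameter ranges and regressor choice are all assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy stand-in for an artificial society: maps two parameters to one aggregate output.
# In practice this would be a full agent-based simulation run.
def run_simulation(infection_rate, mobility, steps=50, seed=0):
    rng = np.random.default_rng(seed)
    infected = 0.01
    for _ in range(steps):
        infected += infection_rate * mobility * infected * (1 - infected)
        infected += rng.normal(0, 0.001)   # stochastic noise
    return float(np.clip(infected, 0, 1))

# Sample the parameter space and run the (expensive) model on each sample.
rng = np.random.default_rng(42)
params = rng.uniform([0.05, 0.1], [0.5, 1.0], size=(200, 2))
outputs = np.array([run_simulation(p[0], p[1], seed=i) for i, p in enumerate(params)])

# Fit a cheap surrogate that can be queried instead of re-running the model.
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(params, outputs)
print(surrogate.predict([[0.3, 0.7]]))     # approximate model response
```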

Authors are encouraged to submit papers in the following areas: 

  • Applications of artificial societies (e.g., modeling group decisions and collective behaviors, emergence of social structures and norms, dynamics of social networks). 
  • Data collection for artificial societies (e.g., using simulations to identify data gaps, population simulations with multiple data sources, use of the Internet-of-Things). 
  • Design and implementation of artificial agents and societies (e.g., case studies, analyses of moral and ethical considerations). 
  • Participatory modeling and simulation. 
  • Policy development and evaluation through simulations. 
  • Predictive models of social behavior. 
  • Simulations of societies as public educational tools.
  • Mixed-methods (e.g., analyzing or generating text data with artificial societies, combining machine learning and artificial societies). 
  • Models of individual decision-making, mobility patterns, or socio-environmental interactions. 
  • Testbeds and environments to facilitate artificial society development. 
  • Tools and methods (e.g., agent-based models, case-based modeling, soft systems).

Key dates:

  • Papers due: March 1, 2021. 
    • Accepted papers will be published in the conference proceedings and archived in the ACM Digital Library and IEEE Xplore. 
  • Conference (hybrid format), July 19 – 22, 2021.

Further information including paper guidelines can be found at: https://scs.org/annsim/

Wednesday, January 06, 2021

Elections and Bots

Continuing our work on bots, Ross Schuchard and myself have a new paper in PLOS ONE entitled "Insights into elections: An ensemble bot detection coverage framework applied to the 2018 U.S. midterm elections." Our motivation for the work came from the fact that during elections internet-based technological platforms (e.g., online social networks (OSNs), online political blogs, etc.) are gaining more power compared to mainstream media sources (e.g., print, television and radio). While such technologies reduce the barrier for individuals to actively participate in political dialogue, the relatively unsupervised nature of OSNs increases susceptibility to misinformation campaigns, especially with respect to political and election dialogue. This is especially the case for social bots: automated software agents designed to mimic or impersonate humans, which are prevalent actors in OSN platforms and have proven to amplify misinformation.  

The issue however is that no single detection algorithm is able to account for the myriad of social bots operating in OSNs. To overcome this issue, this research incorporates multiple social bot detection services to determine the prevalence and relative importance of social bots within an OSN conversation of tweets. Through the lens of the 2018 U.S. midterm elections, 43.5 million tweets were harvested capturing the election conversation which were then analyzed for evidence of bots using three bot detection platform services: Botometer, DeBot and Bot-hunter.

We found that bot and human accounts contributed temporally to our tweet election corpus at relatively similar cumulative rates. The multi-detection platform comparative analysis of intra-group and cross-group interactions showed that bots detected by DeBot and Bot-hunter persistently engaged humans at rates much higher than bots detected by Botometer. Furthermore, while bots accounted for less than 8% of all unique accounts in the election conversation retweet network, bots accounted for more than 20% of the top-100 and top-25 out-degree centrality rankings, thus suggesting persistent activity to engage with human accounts. Finally, the bot coverage overlap analysis shows that minimal overlap existed among the bots detected by the three bot detection platforms, with only eight bot accounts detected by all three (out of a total of 254,492 unique bots in the overall tweet corpus).
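The kind of ensemble bookkeeping involved can be illustrated with a small Python sketch using networkx: bot lists from three sources are combined by set union (as in the combined bot sources view), their overlap is measured, and the share of bots among the top-N out-degree centrality accounts of a retweet network is computed. The account IDs and edges below are made up purely for illustration and are not from our corpus.

```python
import networkx as nx

# Hypothetical detection results: sets of account IDs flagged by each platform.
debot      = {"a1", "a2", "a3"}
botometer  = {"a3", "a4"}
bot_hunter = {"a2", "a3", "a5"}

# Combined view: an account counts as a bot if ANY platform flags it.
all_bots = debot | botometer | bot_hunter
# Coverage overlap: accounts flagged by every platform.
overlap = debot & botometer & bot_hunter

# Toy retweet network: an edge u -> v means account u retweeted account v.
retweets = [("a1", "h1"), ("a2", "h1"), ("a3", "h2"),
            ("h1", "h2"), ("a5", "h3"), ("h3", "a3")]
G = nx.DiGraph(retweets)

def bot_share_in_top_n(G, bots, n=3):
    """Fraction of the top-n out-degree-centrality accounts that are bots."""
    ranked = sorted(nx.out_degree_centrality(G).items(),
                    key=lambda kv: kv[1], reverse=True)[:n]
    return sum(1 for node, _ in ranked if node in bots) / n

print("overlap:", overlap)
print("bot share in top-3 out-degree:", bot_share_in_top_n(G, all_bots))
```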

If this research sounds interesting to you, below we provide the abstract of the paper along with some figures outlining our methodology and some of the results, while at the bottom of the post you can find the full reference and a link to the paper where you can read more.

Abstract:

The participation of automated software agents known as social bots within online social network (OSN) engagements continues to grow at an immense pace. Choruses of concern speculate as to the impact social bots have within online communications as evidence shows that an increasing number of individuals are turning to OSNs as a primary source for information. This automated interaction proliferation within OSNs has led to the emergence of social bot detection efforts to better understand the extent and behavior of social bots. While rapidly evolving and continually improving, current social bot detection efforts are quite varied in their design and performance characteristics. Therefore, social bot research efforts that rely upon only a single bot detection source will produce very limited results. Our study expands beyond the limitation of current social bot detection research by introducing an ensemble bot detection coverage framework that harnesses the power of multiple detection sources to detect a wider variety of bots within a given OSN corpus of Twitter data. To test this framework, we focused on identifying social bot activity within OSN interactions taking place on Twitter related to the 2018 U.S. Midterm Election by using three available bot detection sources. This approach clearly showed that minimal overlap existed between the bot accounts detected within the same tweet corpus. Our findings suggest that social bot research efforts must incorporate multiple detection sources to account for the variety of social bots operating in OSNs, while incorporating improved or new detection methods to keep pace with the constant evolution of bot complexity.

 

Fig 1. Social bot analysis framework employing multiple bot detection platforms. The framework enables the application of ensemble analysis methods to determine the prevalence and relative importance of social bots within Twitter conversations discussing the 2018 U.S. midterm elections.
 

Fig 3. Cumulative tweet contribution rates for the 2018 U.S. midterm OSN conversation (October 10 – November 6, 2018) from the (a) human (blue) / bot (red) and (b) DeBot (green) / Botometer (pink) / Bot-hunter (orange) account classification perspectives.

Fig 4. Intra-group and cross-group retweet communication patterns of human (blue) and social bot (red) users within the 2018 U.S. midterm election Twitter conversation according to each bot detection classification platform: (a) Combined Bot Sources (b) DeBot (c) Botometer (d) Bot-hunter. The combined bot sources results (shown in gray) classified an account as a bot in aggregate fashion if any of the three detection platforms classified the account as a bot.

Fig 5. Social bot account evidence within the top-N (where N = 1000 / 500 / 100 / 25) centrality rankings [(a) eigenvector (b) in-degree (c) out-degree (d) PageRank] according to bot classification results from Bot-hunter (orange), Botometer (pink) and DeBot (green).
 
Fig 7. Bot detection coverage analysis for bots detected within the 2018 U.S. midterm election Twitter conversation using the Botometer, Bot-hunter and DeBot bot detection platforms.

 

Full reference:

Schuchard, R.J. and Crooks, A.T. (2021), Insights into Elections: An Ensemble Bot Detection Coverage Framework Applied to the 2018 U.S. Midterm Elections, PLoS ONE, 16(1): e0244309. Available at  https://doi.org/10.1371/journal.pone.0244309. (pdf).

Friday, December 04, 2020

Future Developments in Geographical Agent-Based Models: Challenges and Opportunities

It's been a while (to say the least) since we wrote a position paper about agent-based modeling. But with agent-based modeling becoming more widely accepted and the growth of machine learning within the geographical sciences, we thought we would revisit some of the existing challenges (e.g., validation, representing behavior) and discuss how machine learning and data might help here. To this end, Alison Heppenstall, Nick Malleson, Ed Manley, Jiaqi Ge, Mike Batty and myself have recently published a paper entitled "Future Developments in Geographical Agent-Based Models: Challenges and Opportunities" in Geographical Analysis. Below we provide the abstract to the paper, and if this is of interest please follow the links to the paper itself.

Abstract

Despite reaching a point of acceptance as a research tool across the geographical and social sciences, there remain significant methodological challenges for agent-based models. These include recognizing and simulating emergent phenomena, agent representation, construction of behavioral rules, calibration and validation. Whilst advances in individual-level data and computing power have opened up new research avenues, they have also brought with them a new set of challenges. This paper reviews some of the challenges that the field has faced, the opportunities available to advance the state-of-the-art, and the outlook for the field over the next decade. We argue that although agent-based models continue to have enormous promise as a means of developing dynamic spatial simulations, the field needs to fully embrace the potential offered by approaches from machine learning to allow us to fully broaden and deepen our understanding of geographical systems.

Full Reference:

Heppenstall, A., Crooks, A.T., Malleson, N., Manley, E., Ge, J. and Batty, M. (2020), Future Developments in Geographical Agent-Based Models: Challenges and Opportunities, Geographical Analysis. https://doi.org/10.1111/gean.12267 (pdf)

Tuesday, November 03, 2020

Integrating Social Networks into Large-scale Urban Simulations

Building on past posts about our work with respect to generating large-scale synthetic populations for agent-based models, we have a new paper entitled "Integrating Social Networks into Large-scale Urban Simulations for Disaster Responses" that was accepted at the 3rd ACM SIGSPATIAL International Workshop on GeoSpatial Simulation. In the paper we discuss our method for creating synthetic populations which incorporate social networks for the New York megacity region. To demonstrate the utility of our approach, we use the generated synthetic population to initialize an agent-based model which not only generates basic patterns of life (e.g., commuting to and from work), but also allows us to explore how people react to disasters and how their social networks are changed by such events. 
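As a rough, hypothetical sketch of the general idea (the actual workflow is documented in the paper and its GitHub repository), the Python snippet below builds a small synthetic population of individuals grouped into households and then layers a small-world social network over them using networkx; all sizes and parameters here are illustrative assumptions rather than census-derived values.

```python
import random
import networkx as nx

random.seed(0)

# Illustrative synthetic population: individuals grouped into households.
population = []
person_id = 0
for hh_id in range(100):
    for _ in range(random.choice([1, 2, 3, 4])):      # made-up household sizes
        population.append({"id": person_id, "household": hh_id,
                           "employed": random.random() < 0.6})
        person_id += 1

# Social network layer: a small-world graph over all individuals, plus
# guaranteed ties between members of the same household.
G = nx.watts_strogatz_graph(n=len(population), k=4, p=0.1, seed=0)
for hh_id in range(100):
    members = [p["id"] for p in population if p["household"] == hh_id]
    for i in members:
        for j in members:
            if i < j:
                G.add_edge(i, j)

print(len(population), "people,", G.number_of_edges(), "social ties")
```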

If this sounds of interest to you, below we provide the abstract to the paper, along with our synthetic population workflow and some sample outcomes from the model. At the bottom of the post we provide the full reference and a link to the paper (the paper itself also links to a GitHub repository where more information about the synthetic population can be found).

ABSTRACT: Social connections between people influence how they behave and where they go; however, such networks are rarely incorporated in agent-based models of disaster. To address this, we introduce a novel synthetic population method which specifically creates social relationships. This synthetic population is then used to instantiate a geographically explicit agent-based model for the New York megacity region which captures pre- and post- disaster behaviors. We demonstrate not only how social networks can be incorporated into models of disaster but also how such networks can impact decision making, opening up a variety of new application areas where network structures matter in urban settings. 

KEYWORDS: Urban Simulation, Agent-based models, Synthetic Populations, Social Networks, Geographical Information Systems, Disasters.

Synthetic population and social network generation workflow.
Synthetic population at household level within a census tract (A) and social network of one individual (B).
Example heat map of traffic density with Manhattan at the center of the plot (A), and the impact area of the disaster along with the health status of the agents (B).

Full Reference:

Jiang, N., Burger, A., Crooks, A.T. and Kennedy, W.G. (2020), Integrating Social Networks into Large-scale Urban Simulations for Disaster Responses. Geosim ’20: 3rd ACM SIGSPATIAL International Workshop on GeoSpatial Simulation, Seattle, WA. (pdf)