Monday, December 15, 2025

Creating and Assessing an Unconventional Global Database of Dust Storms Utilizing Generative AI

In the past we have written about how one can use social media to monitor dust storms along with how multi-modal large language models (MLLMs) can be used to analyze images. At the recent American Geophysical Union (AGU) Fall Meeting we (Sage Keidel, Stuart Evans and myself) brought these two strands of research together in a poster entitled "Creating and Assessing an Unconventional Global Database of Dust Storms Utilizing Generative AI."

In this work we showcase how MLLMs provide new opportunities and accessible methods for extracting information from imagery, using geolocated Flickr images tagged with dust-related keywords in multiple languages (e.g., Arabic, English, Spanish). We run these images through ChatGPT, which classifies each as a dust storm or not, and compare this classification with human-classified images. If this sounds of interest, below you can read the abstract and see the poster, along with a selection of images labeled as a dust storm or not together with ChatGPT's confidence in each classification. The dust storm database itself can be found here.
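For readers curious about how such imagery could be gathered, here is a minimal sketch (not the exact code behind the poster) of querying the Flickr API for geotagged, dust-tagged photos using the flickrapi Python package; the API credentials, tag list and output fields shown are illustrative assumptions.

```python
import flickrapi  # pip install flickrapi

# Hypothetical credentials and tags -- substitute your own Flickr API key/secret.
API_KEY, API_SECRET = "your-flickr-api-key", "your-flickr-api-secret"
DUST_TAGS = "duststorm, dust storm, haboob, عاصفة ترابية, tormenta de polvo"

flickr = flickrapi.FlickrAPI(API_KEY, API_SECRET, format="parsed-json")

# Search for geotagged photos carrying any of the dust-related tags.
resp = flickr.photos.search(
    tags=DUST_TAGS,
    tag_mode="any",
    has_geo=1,
    extras="geo,date_taken,url_m",
    per_page=250,
)

for photo in resp["photos"]["photo"]:
    print(photo["id"], photo.get("latitude"), photo.get("longitude"),
          photo.get("datetaken"), photo.get("url_m"))
```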

Abstract:

Complete observations of dust events are difficult, as dust’s spatial and temporal variability means satellites may miss dust due to overpass time or cloud coverage, while ground stations may miss dust due to not being in the plume. As a result, an unknown number of dust events go unrecorded in traditional datasets. Dust’s importance both for atmospheric processes and as a health and travel hazard makes detecting dust events whenever possible important, and in particular, studies of the health impacts of dust are limited by a lack of detailed exposure information.

In recent years, social media platforms have emerged as a valuable source of unconventional data to study events such as earthquakes and flooding around the world. However, one challenge with respect to using such data is classifying and labeling it (i.e., is it a dust storm or not?). While it is relatively simple to classify textual data through natural language processing, the same is not true of imagery data. Traditionally, classifying imagery data was a complex computer vision task. However, recent advancements in generative artificial intelligence (AI), especially multi-modal large language models (MLLMs), are opening up new opportunities and offering accessible methods for information extraction from imagery data. Therefore, in this study we collected geotagged Flickr images referencing dust from around the globe in multiple languages (e.g., English, Spanish, Arabic) and use generative AI (i.e., ChatGPT) to classify the images as dust storms or not. Furthermore, we compare a sample of these classified images from ChatGPT with human classified images to assess its accuracy in classification. Our results suggest that ChatGPT can relatively accurately detect dust storms from Flickr images and thus helps us create an unconventional global database of dust storm events that might otherwise go unobserved from more traditional datasets.
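To give a flavour of the classification step, below is a minimal sketch of how one might ask an MLLM to label a single image via the OpenAI Python client; the model name, prompt wording and JSON output format are assumptions for illustration rather than the exact setup used in the poster.

```python
import json
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Does this photo show a dust storm? "
    'Reply with JSON: {"dust_storm": true or false, "confidence": 0 to 1}.'
)

def classify_image(image_url: str) -> dict:
    """Ask a multi-modal model whether an image depicts a dust storm."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice of multi-modal model
        response_format={"type": "json_object"},  # request JSON-only output
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return json.loads(response.choices[0].message.content)

# Example with a placeholder photo URL
print(classify_image("https://example.com/placeholder-dust-photo.jpg"))
```

The returned labels and confidences could then be compared against a human-labeled sample to estimate classification accuracy, as described in the abstract above.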



Workflow

Poster

Dust storm database (click here to go to it)

Full Reference:
Keidel, S., Evans S. and Crooks, A.T. (2025), Creating and Assessing an Unconventional Global Database of Dust Storms Utilizing Generative AI, American Geophysical Union (AGU) Fall Meeting, 15th–19th December, New Orleans, LA. (pdf of poster).

Friday, December 12, 2025

Quantitative Comparison of Population Synthesis Techniques

In the past we have written a number of posts on synthetic populations; however, one thing we have not done is compare the various techniques that can be used to create them. This has now changed with a new paper entitled "Quantitative Comparison of Population Synthesis Techniques", which was recently presented at the 2025 Winter Simulation Conference.

In this paper, we (David Han, Samiul Islam, Taylor Anderson, Hamdi Kavak and myself) investigate five synthetic population generation techniques (i.e., Iterative Proportional Fitting, Conditional Probabilities, Simple Random Sampling, Hill Climbing and Simulated Annealing) in parallel to synthesize population data for different North American settings (e.g., Fairfax County, VA, USA and Metro Vancouver, BC, Canada). Our findings suggest that iterative proportional fitting and conditional probabilities perform best, while also highlighting that the choice of one method over another for generating synthetic populations needs to be weighed against the geographic domain in question.
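To give a flavour of what one of these techniques involves, below is a minimal two-attribute sketch of Iterative Proportional Fitting in Python using NumPy; the seed table and marginal totals are made-up numbers rather than data from the paper, and the implementation we actually used lives in the GitHub repository linked below.

```python
import numpy as np

def ipf(seed, row_targets, col_targets, max_iters=1000, tol=1e-8):
    """Fit a 2-D seed contingency table to target row/column marginals via IPF."""
    table = seed.astype(float).copy()
    for _ in range(max_iters):
        table *= (row_targets / table.sum(axis=1))[:, None]  # match row totals
        table *= (col_targets / table.sum(axis=0))[None, :]  # match column totals
        if (np.abs(table.sum(axis=1) - row_targets).max() < tol and
                np.abs(table.sum(axis=0) - col_targets).max() < tol):
            break
    return table

# Toy example: 2 age groups x 3 household-size categories (illustrative counts)
seed = np.array([[10, 20, 5],
                 [15, 10, 10]])
fitted = ipf(seed,
             row_targets=np.array([120, 80]),    # e.g., census counts by age group
             col_targets=np.array([90, 70, 40])) # e.g., census counts by household size
print(fitted.round(1))
```

In practice the row and column targets would come from census marginals, and the fitted table would then be sampled to generate individual synthetic agents.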

If this sounds of interest, below you can read the abstract of the paper and see some of the figures and tables that support our discussion, while at the bottom of the post you can find the full reference and a link to the paper. Moreover, in an effort to allow for reproducible science, all code and data are available to interested readers in our GitHub repository located at https://github.com/kavak-lab/synthetic-pop-comparison.

Abstract
Synthetic populations serve as the building blocks for predictive models in many domains, including transportation, epidemiology, and public policy. Therefore, using realistic synthetic populations is essential in these domains. Given the wide range of available techniques, determining which methods are most effective can be challenging. In this study, we investigate five synthetic population generation techniques in parallel to synthesize population data for various regions in North America. Our findings indicate that iterative proportional fitting (IPF) and conditional probabilities techniques perform best in different regions, geographic scales, and with increased attributes. Furthermore, IPF has lower implementation complexity, making it an ideal technique for various population synthesis tasks. We documented the evaluation process and shared our source code to enable further research on advancing the field of modeling and simulation.
A conceptual depiction of the IPF process for population synthesis.

Our four-step process used in this study.

Average R² values by geographic level and method (standard deviations in italics).

% Total absolute error (% TAE) comparison by attribute for Fairfax County.
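For reference, a common way to compute the percent total absolute error (%TAE) reported above is to sum the absolute differences between observed and synthesized counts and normalize by the observed total; the short sketch below illustrates that definition (the exact formulation used in the paper may differ slightly).

```python
import numpy as np

def percent_tae(observed, synthesized):
    """Percent total absolute error between observed and synthesized attribute counts."""
    obs = np.asarray(observed, dtype=float)
    syn = np.asarray(synthesized, dtype=float)
    return 100.0 * np.abs(obs - syn).sum() / obs.sum()

# Illustrative counts for one attribute (e.g., household-size categories)
print(percent_tae([500, 300, 200], [480, 330, 190]))  # -> 6.0
```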

Full Reference:
Han, D., Islam, S., Anderson, T., Crooks, A.T. and Kavak, H. (2025), Quantitative Comparison of Population Synthesis Techniques, in Azar, E., Djanatliev, A., Harper, A., Kogler, C., Ramamohan, V., Anagnostou, A. and Taylor, S.J.E. (eds.), Proceedings of the 2025 Winter Simulation Conference, Seattle, WA, IEEE. pp. 151-162. (pdf)