Smart Alerts for Complex Risks
A Dual-Risk Framework for Understanding AI-Driven Crisis Communication in the Computational Age
Keywords:
Smart alerts, AI-enabled crisis communication, human–AI collaboration, predictive risk communication, multi-source verification, adaptive governance

Abstract
This paper develops a framework for “smart alerts” that explains how artificial intelligence (AI) and computational social media analytics are reshaping crisis communication, accelerating detection and enabling message personalization while introducing new categories of technological risk. We theorize a dual-risk structure: (1) primary hazards (e.g., floods, wildfires) that alerts aim to mitigate, and (2) secondary risks embedded in AI-mediated communication systems (false positives, algorithmic bias, privacy intrusions, and deepfakes). Using a speculative design approach and an illustrative technical case study of Twitter-based flood detection in Thailand, we show how human–AI collaboration models (AI-assisted, human-supervised, and parallel processing) can be operationalized from data ingestion and geocoding to visualization and verification. We propose three cross-cutting design and governance mechanisms: graduated confidence communication, multi-source verification, and adaptive governance architectures. Together, these mechanisms balance the speed–accuracy dilemma while safeguarding equity and democratic accountability. The framework advances crisis and strategic communication by (a) reframing time in predictive messaging (from reactive to anticipatory communication), (b) specifying organizational design patterns for decision rights and oversight in AI-enabled warning systems, and (c) articulating implementable practices that can sustain public trust. We conclude with implications for empirical evaluation and policy design.

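To make the proposed mechanisms concrete before the full framework is developed, the minimal sketch below shows one way graduated confidence communication and multi-source verification could be combined: a detection model's confidence score and the number of independent corroborating sources jointly determine a tiered alert level rather than a binary warn/no-warn decision. The class, thresholds, tier wording, and source labels are hypothetical illustrations, not the implementation used in the Thai flood-detection case study.

```python
# Illustrative sketch only: graduated confidence tiers plus a simple
# multi-source verification rule. All names and thresholds are assumptions.
from dataclasses import dataclass


@dataclass
class FloodSignal:
    confidence: float   # detection model confidence in [0, 1]
    sources: set[str]   # independent corroborating sources, e.g. {"social_media", "river_gauge"}


def alert_tier(signal: FloodSignal) -> str:
    """Map a signal to a graduated alert level instead of a binary decision."""
    corroborated = len(signal.sources) >= 2  # assumed multi-source verification rule
    if signal.confidence >= 0.9 and corroborated:
        return "WARNING: flooding detected and corroborated; issue public alert"
    if signal.confidence >= 0.7:
        return "WATCH: probable flooding; route to human supervisor before release"
    if signal.confidence >= 0.4:
        return "ADVISORY: unverified social media reports of possible flooding"
    return "MONITOR: no alert issued; continue ingestion"


if __name__ == "__main__":
    sig = FloodSignal(confidence=0.93, sources={"social_media", "river_gauge"})
    print(alert_tier(sig))  # prints the WARNING tier for this corroborated, high-confidence signal
```

The tiered output is the design point: intermediate confidence levels are surfaced for human review rather than suppressed or broadcast, which is how the human-supervised collaboration model enters the alerting pipeline.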