SISE AI Users Group
Survey Analysis & Strategic Insights
A synthesized view of AI experience, organizational posture, and priority use cases drawn from the March 2026 webinar poll — prepared for executive leadership review.
Executive Summary
Seventeen professionals from state, local, and federal government, the private sector, and nonprofit organizations participated in the SISE AI Users Group's inaugural survey. Respondents skewed heavily toward emergency management practitioners. The data reveal a community that is curious and motivated but operating in environments where AI governance lags behind individual interest: the single most consequential finding for leadership.
Most participants have experimented with consumer AI tools on their own, but fewer than 15% report working in organizations with formal AI policies and active adoption. This gap between individual initiative and institutional readiness represents the central strategic challenge — and opportunity — surfaced by this survey.
Participant Profile
The group spans multiple sectors, though government — particularly at the state level — represents the largest segment. Emergency management practitioners dominate by functional role, underscoring that this user group is primarily focused on public safety and disaster response contexts.
Sector Breakdown
Functional Role
AI Experience & Organizational Posture
Participants are largely in an exploratory phase — aware of AI tools and using them occasionally, but without deep proficiency or institutional backing. Three respondents report never having used AI for work at all. Meanwhile, three heavy users demonstrate what's possible when individuals invest personally in AI skills.
Individual AI Experience Level
How Common Is AI at the Organizational Level?
Organizational AI Policy Posture
Only 2 of 17 respondents work in organizations with both formal policy and active adoption. This represents the benchmark — and the destination most of this group aspires to reach. Several respondents explicitly noted frustration that policies restrict even basic workflow capabilities, suggesting policy design, not just policy existence, is a priority concern.
Generative AI Tools in Use
ChatGPT (OpenAI) and Microsoft Copilot dominate familiarity across the group, largely because these are either freely accessible or bundled into existing Microsoft 365 enterprise licenses. Google Gemini is the third most common tool. Claude (Anthropic) appeared in one respondent's toolkit.
Tool Familiarity (multi-select)
Where Participants Want AI to Have Impact
When asked to select up to three areas where they most want AI to improve their work environment, respondents clustered around operational intelligence and documentation — precisely the high-volume, repetitive tasks where AI creates measurable productivity gains.
Priority Areas for AI Improvement (top choices)
Write-in Use Cases
Respondents also surfaced additional priority areas not on the original list:
What Participants Are Actually Using AI For Today
Despite the governance gaps, the majority of participants are actively using AI tools for practical tasks. The most common applications are document-centric: drafting, editing, summarizing, and researching. A smaller number are pushing toward workflow integration and process development.
- "Drafting responses and summarizing long documents. Compliance verification."
- "Research, draft/review documents, starting to create decks, beginning to learn about workflow creation."
- "Reviewing and cleaning communications — emails, brochures, monthly reports; also coding and spreadsheet development."
- "Starting to use for models, procedure and plan development. Also for workflow enhancement."
- "Consolidating information from multiple sources."
- "Developing SEOC documents and training."
Key Themes from Participant Feedback
Theme 1 — Policy Is the #1 Barrier
The most striking pattern across free-text responses was frustration not with AI capability, but with organizational policy constraining what tools and workflows can be used. Multiple respondents noted they would like to use AI more broadly but are restricted by formal policy or the complete absence of guidance. One respondent described the culture as outright "taboo." Another is actively trying to convince leadership to change workflow policy.
Theme 2 — Accuracy and Trust Are Critical Concerns
Several respondents raised the issue of verifying AI outputs. One called proofing "the biggest challenge," noting that AI produces attractive products that may be inaccurate. Another asked how to distinguish AI-generated data from legitimate data. This trust gap suggests that organizations need not only AI tools but also validation frameworks and AI literacy training.
Theme 3 — "How Do We Start?" Is the Most Common Need
For organizations still in early stages, the question is foundational. Multiple respondents — including one federal agency preparing to introduce AI during an April awareness month — are at the very beginning of the journey. Questions like "How do we start?" and "What are next steps for AI in emergency management?" dominate the open-response section.
Theme 4 — Skilled Practitioners Are an Untapped Resource
Three respondents report frequent, comfortable AI use and are already integrating AI into complex workflows, including code review, process automation, and document pipelines. These individuals are potential peer mentors who could accelerate adoption across the group if given a platform to share their practices.
Questions the Community Wants Answered
The following themes emerge from the "one question for the webinar" responses — these are the topics that will drive the highest engagement and perceived value for future sessions:
Strategic Recommendations for Leadership
Based on the survey data, the following action areas are recommended — sequenced by readiness and impact:
| Priority | Recommendation | Rationale | Target Audience |
|---|---|---|---|
| High | Develop AI governance frameworks & use-case sandboxes | 6 organizations have zero formal guidance. Unmanaged AI use creates data risk and inconsistency. A lightweight policy template would unlock activity already happening informally. | All sectors, especially government |
| High | Launch "Situational Awareness AI" pilot programs | Incident triage and situational awareness was the #1 priority use case (6 votes). A structured pilot here delivers direct operational value in the core mission area of this user group. | Emergency management orgs |
| High | Activate Microsoft Copilot for already-licensed organizations | 11 of 17 respondents already use Copilot. Many organizations are paying for M365 AI capabilities but haven't formally enabled or trained staff on them — a zero-procurement win. | Gov't agencies on M365 |
| Medium | Create AI literacy & prompt engineering training | The majority of respondents are in "occasional use" mode. Structured training on prompt design, tool selection, and output validation would convert moderately comfortable users to proficient ones. | All participants |
| Medium | Develop after-action report (AAR) AI templates | AAR and lessons-learned documentation is the #2 use case. This is high-volume, time-consuming work where AI can reduce burden significantly. Standardized prompts and templates would accelerate adoption. | Emergency management |
| Medium | Create a peer practitioner showcase series | Three "power users" are already doing advanced AI work. Structured case-study sessions where these practitioners share their workflows would accelerate learning across the group faster than external training alone. | User Group programming |
| Lower | Explore AI tools for grant management & compliance | Grant management was cited as a use case by NGO/nonprofit respondents. AI can assist with requirements analysis, draft narrative sections, and cross-reference compliance criteria — reducing workload significantly. | Non-profit / NGO sector |
| Lower | Define AI output validation protocols | Accuracy concerns were raised by multiple respondents. Clear guidance on when and how to verify AI outputs — and what tools assist with this — would increase confidence and adoption. | All participants |
Overall Readiness Assessment
This user group sits at an inflection point. Individual curiosity and experimentation are widespread, but institutional frameworks have not kept pace. The most urgent need is not more capable tools — it is governance, training, and structured use cases that allow organizations to move from informal experimentation to sanctioned, high-value deployment.
The appetite is clearly present. The community is asking the right questions and is eager to grow. If leadership invests in policy development, peer-to-peer knowledge transfer, and a small number of high-visibility pilot deployments — particularly in situational awareness and after-action reporting — this group will become a meaningful center of AI capability in the emergency management sector.