Comprehensive documentation of the analysis approach, verification criteria, and limitations
Overview
This analysis used Friedrich Glasl's 9-stage conflict escalation model to systematically analyze Discord messages from eight community members. The analysis employed a two-phase approach:
Phase 1: Automated keyword/pattern matching to identify candidate messages
Phase 2: Manual verification to filter false positives and confirm actual conflict escalation behavior
Analysis Date: 2025-12-30
Data Through: December 2025
Total Messages Analyzed: 125,220
Glasl's 9-Stage Model
Friedrich Glasl's conflict escalation model maps how conflicts intensify through three phases, each containing three stages:
| Phase | Stage | Name | Characteristics | Outcome Orientation |
|-------|-------|------|-----------------|---------------------|
| Win-Win | 1 | Hardening | Positions crystallize, tension rises | Rational discussion still possible; both parties can win |
| Win-Win | 2 | Debate/Polarization | Black-and-white thinking, verbal sparring | |
| Win-Win | 3 | Actions, Not Words | Talking no longer helps; parties act unilaterally | |
| Win-Lose | 4 | Images and Coalitions | Stereotyped images of the opponent, recruiting allies | Self-interest dominates; face-saving difficult; one must win |
| Win-Lose | 5 | Loss of Face | Public attacks on opponent's character | |
| Win-Lose | 6 | Strategies of Threats | Ultimatums, accelerating conflict | |
| Lose-Lose | 7 | Limited Destructive Blows | Opponent as "thing," damaging actions acceptable | Destruction becomes the goal; both parties lose |
| Lose-Lose | 8 | Fragmentation | Destroying opponent's support system | |
| Lose-Lose | 9 | Together into the Abyss | Total confrontation, mutual destruction | |
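For scripting against this model, the stage-to-phase mapping can be encoded as a simple lookup. The sketch below is illustrative only; the names and structure are assumptions and are not taken from scripts/glasl_analysis.py.

```python
# Glasl stage -> (stage name, escalation phase) lookup.
# Hypothetical structure; the actual script may organize this differently.
GLASL_STAGES = {
    1: ("Hardening", "Win-Win"),
    2: ("Debate/Polarization", "Win-Win"),
    3: ("Actions, Not Words", "Win-Win"),
    4: ("Images and Coalitions", "Win-Lose"),
    5: ("Loss of Face", "Win-Lose"),
    6: ("Strategies of Threats", "Win-Lose"),
    7: ("Limited Destructive Blows", "Lose-Lose"),
    8: ("Fragmentation", "Lose-Lose"),
    9: ("Together into the Abyss", "Lose-Lose"),
}

def phase_of(stage: int) -> str:
    """Return the escalation phase ("Win-Win", "Win-Lose", "Lose-Lose") for a stage 1-9."""
    return GLASL_STAGES[stage][1]
```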
Key Finding: No user showed Stage 7-9 behavior in conflict channels. Conflicts remain in the "Win-Lose" phase and haven't crossed into "Lose-Lose" territory.
Data Sources
Discord Backup Data
Primary backup: Noisebridge_20251223_225023 (full server backup, 125,220 messages)
Bravespace channel extract: extract_20251229_204036 (1,941 messages including threads)
Bravespace forum extract: bravespace_forum_extract (7 forum posts, 74 messages)
Channels Analyzed
All public channels were included, with particular attention to conflict-relevant channels:
#safety-council - Safety and moderation discussions
#general-chat-aka-hackitorium - Main community discussion
#weekly-meeting - Meeting notes and discussions
#accessibility - Accessibility concerns
#sewing - Area-specific discussions
Various thread channels for specific incidents
Limitation: Private messages (DMs) and private channels were not included in the backup data.
Phase 1: Automated Keyword Matching
A Python script (scripts/glasl_analysis.py) scanned all messages for indicators associated with each Glasl stage:
Stage 1 (Hardening) Keywords
Keywords: "disagree", "i think", "my position", "i believe", "i feel that", "from my perspective", "in my view", "i stand by"
Patterns: `i (still )?(think|believe|feel)`, `my (position|stance|view)`
Stage 2 (Debate/Polarization) Keywords
Keywords: "wrong", "right", "obviously", "clearly", "you don't understand", "that's not true", "you're missing", "the fact is", "actually", "no offense but", "with all due respect"
Keywords: "people agree", "everyone knows", "the community thinks", "we all", "they always", "typical", "people like you", "your type", "those people", "us vs them", "taking sides", "who's with me"
Identity coalition-building: "noisebridgers", "at noisebridge we", "here at noisebridge"
Identity-based dismissal: "sexism", "sexist", "misogyny", "misogynist", "because i'm a woman", "as a woman"
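A minimal sketch of how such a keyword/pattern matcher might look, assuming a dictionary of per-stage indicators. The indicator sets below are abbreviated from the lists above, and all function and variable names are hypothetical; the actual implementation lives in scripts/glasl_analysis.py and may differ.

```python
import re

# Abbreviated per-stage indicator sets; names here are illustrative only.
STAGE_INDICATORS = {
    1: {
        "keywords": ["disagree", "i think", "my position", "i believe"],
        "patterns": [r"i (still )?(think|believe|feel)", r"my (position|stance|view)"],
    },
    2: {
        "keywords": ["obviously", "clearly", "you don't understand", "that's not true"],
        "patterns": [],
    },
    4: {
        "keywords": ["everyone knows", "we all", "us vs them", "taking sides"],
        "patterns": [],
    },
}

def flag_stages(message: str) -> list[int]:
    """Return every Glasl stage whose keyword or regex indicators match the message."""
    text = message.lower()
    flagged = []
    for stage, ind in sorted(STAGE_INDICATORS.items()):
        if any(kw in text for kw in ind["keywords"]) or any(
            re.search(pat, text) for pat in ind["patterns"]
        ):
            flagged.append(stage)
    return flagged
```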
Messages were attributed to target users by matching author display names and usernames against known aliases:
| User Key | Matched Names |
|----------|---------------|
| elle | elle, ellewrites, ellen |
| cloud | cloud, cloud and wiffles, wiffles |
| elan | elan, lané, elan (lané) |
| fineline | fineline, stephen c. young |
| wyatt | wyatt |
| coreyfro | coreyfro, corey froehlich, corey |
| loren | loren |
| zoda | zoda |
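The alias table translates directly into a lookup for the attribution step. A minimal sketch, with hypothetical function and variable names:

```python
# Alias table from above, flattened to a set-based lookup.
ALIASES = {
    "elle": {"elle", "ellewrites", "ellen"},
    "cloud": {"cloud", "cloud and wiffles", "wiffles"},
    "elan": {"elan", "lané", "elan (lané)"},
    "fineline": {"fineline", "stephen c. young"},
    "wyatt": {"wyatt"},
    "coreyfro": {"coreyfro", "corey froehlich", "corey"},
    "loren": {"loren"},
    "zoda": {"zoda"},
}

def resolve_user(author_name: str) -> str | None:
    """Map a Discord display name or username to a canonical user key, if any."""
    name = author_name.strip().lower()
    for user_key, names in ALIASES.items():
        if name in names:
            return user_key
    return None  # author is not one of the eight target users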
Phase 2: Manual Verification and Data Cleaning
Critical Finding: Raw automated counts were 5-10x higher than cleaned counts due to extensive false positives. Manual review was essential for intellectually honest comparisons.
All messages flagged at any Glasl stage underwent manual review to distinguish true positives from false positives and exclude non-conflict contexts.
Manual Review Process
Based on detailed analysis in glasl_escalation_report.md (2025-12-30), each user's flagged messages were reviewed in context to determine:
Is this actual personal conflict escalation? (vs. mediator work, process explanation, technical language)
Is the message initiating or defensive?
What is the user's role in this context? (participant, mediator, administrator)
Does the message meet strict verification criteria?
Representative results of this review (raw flagged count vs. verified count):

| User | Raw Flagged | Verified | Reduction | Notes |
|------|-------------|----------|-----------|-------|
| … | … | … | … | ~32 technical language (fail2ban, network contexts) |
| fineline | 24 | 0 | 100% | ~24 false positives, reduced engagement |
| coreyfro | 20 | 3 | 85% | ~17 false positives, systemic policy focus |
Verification Criteria for Character Attacks
A message was counted as a verified character attack only if it met ALL of the following criteria:
Directed at a specific individual - Not general commentary about "people who..."
Accusatory in nature - Making claims about character, not just disagreeing with actions
Current/active - Not merely recounting historical events neutrally
Authored by the user - Not quoting someone else's words
Serious in tone - Not obviously joking or using technical terminology
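These five criteria form a conjunctive checklist: failing any one disqualifies the message. A sketch of how a reviewer's judgment could be recorded, using a hypothetical record type (not part of the actual tooling):

```python
from dataclasses import dataclass

@dataclass
class AttackVerification:
    """A reviewer's judgment of one flagged message against the five criteria."""
    directed_at_individual: bool  # names a specific person, not "people who..."
    accusatory: bool              # claims about character, not mere disagreement
    current: bool                 # active conflict, not neutral recounting of history
    authored_by_user: bool        # the user's own words, not a quotation
    serious_tone: bool            # not a joke or technical terminology

    def is_verified_attack(self) -> bool:
        # A message counts as a verified character attack only if ALL five hold.
        return all((
            self.directed_at_individual,
            self.accusatory,
            self.current,
            self.authored_by_user,
            self.serious_tone,
        ))
```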
Exclusion Categories
1. Mediator/Administrative Documentation
Problem: Users explaining processes or documenting decisions were flagged at high stages.
Example: Elan's 145 raw Stage 4-6 counts were almost entirely mediator work (explaining ATL processes, documenting Safety Council decisions, setting up mediation frameworks).
Solution: Excluded all mediator/administrative work from personal conflict counts. Elan should NOT be included in personal conflict comparisons.
2. Technical Language False Positives
Problem: Technical terms triggered keyword matches in non-conflict contexts.
Examples:
"isolate" in electrical isolation contexts
"attack" in "Tetris attack" or network security discussions
"ban" in "fail2ban," "banned IP ranges"
"everywhere" in technical documentation
Solution: Excluded all technical/infrastructure discussions from conflict counts.
3. Educational/Process Explanations
Problem: Explaining community processes triggered coalition/threat keywords.
Example: Elle's ~31 educational explanations of ATL/86 processes were not coalition-building.
Solution: Excluded educational content where the user was explaining processes to others, not engaging in personal conflict.
4. Defensive Procedural Questions
Problem: Asking about processes or accountability was flagged as Stage 6.
Example: Wyatt's "What was the justification for the ask to leave?" is a procedural question, not a threat.
Solution: Excluded defensive questions about process and accountability.
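Taken together, the four exclusion categories act as context filters applied before counting. A minimal sketch, assuming the role and context flags come from the manual review rather than from automation; the term lists and names below are illustrative, not exhaustive:

```python
# Context filters for the four exclusion categories (hypothetical sketch).
TECHNICAL_TERMS = {"fail2ban", "tetris attack", "electrical isolation", "banned ip"}
EXCLUDED_ROLES = {"mediator", "administrator"}

def is_excluded(message: str, author_role: str,
                is_process_explanation: bool,
                is_defensive_question: bool) -> bool:
    """Return True if a flagged message falls in any exclusion category."""
    text = message.lower()
    if author_role in EXCLUDED_ROLES:                  # 1. mediator/admin documentation
        return True
    if any(term in text for term in TECHNICAL_TERMS):  # 2. technical language
        return True
    if is_process_explanation:                         # 3. educational/process content
        return True
    if is_defensive_question:                          # 4. defensive procedural questions
        return True
    return False
```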
Limitations
1. Single-Analyst Verification
Manual verification was performed by a single analyst, based on the detailed analysis in glasl_escalation_report.md. Inter-rater reliability checks would strengthen the findings.
2. Conservative Bias
When uncertain about context, messages were excluded from conflict counts (false negative bias). This means cleaned counts are likely under-counts of actual personal conflict escalation.
3. Private Messages Excluded
DMs and private channels were not included in the backup. The analysis covers only public Discord channels.
4. Temporal Context Not Weighted
The analysis counts total instances without weighting for recency or clustering; temporal clustering (recent vs. older conflicts) is not captured in the current metrics.
5. No Sentiment Analysis
Verification was manual, based on content review; automated sentiment scoring was not used.
6. Historical Conflicts Before Analysis Period
Some users (e.g., fineline) had earlier active conflict involvement (2021 Lovi/Ninewaves incident) that predates the analyzed period. Current data may not capture full historical patterns.
Strengths of Manual Review Approach
While limitations exist, the manual review approach addressed critical false positive problems:
Eliminated mediator role inflation: Elan's 145 raw → 0 cleaned shows the importance of role context
Removed technical language bias: 87-100% reduction in false positives for most users
Distinguished initiating vs. defensive behavior: Cloud's and Wyatt's defensive patterns are now clear
Provided intellectually honest comparisons: the 7.5x disparity (15 vs. 2) is more defensible than the raw 1.3x (116 vs. 91)
Recommendations for Future Analysis
Normalize by message count: Report attacks per 1,000 messages to control for volume differences (see the sketch after this list)
Temporal tracking: Plot escalation over time within specific incident threads
Reaction data: Include community response (emoji reactions, replies) as signal
Thread context: Distinguish initiators from responders in conflict threads
Cross-reference with mediation records: Correlate escalation patterns with formal mediation outcomes
Inter-rater reliability: Have multiple analysts verify character attacks independently
Sentiment analysis: Use NLP tools to supplement manual verification
Network analysis: Map coalition formation and third-party involvement patterns
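For the normalization recommendation above, the metric itself is straightforward. A sketch: the verified counts 15 and 2 are from the comparison cited earlier, but the message totals here are hypothetical.

```python
def attacks_per_1000(verified_attacks: int, total_messages: int) -> float:
    """Verified character attacks per 1,000 messages authored by the user."""
    if total_messages == 0:
        return 0.0
    return 1000 * verified_attacks / total_messages

# Hypothetical message totals for illustration only.
print(attacks_per_1000(15, 4200))  # ~3.57 attacks per 1,000 messages
print(attacks_per_1000(2, 3800))   # ~0.53 attacks per 1,000 messages
```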
Why This Methodology Matters
Systematic Approach
By using Glasl's established conflict escalation model, the analysis follows a recognized framework rather than ad-hoc characterization of conflict behavior.
Two-Phase Design
Combining automated detection with manual verification balances:
Scale: Automated phase can process 125K+ messages
Accuracy: Manual phase filters false positives and confirms actual escalation
Transparent Criteria
All verification criteria are documented, allowing others to assess the analysis or replicate it with different data.
Limitations Acknowledged
By documenting limitations, the analysis invites appropriate skepticism and suggests paths for improvement.
References
Glasl, Friedrich. Confronting Conflict: A First-Aid Kit for Handling Conflict. Hawthorn Press, 1999.
Glasl, Friedrich. "The Process of Conflict Escalation and Roles of Third Parties." In Conflict Management and Industrial Relations, edited by G. B. J. Bomers and R. Peterson. Springer, 1982.
Noisebridge Discord Server (2023-2025). Primary data source.