Methodology

Comprehensive documentation of the analysis approach, verification criteria, and limitations

Overview

This analysis applied Friedrich Glasl's 9-stage conflict escalation model to Discord messages from eight community members, using a two-phase approach:

  1. Phase 1: Automated keyword/pattern matching to identify candidate messages
  2. Phase 2: Manual verification to filter false positives and confirm actual conflict escalation behavior

Analysis Date: 2025-12-30
Data Through: December 2025
Total Messages Analyzed: 125,220

Glasl's 9-Stage Model

Friedrich Glasl's conflict escalation model maps how conflicts intensify through three phases, each containing three stages:

| Phase | Stage | Name | Characteristics | Outcome Orientation |
|-------|-------|------|-----------------|---------------------|
| Win-Win | 1 | Hardening | Positions crystallize, tension rises | Rational discussion still possible; both parties can win |
| Win-Win | 2 | Debate/Polarization | Verbal confrontation, black-and-white thinking | |
| Win-Win | 3 | Actions Not Words | Talk replaced by action, empathy decreases | |
| Win-Lose | 4 | Images/Coalitions | Stereotypes, recruiting allies, reputation campaigns | Self-interest dominates; face-saving difficult; one must win |
| Win-Lose | 5 | Loss of Face | Public attacks on opponent's character | |
| Win-Lose | 6 | Strategies of Threats | Ultimatums, accelerating conflict | |
| Lose-Lose | 7 | Limited Destructive Blows | Opponent as "thing," damaging actions acceptable | Destruction becomes the goal; both parties lose |
| Lose-Lose | 8 | Fragmentation | Destroying opponent's support system | |
| Lose-Lose | 9 | Together into the Abyss | Total confrontation, mutual destruction | |

Key Finding: No user showed Stage 7-9 behavior in conflict channels. Conflicts remain in the "Win-Lose" phase and have not crossed into "Lose-Lose" territory.
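The three-phase grouping above can be encoded as a small lookup, useful for rolling per-stage counts up into phase totals. This is a minimal sketch; the names and structure are illustrative and do not come from the analysis script:

```python
# Map each Glasl stage (1-9) to its phase; ranges are half-open.
GLASL_PHASES = {
    range(1, 4): "Win-Win",
    range(4, 7): "Win-Lose",
    range(7, 10): "Lose-Lose",
}

def phase_for_stage(stage: int) -> str:
    """Return the Glasl phase name for a stage number (1-9)."""
    for stages, phase in GLASL_PHASES.items():
        if stage in stages:
            return phase
    raise ValueError(f"stage must be 1-9, got {stage}")
```

With this mapping, the key finding above reads as: no user produced messages whose stage resolves to "Lose-Lose".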

Data Sources

Discord Backup Data

Channels Analyzed

All public channels were included, with particular attention to conflict-relevant channels:

Limitation: Private messages (DMs) and private channels were not included in the backup data.

Phase 1: Automated Keyword Matching

A Python script (scripts/glasl_analysis.py) scanned all messages for indicators associated with each Glasl stage:

Stage 1 (Hardening) Keywords

Stage 2 (Debate/Polarization) Keywords

Stage 3 (Actions Not Words) Keywords

Stage 4 (Images/Coalitions) Keywords

Stage 5 (Loss of Face) Keywords

Stage 6 (Strategies of Threats) Keywords

Stage 7 (Limited Destructive Blows) Keywords

Stage 8 (Fragmentation) Keywords

Stage 9 (Together into the Abyss) Keywords
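The Phase 1 scan can be sketched as a per-stage keyword search over each message. The keyword lists below are illustrative placeholders only; the actual indicator sets live in scripts/glasl_analysis.py and are not reproduced here:

```python
import re

# Placeholder keywords for illustration; NOT the real indicator lists.
STAGE_KEYWORDS = {
    1: ["disagree", "my position"],
    4: ["they always", "whose side"],
    5: ["liar", "dishonest"],
}

def flag_message(text: str) -> list[int]:
    """Return the Glasl stages whose keywords appear in a message."""
    lowered = text.lower()
    return [
        stage
        for stage, keywords in sorted(STAGE_KEYWORDS.items())
        if any(re.search(r"\b" + re.escape(kw) + r"\b", lowered)
               for kw in keywords)
    ]
```

A single message can be flagged at multiple stages, which is one reason raw counts overstate conflict and Phase 2 review is required.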

User Identification

Messages were attributed to target users by matching author display names and usernames against known aliases:

| User Key | Matched Names |
|----------|---------------|
| elle | elle, ellewrites, ellen |
| cloud | cloud, cloud and wiffles, wiffles |
| elan | elan, lané, elan (lané) |
| fineline | fineline, stephen c. young |
| wyatt | wyatt |
| coreyfro | coreyfro, corey froehlich, corey |
| loren | loren |
| zoda | zoda |

Phase 2: Manual Verification and Data Cleaning

Critical Finding: Raw automated counts were 5-10x higher than cleaned counts due to extensive false positives. Manual review was essential for intellectually honest comparisons.

All messages flagged at any Glasl stage underwent manual review to distinguish true positives from false positives and exclude non-conflict contexts.

Manual Review Process

Based on detailed analysis in glasl_escalation_report.md (2025-12-30), each user's flagged messages were reviewed in context to determine:

  1. Is this actual personal conflict escalation? (vs. mediator work, process explanation, technical language)
  2. Is the message initiating or defensive?
  3. What is the user's role in this context? (participant, mediator, administrator)
  4. Does the message meet strict verification criteria?

Data Cleaning Results

| User | Raw Stage 4-6 | Cleaned Stage 4-6 | Reduction | Primary Exclusions |
|------|---------------|-------------------|-----------|--------------------|
| Elle | 116 | 15 | 87% | ~40 neutral 'noisebridgers' references, ~31 educational ATL/86 explanations |
| Cloud | 91 | 2 | 98% | ~89 technical false positives |
| Wyatt | 79 | 3 | 96% | ~65 technical/procedural language |
| Elan | 145 | 0 | 100% | ALL mediator/administrative documentation |
| zoda | 48 | 2 | 96% | ~36 mediator/process documentation |
| Loren | 40 | 0 | 100% | ~32 technical language (fail2ban, network contexts) |
| fineline | 24 | 0 | 100% | ~24 false positives, reduced engagement |
| coreyfro | 20 | 3 | 85% | ~17 false positives, systemic policy focus |
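The reduction column follows directly from the raw and cleaned counts. A quick sketch for reproducing it (the helper name is illustrative; values come from the table above):

```python
def reduction_pct(raw: int, cleaned: int) -> int:
    """Percent of raw flags removed by manual review, as a whole percent."""
    if raw == 0:
        return 0
    return round(100 * (raw - cleaned) / raw)
```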

Verification Criteria for Character Attacks

A message was counted as a verified character attack only if it met ALL of the following criteria:

  1. Directed at a specific individual - Not general commentary about "people who..."
  2. Accusatory in nature - Making claims about character, not just disagreeing with actions
  3. Current/active - Not merely recounting historical events neutrally
  4. Authored by the user - Not quoting someone else's words
  5. Serious in tone - Not obviously joking or using technical terminology
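The all-criteria-must-hold rule can be sketched as a conjunction over the five analyst judgments. The dataclass fields name the criteria from the list above; the structure itself is illustrative, not taken from the review tooling:

```python
from dataclasses import dataclass

@dataclass
class ReviewFlags:
    """Analyst judgments recorded for one flagged message."""
    targets_specific_individual: bool   # 1. directed at a specific person
    accusatory_about_character: bool    # 2. claims about character, not actions
    current_not_historical: bool        # 3. active, not neutral recounting
    authored_not_quoted: bool           # 4. the user's own words
    serious_in_tone: bool               # 5. not joking or technical jargon

def is_verified_character_attack(flags: ReviewFlags) -> bool:
    """A message counts only if ALL five criteria hold."""
    return all(vars(flags).values())
```

Because any single False disqualifies a message, this rule is conservative by construction, which matches the false-negative bias noted under Limitations.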

Exclusion Categories

1. Mediator/Administrative Documentation

Problem: Users explaining processes or documenting decisions were flagged at high stages.

Example: Elan's 145 raw Stage 4-6 counts were almost entirely mediator work (explaining ATL processes, documenting Safety Council decisions, setting up mediation frameworks).

Solution: Excluded all mediator/administrative work from personal conflict counts. Elan should NOT be included in personal conflict comparisons.

2. Technical Language False Positives

Problem: Technical terms triggered keyword matches in non-conflict contexts.

Examples:

Solution: Excluded all technical/infrastructure discussions from conflict counts.

3. Educational/Process Explanations

Problem: Explaining community processes triggered coalition/threat keywords.

Example: Elle's ~31 educational explanations of ATL/86 processes were not coalition-building.

Solution: Excluded educational content where the user was explaining processes to others, not engaging in personal conflict.

4. Defensive Procedural Questions

Problem: Asking about processes or accountability was flagged as Stage 6.

Example: Wyatt's "What was the justification for the ask to leave?" is a procedural question, not a threat.

Solution: Excluded defensive questions about process and accountability.

5. Quotes and Historical References

Problem: Quoting others' concerns or recounting historical events triggered keyword matches.

Example: Elan quoting Benjamin's concerns about someone was flagged as Elan's character attack.

Solution: Excluded quotes of others and neutral historical recounting.

True Positives (Counted as Verified)

Output Files

Key Findings from Data Cleaning

After Manual Review

  • Elle has 7.5x more Win-Lose escalation messages than the next highest personal conflict user (15 vs. 2)
  • Elle has 10x more verified character attacks (10 vs. 0-1 for others)
  • Elan should be excluded from personal conflict comparisons (mediator role, 0 personal conflict)
  • Most users show significant restraint with 0-3 verified Win-Lose messages when context is considered
  • Raw automated counts created false equivalence between documentation and personal attacks

Methodology Comparison Standard

For intellectually honest comparisons:

Output and Storage

Limitations

1. Single Analyst Bias

Manual verification was performed by a single analyst based on detailed analysis in glasl_escalation_report.md. Inter-rater reliability would strengthen the findings.

2. Conservative Bias

When uncertain about context, messages were excluded from conflict counts (false negative bias). This means cleaned counts are likely under-counts of actual personal conflict escalation.

3. Private Messages Excluded

DMs and private channels were not included in the backup; the analysis covers only public Discord channels.

4. Temporal Context Not Weighted

The analysis counts total instances without weighting for recency or clustering, so temporal clustering (recent vs. older conflicts) is not captured in the current metrics.

5. No Sentiment Analysis

The verification was manual based on content review; automated sentiment scoring was not used.

6. Historical Conflicts Before Analysis Period

Some users (e.g., fineline) had earlier active conflict involvement (2021 Lovi/Ninewaves incident) that predates the analyzed period. Current data may not capture full historical patterns.

Strengths of Manual Review Approach

While limitations exist, the manual review approach addressed critical false positive problems:

Recommendations for Future Analysis

  1. Normalize by message count: Report attacks per 1,000 messages to control for volume differences
  2. Temporal tracking: Plot escalation over time within specific incident threads
  3. Reaction data: Include community response (emoji reactions, replies) as signal
  4. Thread context: Distinguish initiators from responders in conflict threads
  5. Cross-reference with mediation records: Correlate escalation patterns with formal mediation outcomes
  6. Inter-rater reliability: Have multiple analysts verify character attacks independently
  7. Sentiment analysis: Use NLP tools to supplement manual verification
  8. Network analysis: Map coalition formation and third-party involvement patterns
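Recommendation 1 amounts to a simple rate calculation. A sketch with the rounding choice made explicit (per-user message totals would come from the backup data; the example counts are hypothetical):

```python
def attacks_per_thousand(verified_attacks: int, total_messages: int) -> float:
    """Verified character attacks per 1,000 messages authored by the user."""
    if total_messages == 0:
        return 0.0
    return round(1000 * verified_attacks / total_messages, 2)
```

Normalizing this way keeps a high-volume poster from looking more conflictual than a low-volume poster simply because they write more.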

Why This Methodology Matters

Systematic Approach

By using Glasl's established conflict escalation model, the analysis follows a recognized framework rather than ad-hoc characterization of conflict behavior.

Two-Phase Design

Combining automated detection with manual verification balances:

Transparent Criteria

All verification criteria are documented, allowing others to assess the analysis or replicate it with different data.

Limitations Acknowledged

By documenting limitations, the analysis invites appropriate skepticism and suggests paths for improvement.

References