SWDSI Annual Conference 2026  ·  University of North Texas

Learning from AI Implementation Failure

Service-Dominant Logic as a Framework for Collaborative Digital Transformation

Scott J. Warren, Ph.D. · Department of Learning Technologies · UNT
Arun Pookulangara · Department of Learning Technologies · UNT

AI Projects Keep Failing — The Cause Is Not the Technology

Despite massive global investment, AI and digital transformation initiatives fail at staggering rates. The bottleneck is rarely the algorithm itself.

70–84%
of AI and digital transformation projects fail to deliver their promised value
Oludapo et al., 2024; Syed et al., 2023
90–95%
of AI-specific transformation initiatives fail to meet goals
Systematic reviews across sectors
42%
of failures attributed to organizational factors — not technology
Syed et al., 2023

Not the Algorithm

AI capabilities exceed what most organizations can meaningfully integrate. The bottleneck is organizational logic, not technical sophistication.

Goods-Dominant Logic

Treating AI as a product to acquire, install, and optimize for autonomy — people become obstacles, not assets. Value is assumed to be embedded in the technology itself.

Misframed Success

Measuring success by automation rates and cost savings instead of value creation guarantees the wrong outcomes. Efficiency metrics capture performance, but not value.

Goods-Dominant vs. Service-Dominant Logic

Understanding why AI projects fail requires examining the assumptions embedded in their design. These two paradigms represent opposing models of how organizations conceptualize value creation.

Dimension | Goods-Dominant (GDL) | Service-Dominant (SDL)
Primary Goal | Replace human labor through automation | Enhance human capability through collaboration
Value Location | Embedded in the AI system | Emerges through human–AI interaction
Success Metric | Efficiency, cost reduction | Stakeholder value & relationship quality
System Design | Autonomous, closed | Collaborative, transparent
Transparency | Black-box acceptable | Explainability required for trust
Adaptation | Static optimization | Continuous learning and evolution
Stakeholder Role | Passive recipients | Active co-creators of value
Failure Response | Technical debugging | Collaborative redesign and learning
Organizations applying service-dominant thinking achieve up to 89% higher stakeholder satisfaction and 67% greater innovation capacity than those pursuing automation alone.
Abdurrahman et al., 2024

Why Goods-Dominant AI Projects Collapse

Five persistent, predictable failure modes emerge when organizations approach AI through a goods-dominant framework.

1

The Autonomy Paradox

Full autonomy removes feedback loops; independence creates fragility. Autonomous systems lack context awareness — they continue to optimize for goals that may no longer align with current conditions. Without human feedback, neither the system nor the organization learns.

2

The Transparency Problem

Black-box systems break trust; explainability must be an architectural priority. Organizations cannot improve or govern what they cannot interpret. Transparency enables learning, builds trust, and supports integration of AI insights with contextual human knowledge. (Kleinaltenkamp et al., 2012)

3

The Efficiency Trap

Optimizing speed and cost sacrifices adaptive capacity — efficiency without adaptability is fragility. Efficiency-focused design sacrifices stakeholder relationships and adaptive capacity, the very foundations of sustainable performance. (Ostrom et al., 2015)

4

Stakeholder Alienation

AI as replacement triggers resistance; inclusion transforms opponents into co-creators. When people understand how AI supports rather than replaces them, trust develops naturally, adoption accelerates, and the system reflects organizational realities. (Edvardsson et al., 2011)

5

Static Systems Fail

Fixed designs degrade in dynamic environments; effective AI is a living capability. Systems optimized for today's conditions will inevitably fail tomorrow. Goods-dominant logic treats AI as a finished product — but effective AI functions as a dynamic capability that evolves through continuous interaction. (Teece, 2007)

Five Failure Patterns: Real-World Cases

Each abstract failure pattern maps directly onto a documented organizational collapse. In every case, the technology performed as designed — the logic failed.

1 Autonomy Paradox — IBM Watson Health

Designed to replace oncologist judgment autonomously. Physicians could not understand recommendations or override errors. MD Anderson terminated the $62M project in 2017 after the system produced unsafe treatment suggestions.

GDL trap: value embedded in system; humans as passive recipients (Abdurrahman et al., 2024)

2 Transparency Problem — Amazon Recruiting AI

Black-box hiring algorithm downgraded women's applications for 5 years, undetected. No interpretable outputs for HR to audit or override. Scrapped in 2018 after internal discovery of systematic gender bias.

GDL trap: black-box acceptable; explainability not a design priority (Kleinaltenkamp et al., 2012)

3 Efficiency Trap — Knight Capital Group

Automated trading system optimized entirely for speed and volume. A 45-minute runaway event in August 2012 cost $440M. No adaptive override, no human feedback loop. The firm collapsed within days.

GDL trap: efficiency without adaptability is fragility (Ostrom et al., 2015)

4 Stakeholder Alienation — JP Morgan COIN v1

COIN was deployed without involving legal staff. Lawyers saw it as a threat; many began seeking other roles. Credit officers couldn't explain decisions to clients. Trust eroded. The bank later rebuilt COIN as a collaborative intelligence platform.

GDL trap: stakeholders as passive recipients, not co-creators (Warren, 2025, Ch. 3)

5 Static Systems — Maersk Early AI Initiatives

In the mid-2010s Maersk invested heavily in AI for route optimization and port efficiency. Despite technical sophistication, these initiatives failed to create transformational value — AI could not adapt to what customers actually needed: integrated logistics ecosystems.

GDL trap: AI as finished product, not dynamic capability (Teece, 2007; Warren, 2025, Ch. 6)

In every case: the technology worked. The logic failed.
GDL treats AI as a product to install, not a collaborator to develop.

Four Principles for Collaborative SDL-Based AI Design

Originating in marketing and systems theory (Vargo & Lusch, 2004, 2008, 2016), SDL asserts that value is co-created through relationships and interactions, not delivered through static products.

Transparency

Interpretable models and visible reasoning chains. Explainability is an architectural priority, not a compliance checkbox. Users need to understand how AI arrives at outputs — transparency builds the trust that makes collaboration possible.

Adaptability

Modular designs with feedback mechanisms and iterative retraining so AI evolves alongside the business. Effective AI functions as a dynamic capability — a resource that evolves through continuous interaction and learning, not a finished product.

Co-Creation

Stakeholders embedded at every stage: planning, testing, and refinement. Resistance becomes collaboration. When people understand how AI supports rather than replaces them, adoption accelerates and systems reflect organizational realities.

Augmentation

AI extends human capability rather than replacing it. Human judgment plus machine precision equals resilience. Collaborative architecture integrates human judgment into the decision cycle, allowing AI to analyze, recommend, and learn from human validation.
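The augmentation principle can be sketched as a minimal human-in-the-loop decision cycle: the model recommends with visible reasoning, a person validates or overrides, and every override is retained as a learning signal. This is an illustrative sketch only; the names (`Recommendation`, `CollaborativeLoop`) are hypothetical and not drawn from any cited system.

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    case_id: str
    suggestion: str
    confidence: float
    rationale: str  # visible reasoning chain supports the transparency principle

@dataclass
class CollaborativeLoop:
    """Augmentation sketch: AI recommends, a human decides,
    and disagreements feed future retraining."""
    feedback: list = field(default_factory=list)

    def decide(self, rec: Recommendation, human_choice: str) -> str:
        # Human judgment stays in the decision cycle: an empty choice
        # accepts the AI suggestion, anything else overrides it.
        final = human_choice or rec.suggestion
        if final != rec.suggestion:
            # An override is a learning signal, not an error to suppress.
            self.feedback.append((rec.case_id, rec.suggestion, final))
        return final
```

In this design the system never acts autonomously: the override log is what keeps the AI a dynamic capability rather than a static product, since each disagreement becomes input for the next retraining cycle.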

When SDL Fixed What GDL Broke: Case Evidence

Two organizations that suffered GDL failures subsequently rebuilt their AI systems under service-dominant principles — with measurably different outcomes.

JP Morgan COIN: From Alienation to Collaboration

GDL Problem

Legal staff treated as obsolete; opaque scores eroded client trust. Lawyers experienced the AI as an existential threat and began seeking other positions.

SDL Fix

Stakeholder risk mapping engaged lawyers, credit officers, and compliance teams as co-designers. Legal staff became "AI collaboration specialists" rather than replacements, combining algorithmic pattern recognition with regulatory knowledge.

Outcome

Risk analysis combining AI pattern recognition with human relationship knowledge — producing results neither could achieve alone. Client trust was rebuilt on a foundation of shared understanding.

Source: Warren (2025), Ch. 3; Edvardsson et al. (2011)

Maersk: From Static Optimization to Living Ecosystem

GDL Problem

Static AI for container routing optimized for efficiency. Customers wanted integrated logistics. The AI couldn't adapt — it was optimizing for container transportation while customers needed ecosystem orchestration.

SDL Fix

Customer co-creation research revealed shipping was only 15–20% of total logistics cost. Maersk rebuilt around integrated ecosystems, applying Schmarzo's Law of Increasing Returns on Data and treating the platform as a reusable, evolving asset.

Outcome

Transformed from container carrier to integrated logistics platform. AI evolved alongside customer needs rather than degrading against them — a shift from static optimization to continuous value co-creation.

Source: Warren (2025), Ch. 6; Schmarzo (2020)

The SDL Difference: What Changes When You Switch Logics

Dimension | Goods-Dominant (Fails) | Service-Dominant (Succeeds)
Stakeholder role | Passive recipients | Active co-creators
Transparency | Black-box acceptable | Explainability required
Value location | Embedded in AI system | Emerges through interaction
Adaptation | Static optimization | Continuous learning
Failure response | Technical debugging | Collaborative redesign

Schmarzo's Five Steps + SDL: How They Align

SDL governs all five stages of Schmarzo's (2020) digital transformation model. Organizations applying goods-dominant logic stall at Step 3 — efficiency optimization without co-creation blocks transformation and cannot reach Steps 4–5. (Warren & Pookulangara, 2026; Schmarzo, 2020)
1 Monitor

Capture big data on organizational performance as a system of systems

SDL: stakeholders define what to measure

2 Insights

Analysts and management co-identify actionable insights and technology solutions

SDL: co-creation prevents misframed metrics

3 Optimize

Embed analytics and automate operations to improve performance

⚠ GDL stalls here — efficiency without adaptability

4 Monetize

Measure transformation value; ensure improvements drive profit and stakeholder outcomes

SDL: relational and adaptive metrics, not just ROI

5 Transform

Deploy analytics to augment human performance via new technologies and processes

SDL goal: augment not replace — Maersk, COIN v2 achieved this

Why GDL Stalls at Step 3

When organizations treat AI as a product, they achieve operational efficiency at Step 3 but cannot reach Steps 4–5. Knight Capital optimized for speed and eliminated resilience. IBM Watson optimized for accuracy and eliminated trust. Without SDL's co-creation and adaptability, systems degrade rather than evolve. (Ostrom et al., 2015; Warren & Pookulangara, 2026)

How SDL Unlocks Steps 4–5

SDL embeds stakeholders across all five stages so insights are co-created and monetization reflects relational as well as financial value. Maersk reached Step 5 only by moving from container efficiency (Step 3) to integrated ecosystem orchestration (Step 5) — a shift possible only through stakeholder co-design and adaptive AI architecture. (Schmarzo, 2020; Warren, 2025, Ch. 6)

Four-Stage SDL AI Implementation Framework

Transitioning to service-dominant AI involves both cultural and structural transformation. Successful organizations move through four iterative stages.

1

Awareness Building

Months 1–3

Executive education, stakeholder analysis, and mindset alignment

Shared understanding of collaborative AI principles · Leadership commitment to service-dominant implementation
2

Pilot Development

Months 4–9

Small-scale collaborative AI tests and feedback cycles

Proof of concept for partnership-based design · Demonstrated joint value creation
3

Capability Building

Months 10–18

Training, governance redesign, and performance system alignment

Institutional capacity for human–AI collaboration · Sustainable processes and cultural reinforcement
4

Ecosystem Integration

Months 19–24

Cross-functional integration and network scaling

Collaborative AI embedded across value chains · Enduring competitive advantage

Beyond Speed, Cost, and Accuracy

Assessment criteria must evolve alongside implementation philosophy. Traditional efficiency metrics capture performance but not value. Service-dominant measurement expands evaluation to include learning, trust, and adaptability.

Category | Traditional Metrics | Service-Dominant Metrics
Stakeholder Outcomes | Cost per transaction, speed | Satisfaction, value co-creation
Innovation Capacity | Number of AI features | Collaborative innovations, new capabilities
Adaptation | System uptime | Responsiveness to change, learning speed
Relationship Quality | Usage rates | Trust, transparency, partnership depth
Long-term Performance | Short-term ROI | Sustained value creation, strategic advantage
Cultural Integration | Adoption percentages | Normalization of collaborative practices
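One way to operationalize this expanded evaluation is a blended scorecard in which relational measures carry explicit weight alongside efficiency. The sketch below is a minimal, hypothetical illustration; all metric names, scores, and weights are assumptions, not values from the cited literature.

```python
# Hypothetical scorecard: each metric is pre-normalized to a 0-1 scale.
TRADITIONAL = {"cost_per_transaction": 0.9, "uptime": 0.95}
SERVICE_DOMINANT = {"stakeholder_satisfaction": 0.6,
                    "co_created_innovations": 0.4,
                    "trust_and_transparency": 0.5}

def scorecard(traditional, service_dominant, sdl_weight=0.6):
    """Blend the two metric families into one composite score.

    sdl_weight > 0.5 encodes the service-dominant stance that
    relational and adaptive outcomes outrank raw efficiency.
    """
    t = sum(traditional.values()) / len(traditional)
    s = sum(service_dominant.values()) / len(service_dominant)
    return round((1 - sdl_weight) * t + sdl_weight * s, 3)
```

With the illustrative numbers above, strong efficiency scores (0.9, 0.95) cannot mask weak relational ones (0.4 to 0.6): the composite lands well below the efficiency average, surfacing exactly the gap that traditional metrics hide.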

AI failure is about assumptions, not algorithms.

  • Goods-dominant logic creates brittle, isolated AI
  • Service-dominant logic builds living collaborators
  • Transparency and adaptability equal sustained value
  • Stakeholder co-creation transforms resistance into partnership
  • Measure trust, innovation, and adaptability — not just speed and cost
  • AI should extend people, not replace them

🛠 Ready to plan your implementation? (Tool currently in testing.)

The Sustainable Digital Transformation Decision Framework (SDTDF) is a structured tool to assess AI and digital transformation readiness and reduce the likelihood of implementation failure. Apply SDL principles systematically before committing to deployment.

https://sdtdf.systemly.net →

Questions?

Scott J. Warren, Ph.D.

Department of Learning Technologies · College of Information · University of North Texas

scott.warren@unt.edu · warren.systemly.net

Arun Pookulangara

Department of Learning Technologies · College of Information · University of North Texas

ArunPookulangara@my.unt.edu

Sources