University of North Texas  |  College of Information  |  SWDSI Annual Conference

Learning from AI Implementation Failure

Service-Dominant Logic as a Framework for Collaborative Digital Transformation

SWDSI Conference Presentation
Presented by
Scott J. Warren, Ph.D.
Department of Learning Technologies
College of Information  ·  [email protected]

Co-author
Arun Pookalangara
Department of Learning Technologies
College of Information
University of North Texas

The Problem

AI Projects Keep Failing. The Cause Is Not the Technology

Despite dramatic advances in AI capability, organizations continue to fail at implementation at alarming rates. The problem is not algorithmic. It is structural. The dominant logic used to procure, deploy, and evaluate AI systems is fundamentally misaligned with how value is actually created.

70–84%
of AI and digital transformation projects fail to deliver their promised value (Oludapo et al., 2024;[1] Syed et al., 2023[2])
Not the Algorithm
AI capabilities exceed what most organizations can meaningfully integrate. The bottleneck is organizational logic, not technical sophistication.
Goods-Dominant Logic
Treating AI as a product to acquire, install, and optimize for autonomy. People become obstacles to efficiency rather than sources of value.
Misframed Success
Measuring by automation rates and cost savings instead of value creation. The wrong metrics guarantee the wrong outcomes.

Two Paradigms

Goods-Dominant vs. Service-Dominant Logic

Goods-Dominant Logic treats AI as a product with value embedded in the system itself. Service-Dominant Logic (SDL), developed by Vargo and Lusch,[3] treats value as something that emerges through interaction. Organizations using SDL achieve up to 89% higher stakeholder satisfaction and 67% greater innovation capacity (Abdurrahman et al., 2024).[4]

Dimension | Goods-Dominant (GDL) | Service-Dominant (SDL)
Primary Goal | Replace human labor | Enhance human capability
Value Location | Embedded in AI system | Emerges through interaction
Success Metric | Efficiency, cost reduction | Stakeholder value and quality
System Design | Autonomous, closed | Collaborative, transparent
Failure Response | Technical debugging | Collaborative redesign

Five Failure Patterns

Why Goods-Dominant AI Projects Collapse

The 70–84% AI implementation failure rate is not random. Goods-dominant logic produces five predictable structural failure patterns that recur across industries regardless of the underlying technology's quality. Each one stems from treating AI as a product rather than a collaborative capability.

1. The Autonomy Paradox
Organizations assume that the less human input a system requires, the more effective it is. But full autonomy removes the feedback loops that allow systems to improve, diffuses accountability when decisions go wrong, and creates systems that continue optimizing for goals that no longer match current conditions. Organizations that emphasize human-AI collaboration achieve significantly higher stakeholder satisfaction and innovation capacity than those pursuing full automation.[4]
2. The Transparency Problem
When AI systems act as opaque black boxes, they erode both user trust and organizational learning. People cannot govern, improve, or integrate what they cannot interpret. Transparency enables three outcomes: it allows humans to understand how decisions are reached, builds trust through explainability, and supports integration by helping people combine AI outputs with contextual knowledge.[5] Treating explainability as an afterthought rather than an architectural requirement guarantees adoption failure.
3. The Efficiency Trap
Efficiency is a commonly used but often deceptive metric. When organizations optimize solely for speed, cost, or automation rates, they sacrifice adaptive capacity, the very foundation of sustainable performance. A logistics firm that automates routing to cut delivery time may quietly degrade service quality or resilience during disruptions. The immediate metrics improve, but systemic fragility increases. Efficiency without adaptability is fragility with better-looking numbers.[6]
4. Stakeholder Alienation
When AI is positioned as a replacement rather than an enhancement, the predictable result is resistance. Employees perceive threats to their autonomy and expertise. Customers grow frustrated with rigid systems that cannot adapt. Partners face breakdowns when AI-driven processes eliminate the flexibility that relationships require. The fix is not communication campaigns; it is genuine inclusion. Embedding stakeholders in planning, testing, and refinement from the start transforms resistance into co-ownership.[7]
5. Static Systems in Dynamic Environments
Goods-dominant logic treats AI as a finished product delivered at deployment. But environments change: markets shift, regulations evolve, user needs transform. Systems optimized for today's conditions will fail tomorrow's. Effective AI must function as a dynamic capability: a resource that evolves through continuous interaction, feedback, and learning.[8] Organizations that treat AI as adaptive infrastructure achieve far greater resilience than those that treat it as a one-time deployment. Static design is not caution. It is the most expensive mistake in AI implementation.

Service-Dominant Solutions

Four Principles for Collaborative AI Design

Each of the five GDL failure patterns has a direct SDL counterpart. These four design principles are not aspirational values; they are structural requirements for building AI systems that sustain value over time rather than eroding it.

Transparency
Service-dominant AI makes explainability an architectural requirement, not a compliance feature. This means interpretable models, visible reasoning chains, and interfaces that communicate how conclusions are reached. The outcome is not just trust. It is collaboration built on shared understanding. Organizations that invest in explainability find that user adoption, feedback quality, and long-term performance all improve substantially. Users cannot partner with a system they cannot see into.
Adaptability
Effective AI is not a finished product. It is a dynamic capability that evolves through use. Modular designs, embedded feedback mechanisms, and iterative retraining allow systems to evolve alongside shifting business environments, regulatory requirements, and user needs. Organizations that build for adaptability achieve far greater resilience than those that optimize for a fixed set of current conditions. Static systems are not stable; they are degrading.
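A feedback-triggered retraining loop is one way to make "dynamic capability" concrete. The sketch below is purely illustrative and not from the presentation: the class name, window size, and error threshold are all assumptions, and `retrain()` is a stand-in for whatever retraining process an organization actually uses.

```python
from collections import deque

class AdaptiveModel:
    """Hypothetical wrapper that treats recorded outcomes as retraining triggers."""

    def __init__(self, window: int = 100, error_threshold: float = 0.2):
        self.recent_errors = deque(maxlen=window)  # rolling window of 0/1 error flags
        self.error_threshold = error_threshold
        self.retrain_count = 0

    def record_outcome(self, predicted, actual):
        # Embedded feedback mechanism: every real-world outcome feeds the loop.
        self.recent_errors.append(0 if predicted == actual else 1)
        if self.needs_retraining():
            self.retrain()

    def needs_retraining(self) -> bool:
        # Only judge drift once the window is full.
        if len(self.recent_errors) < self.recent_errors.maxlen:
            return False
        return sum(self.recent_errors) / len(self.recent_errors) > self.error_threshold

    def retrain(self):
        # Placeholder for an actual retraining pipeline; resets the window.
        self.retrain_count += 1
        self.recent_errors.clear()

m = AdaptiveModel(window=10, error_threshold=0.2)
for i in range(10):
    m.record_outcome("a", "b" if i < 5 else "a")  # 50% error rate fills the window
```

The design choice is the point: the system's lifecycle includes its own revision, rather than ending at deployment.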
Co-Creation
SDL embeds stakeholders at every stage of implementation, from initial scoping through testing and ongoing refinement. This is not consultation. It is genuine design partnership. When employees understand how AI supports rather than replaces them, resistance dissolves and adoption accelerates. When customers and partners shape how systems interact with them, alignment between technology and real-world use is built in rather than retrofitted. Co-creation takes more time upfront and saves far more in recovery and retraining later.[7]
Augmentation over Automation
The core reframe of SDL is that AI's goal is to extend human capability, not replace it. This means designing AI to analyze, recommend, and learn from human validation, integrating human judgment into the decision cycle rather than removing it. Human adaptability complements machine precision in ways that produce outcomes neither could achieve alone. Organizations that make this shift find that accountability improves, trust develops, and systems prove far more resilient when conditions change unexpectedly.
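The augmentation pattern can be sketched as a decision loop in which the AI recommends and explains, a human validates, and the system records the validation for learning. This is a minimal illustration under assumed names (`Recommendation`, `AugmentedDecisionLoop`, the confidence cutoff); none of it comes from the presentation itself.

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    item: str
    rationale: str      # visible reasoning chain supports the transparency principle
    confidence: float

@dataclass
class AugmentedDecisionLoop:
    feedback: list = field(default_factory=list)

    def recommend(self, case: str) -> Recommendation:
        # Stand-in for a real model; the key property is that it explains itself.
        return Recommendation(item=f"route-{case}",
                              rationale="historical match", confidence=0.72)

    def decide(self, case: str, human_review) -> str:
        rec = self.recommend(case)
        approved, final = human_review(rec)             # human judgment stays in the cycle
        self.feedback.append((case, rec, approved, final))  # system learns from validation
        return final

loop = AugmentedDecisionLoop()
# A reviewer who only accepts high-confidence recommendations:
decision = loop.decide(
    "A17",
    lambda rec: (True, rec.item) if rec.confidence > 0.9 else (False, "manual-review"),
)
```

Note what the structure enforces: every decision passes through human review, and every override is captured as feedback rather than discarded.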

Implementation Roadmap

Four-Stage SDL Implementation Framework

SDL transformation does not happen in a single deployment. This staged framework builds the organizational conditions for collaborative AI before scaling.

Stage 1: Awareness Building (Months 1–3)
Executive education, stakeholder analysis, mindset alignment
Outcome: Shared understanding of collaborative AI principles

Stage 2: Pilot Development (Months 4–9)
Small-scale collaborative AI tests and feedback cycles
Outcome: Proof of concept for partnership-based design

Stage 3: Capability Building (Months 10–18)
Training, governance redesign, performance system alignment
Outcome: Institutional capacity for human-AI collaboration

Stage 4: Ecosystem Integration (Months 19–24)
Cross-functional integration and network scaling
Outcome: Collaborative AI embedded across value chains

Measuring Success Differently

Beyond Speed, Cost, and Accuracy

Traditional AI metrics optimize for the wrong outcomes. SDL replaces efficiency proxies with measures that capture actual value creation and organizational learning.

Category | Traditional | Service-Dominant
Stakeholder Outcomes | Cost per transaction, speed | Satisfaction, value co-creation
Innovation Capacity | Number of AI features | Collaborative innovations, new capabilities
Adaptation | System uptime | Responsiveness to change, learning speed
Relationship Quality | Usage rates | Trust, transparency, partnership depth
Long-term Performance | Short-term ROI | Sustained value creation, strategic advantage
Cultural Integration | Adoption percentages | Normalization of collaborative practices

The SDL Transformation Model

Schmarzo's Five-Step Digital Transformation via SDL

Each step embeds human judgment and collaborative value-creation into the transformation process (Schmarzo, 2020).[9]

1. Monitor: Capture big data on organizational performance as a system of systems.
2. Insight: Analysts and management identify actionable insights and technology solutions.
3. Optimize: Embed analytics and automate parts of operations for performance gains.
4. Monetize: Measure transformation value; ensure improvements drive profit.
5. Transform: Deploy analytics to augment human performance via new technologies and processes.
Core finding: AI failure is about assumptions, not algorithms. Service-dominant logic builds living collaborators rather than brittle, isolated systems.
The Argument
  • GDL creates brittle, isolated AI
  • SDL builds living collaborators
  • Transparency + adaptability = sustained value
The Prescription
  • Embed stakeholders at every stage
  • Measure trust, innovation, adaptability
  • Design for augmentation, not replacement
The Outcome
  • Co-creation transforms resistance
  • AI that evolves with the organization
  • ROI measured in value, not cost savings

Sources

References

  1. Oludapo, S., Carroll, N., & Helfert, M. (2024). Why do so many digital transformations fail? A bibliometric analysis and future research agenda. Journal of Business Research, 174, 114528. https://doi.org/10.1016/j.jbusres.2024.114528
  2. Syed, R., Bandara, W., Arthur, D., French, E., & Ferrer, M. (2023). Digital transformation failure factors in public sector organizations: A systematic literature review. Information Polity, 28(3), 355–372. https://doi.org/10.3233/IP-220017
  3. Vargo, S. L., & Lusch, R. F. (2004). Evolving to a new dominant logic for marketing. Journal of Marketing, 68(1), 1–17. https://doi.org/10.1509/jmkg.68.1.1.24036; Vargo, S. L., & Lusch, R. F. (2008). Service-dominant logic: Continuing the evolution. Journal of the Academy of Marketing Science, 36(1), 1–10; Vargo, S. L., & Lusch, R. F. (2016). Institutions and axioms: An extension and update of service-dominant logic. Journal of the Academy of Marketing Science, 44(1), 5–23.
  4. Abdurrahman, A., Gustomo, A., & Prasetio, E. A. (2024). Impact of dynamic capabilities on digital transformation and innovation to improve banking performance. Journal of Open Innovation: Technology, Market, and Complexity, 10(1), 100215. https://doi.org/10.1016/j.joitmc.2024.100215
  5. Kleinaltenkamp, M., Brodie, R. J., Frow, P., Hughes, T., Peters, L. D., & Woratschek, H. (2012). Resource integration. Marketing Theory, 12(2), 201–205. https://doi.org/10.1177/1470593111429512
  6. Ostrom, A. L., Parasuraman, A., Bowen, D. E., Patrício, L., & Voss, C. A. (2015). Service research priorities in a rapidly changing context. Journal of Service Research, 18(2), 127–159. https://doi.org/10.1177/1094670515576315
  7. Edvardsson, B., Tronvoll, B., & Gruber, T. (2011). Expanding understanding of service exchange and value co-creation. Journal of the Academy of Marketing Science, 39(2), 327–339. https://doi.org/10.1007/s11747-010-0200-y
  8. Teece, D. J. (2007). Explicating dynamic capabilities: The nature and microfoundations of (sustainable) enterprise performance. Strategic Management Journal, 28(13), 1319–1350. https://doi.org/10.1002/smj.640
  9. Schmarzo, B. (2020). The economics of data, analytics, and digital transformation. Packt Publishing.
  10. Schein, E. H. (2010). Organizational culture and leadership (4th ed.). Jossey-Bass.