The AI Enterprise, Open-Source and Low-Code

Introduction

The artificial intelligence revolution has reached a critical inflection point. As enterprises worldwide race to integrate AI into their core operations, fundamental questions about control, transparency, and sustainability have emerged. The evidence increasingly points to an unavoidable conclusion: the future of enterprise AI must be built on open-source foundations, with low-code platforms serving as the essential standardization layer that makes this vision practical, scalable, and governable.

The Open-Source Imperative for Enterprise AI

The case for open-source AI in enterprise environments extends far beyond cost considerations.

While eliminating licensing fees represents a tangible benefit, with research showing companies would spend 3.5 times more on software without open-source alternatives, the strategic advantages run much deeper. Enterprise AI built on proprietary foundations creates fundamental vulnerabilities that threaten long-term organizational autonomy and operational resilience.

Transparency stands as the cornerstone argument for open-source AI. When AI systems make consequential business decisions affecting everything from credit approvals to supply chain optimization, enterprises require complete visibility into model architecture, training data, and decision-making processes. Open-source models provide this transparency by granting access to source code and model weights, enabling development teams to understand exactly how their AI systems reach conclusions. This visibility proves essential for detecting biases, ensuring regulatory compliance, and building stakeholder trust. In heavily regulated industries like healthcare and finance, where AI decisions carry significant consequences, this transparency transitions from beneficial to mandatory.

The threat of vendor lock-in represents another compelling driver toward open-source AI. Organizations deploying proprietary AI solutions face technical lock-in through vendor-specific APIs and data formats, economic lock-in through volume-based pricing that escalates with usage, and strategic lock-in that constrains innovation to vendor roadmaps. When a vendor changes direction, increases prices, or even fails entirely, enterprises dependent on proprietary systems face potentially catastrophic disruption. Recent high-profile vendor failures have exposed how businesses lacking control over their source code and data face existential threats when dependencies collapse.

Open-source AI fundamentally alters this power dynamic. Organizations retain complete control over model weights, training processes, and deployment infrastructure. They can customize AI systems according to specific business requirements without seeking vendor permission or incurring additional costs. They maintain the freedom to switch infrastructure providers, modify algorithms, or integrate with any technology stack without artificial barriers. This autonomy proves particularly crucial as AI transitions from experimental technology to mission-critical infrastructure.

Digital Sovereignty and Regulatory Alignment

The concept of AI sovereignty has rapidly evolved from aspirational goal to strategic necessity, driven by converging regulatory and geopolitical pressures. Digital sovereignty in the AI context encompasses four critical dimensions:

  • Technology sovereignty over AI infrastructure and architecture,
  • Operational sovereignty including the skills and access needed to operate systems independently,
  • Data sovereignty ensuring information remains within appropriate jurisdictions, and
  • Assurance sovereignty establishing verifiable security and integrity.

Open-source AI directly addresses each sovereignty dimension. Organizations can deploy models within their own infrastructure boundaries, maintaining data residency requirements essential for GDPR compliance and other regulatory frameworks. They can verify model behavior through code inspection rather than relying on vendor assurances. They avoid dependencies on foreign technology providers that create national security or compliance concerns. Research indicates 81% of AI-leading enterprises consider an open-source data and AI layer central to their sovereignty strategy.

The regulatory landscape increasingly favors transparent, auditable AI systems. The EU AI Act, effective August 2024 with full compliance required by August 2026, establishes comprehensive transparency requirements with penalties reaching €35 million or 7% of global annual turnover for serious violations. Open-source models naturally align with these transparency mandates, as their publicly accessible code enables the audits, bias detection, and accountability documentation that regulations demand.

Innovation Acceleration Through Community Collaboration

Open-source AI harnesses collective intelligence at unprecedented scale. Rather than depending on a single vendor’s research team, open-source projects benefit from contributions by thousands of developers, researchers, and domain experts worldwide. This collaborative model accelerates innovation through rapid bug identification and remediation, continuous feature development reflecting diverse use cases, and shared best practices across industries and geographies.

The network effects prove substantial. When Meta donated PyTorch to the Linux Foundation, corporate contributions increased notably, particularly from chip manufacturers seeking to optimize their hardware for the platform. Research demonstrates a positive relationship between open-source contributions and startup formation at both country and company levels, with open-source activity fostering entrepreneurial ecosystems. Nearly all software developers have experimented with open models, and 89% of organizations using AI incorporate open-source AI somewhere in their infrastructure.

This community-driven development model ensures AI capabilities evolve to address real-world enterprise needs rather than vendor-perceived market opportunities. Domain experts contribute specialized knowledge, improving model performance in specific industries. Security researchers identify vulnerabilities that might remain hidden in proprietary code. Optimization specialists improve efficiency, reducing computational costs and environmental impact.

Cost Efficiency and Resource Optimization

While open-source AI eliminates direct licensing fees, the total cost of ownership calculation extends beyond acquisition costs. Proprietary models typically operate on pay-per-use pricing, with costs around $0.04 per 1,000 tokens for GPT-4; at that rate, processing 100 million tokens daily translates to approximately $120,000 monthly in API fees. Self-hosting open-source models involves upfront infrastructure investments and engineering resources but can achieve inference costs as low as $0.01 per 1,000 tokens at scale.

The cost calculus favors open-source as usage scales. Organizations with substantial AI workloads benefit from capital investment in infrastructure rather than ongoing operational expenses that grow linearly with usage. Development teams can experiment freely without metered costs constraining innovation. Resources can be allocated toward customization and optimization rather than licensing fees. Survey data shows 60% of decision makers report lower implementation costs with open-source AI compared to similar proprietary tools, with two-thirds of organizations citing cost savings as a primary reason for choosing open-source.
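
To make the trade-off concrete, here is a minimal back-of-the-envelope sketch in Python. The per-token prices are the illustrative figures above, and the fixed monthly self-hosting cost is a purely hypothetical placeholder for infrastructure and engineering spend, not a benchmark:

```python
# Back-of-the-envelope comparison of API pricing vs. self-hosting.
# Prices are the illustrative figures from the text; the fixed monthly
# self-hosting cost is a hypothetical placeholder.

API_PRICE_PER_1K = 0.04            # $ per 1,000 tokens via proprietary API
SELF_HOST_PER_1K = 0.01            # $ per 1,000 tokens self-hosted, at scale
SELF_HOST_FIXED_MONTHLY = 50_000   # hypothetical infra + engineering cost

def monthly_api_cost(tokens_per_day: float) -> float:
    return tokens_per_day / 1_000 * API_PRICE_PER_1K * 30

def monthly_self_host_cost(tokens_per_day: float) -> float:
    return SELF_HOST_FIXED_MONTHLY + tokens_per_day / 1_000 * SELF_HOST_PER_1K * 30

for tokens in (1e6, 10e6, 100e6, 1e9):   # daily token volumes
    print(f"{tokens:>13,.0f} tokens/day: "
          f"API ${monthly_api_cost(tokens):>11,.0f}/mo, "
          f"self-hosted ${monthly_self_host_cost(tokens):>11,.0f}/mo")
```

At the 100-million-token daily volume cited above, the API path reaches the $120,000 monthly figure, while the self-hosted path drops below it once volume amortizes the fixed investment.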

Beyond direct cost savings, open-source AI enables strategic resource allocation. Organizations avoid the sunk costs of vendor-specific skills that become obsolete when changing platforms. They can negotiate more favorable terms with cloud providers by maintaining platform independence. They can optimize infrastructure for their specific use cases rather than accepting vendor-determined configurations. AI-enhanced business operations can reduce costs by over 50% while maintaining user-friendliness and performance, with these benefits multiplied when using cost-effective open-source foundations.

The Low-Code Standardization Layer

Open-source AI delivers tremendous value but introduces complexity that can overwhelm organizations lacking deep technical expertise.

Low-code platforms bridge this gap, providing a standardization layer that makes open-source AI accessible, governable, and scalable across enterprise environments. Low-code development platforms provide visual interfaces that abstract complex AI concepts into manageable components. Rather than requiring extensive machine learning expertise to deploy AI capabilities, low-code platforms offer pre-built AI components and services integrated through drag-and-drop interfaces. This democratization enables both citizen developers and professional developers to create intelligent applications by leveraging pre-trained models and automated workflows.

The standardization benefits prove essential for enterprise-scale AI adoption. Low-code platforms establish consistent architectural patterns across AI implementations, ensuring applications follow proven design principles. They provide standardized APIs and connectors enabling seamless integration with existing enterprise systems, from ERP and CRM platforms to legacy applications. They embed security controls, role-based access, audit logging, and compliance capabilities directly into the development framework. This standardization accelerates development while reducing the risks of inconsistent implementations across organizational silos.
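
One way to picture this standardization layer in code is a catalog of pre-approved connectors behind a single interface. The sketch below is a minimal illustration with hypothetical class and function names, not any particular platform's API:

```python
# Minimal sketch of a standardized connector layer, as a low-code platform
# might expose it. Class and method names are hypothetical illustrations.
from abc import ABC, abstractmethod

class Connector(ABC):
    """Common contract every pre-approved integration must satisfy."""

    @abstractmethod
    def fetch(self, query: str) -> list[dict]: ...

class CRMConnector(Connector):
    def fetch(self, query: str) -> list[dict]:
        # A real implementation would call the CRM's API with vetted credentials.
        return [{"source": "crm", "query": query}]

class ERPConnector(Connector):
    def fetch(self, query: str) -> list[dict]:
        return [{"source": "erp", "query": query}]

# Catalog of security-vetted connectors: applications pick from here
# instead of wiring ad-hoc integrations.
CATALOG: dict[str, Connector] = {"crm": CRMConnector(), "erp": ERPConnector()}

def run_workflow(sources: list[str], query: str) -> list[dict]:
    """Compose a workflow from cataloged connectors only."""
    results = []
    for name in sources:
        if name not in CATALOG:
            raise PermissionError(f"Connector '{name}' is not pre-approved")
        results.extend(CATALOG[name].fetch(query))
    return results

print(run_workflow(["crm", "erp"], "open opportunities"))
```

Because every integration satisfies the same contract and must come from the catalog, governance checks, reuse, and auditing apply uniformly across applications.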

Governance and Compliance Through Low-Code

Enterprise AI governance represents one of the most challenging aspects of AI adoption. Organizations must balance innovation velocity with security, compliance, and risk management requirements. Low-code platforms transform governance from constraint into enabler by embedding controls directly into the development environment.

Modern enterprise low-code platforms incorporate comprehensive governance frameworks addressing critical requirements. Role-based access control determines who can build, edit, deploy, and view applications, with permissions connected to granular controls restricting access to specific data sources, credentials, and environments. Environment separation creates distinct spaces for development, testing, and production systems, with deployment controls governing progression through approval workflows and testing checkpoints. Integration management controls how applications connect to databases, APIs, and external services through catalogs of pre-approved, security-vetted connectors.

Audit capabilities prove essential for regulatory compliance and risk management. Low-code platforms provide comprehensive logging of who built or modified applications, what data was accessed, and when changes were deployed. Automated security scanning flags exposed secrets, problematic API calls, and compliance violations. Version control and rollback capabilities enable rapid recovery when issues emerge. These governance features align with transparency requirements in regulations like the EU AI Act, NIST AI RMF, and ISO 42001.
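
Two of the controls named above, role-based access and audit logging, can be sketched compactly. The roles, permissions, and helpers below are hypothetical illustrations rather than any specific platform's governance API:

```python
# Minimal role-based access control with an append-only audit trail.
# Roles, permissions, and helper names are hypothetical illustrations.
import json
from datetime import datetime, timezone

PERMISSIONS = {
    "viewer":   {"view"},
    "builder":  {"view", "build", "edit"},
    "deployer": {"view", "build", "edit", "deploy"},
}

AUDIT_LOG: list[str] = []  # in production this would be an immutable store

def audit(user: str, action: str, target: str, allowed: bool) -> None:
    AUDIT_LOG.append(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user, "action": action, "target": target, "allowed": allowed,
    }))

def authorize(user: str, role: str, action: str, target: str) -> bool:
    allowed = action in PERMISSIONS.get(role, set())
    audit(user, action, target, allowed)   # every attempt is logged
    return allowed

assert authorize("ana", "builder", "edit", "invoice-app")
assert not authorize("ana", "builder", "deploy", "invoice-app")  # blocked
print("\n".join(AUDIT_LOG))
```

The key property is that every authorization attempt, allowed or denied, leaves an audit record that compliance reviews can replay later.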

The combination of open-source AI models with low-code governance creates a powerful synergy. Organizations gain the transparency and control benefits of open-source while maintaining enterprise-grade oversight through low-code frameworks. They can customize AI models for specific business needs while ensuring modifications follow security and compliance policies. They can democratize AI development across business units while IT maintains centralized visibility and control.

Standardization as Competitive Advantage

Standardization through low-code platforms delivers competitive advantages that compound over time. Organizations developing common components, templates, and patterns accelerate subsequent development projects. When a security update or feature enhancement applies to a shared component, it propagates instantly across all applications using that component. This reusability dramatically improves development efficiency while reducing the maintenance burden.

Cross-team collaboration improves as low-code provides a common development environment that both technical and business stakeholders can engage with. Business analysts and domain experts participate directly in application development rather than merely providing requirements to IT teams. This proximity between problem understanding and solution creation accelerates innovation cycles and improves solution relevance.

Platform standardization reduces technical debt and improves long-term maintainability. When applications share common architectural patterns, upgrading to new capabilities or migrating to updated infrastructure becomes manageable rather than requiring individual assessment of dozens of custom implementations. Organizations can adopt emerging AI models or frameworks by updating platform components rather than refactoring every application.

The scalability benefits prove essential as AI initiatives expand from pilots to production deployments across the enterprise. Low-code platforms handle infrastructure concerns like load balancing, auto-scaling, and high availability automatically. They support multiple development environments enabling teams to build, test, and deploy applications across departments and geographies. They provide centralized management of AI models and applications, ensuring consistent implementation of security policies and regulatory requirements.

Accelerating Digital Transformation

The convergence of open-source AI and low-code development fundamentally accelerates digital transformation initiatives. Traditional AI application development required months or years, but low-code platforms can reduce development time from months to weeks or even days. This acceleration occurs through automated code generation, intelligent suggestions for application design and workflow optimization, and pre-built connectors that integrate with existing enterprise systems.

Market projections reflect this transformative impact. The global low-code development platform market, valued at approximately $28 billion to $35 billion in 2024, is projected to reach between $82 billion and $264 billion by 2030 to 2032, representing compound annual growth rates ranging from 22% to 32%. More striking are the adoption forecasts: Gartner predicts 70% to 75% of all new enterprise applications will be developed using low-code or no-code technologies by 2025 to 2026, up from less than 25% in 2020.

The integration of AI into low-code platforms amplifies these trends. By 2026, AI-powered low-code platforms are expected to enable up to 80% of business application development, with AI integration predicted to generate over $50 billion in enterprise efficiency gains by 2030.

Development costs can be reduced by up to 60% using AI-powered low-code solutions, while software delivery times are reduced by up to 70% compared to traditional methods.

Enterprise Use Cases and Practical Implementation

The practical applications of open-source AI combined with low-code standardization span diverse enterprise functions.

Internal dashboards pull data from multiple sources to provide real-time business intelligence without extensive data team involvement. Approval workflows automate procurement, legal reviews, and HR onboarding with built-in logic, notifications, and audit trails. Integration layers consolidate APIs across SaaS tools, normalize data, and orchestrate cross-system workflows. Data orchestration transforms, combines, and routes information between systems on schedules or in response to events. Role-based portals provide secure, customized interfaces displaying appropriate data to specific user groups.

AI-specific use cases extend these capabilities. Intelligent customer service systems leverage open-source language models customized for organizational knowledge bases. Predictive maintenance applications use open-source machine learning models fine-tuned on proprietary equipment data. Document analysis tools employ open-source computer vision and natural language processing adapted to specific document types and compliance requirements. Automated business process optimization uses reinforcement learning models trained on organizational workflow data.

The implementation approach matters significantly. Successful organizations begin with focused pilot projects addressing clear business needs while building platform expertise and demonstrating early wins. They establish comprehensive governance frameworks addressing security, integration, and skill development before scaling initiatives across the enterprise. They partner with platform vendors offering enterprise-grade security, compliance features, and long-term viability for mission-critical applications. They invest in training programs enabling both technical staff and citizen developers to leverage low-code AI capabilities effectively.
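
For the data orchestration pattern described above, a minimal event-driven sketch might look like the following; the transforms, topics, and routes are hypothetical stand-ins for real systems:

```python
# Minimal sketch of schedule/event-driven data orchestration:
# transform, combine, and route records between systems.
# Source names, transforms, and routes are hypothetical.
from typing import Callable

def normalize(record: dict) -> dict:
    """Transform step: normalize field names across source systems."""
    return {k.lower().strip(): v for k, v in record.items()}

ROUTES: dict[str, Callable[[dict], None]] = {
    "billing": lambda r: print("-> ERP:", r),
    "support": lambda r: print("-> ticketing:", r),
}

def orchestrate(event: dict) -> None:
    """Triggered on a schedule or by an inbound event."""
    record = normalize(event["payload"])
    route = ROUTES.get(event["topic"])
    if route is None:
        raise ValueError(f"no route for topic {event['topic']!r}")
    route(record)

orchestrate({"topic": "billing", "payload": {" Amount ": 120, "Currency": "EUR"}})
```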

Addressing Implementation Challenges

The transition to open-source AI with low-code standardization requires acknowledging and addressing legitimate challenges. Open-source AI involves hidden costs including skilled engineering resources for deployment, infrastructure investments for production-grade performance, and ongoing maintenance of security patches and updates. Organizations must develop or acquire expertise in model selection, fine-tuning, and optimization that proprietary vendors typically handle.

Low-code platforms face scalability questions for highly complex, performance-critical applications where extensive customization exceeds platform capabilities. Organizations must establish clear criteria determining when low-code approaches suit business needs versus when traditional development proves more appropriate. Platform selection requires careful evaluation, as capabilities, governance features, and vendor viability vary substantially across offerings.

The hybrid approach emerges as the practical solution for most enterprises. Organizations strategically combine open-source and proprietary AI solutions, leveraging open-source for high-volume, cost-sensitive workloads where customization and control prove essential, while incorporating proprietary solutions for specialized capabilities or applications requiring cutting-edge performance with minimal setup effort.

This balanced strategy maximizes open-source benefits while pragmatically addressing scenarios where proprietary advantages justify costs.

The Path Forward

The convergence of open-source AI and low-code standardization represents not merely technological innovation but a fundamental restructuring of enterprise software development. Organizations embracing this paradigm position themselves for sustained competitive advantage through faster innovation cycles, lower costs, and greater strategic autonomy. Those clinging to proprietary, high-code approaches will increasingly struggle to match the velocity, flexibility, and efficiency that market conditions demand.

The decade ahead will witness the maturation of this model as the dominant enterprise AI architecture. By 2030, the distinction between “AI systems” and “enterprise systems” will largely disappear, as AI capabilities become embedded throughout organizational infrastructure. The question facing enterprises is not whether this transformation will occur but how rapidly individual organizations will adapt and what advantages or disadvantages will result from adoption timing.

Success requires balancing multiple considerations simultaneously. Organizations must leverage open-source transparency and control while maintaining appropriate governance, security, and architectural discipline. They must democratize AI development through low-code accessibility while ensuring professional oversight of mission-critical implementations. They must standardize approaches to achieve efficiency and consistency while preserving flexibility for innovation and experimentation. They must move rapidly to capture competitive advantages while building sustainable foundations for long-term AI capabilities.

The convergence of open-source AI and low-code standardization offers a path forward that reconciles these tensions. It provides the transparency, control, and cost-efficiency enterprises require while making AI accessible to the broad base of developers and domain experts who understand business challenges most intimately. It enables the governance and compliance frameworks regulators demand while maintaining the innovation velocity markets require. It delivers on AI’s transformative promise while avoiding the vendor dependencies and black-box opacity that undermine trust and sustainability.

The AI enterprise must be open-source because anything less sacrifices the transparency, autonomy, and resilience that enterprise systems demand. Low-code provides the standardization layer that makes this vision practical, governable, and scalable. Together, they represent the architectural foundation for enterprise AI that serves organizational needs rather than vendor interests, that remains auditable rather than opaque, and that empowers broad participation rather than concentrating capability in narrow specialist communities. This is not simply one possible approach to enterprise AI – it is increasingly the only approach consistent with long-term organizational success in an AI-driven economy.

Agentic AI, Robotics and Customer Resource Management

Introduction

The convergence of Agentic AI, Robotics, and Customer Resource Management (CRM) represents a transformative shift in how businesses operate, moving from passive data systems to autonomous, intelligent networks that seamlessly bridge digital and physical operations. This integration is fundamentally redefining enterprise capabilities across sales, service, and operational domains.

From Digital Intelligence to Physical Action

The architectural foundation for this convergence lies in recognizing that digital AI agents and physical robotic systems share remarkably similar core components. Both require memory for storing information, a reasoning brain for planning and decision-making, actuators for taking action, and sensors for perceiving their environment. The critical distinction is that digital agents operate through APIs and software interfaces while physical robots interact through motors and sensors, but the intelligence layer – the ability to plan, adapt, and learn – remains fundamentally consistent.

This parallel architecture enables organizations excelling at digital AI implementation today to build the foundational capabilities needed for advanced robotics integration tomorrow. The frameworks for data management, process orchestration, and system integration that power digital agents in CRM systems provide the essential infrastructure for robotic deployments across the enterprise.
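
The shared architecture is straightforward to sketch. In the minimal Python illustration below (all names are hypothetical), a digital agent and a physical robot differ only in their sensor and actuator implementations, while the reasoning loop that provides memory, planning, and action is identical:

```python
# Minimal sketch of the shared agent architecture described above:
# the same reasoning loop drives digital (API) and physical (motor)
# embodiments. All names are hypothetical illustrations.
from abc import ABC, abstractmethod

class Embodiment(ABC):
    @abstractmethod
    def sense(self) -> dict: ...              # sensors: perceive environment
    @abstractmethod
    def act(self, command: str) -> None: ...  # actuators: take action

class DigitalAgent(Embodiment):
    def sense(self) -> dict:
        return {"channel": "api", "signal": "new_lead"}    # e.g., CRM webhook
    def act(self, command: str) -> None:
        print(f"calling software API: {command}")

class Robot(Embodiment):
    def sense(self) -> dict:
        return {"channel": "lidar", "signal": "obstacle"}  # e.g., sensor frame
    def act(self, command: str) -> None:
        print(f"driving motors: {command}")

def reasoning_loop(body: Embodiment, memory: list[dict]) -> None:
    """The shared intelligence layer: perceive, remember, plan, act."""
    observation = body.sense()
    memory.append(observation)                    # memory
    plan = f"respond_to:{observation['signal']}"  # planning (stubbed)
    body.act(plan)                                # action

memory: list[dict] = []
for body in (DigitalAgent(), Robot()):
    reasoning_loop(body, memory)
```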

Autonomous Decision-Making in Customer Relationships

Agentic CRM platforms represent a paradigm shift from traditional systems that primarily focused on passive data storage and manual analysis. Modern agentic systems integrate artificial intelligence and machine learning to enable autonomous task execution, proactive decision-making, and self-directed customer interactions. These platforms can independently qualify leads, generate contextual follow-ups, predict deal outcomes, and execute engagement strategies across all channels without requiring explicit human instruction for each action.

The business impact is substantial. Companies implementing AI-powered CRM solutions have experienced an average increase of 25% in sales revenue and a 30% reduction in customer complaints. By 2025, the CRM market is expected to reach $43.7 billion, with 75% of companies utilizing some form of CRM automation, indicating a decisive shift toward automated and AI-driven solutions.

These autonomous agents move beyond simple task automation to execute strategy independently, analyzing buyer behavior, personalizing outreach, managing conversations, and booking meetings without human input. They continuously optimize engagement strategies using real-time data, context, and reasoning, marking the evolution from static automation to systems that decide why and when to act.

Multi-Agent Orchestration as the Enterprise Operating System

The sophistication of this convergence manifests through multi-agent orchestration systems that coordinate specialized AI agents working collaboratively to solve complex, multi-step problems. Rather than deploying monolithic AI systems, enterprises are building networks of domain-specific agents in finance, HR, compliance, logistics, and marketing that execute tasks while collaborating within a governed framework.

Multi-agent orchestration functions through six interconnected stages:

  • capturing intent through natural language interfaces,
  • planning execution roadmaps with defined dependencies,
  • assigning roles based on capability and governance rules,
  • enabling collaboration across specialized agents,
  • monitoring workflows with human-in-the-loop oversight when stakes are high, and
  • building institutional intelligence through continuous learning and feedback loops.

This orchestration approach enables organizations to move from reactive customer service to autonomous resolution of complex issues. Specialized agents can assess context, adapt actions dynamically, and deliver seamless end-to-end resolutions without multiple handoffs or manual interventions. The system maintains unified data layers that combine structured records and unstructured conversational signals, providing instant context for AI agents to make informed decisions, learn continuously, and deliver personalized experiences.

Salesforce’s Agentforce platform exemplifies this evolution, with its Atlas Reasoning Engine providing the “brain” that powers digital workflows today and informs physical operations tomorrow. Agentforce 2.0 extends this capability with expanded libraries of pre-built functions, cross-system workflow integration through MuleSoft, and multi-agent orchestration where primary agents serve as coordinators for specialized AI teams solving complex problems collaboratively.
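
A highly simplified sketch of those stages, with a coordinator that plans steps, delegates to specialized agents, and gates high-stakes work behind human review, might read as follows. The agent names, planning rules, and thresholds are hypothetical, not the Agentforce or MuleSoft APIs:

```python
# Minimal multi-agent orchestration sketch: a coordinator captures intent,
# plans steps, assigns specialized agents, and escalates to a human when
# stakes are high. Agent names and rules are hypothetical.
from typing import Callable

AGENTS: dict[str, Callable[[str], str]] = {
    "billing":   lambda task: f"billing agent resolved: {task}",
    "logistics": lambda task: f"logistics agent resolved: {task}",
}

def plan(intent: str) -> list[tuple[str, str]]:
    """Translate intent into (agent, task) steps; order encodes dependencies."""
    if "refund" in intent:
        return [("billing", "issue refund"), ("logistics", "schedule pickup")]
    return [("billing", "answer invoice question")]

def orchestrate(intent: str, stakes: str = "low") -> list[str]:
    results = []
    for agent_name, task in plan(intent):          # role assignment
        if stakes == "high":                       # human-in-the-loop gate
            results.append(f"escalated to human: {task}")
            continue
        results.append(AGENTS[agent_name](task))   # agent collaboration
    return results

print(orchestrate("customer requests refund for damaged item"))
```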

Physical AI: Bridging Digital Intelligence and Real-World Operations

Physical AI represents the next frontier, where intelligent systems transcend digital boundaries to perceive, understand, and manipulate the tangible world.

This convergence marks a pivotal moment where AI algorithms move beyond screen-based interactions to coordinate physical actions through robotics, creating unprecedented opportunities for operational efficiency and customer experience transformation.

The technology stack supporting physical AI consists of five integrated layers:

  • robotic hardware providing the mechanical foundation with actuators and sensors,
  • edge hardware enabling real-time AI inference without cloud reliance,
  • operating systems managing hardware abstraction and component communication,
  • simulation and training environments using digital twins for development and testing, and
  • application interfaces enabling end-user interaction and system integration.

In warehouse environments, AI-powered autonomous mobile robots (AMRs) demonstrate this convergence by navigating complex spaces, optimizing delivery routes, and interacting safely with human workers while maintaining real-time synchronization with inventory management systems. These systems analyze historical demand and real-time market trends to predict demand spikes, achieving inventory accuracy of up to 99% and reducing labor costs by 25%. Companies implementing AI-powered warehouse solutions report ROI of up to 300% within the first two years.

Humanoid Robots in Customer-Facing Operations

The humanoid robotics market is experiencing explosive growth, projected to expand from $1.8 billion in 2023 to $13.8 billion by 2028, driven by advances in AI, sensor technology, and adaptive motion control. These bipedal robots with dexterous movement, advanced sensing, and AI-powered reasoning are transitioning from pilot programs to commercial deployments in logistics, retail, healthcare, and customer service environments.

Customer-facing applications showcase the convergence potential. Humanoid robots equipped with facial recognition, conversational AI, and expressive body language are being deployed in banks, airports, and retail stores to greet customers, answer questions in multiple languages, and guide visitors to specific locations. Integration with point-of-sale and inventory systems enables real-time product availability information and personalized recommendations.

The embodied AI market driving these applications is fueled by the need for natural human-machine interaction through advanced natural language processing, gesture recognition, and emotional intelligence. Retailers are investing in embodied AI to provide personalized customer experiences through interactive robots and intelligent kiosks, while service sectors leverage AI-powered humanoids to handle physical support combined with emotional interaction.

Integration Through Enterprise Systems and Digital Twins

The convergence materializes through seamless integration of AI agents, robotic systems, and CRM platforms via unified data architectures and orchestration layers.

SAP’s partnerships with robotics companies demonstrate how cognitive robotics integrate with enterprise systems, transforming business operations through physical AI platforms that connect robots, sensors, and digital twins into enterprise workflows.

Digital twins serve as critical enablers, creating virtual representations of customers, products, and systems that mirror and predict real-world behaviors. These advanced digital replicas gather real-time data from IoT devices and AI technologies, enabling hyper-personalization and predictive capabilities. In customer experience contexts, digital twins simulate interaction scenarios, analyze behavioral patterns, and enable businesses to test strategies before physical implementation.

For robotics applications, digital twins simulate thousands of customer interaction scenarios, refining speech and body language models over time while enabling continuous optimization of physical robot behaviors based on virtual testing. This sim-to-real transfer capability accelerates robot development, reduces deployment risks, and ensures reliable performance in production environments.
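
As a toy illustration of the sim-to-real idea, the sketch below evaluates candidate robot greeting behaviors against thousands of simulated interaction scenarios and selects the best performer before any physical deployment. The scenario model and scoring are entirely hypothetical:

```python
# Toy digital-twin sketch: evaluate candidate robot behaviors against
# simulated customer-interaction scenarios before physical rollout.
# Scenario model and scoring are hypothetical illustrations.
import random

random.seed(42)

def simulate(greeting_delay_s: float, n_scenarios: int = 5_000) -> float:
    """Return mean satisfaction over simulated interactions (toy model)."""
    total = 0.0
    for _ in range(n_scenarios):
        patience = random.uniform(0.5, 4.0)  # simulated customer patience
        # Toy scoring: greeting too fast feels abrupt, too slow loses the customer.
        total += 1.0 if 0.5 <= greeting_delay_s <= patience else 0.0
    return total / n_scenarios

candidates = [0.2, 0.8, 1.5, 3.0]            # candidate greeting delays (s)
best = max(candidates, key=simulate)
print(f"deploy greeting delay: {best:.1f}s "
      f"(simulated satisfaction {simulate(best):.2%})")
```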

The Unified Intelligence Layer

The convergence creates an intelligent fabric where CRM systems evolve from reactive record-keeping to proactive intelligence platforms that interpret customer signals, predict revenue opportunities, and autonomously execute engagement strategies across both digital and physical channels. This transformation addresses the fundamental reality that customer expectations have outpaced traditional CRM workflows, demanding zero-lag personalization, seamless cross-channel continuity, and instant resolution.

Robotic process automation (RPA) combined with generative AI enhances this capability by automating data entry, workflow coordination, and complex decision-making processes that connect CRM systems with physical operations. RPA bots analyze incoming customer communications, extract relevant information, update CRM records, classify support tickets, route inquiries to appropriate agents or robotic systems, and automate order processing with real-time tracking integration.

The integration enables post-interaction automation where AI agents update CRM records after customer calls while autonomous systems prepare and deliver follow-up communications or coordinate physical fulfillment through robotic systems – all without human intervention. This level of orchestration delivers autonomous, personalized, and consistent service across every digital and physical touchpoint.
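
A minimal sketch of that inbound-communication pipeline follows; the keyword classifier and in-memory records are hypothetical stand-ins for a production classifier and CRM:

```python
# Minimal RPA-style pipeline sketch: classify an inbound message, update the
# CRM record, and route to the right handler. The keyword classifier and
# in-memory "CRM" are hypothetical stand-ins for production systems.
CRM: dict[str, dict] = {"cust-042": {"name": "Acme GmbH", "tickets": []}}

ROUTING = {"billing": "finance-queue",
           "delivery": "robotics-fulfillment",
           "other": "human-agent"}

def classify(message: str) -> str:
    text = message.lower()
    if "invoice" in text or "charge" in text:
        return "billing"
    if "delivery" in text or "shipment" in text:
        return "delivery"
    return "other"

def handle_inbound(customer_id: str, message: str) -> str:
    category = classify(message)                  # extract and classify
    ticket = {"text": message, "category": category}
    CRM[customer_id]["tickets"].append(ticket)    # update CRM record
    return ROUTING[category]                      # route the inquiry

queue = handle_inbound("cust-042", "Where is my shipment?")
print(queue, CRM["cust-042"]["tickets"])
```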

Industry Transformation and Future Trajectories

The convergence is already delivering measurable transformation across industries. Amazon’s application of physical AI in fulfillment centers has yielded improved workplace safety, creation of 30% more skilled jobs onsite, 25% faster delivery to customers, and 25% efficiency improvements. Companies like ABB have transformed decades of digital process automation expertise into sophisticated industrial robots, while healthcare organizations like Intuitive Surgical evolved digital surgical planning into thousands of robotic systems performing millions of procedures.

The autonomous vehicle sector provides compelling evidence of this pattern, with companies like Waymo leveraging digital workflow expertise to deploy advanced robotics demonstrating approximately 90% reduction in collision incidents compared to human drivers across 39 million real-world miles. These examples illustrate how digital AI capabilities accelerate physical automation adoption with increasingly compelling safety and efficiency benefits.

Looking forward, the period between 2025 and 2030 will witness AI agents evolving into adaptive, multi-functional collaborators operating seamlessly across different domains, interfaces, and environments. Agents will become self-learning, collaborative systems integrated into cloud, edge, and hybrid environments, interacting with each other using multi-agent protocols and leveraging real-time data streams to anticipate needs and make proactive decisions. The convergence will enable complex use cases where multiple agents orchestrate simulations of new product launches, marketing campaigns, and service scenarios across both digital CRM systems and physical robotic operations, developing recommendations for adjustments based on comprehensive analysis.

Organizations that embrace this convergence early will gain decisive advantages in productivity, personalization, and operational intelligence, transforming CRM from a passive database into an active partner coordinating both human employees and robotic systems. Human-AI collaboration will become mainstream, with knowledge workers supported by AI copilots that proactively suggest solutions, conduct research, manage meetings, and coordinate with physical robotic systems to execute complex workflows spanning digital customer relationships and physical operations. The winners in this new paradigm will combine leadership vision with expert implementation, creating the right infrastructure – the foundational business processes, security protocols, ethical guidelines, and data flows – that connect enterprise CRM systems with the agentic layer powering both digital agents and physical robots.

AI Sovereignty in Enterprise Systems

Introduction

AI Sovereignty in enterprise systems represents the ability of organizations to develop, deploy, and govern artificial intelligence systems while maintaining complete control over infrastructure, data, models, and operations within their legal and strategic boundaries. This concept extends far beyond simple data residency or cloud provider selection – it encompasses organizational autonomy over the entire AI lifecycle, from training data selection through model deployment and continuous governance.

The Four Core Dimensions of Enterprise AI Sovereignty

Enterprise AI sovereignty operates across four interconnected dimensions that enable organizations to maintain strategic control.

  1. Technology sovereignty addresses the ability to independently design, build, and operate AI systems with full visibility into model architecture, training data, and system behavior. This includes controlling the hardware platforms on which AI models run, reducing dependence on foreign-made accelerators and establishing trust over computational infrastructure. Organizations pursuing technology sovereignty invest in domestic hardware alternatives and develop capabilities to operate AI systems on locally trusted infrastructure.
  2. Operational sovereignty extends beyond infrastructure ownership to encompass the authority, skills, and access required to operate and maintain AI systems. Organizations must build internal talent pipelines of AI engineers, machine learning operations specialists, and cybersecurity professionals, while reducing reliance on foreign managed service providers. This dimension recognizes that physical infrastructure ownership means little without the operational expertise to manage systems effectively and securely.
  3. Data sovereignty ensures that data collection, storage, and processing occur within the boundaries of national laws, organizational values, and compliance requirements. In the AI context, data sovereignty becomes particularly complex because AI systems require large volumes of training data and continuous access to operational data. Organizations must establish controlled environments where sensitive information remains within defined geographical and jurisdictional boundaries, complying with regulations such as GDPR and HIPAA while preserving the competitive advantage embedded in proprietary datasets.
  4. Assurance sovereignty establishes verifiable integrity and security through encryption protocols, access controls, and comprehensive audit trails. Organizations need to verify that AI systems operate as intended, that data remains secure from unauthorized access, and that decision-making processes can be traced and audited for compliance purposes. This dimension addresses regulatory requirements and provides the transparency necessary for high-stakes applications in finance, healthcare, and critical infrastructure.

The Role of Open Source Technologies

Open source technologies have become central to realizing sovereign AI capabilities across enterprise systems. Open source models provide organizations and regulators with the ability to inspect architecture, model weights, and training processes, which proves crucial for verifying accuracy, safety, and bias control. Unlike proprietary black-box systems where organizations cannot understand internal operations, open source frameworks such as LangGraph, CrewAI, and AutoGen allow complete visibility into how AI systems function and make decisions.

Research indicates that 81% of AI-leading enterprises consider an open-source data and AI layer central to their sovereignty strategy. This adoption reflects recognition that proprietary vendor-controlled AI systems create fundamental sovereignty vulnerabilities. Organizations adopting open source frameworks avoid vendor lock-in while maintaining complete control over model weights, prompts, and orchestration code. The transparency of open source also enables seamless integration of human-in-the-loop workflows and comprehensive audit logs, enhancing governance and verification for critical business decisions.
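
The human-in-the-loop and audit-log pattern is framework-agnostic. The stripped-down sketch below uses plain Python rather than the LangGraph, CrewAI, or AutoGen APIs, and its confidence threshold and helpers are hypothetical:

```python
# Framework-agnostic sketch of a human-in-the-loop gate with an audit log,
# wrapped around any agent step. Threshold and helpers are hypothetical.
from datetime import datetime, timezone

audit_log: list[dict] = []

def agent_step(task: str) -> tuple[str, float]:
    """Stand-in for a model call: returns (proposed_action, confidence)."""
    return f"auto-approve:{task}", 0.62

def human_review(proposal: str) -> str:
    # In production this would block on an approval UI or review queue.
    return f"human-approved:{proposal}"

def run_with_oversight(task: str, confidence_floor: float = 0.8) -> str:
    proposal, confidence = agent_step(task)
    escalated = confidence < confidence_floor
    decision = human_review(proposal) if escalated else proposal
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "task": task, "proposal": proposal,
        "confidence": confidence, "escalated": escalated,
    })
    return decision

print(run_with_oversight("credit limit increase for account 1138"))
print(audit_log[-1])
```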

Enterprise Architecture and Implementation Approaches

Implementing sovereign AI requires comprehensive enterprise architecture spanning multiple technological layers.

At the infrastructure level, organizations adopt hybrid approaches that leverage public cloud capabilities while maintaining critical data and models within sovereign boundaries. The emerging concept of digital data twins enables organizations to create real-time synchronized copies of critical data in sovereign locations while maintaining normal operations on public cloud infrastructure, balancing sovereignty requirements with operational efficiency.

The Bring Your Own Cloud (BYOC) model has emerged as a critical bridge between sovereignty and operational efficiency. BYOC allows enterprises to deploy AI software directly within their own cloud infrastructure rather than vendor-hosted environments, preserving control over data, security, and operations while benefiting from cloud-native innovation. In BYOC configurations, software platforms operate under vendor management but run entirely within customer-controlled cloud accounts, maintaining infrastructure and data ownership while delegating operational responsibilities.

Low-code platforms represent a significant advancement in democratizing AI development while maintaining sovereignty. These platforms enable business technologists and citizen developers to compose AI-powered workflows without exposing sensitive data to external Software-as-a-Service platforms. Democratizing AI development accelerates solution delivery by 60-80% while bringing innovation closer to business domains within sovereign boundaries. Modern low-code platforms increasingly incorporate AI-specific governance features, including role-based access controls, automated policy checks, and comprehensive audit trails that allow organizations to configure systems for local compliance requirements while maintaining data residency within specific jurisdictions.
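
A tiny sketch of the residency-aware policy check such platforms can embed, with a hypothetical jurisdiction table and rules, might be:

```python
# Minimal data-residency policy check, as a low-code platform might enforce
# before a workflow touches a data source. Regions and rules are hypothetical.
RESIDENCY_RULES = {
    "customer_pii": {"eu-central", "eu-west"},  # GDPR-scoped data stays in EU
    "telemetry":    {"eu-central", "us-east"},
}

def assert_residency(dataset: str, target_region: str) -> None:
    allowed = RESIDENCY_RULES.get(dataset, set())
    if target_region not in allowed:
        raise PermissionError(
            f"policy violation: {dataset!r} may not be processed in {target_region!r}"
        )

assert_residency("customer_pii", "eu-west")        # passes
try:
    assert_residency("customer_pii", "us-east")    # blocked by policy
except PermissionError as err:
    print(err)
```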

Regulatory Compliance and Governance

The regulatory landscape surrounding AI sovereignty continues evolving rapidly, with significant implications for enterprise systems. The European Union’s AI Act, GDPR, and emerging national regulations establish new compliance requirements that extend beyond traditional data protection. Organizations must demonstrate not only where AI systems are hosted but also how data flows through these systems and who controls algorithmic decision-making processes.

Effective AI governance frameworks require comprehensive visibility across the entire AI lifecycle, from initial design through deployment and continuous monitoring. Organizations must implement AI Bill of Materials (AI-BOM) tracking systems that document all models, datasets, tools, and third-party services in their environment. This documentation proves essential for compliance audits and enables organizations to understand dependencies and potential sovereignty vulnerabilities.
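
An AI-BOM can start as nothing more exotic than a typed record per component. The minimal sketch below uses illustrative field names rather than any standardized schema:

```python
# Minimal AI Bill of Materials (AI-BOM) sketch: one typed record per model,
# dataset, tool, or third-party service. Field names are illustrative,
# not a standardized schema.
from dataclasses import dataclass, asdict
import json

@dataclass(frozen=True)
class BomEntry:
    name: str
    kind: str          # "model" | "dataset" | "tool" | "service"
    version: str
    origin: str        # supplier or repository
    jurisdiction: str  # where it is hosted/processed
    license: str

AI_BOM = [
    BomEntry("llama-3-8b-finetune", "model", "2025.1", "internal", "eu-central", "llama3"),
    BomEntry("support-tickets-2024", "dataset", "v7", "internal", "eu-central", "proprietary"),
    BomEntry("vector-db", "service", "1.9", "third-party", "us-east", "apache-2.0"),
]

def sovereignty_report(bom: list[BomEntry], home: str = "eu") -> list[dict]:
    """Flag components hosted outside the home jurisdiction."""
    return [asdict(e) for e in bom if not e.jurisdiction.startswith(home)]

print(json.dumps(sovereignty_report(AI_BOM), indent=2))  # flags vector-db
```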

European organizations increasingly view sovereign AI as essential, with 62% seeking sovereign solutions in response to geopolitical uncertainty, while sectors with regulatory requirements and sensitive data like banking (76%), public service (69%), and utilities (70%) lead adoption.

Strategic Competitive Implications

The business case for sovereign AI extends beyond compliance considerations to encompass competitive differentiation and strategic autonomy. Organizations prioritizing data sovereignty gain accelerated access to markets with strict compliance barriers, higher customer trust levels, and reduced exposure to geopolitical or legal conflicts. The ability to co-develop AI systems with public sector or national infrastructure partners provides additional strategic advantages. Research indicates that enterprises with integrated sovereign AI platforms are four times more likely to achieve transformational returns from their AI investments. However, many organizations still view sovereign AI primarily through a compliance lens rather than as a strategic opportunity: only 19% of European organizations view sovereign AI as a competitive advantage, while 48% cite compliance requirements as their primary motivation for adoption. Just 16% of European companies have made AI sovereignty a CEO- or board-level concern, suggesting that organizations do not yet fully recognize sovereignty’s strategic potential to enable customization, rapid iteration, and competitive differentiation.

Implementation Challenges and Barriers

Organizations pursuing sovereign AI face substantial implementation challenges that can overwhelm their capabilities. A critical barrier is the talent shortage: over 68% of organizations lack the internal capability to build and govern sovereign models end-to-end. The specialized knowledge required spans multiple technical and regulatory domains, creating significant expertise gaps, and only 6% of enterprises report smooth implementation experiences with enterprise AI and sovereignty initiatives, primarily due to a lack of specialized expertise in management and technical teams.

Technical integration and interoperability challenges present additional obstacles. Modern enterprise systems consist of interconnected components with explicit dependencies, creating cascading failure risks when sovereignty requirements restrict integration options. Open-source enterprise systems, while supporting sovereignty objectives, frequently lack the built-in connectors and integration capabilities that are standard in commercial platforms, requiring substantial custom development work. Legacy system integration presents particularly acute challenges, often requiring complete system redesigns rather than straightforward migrations and substantially increasing project scope and complexity.

Governance complexity extends beyond technical implementation to encompass ongoing monitoring and audit requirements. Sovereign systems typically require more extensive documentation, audit trails, and compliance reporting than traditional enterprise systems. Organizations must implement robust governance frameworks demonstrating compliance across multiple jurisdictions while maintaining operational efficiency, creating substantial administrative overhead. Additionally, sovereign implementations can inadvertently create new forms of vendor lock-in with specialized sovereign cloud providers or consulting firms that possess unique expertise, potentially restricting organizations’ future flexibility and negotiating power.

Energy and sustainability considerations introduce further complexity. Running high-performance compute clusters around the clock increases an organization’s energy footprint at a time when ESG metrics face growing scrutiny from investors and regulators. The shift from shared cloud infrastructure to self-managed data centers exacerbates carbon burdens, forcing organizations to balance sovereignty objectives with sustainability commitments.

AI sovereignty in enterprise systems represents a fundamental paradigm shift requiring organizations to rethink their entire relationship with AI technology, cloud infrastructure, and data governance. Success demands balancing legitimate sovereignty objectives with the practical realities of operational efficiency, cost management, and technical complexity, while building the organizational capabilities necessary to sustain it.

References:

  1. https://www.planetcrust.com/how-does-ai-impact-sovereignty-in-enterprise-systems/
  2. https://www.opentext.com/what-is/sovereign-ai
  3. https://technode.global/2025/08/22/sovereign-ai-the-new-strategic-imperative-for-governments-and-enterprises/
  4. https://newsroom.accenture.com/news/2025/europe-seeking-greater-ai-sovereignty-accenture-report-finds
  5. https://www.datadynamicsinc.com/blog-the-sovereign-ai-paradox-building-autonomy-without-breaking-the-business/
  6. https://www.planetcrust.com/challenges-of-sovereign-business-enterprise-software/
  7. https://www.rizkly.com/digital-sovereignty-in-the-ai-realm/
  8. https://www.linkedin.com/pulse/what-ai-sovereignty-why-should-highest-priority-mark-montgomery-192se
  9. https://www.katonic.ai/blog/from-cloud-first-to-sovereignty-first-the-great-enterprise-ai-migration
  10. https://zammad.com/en/blog/digital-sovereignty
  11. https://arxiv.org/abs/2410.17481
  12. https://www.artefact.com/blog/what-does-ai-sovereignty-really-mean/
  13. https://www.verge.io/wp-content/uploads/2025/06/The-Sovereign-AI-Cloud.pdf
  14. https://coppelis.com/blog/sovereign-artificial-intelligence/
  15. https://www.accenture.com/content/dam/accenture/final/capabilities/technology/cloud/document/The-Operating-System-Sovereign-AI-Clouds-Digital.pdf
  16. https://vantiq.com/blog/the-five-biggest-challenges-in-enterprise-ai-adoption/
  17. https://blog.equinix.com/blog/2025/10/23/designing-for-sovereign-ai-how-to-keep-data-local-in-a-global-world/
  18. https://commission.europa.eu/document/download/09579818-64a6-4dd5-9577-446ab6219113_en?filename=Cloud-Sovereignty-Framework.pdf
  19. https://blog.premai.io/sovereign-ai-businesses-statistics/

How Business Technologists Drive AI Enterprise Adoption

Introduction

Business technologists have emerged as crucial orchestrators in the journey toward responsible and effective AI enterprise adoption. Their unique position bridging technical capabilities and business strategy enables them to navigate the complex landscape of deploying AI systems that deliver value while managing risk. Enterprise AI adoption has accelerated dramatically, with 87% of large enterprises implementing AI solutions in 2025, yet success demands far more than technology deployment – it requires a strategic, people-centered approach that prioritizes safety, governance, and sustainable value creation.

Establishing Comprehensive Governance Frameworks

The foundation of safe AI adoption rests on robust governance structures that provide clear accountability and risk management throughout the AI lifecycle. Business technologists lead the development of governance frameworks that span four critical functions: mapping AI risks within business contexts, establishing policies and accountability structures, implementing controls across the AI lifecycle, and continuously measuring system performance against risk tolerance. These frameworks must align with established standards such as the NIST AI Risk Management Framework, ISO/IEC 42001, and emerging regulations like the EU AI Act, which categorizes AI systems by risk level and imposes strict compliance requirements for high-risk applications. Effective governance extends beyond documentation to become operational reality. Business technologists assign clear roles across cross-functional teams comprising AI risk officers, legal and compliance advisors, IT security specialists, and business unit leaders who collectively oversee AI system development and deployment. This organizational structure ensures that governance principles translate into practical controls embedded directly into workflows rather than existing as parallel approval processes that slow innovation.

Building Trust Through Transparency and Explainability

Trust represents perhaps the most critical barrier to successful AI adoption, with 73% of business leaders expressing concern about deploying AI systems they cannot understand or audit. Business technologists address this challenge by championing explainable AI practices that make system decisions transparent and comprehensible to stakeholders at all levels. Transparency encompasses multiple dimensions: documenting reasoning steps that show how AI arrives at conclusions, identifying data sources used in decision-making, communicating confidence levels in recommendations, and providing visibility into alternative scenarios the AI considered. Organizations implementing transparent AI systems report 45% higher stakeholder confidence in AI-driven strategic decisions. This trust-building extends to establishing comprehensive audit trails with timestamped records of all AI decisions, complete data lineage tracking, model version control, and documentation of human intervention points. Business technologists ensure these capabilities serve not just compliance requirements but actually enable business users to understand, question, and appropriately rely on AI outputs in their daily work.
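
As a rough sketch of such an audit trail, the Python below appends a timestamped record of each AI decision to a log file. The field names and JSONL format are illustrative assumptions; a production system would add tamper-evident storage and full data lineage.

```python
# Hedged sketch of an append-only audit record for an AI decision.
import hashlib, json, time

AUDIT_LOG = "ai_decisions.jsonl"

def log_decision(model_version: str, inputs: dict, output: str,
                 confidence: float, human_override: bool = False) -> None:
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        # Hash the inputs so the record is verifiable without storing raw data.
        "input_hash": hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "confidence": confidence,
        "human_override": human_override,
    }
    with open(AUDIT_LOG, "a") as f:   # append-only by convention
        f.write(json.dumps(record) + "\n")

log_decision("credit-model-v12", {"income": 52000}, "approve", confidence=0.91)
```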

Implementing Human-in-the-Loop Controls

Rather than pursuing full automation, business technologists design AI systems with strategic human oversight at critical decision points. Human-in-the-loop approaches integrate human judgment across three key phases:

  • Training, where domain experts curate datasets and refine algorithms
  • Inference and decision-making, where humans review and approve AI recommendations before implementation in high-stakes scenarios
  • Feedback loops, where human corrections create iterative improvement cycles

This approach proves particularly valuable in regulated industries like finance and healthcare where automated decisions carry significant consequences. The benefits of human-in-the-loop design extend beyond risk mitigation to drive continuous improvement. When AI agents encounter uncertain or sensitive situations, escalation to human experts ensures appropriate handling while simultaneously creating labeled examples that improve future model performance. Business technologists establish clear escalation paths, review triggers for decisions with reputational or legal consequences, and monitoring dashboards that identify when human intervention becomes necessary. This balanced approach delivers the scale of automation with the contextual judgment of experienced professionals, reducing errors while maintaining trust.
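
A minimal version of such an escalation gate might look like the following sketch, where low-confidence or high-stakes recommendations are routed to a human review queue. The thresholds, action names, and queue are hypothetical.

```python
# Illustrative human-in-the-loop escalation gate: low-confidence or high-stakes
# outputs are queued for a person instead of being auto-applied.
from queue import Queue

CONFIDENCE_FLOOR = 0.85
HIGH_STAKES_ACTIONS = {"deny_claim", "close_account"}
review_queue: Queue = Queue()

def apply_or_escalate(action: str, confidence: float, context: dict) -> str:
    if confidence < CONFIDENCE_FLOOR or action in HIGH_STAKES_ACTIONS:
        review_queue.put({"action": action, "confidence": confidence, **context})
        return "escalated"        # the human decision also becomes a labeled example
    return "auto_applied"

print(apply_or_escalate("send_reminder", 0.97, {"customer": "c-1042"}))  # auto_applied
print(apply_or_escalate("deny_claim", 0.99, {"claim": "cl-77"}))         # escalated
```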

Developing AI Literacy Across the Workforce

Safe AI adoption depends fundamentally on workforce readiness, yet only 28% of employees know how to use their company’s AI applications effectively. Business technologists address this critical gap by championing comprehensive AI literacy programs tailored to different organizational roles and skill levels. Successful programs combine targeted training workshops aligned to specific job functions, continuous learning opportunities through mentorship and knowledge-sharing, and hands-on experience with AI tools in realistic scenarios. Leading organizations establish tiered learning pathways ranging from foundational AI concepts for general employees to advanced specialization for data scientists and AI engineers. Business technologists ensure these programs emphasize not just technical capabilities but also responsible AI practices including identifying bias, protecting data privacy, and understanding when AI outputs require human review. This investment in people proves essential, with 88% of leaders acknowledging workforce up-skilling as critical to AI success. Organizations that effectively develop AI literacy report faster adoption rates, better integration of AI into workflows, and reduced resistance to change.

Managing Risk Through Structured Pilot Programs

Rather than attempting enterprise-wide roll-outs, business technologists employ structured pilot programs that validate AI value while minimizing risk exposure. Effective pilots begin with clearly defined objectives aligned to business goals and measurable key performance indicators such as cost savings, time reduction, or revenue growth. The selection of pilot use cases prioritizes high-impact, low-risk applications that promise significant value with minimal disruption – automating repetitive tasks, optimizing logistics, and enhancing customer service represent common starting points. Successful pilots incorporate production-like datasets and realistic performance targets to surface challenges early rather than encountering surprises during scaling. Business technologists establish decision gates at each phase: discovery and prioritization, pilot execution, production readiness, scaling, and continuous optimization. This disciplined approach includes baseline measurements to isolate AI impact, time-boxed execution to avoid scope creep, and comprehensive documentation of assumptions and failure modes so the organization learns systematically.
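
A decision gate can be expressed as a simple comparison of measured pilot KPIs against pre-registered baselines, as in the sketch below; the metrics, baseline values, and improvement targets are invented for illustration.

```python
# Sketch of a pilot decision gate: advance only when measured KPIs beat the
# pre-registered baseline by agreed margins. All figures are hypothetical.
BASELINE = {"handle_time_min": 12.0, "cost_per_ticket": 4.20}
TARGET_IMPROVEMENT = {"handle_time_min": 0.20, "cost_per_ticket": 0.15}  # fractions

def gate_passed(measured: dict) -> bool:
    for metric, baseline in BASELINE.items():
        required = baseline * (1 - TARGET_IMPROVEMENT[metric])
        if measured[metric] > required:   # lower is better for both metrics
            return False
    return True

print(gate_passed({"handle_time_min": 9.1, "cost_per_ticket": 3.40}))  # True
```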

Implementing Multi-Layered Security Controls

AI systems create new attack surfaces that traditional security measures cannot adequately address, requiring specialized controls designed for AI-specific vulnerabilities. Business technologists implement AI Security Posture Management that provides continuous visibility into AI system behavior, establishes behavioral baselines for normal operation, detects drift distinguishing between natural model evolution and malicious manipulation, and automates responses to suspicious patterns. Zero-trust architecture principles apply to AI systems through multi-factor authentication for AI agent access, least-privilege policies limiting AI system permissions, continuous monitoring of AI communications and data access, and micro-segmentation restricting AI network access. Additional security layers include adversarial testing programs that proactively identify vulnerabilities before attackers exploit them, secure development practices embedding security throughout the AI lifecycle, and comprehensive data protection through encryption, access controls, and real-time anomaly detection.

Measuring and Communicating Value Realization

Business technologists translate technical AI capabilities into tangible business outcomes through rigorous value measurement frameworks. Rather than relying on single metrics or expecting immediate payback, sophisticated organizations combine financial metrics like cost savings and revenue uplift with operational metrics including productivity gains and cycle time reductions, plus strategic metrics such as competitive positioning. The standard ROI formula adapts for AI as: (Net Gain from AI – Cost of AI Investment) / Cost of AI Investment, where costs encompass development, personnel, infrastructure, and ongoing maintenance and retraining. Critical to success is defining success metrics before implementation, establishing baselines of current performance, and tracking improvements post-deployment across multiple dimensions. Business technologists create dashboards tailored to different stakeholder groups, enabling executives to see strategic impact while operational teams monitor daily performance. This transparency in measuring outcomes builds executive consensus, supports scalable investment decisions, and enhances collaboration between business and IT teams around shared objectives.
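
Applied to hypothetical figures, the formula works out as follows; all numbers are illustrative.

```python
# Worked example of the adapted ROI formula above; all figures are hypothetical.
def ai_roi(net_gain: float, total_cost: float) -> float:
    """(Net Gain from AI - Cost of AI Investment) / Cost of AI Investment."""
    return (net_gain - total_cost) / total_cost

costs = 250_000 + 120_000 + 80_000   # development + personnel + infra/maintenance
gain = 630_000                        # measured savings and revenue uplift
print(f"ROI: {ai_roi(gain, costs):.0%}")  # ROI: 40%
```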

Fostering a Culture of Responsible Innovation

Beyond technical controls, business technologists cultivate organizational cultures that embrace AI as a tool for augmenting human capabilities rather than replacing them. This cultural transformation requires clear communication from leadership about AI’s role, transparent discussion of benefits while addressing employee concerns, and demonstration through small projects that AI enhances rather than threatens jobs. Organizations establish AI Centers of Excellence that provide cross-functional collaboration spaces, empower experimentation within governance boundaries, and celebrate meaningful impact to drive adoption. Change management emerges as a pivotal capability, with structured approaches using models like Prosci’s ADKAR framework, which addresses the five elements individuals need for effective change: awareness of why change is needed, desire to support the change, knowledge of how to change, ability to implement new skills, and reinforcement to sustain the change. Business technologists embed AI-focused change management practices that build trust through transparency about objectives and job transformations, provide extensive up-skilling opportunities, maintain agility to adapt strategies as technologies evolve, and establish mechanisms for employees to challenge AI decisions and report ethical concerns.

Continuous Monitoring and Improvement

Safe AI adoption is not a one-time achievement but requires ongoing vigilance as models, usage patterns, and threats evolve. Business technologists establish continuous monitoring systems tracking model performance, data quality, user adoption metrics, and business outcomes against established KPIs. Real-time dashboards surface model drift, emerging biases, or operational risks before they impact business operations. Automated retraining pipelines enable model adaptation as data distributions change, while regular audits verify continued compliance with governance frameworks. This commitment to continuous improvement extends to regular adversarial testing where teams attempt to identify system vulnerabilities, periodic risk assessments incorporating lessons learned from production deployments, and integration of threat intelligence about emerging AI attack techniques.
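
As a toy illustration of drift monitoring, the sketch below flags a shift in a live score window relative to the training baseline using a simple two-sigma test; real deployments would use dedicated drift statistics such as PSI or Kolmogorov-Smirnov tests.

```python
# Simple drift check: compare a live feature window against the training
# baseline. The two-sigma rule is a stand-in for proper drift statistics.
import statistics

def drifted(baseline: list[float], live: list[float], sigmas: float = 2.0) -> bool:
    mean, stdev = statistics.mean(baseline), statistics.stdev(baseline)
    return abs(statistics.mean(live) - mean) > sigmas * (stdev / len(live) ** 0.5)

baseline_scores = [0.52, 0.48, 0.50, 0.51, 0.49, 0.53, 0.47, 0.50]
live_scores = [0.61, 0.63, 0.60, 0.64, 0.62]

if drifted(baseline_scores, live_scores):
    print("Model input drift detected - flag for review or retraining")
```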

Organizations that successfully scale AI treat it as a living capability requiring sustained attention rather than a project with a defined endpoint.

Strategic Integration with Business Objectives

Ultimately, business technologists ensure AI initiatives remain tightly aligned with strategic business priorities rather than becoming technology experiments disconnected from value creation. This alignment starts with linking AI governance directly to measurable business outcomes, whether improving customer experiences, reducing operational costs, or enabling new revenue streams. AI systems are added to enterprise risk registers with appropriate ratings, AI-specific controls integrate into existing audit programs, and AI governance reporting syncs with current risk management cycles. The most successful organizations view AI adoption through a composable operating model that blends strategy, governance, and real-time intelligence into flexible architectures supporting diverse use cases. Business technologists orchestrate this integration by translating business requirements into technical specifications, ensuring AI solutions address actual problems rather than hypothetical capabilities, and maintaining focus on sustainable value creation at scale. By combining robust governance, transparent operations, strategic human oversight, comprehensive workforce development, rigorous security practices, and continuous measurement, business technologists create the conditions for AI to deliver transformative business value while maintaining the trust, compliance, and safety essential for long-term success. This holistic approach transforms AI from experimental technology into a reliable competitive advantage that organizations can confidently scale across their operations.

References:

  1. https://www.secondtalent.com/resources/ai-adoption-in-enterprise-statistics/
  2. https://www.esystems.fi/en/blog/best-ai-governance-framework-for-enterprises
  3. https://www.ai21.com/knowledge/ai-governance-frameworks/
  4. https://www.mirantis.com/blog/ai-governance-best-practices-and-guide/
  5. https://www.superblocks.com/blog/enterprise-ai-risk-management
  6. https://lucidquery.com/blog/enterprise-ai-transparency/
  7. https://www.haptik.ai/blog/what-is-human-in-the-loop-ai
  8. https://spd.tech/artificial-intelligence/human-in-the-loop/
  9. https://www.electricmind.com/whats-on-our-mind/ctos-guide-to-designing-human-in-the-loop-systems-for-enterprises
  10. https://www.walkme.com/blog/enterprise-ai-adoption/
  11. https://www.salesforce.com/eu/blog/ai-literacy-builds-future-ready-workforce/
  12. https://www.iil.com/ai-skills-development-across-the-enterprise-workforce-by-terry-neal/
  13. https://www.ibm.com/think/insights/change-management-responsible-ai
  14. https://www.linkedin.com/posts/analytics-india-magazine_ey-has-launched-the-ai-academy-a-comprehensive-activity-7348987547059974145-fJ_R
  15. https://theaiinnovator.com/coursera-cto-skills-development-is-crucial-to-enterprise-transformation/
  16. https://www.microsoft.com/insidetrack/blog/enterprise-ai-maturity-in-five-steps-our-guide-for-it-leaders/
  17. https://cloudsecurityalliance.org/blog/2025/03/28/a-guide-on-how-ai-pilot-programs-are-shaping-enterprise-adoption
  18. https://www.workmate.com/blog/enterprise-ai-roadmap-from-pilot-to-production
  19. https://agility-at-scale.com/implementing/roi-of-enterprise-ai/
  20. https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/scenarios/ai/secure
  21. https://www.obsidiansecurity.com/blog/ai-security-risks
  22. https://www.fiddler.ai/articles/ai-security-for-enterprises
  23. https://blog.qualys.com/product-tech/2025/02/07/must-have-ai-security-policies-for-enterprises-a-detailed-guide
  24. https://www.tredence.com/blog/ai-roi
  25. https://www.prosci.com/blog/ai-adoption
  26. https://huble.com/blog/ai-adoption-strategies
  27. https://sparkco.ai/blog/best-practices-for-enterprise-ai-risk-management-2025
  28. https://aws.amazon.com/blogs/security/enabling-ai-adoption-at-scale-through-enterprise-risk-management-framework-part-2/
  29. https://www.mckinsey.com/about-us/new-at-mckinsey-blog/beyond-the-buzz-making-ai-work-for-real-business-value
  30. https://www.auxis.com/maximize-ai-automation-roi-8-best-practices-for-success/
  31. https://www.credera.com/services/technology-and-data-excellence/ai-strategy-and-value-realization
  32. https://www.linkedin.com/pulse/enterprise-value-realization-new-mandate-ai-mario-guerendo-1r9xf
  33. https://www.bcg.com/publications/2025/how-agentic-ai-is-transforming-enterprise-platforms
  34. https://www.netguru.com/blog/ai-adoption-statistics
  35. https://macaron.im/blog/enterprise-ai-adoption-2025
  36. https://www.practical-devsecops.com/best-ai-security-frameworks-for-enterprises/
  37. https://digital.nemko.com/insights/modern-ai-governance-frameworks-for-enterprise
  38. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
  39. https://appian.com/blog/2025/building-safe-effective-enterprise-ai-systems
  40. https://www.datagalaxy.com/en/blog/ai-governance-framework-considerations/
  41. https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf
  42. https://cdn.openai.com/business-guides-and-resources/ai-in-the-enterprise.pdf
  43. https://www.ibm.com/thought-leadership/institute-business-value/en-us/report/ai-governance
  44. https://www.oecd.org/en/publications/the-adoption-of-artificial-intelligence-in-firms_f9ef33c3-en.html
  45. https://www.netcomlearning.com/blog/AI-Security-and-compliance-key-considerations-for-enterprises
  46. https://www.planetcrust.com/business-technologists-ais-impact-on-enterprise-systems/
  47. https://pellera.com/blog/top-5-ai-adoption-challenges-for-2025-overcoming-barriers-to-success/
  48. https://aireapps.com/articles/why-do-business-technologists-matter/
  49. https://www.linkedin.com/pulse/change-management-ai-adoption-complete-guide-businesses-kommunicate-q7ssc
  50. https://www.slalom.com/ca/fr/insights/evolving-role-business-technologist-ai-era
  51. https://www.soraia.io/blog/7-practical-strategies-to-overcome-ai-adoption-challenges
  52. https://www.forbes.com/sites/sap/2024/12/11/how-ai-is-transforming-change-management/
  53. https://www.ibm.com/think/insights/ai-adoption-challenges
  54. https://www.boozallen.com/insights/ai-research/change-management-for-artificial-intelligence-adoption.html
  55. https://online.hbs.edu/blog/post/ai-digital-transformation
  56. https://leobit.com/blog/top-ai-adoption-challenges-and-how-to-solve-them/
  57. https://www.mckinsey.com/capabilities/quantumblack/our-insights/reconfiguring-work-change-management-in-the-age-of-gen-ai
  58. https://knowledge.insead.edu/strategy/ai-transformation-not-about-tech
  59. https://www.mckinsey.com/capabilities/strategy-and-corporate-finance/our-insights/the-learning-organization-how-to-accelerate-ai-adoption
  60. https://www.rolandberger.com/en/Insights/Publications/Change-management-and-AI.html
  61. https://tray.ai/resources/blog/business-technologist
  62. https://www.seedext.com/en/articles/blog-ia-securite-donnees-2025
  63. https://www.ibm.com/think/topics/responsible-ai
  64. https://professional.dce.harvard.edu/blog/building-a-responsible-ai-framework-5-key-principles-for-organizations/
  65. https://www.nist.gov/itl/ai-risk-management-framework
  66. https://www.fairly.ai/blog/policies-platform-and-choosing-a-framework
  67. https://www.ai21.com/knowledge/ai-risk-management-frameworks/
  68. https://www.isaca.org/resources/news-and-trends/industry-news/2025/safeguarding-the-enterprise-ai-evolution-best-practices-for-agentic-ai-workflows
  69. https://www.sciencedirect.com/science/article/pii/S0963868724000672
  70. https://www.datagalaxy.com/en/blog/ai-risk-management/
  71. https://www.invicti.com/blog/web-security/ai-security-challenges-best-practices-for-2025
  72. https://www.consilien.com/news/ai-governance-frameworks-guide-to-ethical-ai-implementation
  73. https://www.mckinsey.com/capabilities/risk-and-resilience/our-insights/deploying-agentic-ai-with-safety-and-security-a-playbook-for-technology-leaders
  74. https://www.leanware.co/insights/enterprise-ai-architecture
  75. https://www.correlation-one.com/generative-ai-training-for-employees
  76. https://aiinnovision.com/ai-literate-workforce-competitive-advantage/
  77. https://www.gpstrategies.com/ai-solutions/ai-enterprise-skilling/
  78. https://t3-consultants.com/ai-training-for-enterprise-a-step-by-step-guide/
  79. https://www.vktr.com/digital-workplace/ai-literacy-is-the-new-must-have-workplace-skill/
  80. https://www.activepieces.com/blog/top-ai-training-programs-for-employees-in-2024
  81. https://www.paradisosolutions.com/blog/ai-literacy-in-workplace-benefits-and-strategies/
  82. https://www.edstellar.com/category/artificial-intelligence-training
  83. https://htec.com/insights/the-risk-of-ignoring-workforce-ai-literacy/
  84. https://www.uctoday.com/immersive-workplace-xr-tech/ai-immersive-learning-accelerating-skill-development-with-ai-and-xr/
  85. https://www.navex.com/en-us/courses/ai-employee-training/
  86. https://www.cedefop.europa.eu/nl/news/ai-literacy-work-bridging-skills-policy-and-practice-europes-digital-transition
  87. https://www.recruiterslineup.com/top-ai-training-platforms-for-employees/
  88. https://www.sciencedirect.com/science/article/pii/S0007681325001673
  89. https://kanerika.com/blogs/ai-pilot/
  90. https://about.gitlab.com/blog/measuring-ai-roi-at-scale-a-practical-guide-to-gitlab-duo-analytics/
  91. https://10pearls.com/blog/enterprise-ai-pilot-to-production/
  92. https://propeller.com/blog/measuring-ai-roi-how-to-build-an-ai-strategy-that-captures-business-value
  93. https://www.trigyn.com/insights/overcoming-barriers-scaling-ai-pilots-best-practices-achieving-ai-scale
  94. https://www.ibm.com/think/insights/ai-roi
  95. https://www.deloitte.com/se/sv/services/consulting/perspectives/how-to-master-value-realisation-with-your-ai-customer-agents.html
  96. https://exec-ed.berkeley.edu/2025/09/beyond-roi-are-we-using-the-wrong-metric-in-measuring-ai-success/
  97. https://www.tbmcouncil.org/learn-tbm/resource-center/tbm-for-ai-value-realization/
  98. https://letsprocessit.com/scaling-ai-pilot-projects-enterprise-success/
  99. https://www.sandtech.com/insight/a-practical-guide-to-measuring-ai-roi/
  100. https://geekyants.com/blog/why-businesses-need-explainable-ai—and-how-to-deliver-it
  101. https://www.trustpath.ai/blog/ai-transparency-what-it-is-and-why-it-matters-for-compliance
  102. https://digital.nemko.com/insights/explainable-ai-unlocking-trust-and-business-value
  103. https://aign.global/ai-governance-insights/patrick-upmann/to-what-extent-should-ai-systems-provide-transparency-to-make-their-decision-making-processes-understandable/
  104. https://www.ibm.com/think/topics/human-in-the-loop
  105. https://www.mckinsey.com/capabilities/quantumblack/our-insights/why-businesses-need-explainable-ai-and-how-to-deliver-it
  106. https://galileo.ai/blog/ai-trust-transparency-governance
  107. https://amquesteducation.com/explainable-ai-in-business/
  108. https://www.zendesk.com/blog/ai-transparency/
  109. https://www.superannotate.com/blog/human-in-the-loop-hitl
  110. https://www.ibm.com/think/topics/explainable-ai
  111. https://www.sciencedirect.com/science/article/pii/S2444569X25001155
  112. https://www.linkedin.com/posts/carmeloiaria_human-in-the-loop-design-patterns-activity-7387503023591165952-sBmL
  113. https://www.media.thiga.co/en/en/how-to-make-sure-your-ai-products-get-used-ai-explainability
  114. https://www.mckinsey.com/capabilities/quantumblack/our-insights/building-ai-trust-the-key-role-of-explainability
  115. https://matterway.io/blogs/beyond-rpa-why-human-in-the-loop-ai-is-essential-for-enterprise-trust-and-accuracy

How Proprietary Licenses Encourage Enterprise System Silos

Introduction

Proprietary licensing structures fundamentally constrain the architectural flexibility that enterprises need to build integrated systems. Rather than enabling seamless data flow and functional collaboration across organizational units, these licensing models actively incentivize isolated, vertically-aligned technology stacks that cannot easily communicate with one another.

Proprietary Licenses and Enterprise Silos Go Hand-in-Hand

  • The mechanism operates through deliberate contractual restrictions embedded in End User License Agreements (EULAs). These agreements explicitly prohibit reverse engineering, forbid integration with competing solutions, and restrict how organizations can redistribute or modify code. When a company adopts an enterprise software system – say, a CRM from one vendor, an ERP from another, and a reporting tool from a third party – each licensing agreement introduces its own set of interoperability restrictions. Rather than creating a unified ecosystem where data flows freely, organizations find themselves managing incompatible islands of functionality. A finance team using one vendor’s system cannot easily feed data into the operations team’s system without either expensive custom integrations or purchasing additional connector licenses that the vendor has strategically positioned as premium offerings.
  • Proprietary APIs represent another layer of siloing. When vendors control the interfaces through which their systems communicate with the outside world, they have every incentive to make those interfaces proprietary and intentionally limited. Organizations become locked into specific data formats that only that vendor’s tools can read and write. Should a company attempt to export customer data or transaction records into a different system, they encounter licensing prohibitions against circumventing technical protection measures, compounded by contractual language that effectively forbids the reverse engineering necessary for true interoperability.
  • The financial architecture of proprietary licensing reinforces this fragmentation. Federal agencies, for instance, have documented recurring licensing practices that actively encourage silos: license repurchase requirements when migrating to cloud environments, cross-cloud surcharges for deploying software outside a vendor’s preferred infrastructure, fees for data repatriation when contracts end, and explicit prohibitions against third-party software integration. Each of these mechanisms makes it financially and technically painful to move data or applications between systems. A CIO contemplating consolidation across departments faces switching costs so substantial that continuing to operate separate systems becomes the rational choice, even when those systems duplicate functionality or create operational inefficiencies.
  • The complexity of managing heterogeneous licensing creates a secondary dynamic that deepens silos. When an enterprise contains components with conflicting licenses – for instance, a proprietary system that prohibits source code disclosure combined with open-source components that require it – architects must employ workarounds such as establishing “license firewalls” that limit communication pathways between systems. These architectural restrictions literally prevent the integration that would otherwise be possible. The organization’s technical design becomes constrained not by business logic but by the conflicting terms of different vendor agreements.
  • Data portability represents perhaps the most direct path through which licensing encourages siloing. Without contractual guarantees and technical support for exporting data in open formats, organizations cannot consolidate information across systems. Marketing, finance, and operations remain unable to access consistent customer or transaction data because doing so would require extracting information from a vendor’s proprietary database format. Regulatory frameworks like the EU’s General Data Protection Regulation have begun mandating data portability, but technical and financial barriers persist even where portability is legally required. The result is organizational departments maintaining separate data repositories rather than contributing to enterprise-wide systems.
  • The architectural consequences extend beyond mere inconvenience. As organizations mature and scale, the out-of-the-box solutions that initially made sense become inadequate, yet the switching costs imposed by licensing restrictions prevent timely modernization. Teams across the business adapt their workflows to work around system limitations rather than advocating for integrated solutions. Finance might maintain shadow systems in spreadsheets rather than trying to connect to a corporate ERP. Marketing might duplicate contact data rather than integrating with sales’ customer database. Each workaround is individually rational when the official path to integration is blocked by licensing restrictions, yet collectively they perpetuate enterprise fragmentation.
  • Subscription-based licensing models amplify this tendency by introducing continuous financial disincentives for reconsideration. Unlike perpetual licenses where an organization might eventually justify migration costs against years of license savings, subscription models create recurring revenue streams that vendors actively protect through contractual terms preventing exit. Organizations become reluctant to audit their technology portfolios because doing so might highlight overlapping capabilities across departments – redundancy that would theoretically justify consolidation if portability were technically feasible and legally permitted. The licensing structure thus creates organizational behavior that accepts fragmentation as inevitable rather than treating it as a problem to be solved.

Conclusion

The cumulative effect is that proprietary licensing doesn’t merely constrain technical integration; it reshapes how enterprises think about technology architecture. Rather than viewing the IT landscape as a unified system optimized for business objectives, organizations internalize the vendor-imposed silos as structural givens. Enterprise architects accommodate fragmentation through layered governance and multiple approval processes rather than advocating for true integration. The business consequence is operational inefficiency, increased costs from duplicate systems, impaired decision-making from fragmented data, and reduced organizational agility – outcomes that benefit vendors through continued license purchases but harm the enterprises that must operate within the constraints those licenses impose.

References:

  1. https://www.etelligens.com/blog/proprietary-software-definition-and-examples/
  2. https://myitforum.substack.com/p/vendor-lock-in-how-companies-get
  3. https://www.eff.org/wp/interoperability-and-privacy
  4. https://zylo.com/blog/software-license-management-tips/
  5. https://www.percona.com/blog/can-open-source-software-save-you-from-vendor-lock-in/
  6. https://interoperable-europe.ec.europa.eu/collection/eupl/licences-complementary-agreements
  7. https://www.spendflo.com/blog/software-license-management
  8. https://www.superblocks.com/blog/vendor-lock
  9. https://e-irg.eu/wp-content/uploads/2023/05/paul_uhlir.pdf
  10. https://www.dock.io/post/identity-silos
  11. https://www.chaossearch.io/blog/multi-cloud-data-management
  12. https://www.zartis.com/open-source-vs-closed-source-software/a-comparative-analysis/
  13. https://www.ics.uci.edu/~wscacchi/Papers/New/AlspauchAsuncionScacchi-IWSECO-July09.pdf
  14. https://legittai.com/blog/proprietary-data
  15. https://eclipsesource.com/blogs/2024/07/10/the-rise-of-closed-source-ai-tool-integrations/
  16. https://ceur-ws.org/Vol-505/iwseco09-3AlspaughAcunsionScacchi.pdf
  17. https://aws.amazon.com/what-is/data-porting/
  18. https://www.pingcap.com/article/open-source-vs-closed-source-software-benefits/
  19. https://www.redhat.com/tracks/_pfcdn/assets/10330/contents/430073/7bad8a07-d9f0-4465-be1f-a4d591350eee.pdf
  20. https://www.databricks.com/blog/data-silos-explained-problems-they-cause-and-solutions
  21. https://www.icertis.com/contracting-basics/the-importance-of-the-end-user-license-agreement/
  22. https://www.sciencedirect.com/science/article/pii/S174228760800039X
  23. https://www.e-spincorp.com/is-reverse-engineering-legal/
  24. https://complydog.com/blog/complete-eula-guide-end-user-license-agreement-software-companies
  25. https://www.adldata.org/wp-content/uploads/2015/06/Best_Practices_Eliminating_Fragmentation.pdf
  26. https://direct.mit.edu/books/oa-monograph/chapter-pdf/2368586/9780262295543_cad.pdf
  27. https://en.wikipedia.org/wiki/End-user_license_agreement
  28. https://www.tierpoint.com/blog/data-fragmentation/
  29. https://scholarship.law.upenn.edu/cgi/viewcontent.cgi?article=2052&context=jil
  30. https://vfunction.com/eula/
  31. https://www.redhat.com/en/blog/architecture-dependencies
  32. https://openit.com/restrictive-software-licensing-overcoming-vendor-imposed-barriers-to-federal-cloud-success/
  33. https://www.nedigital.com/en/blog/assessing-vendor-lock-in-and-exit-costs-in-saas-centric-it-environments
  34. https://clojurefun.wordpress.com/2012/12/21/architecture-is-dependency-management/
  35. https://netlicensing.io/blog/2024/12/25/compliance-security-licensing-management-systems/
  36. https://www.ccsenet.org/journal/index.php/cis/article/view/69798
  37. https://faddom.com/enterprise-architecture-frameworks/
  38. https://www.device42.com/software-license-management-best-practices/software-license-compliance/
  39. https://www.storminternet.co.uk/blog/vendor-lock-in-the-silent-killer-of-saas-flexibility/
  40. https://www.superblocks.com/blog/enterprise-architecture-tools

Mitigating Human Risk In Enterprise Computing Software

Introduction

The human element represents the most significant and persistent vulnerability in enterprise computing environments. While organizations invest heavily in technical security measures – firewalls, encryption, intrusion detection systems – human behavior consistently emerges as the critical failure point in organizational security. According to research findings, human error causes 95% of cybersecurity breaches, with the average financial impact of a data breach reaching $4.48 million in 2024. In enterprise computing software specifically, where sensitive data flows through interconnected systems and employees interact with multiple platforms daily, managing human risk has become imperative for organizational survival. The challenge extends beyond simple negligence or carelessness. Human risk in enterprise computing encompasses a complex interplay of cognitive limitations, organizational dynamics, and the sophisticated social engineering tactics deployed by modern threat actors. From unintentional errors like opening phishing attachments to malicious insider activities exploiting privileged access, human-driven threats cut across all organizational levels and functions.

This article explores comprehensive strategies for mitigating human risk in enterprise software environments, moving beyond compliance checkboxes to establish genuine behavioral transformation and security resilience.

Understanding the Scope of Human Risk

Human risk in enterprise computing manifests through multiple pathways.

1. Research shows that 65% of employees open emails, links, or attachments from unknown sources, while 58% send sensitive work data without verifying sender legitimacy. These behaviors reflect not character flaws but rather the friction between security requirements and operational efficiency. Employees managing multiple applications, systems, and time pressures often take shortcuts that compromise security protocols.

2. Insider threats – both malicious and unintentional – represent a distinct category of human risk. The Cybersecurity and Infrastructure Security Agency defines insider threats as the potential that inside personnel will use their authorized access, wittingly or unwittingly, to harm the organization. Because the overwhelming majority of breaches trace back to human error, frequently by employees with legitimate system access, organizations face a fundamental dilemma: granting employees sufficient access to perform their roles while preventing that same access from being exploited or inadvertently misused.

3. Beyond individual behaviors, organizational factors significantly influence human risk. Poor work planning leading to time pressure, inadequate safety systems, insufficient communication from supervisors, and deficient health and safety culture all contribute to increasing human vulnerability. In enterprise software environments, where change happens rapidly and technical complexity escalates constantly, these organizational factors can overwhelm individual employees’ capacity to maintain vigilance.

Building Security Culture as Foundation

Effective human risk mitigation begins not with technology but with organizational culture. Organizations with successful security cultures deliver security strategies that meet employees where they are, creating an agreed understanding of what kind of security culture the organization wants. This requires investment in developing teams responsible for managing this transformation, recognizing that culture change is iterative and requires sustained leadership commitment. Leadership behavior sets the tone for organizational security culture. When leadership models secure behaviors, prioritizes transparency, and fosters psychological safety – where reporting errors doesn’t result in punishment but learning – employees become security advocates rather than compliance targets. The distinction is critical: security should never be perceived as punitive. Organizations where employees fear repercussions for reporting security incidents inadvertently create environments where problems remain hidden until they escalate into breaches. Psychological safety enables employees to acknowledge mistakes, ask clarifying questions, and report suspicious activities without fear of professional consequences. This foundation becomes essential for enterprise computing environments, where security incidents often require rapid escalation and transparent investigation. When employees trust that reporting a phishing attack or security misconfiguration won’t result in disciplinary action, detection times decrease and organizational resilience increases.

Building security culture requires three distinct but complementary components working together. Security awareness creates cultural sensitivity throughout the organization, typically at an organization-wide level through internal educational sessions and awareness initiatives. Training provides specific technical skills needed to perform security-related tasks appropriately within employees’ roles. Education develops fundamental decision-making capabilities, enabling employees to understand underlying security principles and adapt their behaviors as threats and technologies evolve. These layers must work in concert rather than as isolated initiatives.

Implementing Behavioral-Driven Security Awareness

Traditional security awareness training often fails to achieve lasting behavioral change because it relies on knowledge transfer without addressing the psychological mechanisms underlying human decision-making. Behavior-driven security awareness training, conversely, applies understanding of human behavior and psychology to create sustainable changes in how employees interact with security risks. This approach recognizes that security threats exploiting human vulnerabilities use the same psychological mechanisms that software designers employ to make systems intuitive. The “urge to click” that makes user interfaces efficient can be weaponized in phishing campaigns. Fear responses that evolved to protect humans can be triggered through social engineering. Understanding these mechanisms enables organizations to design countermeasures grounded in behavioral science rather than generic warnings.

Effective behavior-driven programs operate on three pillars. Knowledge establishes baselines of individual employee security behaviors through assessments and testing, creating profiles of specific strengths and weaknesses. This personalization enables training delivery tailored to each employee’s actual risk profile rather than generic, one-size-fits-all approaches. Awareness builds cultural sensitivity to security issues through campaigns that create context for learning – for example, simulated phishing exercises that closely mirror real attack tactics, cementing lessons and developing practical skills. Understanding develops through measurement and feedback, with real-time training engaging employees directly with relevant guidance at moments when they need it most.

Real-time training platforms represent a significant evolution from traditional security awareness. When employees exhibit risky behavior during simulated phishing exercises, adaptive platforms immediately provide feedback and targeted instruction, leveraging the learning moment when awareness is highest. This just-in-time approach to education proves substantially more effective than quarterly training sessions where retention rapidly decays. Metrics demonstrating behavior change over time provide essential evidence of program effectiveness and return on investment. Organizations implementing mature human risk management programs report engagement increasing six-fold within six months, phishing simulation failure rates declining six-fold, and real threat reporting skyrocketing ten-fold. These numerical improvements reflect genuine behavioral transformation, not merely compliance with training requirements.

Establishing Effective Access Control and Identity Management

  • Human risk compounds when employees have access exceeding what their roles require. The principle of least privilege – granting users only the minimum access necessary to perform their duties – remains foundational for managing human risk in enterprise software environments. Yet implementation proves challenging at scale, particularly in complex organizations where roles evolve, responsibilities shift, and audit requirements demand rapid access provisioning.
  • Identity and Access Management systems must manage both human and non-human identities across increasingly distributed computing environments. The scale of this challenge has grown dramatically: research indicates that non-human identities now outnumber human users by factors ranging from 45-to-1 to potentially 100-to-1 in mature enterprises, with projections suggesting continued escalation. Service accounts, API keys, scripts, and CI/CD workflows create vast numbers of potential attack vectors if not managed through consistent policies.
  • Critical IAM risks include overprivileged access where users retain permissions long after they change roles, standing credentials that persist indefinitely after creation, and lack of visibility over non-human identities living in configuration files or hardcoded into applications. Each of these represents a failure mode where human negligence or organizational inertia creates unnecessary risk exposure.
  • Automated access reviews and recertification processes address the practical challenge of manual identity governance at scale. Regular reviews should examine who has access to what resources, verify that access remains necessary given current roles, and rapidly remove standing credentials no longer in active use. Multi-factor authentication adds a second verification layer beyond credentials alone, protecting systems even when passwords are compromised through phishing or credential theft.
  • Just-in-time access provisioning represents a modern alternative to standing credentials, where users receive temporary elevated access only when performing specific tasks, with access automatically expiring after task completion. This approach dramatically reduces the window during which compromised credentials could be exploited while maintaining operational efficiency.
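
In code, the essence of just-in-time provisioning is a grant with a built-in expiry, as in this hedged sketch; the in-memory store, token format, and TTL are placeholders for a real IAM system.

```python
# Hedged sketch of just-in-time access: a grant is minted for a specific task
# and expires automatically rather than persisting as a standing credential.
import time
import uuid

GRANTS: dict[str, dict] = {}

def grant_access(user: str, resource: str, ttl_seconds: int = 900) -> str:
    token = str(uuid.uuid4())
    GRANTS[token] = {"user": user, "resource": resource,
                     "expires_at": time.time() + ttl_seconds}
    return token

def is_valid(token: str, resource: str) -> bool:
    grant = GRANTS.get(token)
    if grant is None or grant["resource"] != resource:
        return False
    if time.time() >= grant["expires_at"]:
        GRANTS.pop(token, None)   # expired grants are removed, not left standing
        return False
    return True

t = grant_access("alice", "prod-db", ttl_seconds=900)
print(is_valid(t, "prod-db"))   # True until the 15-minute window closes
```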

Detecting and Responding to Behavioral Anomalies

User and Entity Behavior Analytics systems establish baselines of normal behavior for individuals, systems, and applications within enterprise environments, then continuously monitor for deviations potentially indicating compromised accounts, insider threats, or unauthorized access attempts. This behavioral monitoring approach complements traditional rule-based detection by identifying never-before-seen attack patterns that evade signature-based defenses.

Effective UEBA implementation collects behavioral telemetry across multiple data sources – authentication logs, network traffic, resource access patterns, application usage – creating comprehensive profiles of normal operations. Machine learning algorithms establish individual baselines accounting for variations in behavior across roles, departments, and time periods. Someone accessing systems at midnight might represent normal behavior for an on-call system administrator but suspicious behavior for a financial analyst whose role operates during standard business hours.

UEBA proves particularly valuable for detecting insider threats where attackers use legitimate credentials but behave differently from the account owner. A data analyst who normally accesses customer databases during business hours and suddenly exports massive volumes of sensitive information to personal cloud storage exhibits behavioral patterns inconsistent with normal activities. These anomalies trigger investigation and response mechanisms before data exfiltration completes. The contextual insights UEBA provides enable security teams to differentiate between legitimate business activities and genuine threats, reducing the false positive alerts that lead to alarm fatigue and decreased security team effectiveness. By correlating data from multiple sources, behavior analytics provide holistic understanding of observed activities rather than a series of isolated events.
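
The core of the baseline-and-deviation idea can be shown in a few lines: compare today’s activity to a user’s own history and flag large departures. This toy sketch uses a single signal and a z-score; real UEBA platforms correlate many signals with far richer models.

```python
# Toy behavioral baseline: flag a user whose daily record-export volume sits
# far outside their own history. Signal and thresholds are illustrative.
import statistics

def anomaly_score(history: list[float], today: float) -> float:
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0   # guard against a flat baseline
    return (today - mean) / stdev              # z-score against personal baseline

exports_per_day = [120, 135, 110, 128, 140, 125, 118]   # analyst's normal pattern
score = anomaly_score(exports_per_day, today=9800)       # sudden bulk export

if score > 3.0:
    print(f"Investigate: export volume {score:.1f} sigma above baseline")
```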

Designing Policies That Promote Secure Behavior

Security policies establish organizational boundaries and behavioral expectations, but poorly designed policies create friction that employees circumvent through shadow IT, unauthorized workarounds, or non-compliance.

Effective policies balance security requirements with operational necessity, making compliance the path of least resistance rather than an obstacle to work. Clear policies addressing data classification establish common language and handling requirements across the organization. Data should be classified as public, internal, confidential, or secret, with each classification level specifying handling, transmission, storage, and disposal requirements. When employees understand why certain data requires specific protections and what consequences might result from mishandling, compliance improves substantially.

Acceptable use policies establish clear rules for employee system and data usage, specifying what activities are permitted and prohibited. These policies gain effectiveness through employee acknowledgment that they’ve read and understand requirements, creating accountability and deterrence against deliberate violations. Policies must remain relevant through regular review cycles, ideally updated at least semi-annually to address emerging threats, regulatory changes, and organizational modifications. Policies that drift from current threats lose credibility with employees who perceive them as obsolete, reducing compliance more broadly.

Implementing policies through technical controls strengthens their effectiveness. Rather than relying solely on employee adherence to policy, technology-enforced constraints limit risky behaviors through automated mechanisms. Data loss prevention systems can prevent certain files from leaving organizational networks. Email gateways can enforce encryption for communications containing sensitive information. Application whitelisting can prevent installation of unauthorized software. These technical controls acknowledge that achieving 100% compliance through policy awareness alone remains impossible in complex environments.
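
The sketch below illustrates a technology-enforced handling rule of this kind: a transmission check driven by the data’s classification level. The levels mirror the classification scheme above; the channels and their ceilings are illustrative.

```python
# Sketch of a technology-enforced handling rule: transmission is checked
# against the data's classification instead of relying on policy awareness.
from enum import IntEnum

class Classification(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    SECRET = 3

MAX_LEVEL_PER_CHANNEL = {
    "public_website": Classification.PUBLIC,
    "internal_email": Classification.CONFIDENTIAL,
    "encrypted_vault": Classification.SECRET,
}

def may_transmit(level: Classification, channel: str) -> bool:
    # Unknown channels default to the most restrictive ceiling.
    return level <= MAX_LEVEL_PER_CHANNEL.get(channel, Classification.PUBLIC)

print(may_transmit(Classification.SECRET, "internal_email"))  # False -> block and log
```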

Cultivating Incident Response Resilience

Human factors dramatically shape incident response effectiveness. When security incidents occur, responders face incomplete information, time pressure, high organizational stress, and incomplete understanding of attack scope and impact. Under these conditions, cognitive biases, information overload, and decision fatigue lead to suboptimal choices that can escalate incidents or extend recovery times. Effective incident response plans must account for how humans actually behave during crises rather than assuming ideal decision-making. Clear role assignments with documented responsibilities prevent confusion during active incidents. Checklists and decision trees help responders work through complex scenarios systematically rather than relying on memory or intuition under pressure. These tools reduce cognitive load by structuring decision-making into manageable components. Information filtering mechanisms prevent cognitive overload by ensuring responders receive role-appropriate information rather than every available detail. A database administrator needs different information than a communications manager, yet both play important roles in incident response. Structured information sharing ensures each person receives what they need for their responsibilities without becoming overwhelmed. Leadership behavior during incidents profoundly impacts response effectiveness. Leaders who remain calm, communicate clearly, support team decision-making, and avoid blame during active incidents enable better response outcomes. Conversely, leaders who panic, micromanage, or focus on blame during incidents significantly degrade response effectiveness and may cause responders to make worse decisions to avoid criticism.

Regular incident response exercises and stress inoculation training prepare teams for the psychological demands of actual incidents. Through tabletop exercises and simulations, incident responders experience moderate stress in safe environments, developing muscle memory for their responses and building confidence in procedures before real incidents occur.

Implementing Continuous Monitoring and Measurement

Organizations seeking to reduce human risk require outcome-driven metrics demonstrating actual risk reduction rather than mere compliance indicators.

Metrics should measure behavior change, cyber skills development, resilience improvements, and decreased risk across the human layer. These outcome-driven metrics differ fundamentally from traditional training metrics tracking attendance or course completion.

Threat reporting behavior represents the single most important metric for measuring human risk management effectiveness. Employees who confidently identify and report social engineering attempts remove threats from systems while providing security teams with valuable threat intelligence. Increases in both simulated and real threat reporting rates indicate genuine behavioral transformation and cultural change.

Phishing simulation failure rates demonstrate employee capability to recognize common attack patterns. Declining failure rates over time indicate that security awareness training translates into practical ability to identify threats. However, these metrics require careful interpretation: a program can achieve low failure rates on its own simulations while sophisticated real-world campaigns still evade employee detection. Metrics should align with the organization’s actual threat landscape rather than arbitrary targets.

Security behavior and culture programs should measure compliance rates with key security policies, incident response times, time-to-detect threats, and access review completion rates. These metrics provide evidence of security posture maturity and institutional strength. Regular assessment and adaptation of programs based on measurement data ensures continuous improvement. As organizational threat landscapes evolve, as new technologies introduce novel risks, and as employee populations change, human risk management programs must adapt accordingly. Static programs designed once and left unchanged will gradually lose effectiveness as conditions shift.
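
Computing such outcome metrics from raw events is straightforward, as the sketch below shows; the event records and field names are invented for illustration.

```python
# Tiny sketch of outcome-driven metrics from simulation and reporting events.
events = [
    {"user": "u1", "type": "sim_phish", "clicked": True,  "reported": False},
    {"user": "u2", "type": "sim_phish", "clicked": False, "reported": True},
    {"user": "u3", "type": "sim_phish", "clicked": False, "reported": True},
    {"user": "u4", "type": "real_phish", "reported": True},
]

sims = [e for e in events if e["type"] == "sim_phish"]
failure_rate = sum(e["clicked"] for e in sims) / len(sims)
reporting_rate = sum(e["reported"] for e in events) / len(events)

print(f"simulation failure rate: {failure_rate:.0%}")   # 33%
print(f"threat reporting rate:  {reporting_rate:.0%}")  # 75%
```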

Addressing Non-Human Identity Challenges

While much attention focuses on human user behavior, non-human identities require equally rigorous management. Service accounts running automated processes, API keys enabling system-to-system communication, and CI/CD pipeline credentials deploying application updates represent potentially high-value attack targets. A single compromised service account with excessive privileges can enable attackers to exfiltrate sensitive data or disrupt critical operations.

Non-human identities require the same least privilege principles applied to human users. Service accounts should have access limited to the specific systems or resources required for their designated tasks. API keys should be rotated regularly and never hardcoded into application source code. CI/CD credentials should be managed through secrets management systems that prevent human exposure to sensitive credentials.

Centralized secrets management systems represent essential infrastructure for managing non-human identity security. These systems store credentials centrally, enforce access policies, maintain audit logs of credential access and usage, and enable automated credential rotation. By preventing developers from manually managing secrets scattered across configuration files and scripts, centralized systems reduce the risk surface and improve visibility.

Organizations should implement automated discovery and inventory of non-human identities across their infrastructure. Many service accounts and API keys exist in undocumented locations, creating shadow identities that security teams cannot effectively monitor or control. Scanning tools can identify these credentials and service accounts, bringing them under organized governance.
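
To make the contrast with hardcoded credentials concrete, the sketch below fetches a service-account credential from a central secrets manager at runtime. It assumes a HashiCorp Vault KV v2 store and the hvac Python client; the secret path and environment variable contents are illustrative:

```python
# Minimal sketch: fetch a service-account credential at runtime from a
# central secrets manager instead of hardcoding it in source control.
import os
import hvac

client = hvac.Client(
    url=os.environ["VAULT_ADDR"],     # e.g. https://vault.internal:8200
    token=os.environ["VAULT_TOKEN"],  # short-lived token, never committed
)

# Hypothetical path to a CI/CD deployment credential in the KV v2 store.
secret = client.secrets.kv.v2.read_secret_version(path="ci/deploy-bot")
api_key = secret["data"]["data"]["api_key"]

# Use api_key for the system-to-system call; rotation happens centrally,
# and every read is captured in Vault's audit log.
```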

Conclusion

Mitigating human risk in enterprise computing software requires sustained commitment across multiple dimensions. Organizations must cultivate security cultures where leadership models secure behaviors and employees feel psychological safety to report incidents. Behavior-driven awareness programs grounded in psychological science prove more effective than traditional training approaches. Identity and access management systems must enforce least privilege while maintaining operational efficiency. Behavioral analytics detect anomalies indicating compromised accounts or insider threats. Clear policies balanced with technical controls establish behavioral boundaries. Incident response planning accounts for human decision-making under stress. Continuous measurement and adaptation ensure programs remain effective as threats and organizational contexts evolve. No single intervention eliminates human risk entirely. Rather, layered strategies addressing organizational culture, individual behavior, technical controls, and management practices create cumulative improvements in security posture. Organizations achieving the strongest security culture outcomes – where employees actively identify and report threats, where security becomes integral to operational decision-making, where technology and process enable rather than hinder secure work – demonstrate that human risk transforms from organizational liability into competitive advantage when properly managed.

References:

  1. https://sosafe-awareness.com/products/proactive-human-risk-management/
  2. https://keepnetlabs.com/blog/10-employee-behaviors-that-increase-enterprise-cybersecurity-risk-a-closer-look
  3. https://elnion.com/2025/02/10/enterprise-computing-under-siege-the-10-biggest-threats-facing-it-today/
  4. https://outthink.io/community/thought-leadership/blog/what-is-cybersecurity-human-risk-management-what-you-need-to-know/
  5. https://www.veeam.com/blog/enterprise-cybersecurity.html
  6. https://www.staysafeonline.org/articles/top-10-security-issues-in-enterprise-cloud-computing
  7. https://nisos.com/blog/human-risk-security-challenge/
  8. https://www.sentinelone.com/cybersecurity-101/cybersecurity/what-is-enterprise-cyber-security/
  9. https://www.exabeam.com/explainers/insider-threats/insider-threats/
  10. https://humanrisks.com
  11. https://destcert.com/resources/security-culture-training-awareness/
  12. https://www.titanhq.com/behavior-driven-security-awareness-training/
  13. https://www.proofpoint.com/us/threat-reference/human-risk-management
  14. https://hoxhunt.com/blog/creating-a-company-culture-for-security
  15. https://hoxhunt.com/lp/how-to-create-behavior-change-with-security-awareness-training
  16. https://hoxhunt.com/guide/human-risk-management-playbook
  17. https://www.security.gov.uk/policy-and-guidance/improving-security-culture/
  18. https://www.proofpoint.com/sites/default/files/solution-briefs/pfpt-us-sb-enterprise-security-awareness-training.pdf
  19. https://www.dataguard.com/blog/risk-mitigation-software-and-tools/
  20. https://identitymanagementinstitute.org/user-behavior-analytics/
  21. https://www.paloaltonetworks.com/cyberpedia/inadequate-iam-cicd-sec2
  22. https://x-phy.com/why-zero-trust-cant-be-fully-trusted/
  23. https://gurucul.com/blog/behavioral-analytics-cyber-security-user-behavior-analysis-guide/
  24. https://www.apono.io/blog/8-identity-access-management-iam-best-practices-to-implement-today/
  25. https://www.forbes.com/councils/forbestechcouncil/2022/03/14/why-you-need-the-human-element-in-zero-trust-security/
  26. https://www.oneidentity.com/learn/what-is-user-behavior-analytics.aspx
  27. https://www.cloudeagle.ai/blogs/identity-access-management-risks
  28. https://blog.gitguardian.com/non-human-identity-security-zero-trust-architecture/
  29. https://www.splunk.com/en_us/products/user-and-entity-behavior-analytics.html
  30. https://www.cm-alliance.com/cybersecurity-blog/role-of-human-error-in-cybersecurity-breaches-and-how-to-mitigate-it
  31. https://www.dragnetsecure.com/blog/incident-response-human-factors-the-critical-connection-between-people-and-cybersecurity?hsLang=en
  32. https://www.realtimenetworks.com/blog/protect-your-bottom-line-with-employee-accountability-tracking
  33. https://searchinform.com/articles/cybersecurity/concept/grc/security-policies/enterprise-information-security-policy/
  34. https://www.worksafe.wa.gov.au/system/files/migrated/sites/default/files/atoms/files/information_sheet_human_factors_integrating_human_factors_into_major_accident_event_investigations.pdf
  35. https://searchinform.com/articles/employee-management/engagement/
  36. https://www.inputoutput.com/blog/list-of-cyber-security-policies-every-business-needs
  37. https://www.scrut.io/post/human-element-defending-against-risks-in-incident-response
  38. https://safetyculture.com/topics/corporate-governance/personnel-accountability
  39. https://www.firemon.com/blog/network-security-policies/

Corporate Solutions Redefined By Human Error

Introduction

The mythology of enterprise IT suggests that catastrophic failures emerge from sophisticated cyberattacks, rare hardware failures, or acts of God – dramatic events befitting the stakes involved. The reality is far more humbling. The greatest threats to enterprise systems often wear a human face. Some of the most spectacular, expensive, and jaw-droppingly entertaining disasters in business history trace back not to malicious intent, but to what can only be described as outstanding displays of human creativity in finding new ways to break expensive things.

The $440 Million Typo: Knight Capital’s 45-Minute Meltdown

Few stories encapsulate the beautiful absurdity of human error in enterprise systems quite like Knight Capital’s August 1, 2012 catastrophe. Here was a company responsible for nearly 10% of all trading in U.S. equity securities – a genuine financial powerhouse – about to demonstrate that even the most sophisticated trading algorithms pale in comparison to human incompetence operating at scale. Knight needed to deploy new code to eight trading servers to support the Retail Liquidity Program (RLP) launching that morning. An engineer dutifully went through the servers installing the new RLP code – and missed the eighth one. It happens to everyone, right? Perhaps forgetting where you parked your car, or that important dentist appointment. In this case, it happened to involve a $440 million consequence. The eighth server, left untouched, still contained ancient legacy code from 2003 called “Power Peg” – a test algorithm specifically engineered to buy high and sell low to test other trading systems. Knight had stopped using Power Peg nearly a decade earlier, but like that expired yogurt in the back of your fridge, nobody thought to throw it away. When the new RLP orders arrived at the neglected server, they triggered this dormant code. Power Peg did what it was programmed to do: it bought high and sold low, continuously, without mercy. But here’s where things get truly ridiculous – the code that was supposed to tell Power Peg that its orders had been filled had been broken during a 2005 system refactoring. Confirmation never arrived, so Power Peg kept sending more orders. Thousands per second. In less than an hour, this single forgotten deployment had executed approximately 4 million trades across 154 different stocks, trading over 397 million shares and accumulating $3.5 billion in unwanted long positions and $3.15 billion in unwanted short positions.

What makes this story even more terrifying is the human response. When NYSE analysts noticed trading volumes were double normal levels, Knight’s IT team spent 20 critical minutes diagnosing the problem. Concluding the issue was the new code, they made what seemed like the logical decision – reverting all servers to the “old” working version. This was catastrophic: the rollback installed the dormant Power Peg code on all eight servers. What had been contained to one-eighth of their capacity now consumed the entire enterprise. For the next 24 minutes, all eight servers ran the algorithm without throttling. The final tally was $440 million in losses – nearly the company’s entire market capitalization at the time. A company that had survived multiple financial crises was brought to the brink of collapse by the modern equivalent of forgetting to copy one file.
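
The engineering lesson generalizes: a rollout is not finished until every node is verified to be running the same build. A minimal sketch of such a post-deployment consistency check, with hypothetical hostnames and a hypothetical /version endpoint:

```python
# Minimal sketch of the safeguard Knight lacked: after a rollout, verify
# every server in the fleet reports the same build before enabling traffic.
import urllib.request

SERVERS = [f"trade{i:02d}.example.internal" for i in range(1, 9)]  # all eight

def deployed_versions(servers):
    """Ask each host what build it is running (hypothetical endpoint)."""
    versions = {}
    for host in servers:
        with urllib.request.urlopen(f"http://{host}/version", timeout=5) as r:
            versions[host] = r.read().decode().strip()
    return versions

versions = deployed_versions(SERVERS)
if len(set(versions.values())) != 1:
    # One stale server is enough to abort; do not enable order flow.
    raise SystemExit(f"Inconsistent fleet, aborting enablement: {versions}")
```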

The Halloween Heist: Hershey’s Candy Catastrophe

If Knight Capital teaches us about deployment errors, Hershey’s 1999 ERP implementation disaster teaches us about magical thinking in project scheduling. The chocolate manufacturer decided that the perfect time to go live with a brand new enterprise resource planning system, supply chain management system, and customer relationship management system would be right before Halloween – the year’s biggest sales period. Imagine you’re Hershey’s management. You’re about to replace all your order fulfillment systems during your single most critical sales window of the entire year. What could possibly go wrong? Well, everything, as it turned out. The implementation involved inadequate testing and rushed preparation, and employees were not properly trained on the new systems. The cascading incompatibilities between the new ERP system and existing processes created technical glitches and massive delays in orders. The result was a 19% drop in quarterly profits and a stock price fall of over 8%, erasing $100 million in shareholder value. Regulators became involved, financial reporting was delayed, and the company had to manage the embarrassing spectacle of its supply chain collapsing during peak season while its competitors quietly ate its market share. All of this because someone decided that the busy holiday season was the optimal time to perform untested system migrations.

Facebook Disconnects 2.9 Billion People with One Command

On October 4, 2021, approximately 2.9 billion people discovered that Facebook, Instagram, and WhatsApp – services that collectively represent some of the most critical communication infrastructure on Earth – could vanish in a heartbeat due to a single misconfigured command. During routine maintenance, an engineer sent what seemed like an innocuous command to check capacity on Facebook’s backbone routers. The routers that manage traffic between their data centers. The ones that, you know, connect their entire infrastructure to the internet.

Unfortunately, this command inadvertently disabled Facebook’s Border Gateway Protocol (BGP) routers, severing the company’s data centers from the entire internet. Here’s where it gets darker: an audit tool that should have caught the mistake had a bug of its own and waved the command through. The erroneous command propagated across the entire network before anyone noticed. With the BGP routers offline, Facebook’s DNS servers stopped broadcasting routes to the internet, which meant that when 2.9 billion users tried to access facebook.com, their computers received a response essentially saying “I have no idea where that is.” In many parts of the world, WhatsApp serves as the primary communication method for text messaging and voice calls – Facebook had accidentally disconnected billions of people from their families and friends. The irony was that Facebook’s own internal systems were also affected, hindering the company’s ability to diagnose and fix the problem. Their own tools couldn’t connect to their own infrastructure. It took over six hours to restore service, and the incident made clear that even when you operate at the scale of billions of users, the difference between a thriving global communication network and a complete blackout can be something as simple as a typo in a maintenance command.

The Time Someone Installed a Server in the Men’s Bathroom

If the stories above involve mistakes at grand scale, sometimes the best entertainment comes from the sheer stupidity of basic decision-making. A consultant instructing a construction site to “install the server in a secure and well-ventilated location” seems like straightforward guidance. The project manager, apparently taking this instruction as creative license, installed the equipment inside the men’s bathroom in a construction site trailer. This isn’t a metaphor. The actual server equipment sat in an actual bathroom, vulnerable to moisture, temperature fluctuations, lack of security, and the general indignity of sharing a restroom.

The Server Room Entry Through the Women’s Bathroom

On the topic of bathroom-based infrastructure disasters, when one company switched office floors but needed to maintain their server room on the old floor, the solution they devised deserves recognition for its commitment to the absurd. Since they couldn’t walk through the offices of the new tenants, the building’s management agreed to seal off the server room from the old office and construct a new entrance. There was only one available route: through the handicapped stall in the women’s bathroom. Somehow, someone signed off on this plan…

The Bic Pen Vulnerability

A school installed a sophisticated push-button code lock on their server room door – clearly important equipment warranting security upgrades. However, they made one minor oversight: when installing the push-button lock, they removed the old key lock cylinder, leaving a hole in the door where the key mechanism used to sit. Someone discovered that inserting a standard Bic pen into this hole opened the lock mechanism. Instant access to the entire server room, obtained through the most trivially available office supply. This incident perfectly encapsulates the principle that security theater can be defeated by thinking creatively about where security measures actually end.

Rubber Mallets?

Sometimes enterprise failures involve not the systems themselves but the people trying to save them. In one incident, a major outage required emergency access to secured safes containing recovery credentials. Multiple administrators arrived with tools ready to force entry. The only hammers available were rubber mallets – completely ineffective against actual safes designed to resist precisely this sort of thing. Photos from the incident show administrators striking the safes repeatedly while the mallets bounced off harmlessly. The solution? They called a locksmith, who arrived, assessed the situation with the faintest hint of professional disappointment, and opened the safe in seconds using just a screwdriver.

The Plastic Sensor Blocker

Sometimes the Enterprise Gods decide to test humans with riddles disguised as infrastructure issues. One team received an overheating alert suggesting a potential fire in the data center – a proper panic situation. The investigation revealed that a piece of plastic was obstructing the temperature sensor of a networking device. That’s it. A piece of plastic. The sensor was lying, the alert was screaming, and the entire team was running around preparing for a catastrophe that existed only in measurement error.

National Grid’s $585 Million Leap of Faith

National Grid, a gas and electric company serving millions of customers, embarked on a new ERP implementation in November 2012 – just one week after Hurricane Sandy had devastated the Northeast. The timeline was immovable because missing the deadline would cost $50 million in overruns and require regulatory approval delaying everything five more months. The system wasn’t ready. The team deployed it anyway. The results achieved a remarkable level of dysfunction. Employees received random payment amounts – some underpaid, some overpaid, and some not paid at all. The company spent $8 million on overpayments alone, and $12 million on settlements due to underpayment and erroneous deductions. National Grid couldn’t process over 15,000 vendor invoices. The system that was supposed to close their books in four days suddenly required 43 days, destroying cash flow opportunities that the company depended on for short-term financing. The total disaster cost National Grid approximately $585 million when factoring in the remediation effort – the company ended up hiring around 850 contractors at over $30 million per month to fix the disaster they had created. They sued Wipro, the implementation partner, which eventually paid $75 million to settle.

Nike’s $400 Million Sneaker Disaster

In 2000, Nike spent $400 million on a new ERP system to overhaul its supply chain and inventory management. The implementation involved the now-familiar mix of inadequate testing and unrealistic project timelines. What resulted was a system that made profoundly stupid inventory decisions. Nike’s automated system, now making decisions at scale, ordered massive quantities of low-selling sneakers while starving inventory of high-demand products. The episode cost Nike roughly $100 million in lost sales in the quarter following implementation, its stock price dropped by about 20%, and the firm faced class-action lawsuits. Nike ultimately had to invest another five years and $400 million in the project to fix the original $400 million mistake.

The Ansible Shutdown That Wasn’t

During a data center incident investigation, an entire facility suddenly appeared to lose power. The team initially hypothesized catastrophic power failure, but the on-site technician insisted there was no power issue because the lights were functioning. The lights. The team was talking about LED indicators on equipment; the technician was referring to overhead room lighting. After extensive analysis, the team discovered the actual cause: someone had used Ansible automation to shut down what they believed was a new, non-production system model. It turned out the entire data center was actually running on that model.

The Human Error That Defines the Industry

Research from the Uptime Institute found that human error causes approximately 70% of data center issues – not from malice but from people being in the wrong place at the wrong time, making decisions they weren’t equipped to handle, or simply overlooking obvious mistakes. Data center studies show that staff working shifts longer than 10 hours experience significantly higher error rates, with 12-hour shifts showing 38% higher injury and error rates compared to 8-hour shifts. More recent research indicates that 64% of IT experts recognize unintentional employee deletions as the primary data threat to their organization, surpassing external cyberattacks and malicious actors. Accidental deletion or overwriting of databases represents the most common human error leading to data catastrophes, and many organizations have experienced incidents that cost weeks or months of recovery time. The common thread through all these stories is that enterprise systems are ultimately operated by humans – creative, fallible, occasionally brilliant humans who can accomplish the most extraordinary feats of engineering and the most jaw-droppingly obvious mistakes with approximately equal frequency. The difference between a robust enterprise system and a spectacular failure often depends on whether someone deployed code to the eighth server, whether the team scheduled a go-live during the busiest season, or whether someone remembered that plastic conducts heat poorly and shouldn’t block temperature sensors. These disasters remind organizations that the most sophisticated safeguard isn’t better technology – it’s recognition that human error is not something that can be eliminated, only designed for and mitigated. The question isn’t whether humans will make mistakes; it’s whether the system is designed well enough to survive when they inevitably do.

References:

  1. https://www.swarnendu.de/blog/the-knight-capitals-automation-failure-case-study/
  2. https://permutehq.com/articles/top-10-worst-erp-failures/?amp=1
  3. https://erp.compare/blogs/unlucky-for-some-the-13-biggest-erp-failures-ever/
  4. https://www.ihf.co.uk/facebook-instagram-outage-by-human-error/
  5. https://www.firemon.com/blog/one-simple-misconfiguration-2-9-billion-users-down/
  6. https://www.pingdom.com/blog/data-center-stories-that-will-make-you-laugh-or-cry/
  7. https://www.reddit.com/r/sre/comments/1mwzm09/funniest_incident_youve_had/
  8. https://www.spinnakersupport.com/blog/2023/12/13/erp-implementation-failure/
  9. https://journal.uptimeinstitute.com/long-shifts-in-data-centers-time-to-reconsider/
  10. https://www.fastcompany.com/91434172/data-disasters-and-human-error
  11. https://www.cracked.com/article_141_6-natural-disasters-that-were-caused-by-human-stupidity.html
  12. https://www.webwerks.in/blogs/how-prevent-human-error-data-center
  13. https://www.reddit.com/r/Futurism/comments/1l0yl1p/the_terrifying_theory_of_stupidity_you_were_never/
  14. https://sites.insead.edu/facultyresearch/research/doc.cfm?did=70677
  15. https://www.panorama-consulting.com/top-10-erp-failures/
  16. https://www.reddit.com/r/sysadmin/comments/4bm68h/an_administrator_accidentally_deleted_the/
  17. https://dropbox.tech/infrastructure/disaster-readiness-test-failover-blackhole-sjc
  18. https://nypost.com/2025/10/20/business/amazon-web-services-outage-trolled-as-rehearsal-for-the-end-of-the-internet/
  19. https://learn.microsoft.com/en-us/answers/questions/2123706/i-deleted-the-database-how-can-i-get-it-back
  20. https://siteltd.co.uk/causes-of-data-center-outages/
  21. https://help.ovhcloud.com/csm/en-web-hosting-recover-deleted-database-backup?id=kb_article_view&sysparm_article=KB0064104
  22. https://bridgeheadit.com/understanding-it/wired-for-disaster-the-hidden-risks-of-neglected-structured-cabling
  23. https://www.evolven.com/blog/it-nightmares-and-data-center-horror-stories.html
  24. https://www.qeedio.com/posts-en/when-software-goes-unchecked-financial-giant-knight-capital-nearly-ruined
  25. https://www.sysdig.com/blog/exploit-detect-mitigate-log4j-cve
  26. https://discuss.elastic.co/t/apache-log4j2-remote-code-execution-rce-vulnerability-cve-2021-44228-esa-2021-31/291476
  27. https://knowledge.insead.edu/entrepreneurship/knight-capital-group-did-accidentally-evil-computer-knock-down-trading-house
  28. https://hoffmannmurtaugh.com/blog/why-was-facebook-down/
  29. https://unit42.paloaltonetworks.com/apache-log4j-vulnerability-cve-2021-44228/

Customer Resource Management Is A Superior Term For CRM

Introduction

The acronym CRM has been embedded in business vocabulary for three decades, yet the terminology that defines it remains fundamentally limited in scope and strategic intent. While “Customer Relationship Management” has dominated industry discourse since the 1990s, the term “Customer Resource Management” offers a more accurate and strategically aligned description of what modern CRM systems actually accomplish and what businesses truly need from them.

The Narrowness of “Relationship” as a Strategic Framework

When Tom Siebel and his peers introduced the term “Customer Relationship Management” in the mid-1990s, it represented a genuine advancement from the manual, transaction-focused sales practices that preceded it. The emphasis on “relationship” reflected a customer-centric shift from purely product-oriented business models, aligning with management philosophy pioneers like Peter Drucker, who recognized that “the primary business of every firm is to create and retain customers.” However, relationship-focused terminology carries inherent limitations that obscure the true value proposition of modern CRM systems. The word “relationship” implies a mutual, reciprocal dynamic – a connection built on shared interest, emotional investment, and symmetrical benefit. In reality, CRM systems are fundamentally asymmetrical instruments designed to extract maximum strategic and financial value from customer interactions, data, and lifetime potential. While businesses certainly benefit from improved customer satisfaction, the underlying architecture of CRM is engineered to optimize the organization’s position rather than create genuinely mutual relationships. Calling it a “relationship” management system thus misrepresents the power dynamics and actual intent embedded in these platforms.

Why “Resource” Better Captures Strategic Intent

“Resource” carries significantly more precise and honest connotations. It accurately reflects how contemporary businesses view customers – as valuable assets whose data, behavior patterns, purchasing history, and lifetime value require strategic management and optimization. This terminology aligns with established business theory, particularly resource-based and market-resource perspectives that examine competitive advantage through strategic asset management. Customer information itself has emerged as a critical competitive resource in the digital economy. Academic research explicitly frames customer data and insights as market-based resources that drive strategic advantage, competitive positioning, and financial performance. Organizations now recognize that customer information assets – encompassing accumulated data on behaviors, preferences, interactions, and transaction history – constitute intellectual capital requiring sophisticated management frameworks. By framing CRM as “resource management,” the terminology acknowledges this fundamental business reality without the euphemistic softening that “relationship” provides.

Alignment with System Capabilities

Current CRM systems do far more than foster relationships. They systematize customer intelligence collection, automate data analysis, segment populations for targeted marketing, track lifetime value metrics, optimize acquisition and retention costs, and engineer personalized experiences designed to maximize customer monetization. These capabilities describe resource optimization more accurately than relationship cultivation. When a CRM system automatically calculates which customer service representatives should prioritize high-value clients, or when it segments audiences to deliver targeted messaging designed to increase conversion rates, the system is explicitly managing customers as resources to be allocated based on strategic value. The term “resource” articulates this function with transparency that “relationship” masks. Furthermore, sophisticated CRM implementations now incorporate artificial intelligence to predict customer behavior, identify upsell opportunities, and even determine optimal pricing strategies – all clearly resource optimization activities rather than relationship-building endeavors.

Technical Implementation Reflects Resource Philosophy

The operational architecture of CRM platforms reinforces that they are fundamentally resource management systems rather than relationship platforms.

These systems centralize customer data into unified databases, enabling visibility into resource availability (customer segments), allocation efficiency (sales pipeline optimization), and performance metrics (customer lifetime value, acquisition cost, retention rates). They facilitate cross-departmental collaboration in exploiting customer information assets across marketing, sales, and customer service functions. The analytics and reporting capabilities embedded in CRM systems focus on extracting maximum value from the customer base – identifying which customer segments generate the highest returns, which touchpoints convert most effectively, and where marketing investment yields optimal results. This is classical resource management: understanding asset composition, optimizing allocation, and measuring return on deployed resources.

The term “resource management” honestly describes this operational reality, while “relationship management” obscures it.

Resource Management Acknowledges Power Asymmetry

Modern CRM systems operate within inherently asymmetrical relationships. Businesses deploy increasingly sophisticated data collection technologies, analytical tools, and artificial intelligence to understand customers in ways customers cannot reciprocate. This power imbalance reflects genuine resource control dynamics rather than relationship mutuality. The resource management framework explicitly acknowledges that customers, while valuable to organizations, cannot be “owned” by firms in traditional property terms. Yet they represent controllable, exploitable assets that businesses can strategically develop, segment, prioritize, and optimize. This distinction matters for organizational clarity. When leadership understands CRM as resource management, it frames the system correctly as an instrument for extracting customer value rather than as a sentimental endeavor to build deeper connections. Companies that operate from this perspective make clearer strategic decisions about where to allocate resources, which customer segments justify investment, and how to optimize the entire customer lifecycle for maximum return.

Evolution Beyond Outdated Terminology

The enterprise systems landscape has evolved substantially since the 1990s.

Customer experience management (CEM), which focuses on emotional connection and journey optimization, now often sits alongside CRM in sophisticated implementations. This distinction clarifies that CRM handles transactional resource optimization while CEM addresses experiential architecture – though both operate within asymmetrical business frameworks. Calling CRM “customer resource management” distinguishes it clearly from aspirational relationship-building frameworks while maintaining technical accuracy about what the system actually does. Furthermore, as CRM systems increasingly incorporate agentic AI capabilities, multi-resource orchestration, and enterprise-wide data integration, the “relationship” framing becomes progressively inadequate. These systems now manage customer resources alongside other enterprise resources – inventory, personnel, operational capacity – within integrated enterprise resource planning ecosystems. The resource management framework accommodates this integration naturally, while the relationship terminology becomes increasingly anachronistic.

Strategic Clarity for Digital Transformation

Organizations undergoing digital transformation benefit from precise terminology that reflects actual system function rather than aspirational messaging. When executives understand CRM as customer resource management, it clarifies that the system’s purpose involves optimizing customer lifetime value, segmenting populations for differential treatment based on resource contribution, automating customer intelligence collection, and engineering interactions designed to maximize organizational capture of customer-generated value. This clarity enables more effective resource allocation decisions, more honest internal stakeholder alignment, and more transparent customer communication about data usage. The shift from “relationship” to “resource” terminology also acknowledges the sophisticated role customer data and analytics now play in competitive strategy. Business leaders managing digital transformation increasingly recognize that customer information represents a strategic asset class requiring governance frameworks similar to other critical organizational resources.

Terminology that reflects this reality supports more sophisticated strategic thinking than outdated relationship-focused language.

Conclusion

The term “Customer Resource Management” provides substantially more strategic accuracy, operational honesty, and forward-looking precision than “Customer Relationship Management.” While the relationship language served useful purposes in the 1990s when it represented genuine progress beyond purely transactional approaches, contemporary business reality has evolved far beyond that framework. Modern CRM systems manage customer information assets, optimize resource allocation across customer segments, engineer personalized experiences designed for maximum value extraction, and integrate customer data into enterprise-wide resource orchestration. The resource management terminology captures these realities without the euphemistic softening that relationship language provides. As organizations continue advancing their digital transformation initiatives and recognizing customers as critical strategic resources deserving sophisticated management frameworks, adopting the resource management terminology will provide clearer strategic alignment, more honest stakeholder communication, and more accurate system positioning within the broader enterprise architecture.

References:

  1. https://www.breakcold.com/explain/crm-customer-relationship-management
  2. https://www.nice.com/glossary/what-is-contact-center-crm-customer-relationship-management
  3. https://www.appvizer.com/magazine/customer/client-relationship-mgt/history-of-crm
  4. https://localcrm.com/crm-the-history-evolution-of-crm/
  5. https://www.sciencedirect.com/science/article/abs/pii/S0019850120300389
  6. https://www.strategie-aims.com/conferences/28-xxvieme-conference-de-l-aims/communications/4755-customers-as-a-resource-a-new-perspective-in-strategic-management/download
  7. https://www.techtarget.com/searchcustomerexperience/definition/CRM-customer-relationship-management
  8. https://business.adobe.com/blog/basics/customer-relationship-management-what-it-is-how-it-works-why-it-is-important
  9. https://www.ibm.com/think/topics/crm
  10. https://asana.com/resources/crm-strategy
  11. https://www.method.me/blog/customer-experience-management-vs-customer-relationship-management/
  12. https://sashandcompany.com/strategic-communication/customer-experience-management-vs-customer-relationship-management/
  13. https://www.planetcrust.com/customer-resource-management-v-crm/
  14. https://www.netsuite.com/portal/resource/articles/erp/crm-strategies.shtml
  15. https://www.linkedin.com/pulse/resource-management-crucial-corporations-from-esg-alusch-h-amoghli-xas5f
  16. https://www.chemicalindustryjournal.co.uk/srm-strategic-resource-management-can-help-you-harness-the-full-power-of-your-data
  17. https://www.pipedrive.com/en/blog/customer-resource-management
  18. https://gedys.com/en/blog/crm-definition
  19. https://www.tiny.cloud/blog/crm-history-market-future/
  20. https://prismatic-technologies.com/blog/customer-resource-management/
  21. https://pmc.ncbi.nlm.nih.gov/articles/PMC8612906/
  22. https://www.netsuite.com/portal/resource/articles/crm/what-is-crm.shtml
  23. https://www.nutshell.com/crm/resources/crm-terminology
  24. https://www.dataguard.com/blog/customer-relationship-management-crm/
  25. https://userpilot.com/blog/customer-experience-management-vs-customer-relationship-management/
  26. https://www.salesforce.com/eu/crm/strategy/
  27. https://www.runn.io/blog/data-and-resource-management
  28. https://timreview.ca/article/534
  29. https://monday.com/blog/crm-and-sales/crm-strategy/r

Achieving Enterprise Data Sovereignty in 2025

Introduction

The concentration of western data in United States-controlled infrastructure has emerged as one of the most pressing challenges facing European and global enterprises in 2025. With approximately 92 percent of western data stored on US-owned clouds and infrastructure, businesses across Europe, Canada, Australia, and other western democracies face a stark reality: their most valuable digital assets remain subject to foreign jurisdiction, extraterritorial surveillance laws, and geopolitical uncertainties that threaten operational autonomy. This dependency extends far beyond mere technical considerations. American tech giants Amazon Web Services, Microsoft Azure, and Google Cloud control roughly 70 percent of Europe’s cloud infrastructure, creating what French officials have characterized as a form of digital dependency akin to addiction. In Belgium, Microsoft commands 70 percent of cloud infrastructure market share. Sweden has entrusted over 57 percent of its public digital infrastructure, including cities and government services, to Microsoft mail servers. Similar patterns emerge across Finland (77 percent), the Netherlands (60 percent), and Norway (64 percent).

The challenge intensifies when examining the legal landscape. The United States CLOUD Act, enacted in 2018, grants American federal law enforcement agencies authority to compel US-based technology companies to provide requested data stored anywhere globally, regardless of physical location. This extraterritorial reach directly conflicts with European data protection principles enshrined in the General Data Protection Regulation. Similarly, the Foreign Intelligence Surveillance Act Section 702 authorizes warrantless collection of foreign communications by US intelligence agencies, targeting non-US persons located outside American territory for national security purposes.

Understanding the Sovereignty Gap

Data sovereignty fundamentally represents the principle that digital information remains subject to the laws and governance structures of the jurisdiction where it originates or resides. For western businesses operating under increasingly stringent privacy regulations, this concept has evolved from theoretical concern to operational imperative. The European Union alone has implemented a comprehensive regulatory framework encompassing the Data Act, Data Governance Act, Digital Operational Resilience Act, and GDPR, collectively designed to safeguard European citizens’ data rights while promoting digital autonomy. The current dependency on American cloud infrastructure creates multiple vulnerability vectors. Even when data physically resides within European data centers, organizations utilizing US-based providers remain exposed to American legal jurisdiction. US courts can issue production orders requiring disclosure of customer data held by American companies, irrespective of storage location. Under the CLOUD Act, these production orders apply to any data within a cloud provider’s control, while FISA Section 702 enables the National Security Agency to issue directives compelling US cloud providers’ parent companies to disclose customer data stored in Europe. This jurisdictional complexity extends beyond government surveillance concerns. Organizations face compliance challenges when American laws conflict with European regulations. The Court of Justice of the European Union’s landmark Schrems II decision invalidated the EU-US Privacy Shield framework, declaring that FISA Section 702’s lack of judicial oversight and inadequate redress mechanisms for EU citizens make US privacy protections insufficient under GDPR standards. While the EU-US Data Privacy Framework attempts to address these concerns through binding safeguards limiting US intelligence authorities’ data access, legal challenges persist, and the possibility of additional court cases continues to create uncertainty.

European Sovereign Cloud Infrastructure

Europe has responded to these challenges through coordinated initiatives designed to reclaim digital autonomy. The Gaia-X project, launched in 2019 by German Minister of Economic Affairs Peter Altmaier and French counterpart Bruno Le Maire, represents the most ambitious attempt to develop a federated secure data infrastructure for Europe. Rather than creating a competing cloud service provider, Gaia-X aims to establish standards, rules, and verification frameworks enabling transparent data exchange while maintaining European sovereignty principles. The initiative has progressed substantially since its inception. Participants now access a comprehensive trust framework defining secure data exchange protocols between different services. The Loire release, presented at the official Gaia-X Summit, provides businesses with technical tools implementing Gaia-X standards through automated compliance with regulatory requirements. Multiple lighthouse projects test Gaia-X technology across industries including agriculture, automotive, and energy sectors. Since 2021, over 200 million euros in funding has supported these projects, with the initiative expanding beyond European borders to include pilots in Japan and Korea. Complementing Gaia-X, Europe has witnessed the emergence of truly sovereign cloud providers headquartered and operated entirely within European Union jurisdiction. OVHcloud from France, Scaleway from France, T-Systems from Germany, Hetzner from Germany, UpCloud from Finland, and Exoscale from Switzerland and Austria exemplify this model. These providers offer mature Infrastructure-as-a-Service and increasingly capable Platform-as-a-Service solutions, with their primary advantage residing in enhanced data control, clearer regulatory pathways, and predictable long-term operating conditions. Unlike American hyperscalers establishing European subsidiaries, these organizations maintain no operational ties to United States jurisdiction, creating formidable barriers against foreign data access requests. The European Commission has formalized sovereignty assessment through its Cloud Sovereignty Framework, which evaluates cloud services across eight objectives spanning strategic alignment, legal jurisdiction, operational sovereignty, supply chain transparency, technological openness, security, compliance with EU law, and environmental sustainability. Services receive SEAL rankings from zero (no sovereignty) to four (full digital sovereignty), with the framework explicitly designed for government procurement decisions. A 180 million euro tender launched in 2025 selects up to four providers meeting minimum levels across all eight objectives, with any offer failing criterion thresholds automatically rejected.
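
The tender’s selection rule is mechanically simple: an offer that misses the minimum level on any one of the eight objectives is rejected regardless of how well it scores elsewhere. A minimal sketch, with hypothetical scores and an assumed per-objective floor:

```python
# Minimal sketch of the threshold rule described above; scores and the
# floor value are hypothetical illustrations on the 0-4 SEAL scale.
OBJECTIVES = [
    "strategic_alignment", "legal_jurisdiction", "operational_sovereignty",
    "supply_chain", "technological_openness", "security",
    "eu_law_compliance", "sustainability",
]
MINIMUM = 2  # assumed per-objective floor

def eligible(offer):
    """An offer must meet the floor on every objective to stay in the race."""
    return all(offer.get(obj, 0) >= MINIMUM for obj in OBJECTIVES)

offers = {
    "provider_a": dict.fromkeys(OBJECTIVES, 3),
    "provider_b": {**dict.fromkeys(OBJECTIVES, 4), "legal_jurisdiction": 1},
}
print({name: eligible(scores) for name, scores in offers.items()})
# provider_b fails despite high scores everywhere else.
```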

Strategic Pathways to Data Sovereignty

Western businesses pursuing data sovereignty must navigate complex technical and organizational transitions. The most effective approach combines multiple strategies tailored to specific workload characteristics, regulatory requirements, and operational constraints.

  • Hybrid Cloud Architectures represent the pragmatic middle ground, enabling organizations to maintain sensitive data within sovereign environments while leveraging public cloud capabilities for less critical workloads. This model involves building private on-premises environments securing highly sensitive data while benefiting from hyperscalers’ advanced technology for appropriate use cases. Private clouds and edge computing can satisfy requirements for data protection, geographical localization, control, access, and security. By nature, private clouds located within national borders and dedicated to specific customers provide the core building blocks required for cloud sovereignty, since workloads and data fall under domestic jurisdiction while remaining fully disconnected from hyperscalers. However, hybrid approaches require careful workload classification. Organizations must determine which data can remain on public cloud infrastructure versus which data must migrate to on-premises environments. This decision framework typically considers data sensitivity classifications, regulatory compliance requirements, performance characteristics, and cost implications. Studies indicate that 19 percent of companies plan to increase on-premises investments, while 13 percent have slowed or completely stopped cloud migrations, driven primarily by control requirements rather than cost considerations.
  • Multi-Cloud Strategies distribute workloads across multiple cloud providers, reducing single-vendor dependency while optimizing for specific regional sovereignty requirements. According to 2024 research, over 92 percent of large enterprises now operate in multi-cloud environments, leveraging services from AWS, Microsoft Azure, Google Cloud Platform, and regional providers based on geographical compliance needs. This approach allows sensitive data deployment on European sovereign cloud infrastructure while utilizing hyperscaler services for global-facing applications or compute-intensive workloads. The multi-cloud model addresses data sovereignty by enabling organizations to select providers with data centers in specific regions meeting local legal requirements. For example, enterprises might utilize OVHcloud or Scaleway for European Union citizen data requiring GDPR compliance, AWS for United States operations, and regional providers for Asia-Pacific markets. However, multi-cloud architectures introduce complexity requiring sophisticated orchestration tools like Kubernetes, Terraform, and Ansible managing deployments across environments, alongside unified monitoring solutions providing insights into application performance.
  • Encryption Key Management emerges as perhaps the most critical technical control for organizations unable to fully repatriate from US cloud providers. Effective key management ensures that even if cloud providers face legal compulsion to provide access, encrypted data remains protected without customer-controlled decryption keys. Solutions like Microsoft Purview Double Key Encryption employ two separate encryption keys, one controlled by Microsoft and one exclusively controlled by the customer, where data can only be decrypted when both keys combine (a minimal sketch of this two-key idea follows this list). Critically, all encryption and decryption occurs locally on client devices before data transmission to Microsoft’s cloud, ensuring only encrypted versions ever leave customer environments. Advanced key management implementations incorporate Bring Your Own Key or Hold Your Own Key models empowering enterprise data sovereignty in cloud-hosted environments. These approaches enable organizations to maintain encryption keys within specific geographic locations ensuring adherence to data sovereignty laws, with geo-fencing capabilities preventing key access from unauthorized jurisdictions. The most sophisticated solutions employ secure Multi-Party Computation for key distribution mitigating single points of compromise, while offering deployment flexibility across on-premises, Software-as-a-Service, or hybrid models.
  • Cloud Repatriation has accelerated dramatically, with 83 percent of enterprises planning to repatriate workloads from public to private or on-premises environments in 2024, compared to just 43 percent in 2021. This trend reflects converging factors including exploding AI-driven costs, hybrid cloud infrastructure maturation, and evolving sovereignty regulations. Organizations cite security and compliance hurdles as primary motivations, with 51 percent of decision makers identifying security issues as the dominant reason for repatriation. Data sovereignty requirements specifically drive repatriation decisions, as expanding global regulations govern data location. Sensitive information including personally identifiable information, medical records, and financial records must remain physically stored within specific geographic boundaries. Repatriation enables businesses to align with local mandates while maintaining compliance more effectively than complex multi-jurisdictional cloud arrangements. Rather than wholesale cloud abandonment, repatriation typically involves strategic migration of specific workloads, with organizations maintaining cloud-based services where they deliver clear value while bringing sovereignty-sensitive workloads back under direct control.
  • Low-Code and Open-Source Platforms provide compelling sovereignty enablers by democratizing development capabilities and reducing dependence on foreign enterprise software vendors. Low-code platforms like Corteza allow organizations to build custom enterprise applications resembling Salesforce, Microsoft Dynamics, SAP, and Oracle NetSuite without proprietary licensing restrictions. These platforms accelerate development by 60 to 80 percent while preserving sovereignty through internal solution development addressing specific business needs while maintaining data control and operational autonomy. Open-source enterprise resource systems including Odoo, ERPNext, Dolibarr, and Apache OFBiz offer European alternatives to American proprietary software. These solutions provide full transparency, control, and flexibility without hidden costs or forced updates. Organizations decide how technology operates and where it deploys, rather than accepting terms dictated by foreign corporations. European open-source initiatives like openDesk from Zentrum Digitale Souveränität demonstrate that Europe can build robust digital ecosystems with tools including XWiki, CryptPad, OpenProject, and Nextcloud serving as privacy-oriented alternatives to platforms outside Europe.
  • Edge Computing addresses data sovereignty by processing and storing information closer to its origin rather than centralized data facilities, helping maintain data within national borders subject to local laws. Edge computing reduces risks associated with cross-border data transfers while providing advantages including reduced latency, improved network efficiency, and superior real-time data processing capabilities. For industries requiring low-latency applications or facing stringent data localization requirements, edge architectures enable compliance while maintaining operational performance.
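
To illustrate the two-key idea referenced in the Encryption Key Management item above, the sketch below chains two symmetric keys so that neither the provider-held key nor the customer-held key alone can recover the plaintext. It uses Python’s cryptography package, ignores key storage, HSMs, and geo-fencing, and shows the principle rather than Microsoft’s actual implementation:

```python
# Minimal sketch of the double-key principle: both keys are required,
# and all encryption happens locally before anything leaves the client.
from cryptography.fernet import Fernet

customer_key = Fernet.generate_key()   # stays in the customer's jurisdiction
provider_key = Fernet.generate_key()   # held by the cloud provider

def encrypt_locally(plaintext):
    """Encrypt under both keys; only ciphertext ever leaves the client."""
    inner = Fernet(provider_key).encrypt(plaintext)
    return Fernet(customer_key).encrypt(inner)

def decrypt(ciphertext):
    """Decryption fails unless both keys are available."""
    inner = Fernet(customer_key).decrypt(ciphertext)
    return Fernet(provider_key).decrypt(inner)

assert decrypt(encrypt_locally(b"EU citizen record")) == b"EU citizen record"
```

A provider served with a production order can hand over only ciphertext and its own key; without the customer-held key, the data remains unreadable.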

Navigating Regulatory Complexity

Western businesses must align data sovereignty strategies with evolving regulatory frameworks spanning multiple jurisdictions. The European Union’s comprehensive approach encompasses GDPR governing personal data processing, the Network and Information Systems Directive 2 (NIS2) enhancing cybersecurity across essential sectors, and the Digital Operational Resilience Act (DORA) ensuring financial entities can withstand ICT-related disruptions. These regulations exhibit notable intersections, particularly regarding risk management, incident reporting, and security emphasis. Risk management strategies advocated by NIS2 and operational resilience requirements of DORA complement each other, while GDPR’s data protection by design and default requirements support cybersecurity measures outlined in NIS2. Organizations implementing unified compliance platforms can address multiple regulatory requirements simultaneously, eliminating gaps created by fragmented systems failing to communicate effectively. The 144 countries worldwide that have enacted data protection and sovereignty laws create additional complexity for multinational organizations. Each jurisdiction maintains unique requirements regarding data residency, cross-border transfers, encryption standards, and governmental access provisions. Western businesses must conduct comprehensive Transfer Impact Assessments when moving data internationally, often implementing supplementary measures including strong encryption with keys controlled within appropriate jurisdictions.

Building Organizational Capabilities

Achieving data sovereignty requires more than technology deployment. Organizations must develop comprehensive governance frameworks, cultivate internal expertise, and foster cultural shifts recognizing data sovereignty as a strategic imperative rather than a compliance burden. Successful implementations begin with thorough data classification systems identifying which information requires sovereign treatment based on sensitivity levels, regulatory obligations, and business criticality. This classification drives decisions regarding appropriate storage locations, encryption requirements, access controls, and retention policies. Organizations should establish clear data lineage tracking, documenting where information originates, how it flows through systems, where it resides, and who accesses it throughout its lifecycle. Vendor selection processes must incorporate sovereignty considerations as primary evaluation criteria. Organizations should assess potential providers across multiple dimensions including legal jurisdiction and ownership structure, operational control and personnel nationality, data center locations and residency guarantees, encryption and key management approaches, contractual commitments regarding data access, audit rights and transparency provisions, and exit strategies preventing vendor lock-in. For truly sovereignty-sensitive workloads, preference should favor providers headquartered within appropriate jurisdictions without subsidiaries or dependencies exposing them to foreign legal requirements. Training and awareness programs ensure personnel understand sovereignty requirements and their individual responsibilities. This extends beyond technical teams to encompass business units, procurement departments, legal counsel, and executive leadership. Organizations should develop clear policies governing data handling, establish approval workflows for cloud service adoption, and implement monitoring mechanisms detecting shadow IT introducing sovereignty risks.
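
Classification only drives decisions if it is encoded where systems can act on it. A minimal sketch, with hypothetical labels and placement rules, mapping classification to storage location and key-control requirements:

```python
# Minimal sketch: hypothetical sensitivity labels mapped to placement
# and key-control policy, so classification directly drives decisions.
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    SOVEREIGN = 3  # PII, medical, financial: must stay in-jurisdiction

POLICY = {
    Sensitivity.PUBLIC:    {"location": "any_region", "customer_keys": False},
    Sensitivity.INTERNAL:  {"location": "eu_region", "customer_keys": True},
    Sensitivity.SOVEREIGN: {"location": "on_prem_or_eu_sovereign",
                            "customer_keys": True},
}

def placement(label):
    """Where a dataset may live and whether customer-held keys are required."""
    return POLICY[label]

print(placement(Sensitivity.SOVEREIGN))
# {'location': 'on_prem_or_eu_sovereign', 'customer_keys': True}
```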

Looking to the Future

Western businesses confronting the reality that 92 percent of their data resides on US-owned infrastructure face complex but navigable challenges.

Achieving genuine data sovereignty requires strategic commitment extending beyond superficial measures. Organizations cannot rely solely on American hyperscalers establishing European subsidiaries or sovereign cloud offerings, as fundamental jurisdictional conflicts remain unresolved despite billions in infrastructure investment. The path forward demands pragmatic, multi-layered approaches combining European sovereign cloud providers for sensitive workloads, hybrid architectures maintaining critical data on-premises, robust encryption with customer-controlled key management, and strategic workload repatriation where appropriate. Success requires treating sovereignty as an ongoing program rather than a one-time project, with continuous assessment as regulatory landscapes evolve, technologies mature, and geopolitical dynamics shift. The sovereign cloud market demonstrates this priority’s commercial significance, with the global market valued at 123 billion USD in 2024 and projected to reach 824 billion USD by 2033. Europe leads adoption, with 84 percent of European organizations using or planning to use sovereign cloud solutions. This momentum reflects growing recognition that digital sovereignty constitutes not merely regulatory compliance but a competitive advantage, a customer trust differentiator, and a foundation for innovation in an increasingly fragmented digital world. Western businesses possessing clarity regarding sovereignty objectives, technical capabilities for implementation, and organizational commitment required for sustained transformation can reclaim control over their digital destinies. The concentration of data in American infrastructure represents the current state, not an inevitable future. Through deliberate strategy, appropriate technology selection, and unwavering focus on sovereignty principles, enterprises can achieve operational autonomy while maintaining access to cloud computing’s transformative capabilities.
