Rethinking the Cloud-First Mandate: Why Modern Enterprises Are Rebalancing Towards On-Prem and Edge

This blog post was originally published on Avassa's blog and is republished with permission. It was written by Stefan Vallin (Avassa) and Cristian Klein (Elastisys).

From an information security perspective, 2025 started rough. First came power and communication cable cuts in the Baltic Sea, reminding Finland why it’s a good idea to ensure all essential services withstand connectivity outages. Then came rising geopolitical tensions, which led Norwegian and Danish government agencies to draw up plans for migrating away from US cloud providers. And as if that wasn’t enough, the power outage on the Iberian Peninsula impacted critical infrastructure, including emergency services. For anyone whose role description includes “risk management”, these events are a clear trigger to re-evaluate data security measures.

In this article, we argue that the era of cloud-first is over and explore why and how to pivot to local-first, and perhaps even offline-first, principles. The article is aimed at CTOs, CIOs, senior management, and everyone responsible for making sure that data critical to the operations of a business retains its integrity, availability, and confidentiality, in that order.

💡 Throughout the article, we’ll refer to cloud-first as the prevailing model where systems are built to depend on centralized cloud infrastructure. In contrast, local-first design puts data and processing close to where it’s used, and offline-first goes further by assuming no internet access at all. We also touch on digital sovereignty—the idea that nations and organizations must retain control over their digital infrastructure and data, independent of foreign influence or third-party platforms.

The Cloud-Centric Era: A Quick Recap

Over the past decade, public cloud platforms have become the go-to destination for enterprise workloads. The cloud offers agility, scalability, and speed—often at a lower upfront cost.

This architecture made sense—it abstracted away infrastructure-centric details, reduced time-to-market, and allowed companies to double down on innovation.

So, what’s changed?

The Cloud’s Growing Pains

As enterprises leaned into cloud-centric models, they started encountering limitations.

Compliance and Data Sovereignty Headaches

For the past decade, the European Union (EU) has recognized that IT is no longer a support function run by the geeks in the basement: It is the backbone of a democracy. As part of its digital strategy, the EU has come up with many regulations, two of which we’ll explore here: GDPR and NIS2.

GDPR and its impact on data placement

GDPR requires that personal data, meaning any information that can be used to identify a person, must be kept secure. But what does “secure” actually mean? The answer remains unclear. For years, transferring data from the EU to the U.S. was legally uncertain. In 2023, a new transatlantic framework, underpinned by a U.S. Executive Order, attempted to address this, and for a while the rules seemed easier for enterprises to interpret. But today, many doubt that this framework will permanently resolve the uncertainty around where data may be stored. Several EU governments are already developing backup plans in case the current framework fails.

NIS2 and how information security can no longer be neglected

NIS2 regulates the information and IT systems needed to provide essential services, such as power distribution, running water, transportation, and healthcare. In essence, enterprises need to make informed risk-management decisions and, at the very least, fulfill the ten NIS2 minimum requirements. One of these requirements is business continuity: essential business processes, such as supplying electricity to customers or letting patients book a medical appointment, must be an integral consideration whenever applications are built or bought. This signals that data management must be taken with the utmost seriousness. Why? In Sweden, the proposed implementation of NIS2 (“Cybersäkerhetslagen”) goes as far as banning a CEO from running their business if information security is neglected. The message is clear: information security is as important to a business as paying taxes on time.

Connectivity and cloud availability issues

Relying entirely on the cloud for application availability introduces a critical dependency on continuous connectivity—something that cannot be guaranteed in many real-world environments. Network disruptions, whether due to local outages, ISP issues, broader infrastructure failures, or sanctions, can render cloud-dependent systems inoperable. For businesses with distributed operations, this creates unacceptable risk—even brief interruptions can disrupt sales, delay processes, or compromise safety. Ensuring availability requires rethinking architectures to reduce reliance on always-on cloud access.
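
To make this concrete, here is a minimal store-and-forward sketch in Python: events are committed to a local buffer first, so the application keeps working through an outage, and are synced to the cloud opportunistically once connectivity returns. The endpoint URL and event schema are illustrative, not taken from any particular product.

```python
# Store-and-forward sketch: write locally first, sync to the cloud
# when (and only when) it is reachable. Endpoint and schema are
# placeholders for illustration.
import json
import sqlite3

import requests

CLOUD_ENDPOINT = "https://cloud.example.com/ingest"  # placeholder URL

db = sqlite3.connect("outbox.db")
db.execute("CREATE TABLE IF NOT EXISTS outbox (id INTEGER PRIMARY KEY, payload TEXT)")


def record_event(event: dict) -> None:
    """Always succeeds locally, regardless of connectivity."""
    db.execute("INSERT INTO outbox (payload) VALUES (?)", (json.dumps(event),))
    db.commit()


def flush_outbox() -> None:
    """Best-effort sync: on any network error, keep events for the next attempt."""
    for row_id, payload in db.execute("SELECT id, payload FROM outbox").fetchall():
        try:
            resp = requests.post(CLOUD_ENDPOINT, json=json.loads(payload), timeout=5)
            resp.raise_for_status()
        except requests.RequestException:
            return  # cloud unreachable; retry on the next flush
        db.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
        db.commit()


record_event({"till": 3, "sale_total": 129.50})  # works during an outage
flush_outbox()  # drains the buffer once the network is back
```

The point of the pattern is that the local write path never depends on the WAN link; the cloud becomes a consumer of local data rather than a prerequisite for operating.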

Vendor Lock-in and Exit Barriers

Public cloud platforms are designed to make onboarding seamless—with integrated services, managed infrastructure, and generous credits to get started. However, this ease of entry often comes at the cost of difficult and expensive exits.

Once critical workloads and data pipelines are deeply integrated with a specific cloud provider’s ecosystem (e.g., proprietary databases, serverless runtimes, identity and access systems), migrating away becomes a complex and costly endeavor. The challenges typically include:

  • High data egress fees: Transferring data out of a public cloud can incur substantial costs, especially when dealing with large volumes of logs, archives, backups, or analytics data.
  • Replatforming overhead: Cloud-native services (like AWS Lambda, Azure Cosmos DB, or Google BigQuery) often have no direct equivalents elsewhere. Migrating means re-architecting applications and workflows to fit alternative technologies, which consumes engineering time and introduces risk.
  • Operational dependencies: Monitoring, identity management, secrets handling, and deployment pipelines are often deeply coupled with cloud-native tooling. Breaking that dependency chain requires rebuilding operational infrastructure.

In short, while public cloud providers offer convenience and scalability, they can also create exit barriers that reduce strategic flexibility and availability. 
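
One practical way to lower these exit barriers is to build against open or de-facto-standard APIs rather than provider-specific SDK features. As a hedged illustration, the Python sketch below uses boto3 against the S3-compatible API with an explicit endpoint, so the same code can target an on-prem object store such as MinIO or Ceph as well as AWS S3 itself; the endpoint and credentials are placeholders.

```python
# Portability sketch: the S3 API as a de-facto standard. With an
# explicit endpoint_url, the same client code targets an on-prem
# object store (MinIO, Ceph RGW) or AWS S3 itself. Endpoint and
# credentials below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.internal.example.com",  # on-prem store
    aws_access_key_id="PLACEHOLDER_KEY",
    aws_secret_access_key="PLACEHOLDER_SECRET",
)

# Standard operations work unchanged across any S3-compatible backend.
s3.put_object(Bucket="backups", Key="db/2025-06-01.dump", Body=b"...")
obj = s3.get_object(Bucket="backups", Key="db/2025-06-01.dump")
print(obj["Body"].read())
```

Swapping providers then means changing an endpoint and credentials, not re-architecting the storage layer; the egress bill remains, but the replatforming overhead largely disappears.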

Modern On-Prem and Edge: Securing Application Availability and Data Sovereignty

Today’s on-premises edge environments don’t look like they used to. We’re seeing a new generation of platforms that bring cloud-native agility to private infrastructure without sacrificing control.

Let’s break down what that looks like:

When making or buying the next IT system, the following requirements are must-haves for a future-proof data management strategy.

Built-in Data Sovereignty

When deciding whether to develop IT systems in-house or procure them from external vendors, one critical factor often overlooked is sovereignty—that is, the ability to retain control over data, infrastructure, and strategic dependencies.

Sovereignty in the IT context encompasses more than just data location. It refers to who has legal, technical, and operational control over the systems and data. This includes not only where data is stored, but also under which jurisdiction the processing organization operates, who owns the infrastructure, and who can be compelled to grant access to data.

To ensure sovereignty, enterprises should assess:

  • Jurisdictional exposure: Is the vendor subject to foreign laws such as the U.S. CLOUD Act?
  • Ownership and governance: Is the vendor EU owned and operated, or is control exercised by non-EU entities?
  • Technical autonomy: Can the organization operate the system independently if vendor relationships are disrupted?
  • Geographic resilience: Are the systems hosted in regions with strong protections against natural disasters, power failures, or network outages? 
  • Avoiding illusory protections: Are the technical features really protecting data, or are they just “sovereignty theater” that offers the illusion of control? Who can be compelled to disclose encryption keys, weaken encryption algorithms, or decrypt data?
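
On that last point, a useful litmus test is whether encryption keys ever leave the organization. The sketch below, using the widely available Python cryptography package, illustrates client-side encryption where only ciphertext is handed to a third party; key management (an HSM or a local vault) is out of scope here, and the data is illustrative.

```python
# Client-side encryption sketch: the key is generated and kept
# on-prem, so whoever stores the ciphertext cannot be compelled
# to produce readable data. Key management (HSM, local vault) is
# out of scope; the data is illustrative.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice: generate once, store on-prem only
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"patient record 4711")
# Only the ciphertext ever leaves the organization.
# upload(ciphertext)  # placeholder for the actual upload call

# Decryption is only possible where the key is held.
assert cipher.decrypt(ciphertext) == b"patient record 4711"
```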

Offline-First for Robust Operations 

To meet real-world demands, systems must be designed to operate autonomously—even when disconnected from the cloud. This requires deploying applications closer to where data is generated and decisions are made: at the edge. Ensuring local availability enables uninterrupted operations during network outages and allows seamless synchronization with the cloud once connectivity is restored.

This shift is critical for operational resilience, and it supports data privacy, sovereignty requirements, and modern IT strategies that embrace hybrid and edge-first architectures.

To make this possible, platforms must meet the following key requirements:

  • Full local autonomy: Application lifecycle management—including deployment, fail-over, and restarts—must operate entirely without reliance on a centralized control plane (see the sketch after this list).
  • Distributed artifacts and APIs: All necessary components—such as container images, secrets, and configuration—must be available at each deployment location. Applications must not depend on central cloud APIs to function.
  • Operational visibility and control in offline scenarios: The solution must support deployments, monitoring, and troubleshooting even when sites have limited or intermittent connectivity.
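
As a minimal illustration of the first requirement, full local autonomy, the sketch below implements a site-local supervisor that restarts unhealthy containers by talking only to the local Docker daemon. It assumes the Docker SDK for Python and uses hypothetical container names; a production edge platform would of course go far beyond this.

```python
# Local-autonomy sketch: a site-local supervisor loop. Every decision
# is made against the local Docker daemon; no central cloud control
# plane is involved. Container names are hypothetical.
import time

import docker

CRITICAL_APPS = ["pos-terminal", "local-db"]  # hypothetical names

client = docker.from_env()  # talks to the local Docker daemon only

while True:
    for name in CRITICAL_APPS:
        try:
            container = client.containers.get(name)
            if container.status != "running":
                container.restart()  # local decision, no connectivity needed
        except docker.errors.NotFound:
            pass  # not deployed on this site; nothing to restart
    time.sleep(30)
```

The design choice to emphasize: the restart decision requires zero round-trips to anything outside the site, which is exactly what keeps the application available when the uplink is down.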

Stick to Standards, Avoid Platform-Specific Tooling and APIs

To avoid vendor lock-in and maintain long-term control, organizations can adopt an on-prem-first strategy. This model prioritizes portability, autonomy, and compliance, allowing teams to optimize operations on their terms.

The foundation of this approach is a commitment to open standards. By building on OCI-compliant containers and container registries, organizations ensure their applications can run consistently across private data centers, edge sites, and cloud environments. To strengthen this portability, it’s equally important to adopt industry-standard security protocols and practices, such as OAuth2/OIDC for identity federation and TLS for encrypted communication. These security standards not only improve posture across environments but also prevent entanglement with cloud-specific security models that are hard to replicate elsewhere.
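
As an example of what standards-based identity buys you, the sketch below validates an access token against any OIDC-compliant provider (Keycloak on-prem, or a cloud IdP) using only the standard discovery document and JWKS endpoints, via the PyJWT library. The issuer URL, audience, and token are placeholders.

```python
# Standards-based identity sketch: token validation via OIDC
# discovery and JWKS, with no cloud-specific SDK. Issuer and
# audience are placeholders.
import jwt  # PyJWT
import requests

ISSUER = "https://idp.internal.example.com/realms/prod"  # placeholder

# Every OIDC-compliant provider publishes its configuration here.
config = requests.get(
    f"{ISSUER}/.well-known/openid-configuration", timeout=5
).json()
jwks_client = jwt.PyJWKClient(config["jwks_uri"])


def verify(token: str) -> dict:
    """Return the validated claims, or raise if the token is invalid."""
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience="my-api",  # placeholder audience
        issuer=ISSUER,
    )
```

Because discovery and JWKS are part of the OIDC standard, pointing ISSUER at a different provider is a configuration change, not a rewrite—which is precisely the portability argument made above.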

Conclusion

Over-reliance on public cloud infrastructure introduces significant risks, as demonstrated by recent disruptions including cable cuts, geopolitical instability, and power outages. Cloud-centric models pose challenges such as regulatory compliance issues (e.g., GDPR, NIS2), unreliable connectivity, and vendor lock-in. To address these concerns, this article advocates for architectures that ensure application availability and data sovereignty through offline-first capabilities, open standards, and local execution. A balanced approach is recommended—combining cloud-native innovation with resilient edge and on-premises infrastructure.

About Elastisys and Avassa

Companies require remote lifecycle management, outage resilience, and robust security for their edge workloads.

The Avassa Edge Platform is an application management and operations platform built for resource-constrained edge environments. It enables companies to securely and remotely update, monitor, and troubleshoot edge applications and infrastructure, at scale and where connectivity isn’t taken for granted.

Welkin by Elastisys is a secure, cloud-agnostic application platform for software critical to society. Designed to run anywhere – from public cloud to air-gapped environments – Welkin delivers a turnkey solution built to meet the strictest European requirements for security and compliance. Backed by decades of experience and trusted by industry leaders, Welkin enables teams to innovate with speed and confidence, without compromising resilience or control.

With Welkin at the core and Avassa at the edge, enterprises get a hybrid stack that’s agile, secure, and regulation-ready, giving the flexibility to run workloads wherever it makes the most sense.
