
How enterprise architecture helps with cloud migration

Cloud migration has become essential for organisations modernising their digital services, but the process can quickly become complex, costly and slow when not guided by a clear structure. Studies consistently show that cloud transformations fail when organisations lack visibility, governance and coherent decision-making.  

Enterprise architecture solves these challenges by aligning business strategy, technology, data and operations around a unified migration plan. It provides the frameworks, roadmaps and governance needed to move to the cloud in a controlled, secure and cost-efficient way. It offers teams a clear view of what to migrate, when to migrate it and how to deliver the business outcomes expected from cloud. 

In this blog, we explore how enterprise architecture supports cloud migration, the capabilities it provides and how organisations can use it to deliver faster, safer and more value-driven cloud programmes. 

What enterprise architecture means in cloud migration

Enterprise architecture helps businesses understand how their capabilities, applications, data flows and technology platforms fit together so they can smoothly transition to the cloud. It offers clarity across four core areas: 

  • What systems exist today 
  • How they connect and depend on each other 
  • How the future cloud architecture should operate 
  • Which steps are needed to migrate safely and incrementally. 

Without this context, cloud migration can lead to performance problems, security gaps, cost overruns and delays. Enterprise architecture provides the visibility and alignment needed to avoid these issues. 

Resources such as the Microsoft Cloud Adoption Framework reinforce the importance of architectural foundations, landing zones, security baselines and governance when preparing for cloud migration at enterprise scale. 

Why enterprise architecture is essential for cloud migration

Enterprise architecture enhances cloud migration across strategic, operational and technical dimensions through: 

1. Complete visibility across the application estate

Large organisations often lack a single view of their systems, making cloud migration risky. Enterprise architecture documents: 

  • Application inventories 
  • Dependencies 
  • Data flows 
  • Integration patterns 
  • Infrastructure and hosting 
  • Business criticality. 

This visibility prevents migrations that break key services or overlook important interdependencies. 

2. Prioritisation of workloads for migration

Enterprise architecture identifies which workloads should be: 

  • Rehosted 
  • Re-platformed 
  • Refactored 
  • Replaced 
  • Retired. 

This prevents wasted effort on low-value systems and accelerates benefits by prioritising high-impact workloads. 

3. Defining target cloud architecture

A well-defined cloud architecture reduces long-term cost, improves resilience and accelerates delivery. Enterprise architecture establishes: 

  • Cloud landing zones 
  • Identity and access management 
  • Networking and security models 
  • Platform engineering standards 
  • Data and integration architecture. 

Cloud provider guidance such as the AWS Well-Architected Framework outlines best practices that support this approach, helping organisations achieve secure, efficient and reliable cloud environments. 

4. Strategic alignment to business priorities

Enterprise architecture ensures cloud migration is linked to business priorities, including: 

  • Resilience 
  • Cost optimisation 
  • Customer experience 
  • Regulatory compliance 
  • Agility and innovation 
  • Sustainability targets. 

This turns migration into a strategic programme, not just a technical activity.

5. Strong governance and decision-making 

Enterprise architecture establishes guardrails that: 

  • Remove duplication 
  • Enforce tagging and cost allocation 
  • Standardise cloud patterns 
  • Improve design quality 
  • Ensure compliance with organisation-wide standards. 

Frameworks like the Open Group’s TOGAF standard support consistent enterprise architecture governance across the organisation. 

6. Better risk management and security

Enterprise architects plan for: 

  • Secure landing zones 
  • Identity and access control 
  • Encryption and data residency 
  • Compliance requirements 
  • Resilience and disaster recovery. 

Guidance such as the NCSC cloud security collection strengthens these architectural decisions and helps organisations adopt secure cloud services. 

7. Cost control and value realisation

Enterprise architecture is crucial for cloud cost optimisation because it defines efficient architectures that avoid waste. It supports: 

  • Rightsizing decisions 
  • Refactoring choices 
  • Lifecycle governance 
  • FinOps alignment 
  • Workload placement strategies. 

This ensures cloud spend remains predictable and aligned with business value. 

Key enterprise architecture practices that accelerate migration

1. Portfolio assessment and rationalisation

Enterprise architecture evaluates: 

  • Application value 
  • Lifecycle stage 
  • Fitness for cloud 
  • Risk and complexity 
  • Technical debt. 

This prevents migrating applications that should be modernised, consolidated or retired instead. 

2. Cloud readiness assessments

Readiness assessments evaluate: 

  • Code quality 
  • Performance and scalability needs 
  • Security posture 
  • Compliance requirements 
  • Integration and data dependencies. 

These insights inform accurate migration strategies and help teams choose the right approach. 

3. Target state cloud architecture

Enterprise architecture defines the target state, including: 

  • Cloud landing zones 
  • Identity, access and network architecture 
  • Platform engineering 
  • Observability and logging 
  • CI/CD pipelines 
  • Automation standards. 

This ensures consistency across all migration waves. 

4. Business capability alignment

By mapping applications to business capabilities, enterprise architecture ensures migration aligns with organisational goals and modernises the areas that deliver the most value. 

5. Modern data and integration architecture

Cloud migration requires robust integration design. Enterprise architecture helps define: 

  • API-first approaches 
  • Event-driven architecture 
  • Hybrid integration 
  • Data pipelines 
  • Governance and lineage. 

The Google Cloud Architecture Framework offers structured guidance that supports these principles. 

6. Phased migration wave planning

Enterprise architecture supports incremental migration by planning: 

  • Migration waves 
  • Dependency sequencing 
  • Testing and validation 
  • Operational readiness 
  • Change management. 

This reduces risk and improves delivery speed. 
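
To make the dependency sequencing above concrete, here is a minimal sketch that groups workloads into migration waves so that nothing moves before the systems it depends on. The application names and dependency map are invented for illustration; in a real programme they would come from the dependency data captured in the architecture repository.

# Hypothetical dependency map: each application lists the systems it depends on.
dependencies = {
    "crm": ["identity", "reporting"],
    "reporting": ["warehouse"],
    "identity": [],
    "warehouse": [],
    "portal": ["crm", "identity"],
}

def plan_waves(deps):
    """Group workloads into waves so dependencies are always migrated first."""
    remaining = dict(deps)
    waves = []
    migrated = set()
    while remaining:
        # A workload is ready when all of its dependencies have been migrated.
        ready = sorted(app for app, needs in remaining.items()
                       if all(n in migrated for n in needs))
        if not ready:
            raise ValueError(f"Circular dependency among: {sorted(remaining)}")
        waves.append(ready)
        migrated.update(ready)
        for app in ready:
            del remaining[app]
    return waves

for i, wave in enumerate(plan_waves(dependencies), start=1):
    print(f"Wave {i}: {', '.join(wave)}")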

How enterprise architecture reduces cloud migration risks

Enterprise architecture enables organisations to avoid common cloud migration risks, such as: 

  • Downtime, through dependency and impact analysis 
  • Security gaps, by defining robust access and identity models 
  • Cost overruns, by aligning with FinOps and workload sizing 
  • Architecture drift, through strong governance 
  • Integration failures, through complete visibility of data and interfaces 
  • Scope creep, through clear migration sequencing. 

The UK government’s cloud guidance reinforces this structured, architecture-led approach for public sector organisations. 

Enterprise architecture and cost optimisation

Enterprise architecture helps organisations reduce cloud costs through: 

  • Designing efficient cloud architectures 
  • Choosing the right migration pattern 
  • Removing technical debt 
  • Preventing duplication across teams 
  • Optimising data and storage strategies 
  • Enforcing tagging and lifecycle policies 
  • Supporting FinOps capabilities. 

Without enterprise architecture, cloud environments often become fragmented, expensive and difficult to manage. 

Enterprise architecture and AI-ready cloud platforms

AI adoption adds new complexity to cloud estates. Enterprise architecture ensures cloud platforms are AI-ready by defining: 

  • Scalable GPU architectures 
  • Cost efficient AI training environments 
  • Data governance and lineage 
  • Vector database integration 
  • Secure access patterns 
  • Hybrid data strategies. 

This ensures AI is adopted safely, efficiently and sustainably. 

How CACI supports enterprise architecture for cloud migration

CACI delivers robust enterprise architecture and cloud engineering services that accelerate migration while reducing risk, cost and complexity. 

Contact us today to learn more about how our structured architectural approach can help improve your migration quality, accelerate delivery and ensure your cloud investments generate measurable business value.  

AI vs Automation: Finding the right balance in care management software

The care sector is under immense pressure: staff shortages, rising demand and tighter compliance standards have created a perfect storm for providers. In response, many care management software vendors are racing to add artificial intelligence (AI) features, promising smarter decisions, predictive insights and faster outcomes. 

Is outsourcing thinking to algorithms really the answer though? Or does it risk eroding the very foundation of care – trust, safety, and human connection? 

Why the AI rush in care management software?

AI isn’t just a buzzword. It’s being embedded into care management software in ways that sound transformative: 

  • Automatically building care plans by analysing assessments, medical history and wearable data 
  • Predicting risks such as falls or hospital readmissions before they happen 
  • Optimising rosters by matching carers to clients based on skills, continuity and location 
  • Summarising notes and ensuring compliance using natural language processing 

For care providers under pressure to cut admin, stay compliant and deliver person-centred care, these promises are compelling. The narrative is clear: AI will save time, improve outcomes and reduce costs. 

In reality, however, AI in care is still unproven, often opaque and can introduce risks if adopted without clear guardrails. Algorithms are only as good as the data they’re trained on, and in social care, data can be fragmented, inconsistent and context-dependent. When decisions about vulnerable people are delegated to black-box systems, the consequences can be serious: misaligned workflows, compliance gaps and even mistrust among staff and clients. 

The risks of overreliance on AI in care management software 

AI isn’t magic. It’s a set of algorithms trained on data, and in care, that data often comes from fragmented systems, inconsistent records and human interpretation. When decisions about vulnerable people are delegated to unproven tools, the risks multiply: 

Unproven technology

  • Many AI features in care software are still in early stages. Without rigorous testing in real-world settings, outputs can be unreliable, workflows misaligned and operational complexity increased. Care plans built by algorithms may look efficient, but do they truly reflect the individual behind the data? 

Compliance gaps 

  • Regulators like the Care Quality Commission (CQC) emphasise person-centred documentation, accountability and evidence-based decision-making. If AI decisions can’t be explained or audited, providers could face compliance risks. Person-centred care isn’t just a phrase, it’s a legal and ethical requirement that demands human oversight. 

Staff pushback 

  • Care is a human profession. Tools that feel impersonal or difficult to use can create mistrust, lower morale and cause resistance. Technology should empower staff, not alienate them. When carers feel sidelined by algorithms, the essence of care is lost. 

Client experience 

  • Person-centred care is the cornerstone of quality ratings and client satisfaction. Poorly implemented AI can create barriers between carers and clients, undermining trust and connection. A truly person-centred approach means listening, adapting and responding in real time, something no algorithm can fully replicate. 

The missing human element

  • Care isn’t just about tasks; it’s about empathy, intuition and the ability to respond to subtle cues. Experienced carers bring a rich, dynamic understanding shaped by years of hands-on work – something no dataset can capture. Compassion is a uniquely human trait. AI can process information, but it cannot care. 

Automation: The smarter alternative

Instead of chasing hype, CACI believes in automation with accountability – care management software that streamlines admin, reduces errors and frees staff to focus on what matters most: caring for clients. 

Automation works within parameters set by the provider, ensuring transparency and control. It’s innovation without compromise. 

Efficiency without risk 

  • Automated rostering, travel time optimisation and digital care planning reduce admin burden without replacing professional judgment, keeping the person at the centre of every decision.

Compliance built in 

  • Automation ensures accurate records, audit trails and SAF-aligned reporting – critical for inspections and quality assurance. Providers stay in control, not algorithms.

Human-centric design

  •  By removing repetitive tasks, automation gives carers more time for meaningful interactions with clients. Technology should support the relationship between carer and client, not replace it. Person-centred care needs a person. 

Our approach with Certa 

At CACI, we’ve built Certa, our care management software, around three guiding principles: 

Connecting 

  • Bringing people, data and processes together seamlessly. Everyone works from the same trusted source, whether in the office or in the field.

Confirming 

  • Compliance, accuracy and transparency are non-negotiable. Certa helps providers evidence quality standards with ease.

Caring 

  • Technology should never replace empathy. Certa empowers staff to focus on the human side of care. 

From smart rostering and travel optimisation to digital care planning and real-time reporting, Certa automates the complex while preserving the human touch. 

Where does AI fit in? 

AI isn’t the enemy. It has a role, but only when it enhances, not replaces, person-centred care. Predictive analytics, for example, can help identify trends in service demand or flag potential compliance risks. However, these tools must be transparent, tested and always under provider control. 

The safest path is a measured one: 

  • Adopt technology that grows with your service 
  • Keep compliance front and centre 
  • Strengthen relationships between carers and clients 

That’s what Certa delivers. 

The bottom line 

The best care management software combines innovation with empathy. It automates the complex, supports compliance and preserves the human connection that defines quality care. 

AI may be part of the future, but rushing in without a clear strategy can lead to wasted investment, unhappy staff and compromised care. A balanced approach will make all the difference. 

Why Certa makes a difference

Certa is designed for care providers who want technology that works for them. Not the other way around. With features like: 

  • Person-centred care planning 
  • Advanced rostering and travel optimisation 
  • Real-time reporting and SAF-aligned compliance tools 
  • Secure-by-design architecture (ISO27001, Cyber Essentials Plus) 

Certa helps you deliver outstanding care while staying efficient, compliant and connected. Get in touch with us today to find out how automation can help your staff focus on what matters most: providing care. 

Cloud migration challenges: A 2026 guide to risks, strategy & tools

Cloud is now firmly mainstream, with roughly 94% of enterprises using cloud services and a growing majority running over half of their workloads in the cloud. Worldwide end-user spending on public cloud was forecast to reach roughly $723 billion in 2025, underlining just how critical cloud has become to business strategy.  

Yet despite this investment, cloud migration challenges remain stubbornly persistent. One major study found that organisations spend on average 14% more on migration than planned and 38% of migrations are delayed by more than a quarter, driven by complexity, poor planning and skills gaps. Another widely cited report notes that 84% of organisations struggle to manage cloud spend effectively.  

This guide explores the most common cloud migration challenges, why they occur and how to design a migration strategy, tooling approach and operating model that gives you a much higher chance of success. It also demonstrates how CACI’s cloud, engineering and implementation services can support your journey. 

What is cloud migration and why is it so challenging?

Cloud migration is the process of moving applications, data, workloads and underlying infrastructure from on-premises or legacy environments into cloud platforms. It can also include moving between clouds or from one cloud service model to another.

Types of cloud migration

Understanding the main migration patterns is a useful starting point for setting expectations: 

  • Rehost (lift-and-shift): Moving workloads with minimal changes. 
  • Replatform: Making modest optimisations (e.g. managed databases) during migration. 
  • Refactor: Re-architecting applications to use cloud-native services. 
  • Rebuild: Rewriting systems from scratch for the cloud. 
  • Replace: Retiring legacy apps in favour of SaaS solutions. 

Most organisations end up using a mix of these approaches across workloads.

Complex deployment models

Modern estates typically combine: 

  • Public cloud for scale and agility 
  • Private cloud for specific compliance or performance needs 
  • Hybrid cloud spanning on-prem and cloud 
  • Multi-cloud using several providers. 

Gartner expects 90% of organisations to adopt hybrid cloud by 2027, reflecting the reality that few businesses are “all in” on a single environment. More choice is valuable, but it amplifies governance, integration and cost-management challenges.

Cloud benefits versus migration risks

The benefits of cloud are well documented: agility, scalability, resilience, innovation, access to AI services and more. IDC’s overview of cloud market trends highlights how cloud is now the foundation for data, automation and AI use cases. 

However, without a structured approach, migrations can lead to: 

  • Higher-than-expected operating costs 
  • Outages and performance issues 
  • Security gaps and compliance risk 
  • Stalled programmes and change fatigue.

This is where understanding the main cloud migration challenges becomes essential. 

The most significant cloud migration challenges (by phase)

Grouping cloud migration challenges by phase of the journey helps you anticipate issues before they derail your programme.

1. Strategy & business alignment challenges

No clear business case

Many migrations begin with a general desire to “move to the cloud” without defining measurable success criteria. Are you aiming for reduced costs, faster product delivery, better resilience, improved security or all of the above?

Lift-and-shift by default

Under pressure to move quickly, organisations often default to lift-and-shift. While appropriate in some cases, this often leads to increased cloud costs and disappointed stakeholders once workloads land in an environment they were not designed for.

Misaligned stakeholders

Finance wants predictable spend, IT wants stability and business units want new features tomorrow. Without a shared roadmap and governance model, priorities can easily clash.

How to mitigate these challenges

  • Define a clear business case with KPIs (e.g. target cost savings, uptime, deployment frequency)
  • Involve IT, finance and business leaders from the outset
  • Use a structured migration framework and consider partnering with specialists such as CACI’s cloud, engineering and implementation services to co-create your strategy.

2. Discovery & assessment challenges

Poor application and dependency visibility

It is not uncommon for organisations to start migration planning and then discover that they do not have a complete, up-to-date inventory of applications, databases, integrations and dependencies. Missing a single critical dependency can cause outages when workloads are moved.

Legacy constraints

Older platforms, bespoke middleware and tightly coupled integrations complicate cloud migration. Some systems may be out of vendor support or lack documentation.

Underestimating integration complexity

Hybrid and multi-cloud architectures must integrate cleanly with on-prem systems and SaaS applications. Underestimating integration can lead to brittle connections and security gaps.

How to mitigate these challenges

  • Use automated discovery and assessment tools to build a realistic view of your estate
  • Map dependencies visually and prioritise high-blast-radius systems
  • Classify workloads using a structured model (retain, retire, rehost, re-platform, refactor, replace)
  • Consider a Platform Migration approach with expert support, such as CACI’s dedicated Platform Migration service.
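
As a simple illustration of the structured classification model mentioned above, the sketch below applies a handful of rough triage rules to assessment attributes. The attributes, rules and workload names are assumptions made for the example; real criteria would come from your own portfolio assessment.

def classify_workload(w):
    """Very rough 6 Rs triage based on assessment attributes (illustrative only)."""
    if not w["business_value"]:
        return "retire"
    if w.get("saas_alternative"):
        return "replace"
    if w.get("regulatory_block_on_cloud"):
        return "retain"            # keep on-prem for now
    if w["technical_debt"] == "high":
        return "refactor"          # re-architect for cloud-native services
    if w.get("managed_service_fit"):
        return "replatform"        # e.g. move to a managed database
    return "rehost"                # lift-and-shift with minimal change

workloads = [
    {"name": "legacy-intranet", "business_value": False, "technical_debt": "high"},
    {"name": "case-mgmt", "business_value": True, "technical_debt": "high"},
    {"name": "hr-system", "business_value": True, "technical_debt": "low",
     "saas_alternative": True},
    {"name": "batch-billing", "business_value": True, "technical_debt": "low",
     "managed_service_fit": True},
]

for w in workloads:
    print(f"{w['name']}: {classify_workload(w)}")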

3. Architecture & technical challenges

Choosing the right architecture

The breadth of cloud services is both a blessing and a curse. Teams must choose between virtual machines, containers, serverless, managed databases, message queues, data lakes and more, often with incomplete information and tight deadlines.

Performance and latency issues

Network design, data placement and application architecture all influence latency and throughput. Poor decisions in these areas can degrade customer experience and internal system performance.

Vendor lock-in

Leveraging cloud-native services maximises value but may also increase dependence on specific providers. Regulatory and data-sovereignty discussions, particularly in the UK and EU, are causing many organisations to carefully consider portability and digital sovereignty strategies.

How to mitigate these challenges

  • Define reference architectures and guardrails early
  • Run performance tests in pilot migrations
  • Make conscious choices about where you accept lock-in for higher value and where you prefer portability.

4. Cloud migration security challenges

Security is consistently cited as one of the top cloud migration challenges. Government and industry bodies emphasise that cloud, used correctly, can be more secure than on-prem infrastructure. The UK government’s Cloud First policy and accompanying guidance stress the importance of security-by-design, shared responsibility and robust governance.

Identity and access management (IAM)

Misconfigured IAM, overly broad privileges and a lack of role-based access control are among the leading root causes of cloud incidents.

Data protection

Sensitive data must be encrypted in transit and at rest, with careful key management and robust backup and recovery procedures.

Compliance and shared responsibility

Regulated sectors must demonstrate compliance with standards and regulations in a model where security responsibilities are split between provider and customer.

How to mitigate these challenges

  • Establish an IAM strategy with least-privilege access and strong authentication
  • Implement encryption, key management and robust logging from day one
  • Use security posture-management tools and align with public guidance such as the UK cloud guide for the public sector
  • Build security into your cloud platform as part of solution implementation rather than as an afterthought.
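
To give one concrete flavour of least-privilege from day one, this sketch lints an IAM-style policy document for wildcard grants before deployment. The policy content is invented and the check is deliberately simplistic; it complements, rather than replaces, your cloud provider’s own policy analysis tooling.

# Hypothetical policy document in the common JSON policy structure.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
        {"Effect": "Allow", "Action": ["logs:PutLogEvents"],
         "Resource": "arn:aws:logs:eu-west-2:111122223333:log-group:/app/*"},
    ],
}

def find_wildcard_grants(doc):
    """Flag Allow statements that grant broad actions or apply to all resources."""
    findings = []
    for stmt in doc.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if any(a == "*" or a.endswith(":*") for a in actions):
            findings.append(f"broad action grant: {actions}")
        if "*" in resources:
            findings.append(f"applies to all resources: {actions}")
    return findings

for finding in find_wildcard_grants(policy):
    print("REVIEW:", finding)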

5. Data & integration challenges

Moving large volumes of data

Migrating terabytes or petabytes of data without impacting operations requires careful planning. Complex cutover plans, bulk transfer tools and synchronisation mechanisms are often needed.

Data quality and consistency

Inconsistent schemas, duplication and poor data governance can lead to mistrust in analytics and operational systems post-migration.

Integrating cloud with on-prem and SaaS

APIs, message queues and integration platforms must be carefully designed to avoid fragile, tightly coupled connections.

How to mitigate these challenges

  • Treat data migration as a dedicated workstream
  • Clean and reconcile data before moving it
  • Design integration patterns (e.g. event-driven architectures) aligned to your target operating model
  • Draw on lessons from real-world programmes like CACI’s case study on HMCTS Court Store and Bench’s move to AWS.

6. Cost, governance & FinOps challenges

Cloud is often sold as a route to lower costs, but the reality is more nuanced. In 2025, 84% of organisations struggled to manage cloud spend, and cost optimisation remains a top priority year after year.

Bill shock and opaque spend

Without robust tagging, budgeting and monitoring, costs can escalate quickly. Bursty workloads, test environments left running and underused instances are common culprits.

Weak financial governance

Traditional budgeting models are not always suited to variable, usage-based pricing. Cloud makes it easy to spend money, but not to spend wisely.

Unclear total cost of ownership

Many organisations underestimate the ongoing cost of running cloud environments, including observability, security, data transfer and platform teams.

How to mitigate these challenges

  • Adopt FinOps principles early, not after migration. A growing number of organisations are doing this specifically to tackle cloud waste and align spend to business value
  • Tag resources consistently to enable accurate cost allocation
  • Use budgets, alerts and dashboards to track spend against KPIs
  • Consider getting external support from cloud specialists such as CACI’s Cloud Services to design your governance model.
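
As a minimal sketch of the budgets-and-alerts point, the snippet below projects month-to-date spend per team to a full month and flags likely overruns. The team names, figures and straight-line projection are illustrative assumptions; in practice the data would come from your provider’s cost exports or FinOps tooling.

DAYS_IN_MONTH = 30
DAY_OF_MONTH = 18  # assume we are 18 days into the month

budgets = {"payments-team": 12000.0, "data-platform": 25000.0, "web-team": 8000.0}
month_to_date_spend = {"payments-team": 6500.0, "data-platform": 19800.0, "web-team": 4100.0}

for team, budget in budgets.items():
    spent = month_to_date_spend.get(team, 0.0)
    # Naive straight-line projection of full-month spend from the run rate so far.
    projected = spent / DAY_OF_MONTH * DAYS_IN_MONTH
    if projected > budget:
        print(f"ALERT {team}: projected £{projected:,.0f} vs budget £{budget:,.0f}")
    else:
        print(f"OK    {team}: projected £{projected:,.0f} within budget £{budget:,.0f}")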

7. People, skills & operating model challenges

Skills gaps

Cloud-native, DevOps and automation skills are in high demand. Internal teams may lack experience in designing and operating cloud platforms at scale.

Operating model friction

Existing ITIL-style processes and siloed teams do not always translate well to cloud environments, where continuous delivery and shared ownership are essential.

Cultural change

Cloud is not just a technology shift, but a cultural one. Teams must embrace new ways of working, from infrastructure-as-code to platform teams and product-centric delivery.

How to mitigate these challenges

  • Invest in cloud, DevOps and automation skills through training, certification and hands-on delivery
  • Pair internal teams with experienced specialists so knowledge transfers during delivery
  • Evolve the operating model towards platform teams, shared ownership and continuous delivery
  • Treat cultural change as a managed workstream, with visible sponsorship, new ways of working (such as infrastructure-as-code) and early wins.

How to build a cloud migration strategy that avoids these challenges

A structured cloud migration strategy is your best defence against these pitfalls.

Step 1: Define business outcomes and KPIs

Start with the “why”:

  • Cost optimisation (e.g. target percentage reduction in run-rate costs)
  • Improved resilience (e.g. RPO/RTO targets, availability SLAs)
  • Faster time-to-market (e.g. release frequency, lead time for changes)
  • Better customer and employee experience.

Step 2: Assess your current estate

  • Catalogue applications, services, databases and integrations
  • Classify each workload by business criticality, technical complexity and risk
  • Identify “quick wins” and high-risk areas needing more design work.

Step 3: Plan migration waves

Avoid trying to move everything at once. Instead:

  • Group workloads into waves with clear objectives
  • Start with lower-risk, high-learning systems
  • Use pilot migrations to refine patterns and tooling.

Step 4: Design your target cloud architecture

Make conscious choices about:

  • Compute models (VMs, containers, serverless)
  • Data platforms (managed databases, data lakes, warehouses)
  • Networking and connectivity (VPNs, private links, SD-WAN)
  • Platform services for security, observability and CI/CD.

Step 5: Embed security and governance upfront

Build the security and governance baselines described earlier – identity and access management, encryption, logging and policy guardrails – into your landing zones and pipelines from the start, rather than retrofitting them after workloads have moved.

Step 6: Establish a cloud operating model

Clarify:

  • Who owns the central platform
  • How product and application teams consume it
  • How changes are tested, deployed and supported.

This operating model is where the concept of a cloud-appropriate strategy (rather than “cloud at all costs”) really takes shape.

Step 7: Plan for continuous optimisation

Cloud migration is not a one-off event. After cutover, you should:

  • Right-size resources and use auto-scaling
  • Tune performance and storage tiers
  • Modernise where there is clear value
  • Review costs and security posture regularly.
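
As a small example of right-sizing, the sketch below flags instances whose sustained peak CPU suggests they could drop an instance size. The utilisation figures, size ladder and 30% threshold are illustrative assumptions; real decisions should also weigh memory, I/O and burst behaviour.

# Hypothetical 14-day peak CPU utilisation per instance (percent).
observed_peak_cpu = {
    "app-server-1": 22.0,
    "app-server-2": 78.0,
    "batch-worker": 9.0,
}

# Illustrative size ladder, largest to smallest.
SIZE_LADDER = ["xlarge", "large", "medium", "small"]
current_size = {"app-server-1": "xlarge", "app-server-2": "large", "batch-worker": "large"}

def recommend(instance, peak_cpu):
    """Suggest dropping one size when sustained peak CPU stays under 30%."""
    size = current_size[instance]
    if peak_cpu < 30.0 and size != SIZE_LADDER[-1]:
        smaller = SIZE_LADDER[SIZE_LADDER.index(size) + 1]
        return f"downsize {size} -> {smaller} (peak CPU {peak_cpu:.0f}%)"
    return f"keep {size} (peak CPU {peak_cpu:.0f}%)"

for name, cpu in observed_peak_cpu.items():
    print(f"{name}: {recommend(name, cpu)}")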

Cloud migration tools, platforms & frameworks

Choosing the right tools reduces risk and effort at each stage of migration.

Discovery, assessment & dependency mapping

  • Infrastructure discovery tools and CMDBs
  • Application performance monitoring (APM) platforms
  • Dependency mapping and visualisation tools.

Data migration & synchronisation

  • Cloud-native database migration services
  • ETL/ELT tools for structured data movement
  • Bulk transfer technologies for large datasets.

Application migration & modernisation

  • Containerisation and orchestration tools
  • Refactoring accelerators and code analysis tools
  • CI/CD platforms to support new deployment models.

Security, compliance & governance

  • Cloud security posture management (CSPM) and policy-as-code
  • Identity and access management, secrets management and HSMs
  • SIEM and threat-detection tooling.

Observability, performance & FinOps

  • Monitoring, logging and tracing platforms
  • Cost-management and optimisation tools aligned with FinOps practices.

The specific mix will depend on your chosen cloud providers and operating model, but the categories remain consistent.

Cloud migration best practices

This checklist provides a practical reference throughout your programme:

Pre-migration

  • Business case and KPIs agreed
  • Application inventory and dependency maps completed
  • Migration patterns decided per workload (rehost/replatform/refactor/etc.)
  • Security and governance baselines designed
  • Cost management and tagging strategy defined.

During migration

  • Workloads migrated in waves, with rollback plans
  • Performance and resilience tested in each wave
  • Security controls verified before go-live
  • Costs monitored against forecasts.

Post-migration

  • Workloads rightsized and tuned
  • Modernisation opportunities assessed
  • Security posture and compliance reviewed regularly
  • KPIs tracked and reported to stakeholders.

Measuring cloud migration success: KPIs & metrics

You cannot improve what you do not measure. Useful KPIs include:

Technical

  • Availability and uptime
  • Latency and response times
  • Error rates and incident frequency.

Financial

  • Monthly cloud run-rate vs baseline
  • Cost per transaction or per user
  • Savings from rightsizing or modernisation initiatives.

Business

  • Release frequency and deployment lead times
  • Time-to-market for new features
  • Customer satisfaction or NPS impact.

Security

  • Number of critical vulnerabilities
  • Mean time to detect (MTTD) and mean time to remediate (MTTR)
  • Compliance audit findings.

These metrics help you demonstrate whether your cloud migration is delivering on its promises or whether strategy and execution need to be re-thought.

Turning cloud migration challenges into advantages with CACI

Cloud has moved from a novelty to a business necessity, but the real differentiator is how effectively your organisation navigates cloud migration challenges: strategy, security, cost, people and operations.

With the right roadmap, tools and operating model, you can turn those challenges into advantages: more resilient services, faster innovation and a technology foundation ready for AI and future growth.

If you are ready to move from theory to practice, explore CACI’s Cloud, Engineering & Implementation Services and our dedicated Platform Migration and Solution Implementation offerings. You can also learn from real projects in our article on the actual experience of cloud migration for business.

What is Model Based Systems Engineering (MBSE)? A practical explainer for modern engineering


Engineering domains like defence, automotive, manufacturing and critical infrastructure have always dealt with complexity. But today that reality is compounded by volatility. One seemingly small change can ripple across an entire architecture: a single component going end-of-life forces updates to requirements, interfaces and test plans, or a single regulatory change means revisiting assumptions and evidence across multiple teams.  

Traditional, document-heavy engineering methods simply weren’t designed for this pace, scale and level of interdependence. Big static specifications, linear stage-gated processes and manual drafting and review cycles are slow, siloed and paperwork-driven; they just can’t keep up with environments that depend on fast iteration, shared data, and real-time collaboration. 

Model Based Systems Engineering (MBSE) offers a more coherent way forward. It makes models, rather than documents, the primary way of understanding how a system is put together and how it behaves under change. And while it’s often discussed in abstract terms, its value is practical: clearer decisions, fewer surprises and systems that can evolve with the world around them. 

Understanding Model Based Systems Engineering 

Traditional systems engineering spreads knowledge across separate artefacts: requirements lists, design specifications, interface control documents, test plans and more. Each serves a real purpose, but together they create a fragmented picture that engineers must mentally stitch together. 

MBSE brings this information into a single system model. Instead of navigating isolated, and typically manual, documents, engineers work with a visual, traceable representation of requirements, behaviours, structures and constraints across the system’s lifecycle: from concept and design through to operation and decommissioning. 

This connected view enables teams to: 

  • Simulate and validate designs before physical implementation 
  • Understand the implications of a change across the whole system or system-of-systems 
  • Maintain traceability between requirements, design and testing as the system evolves 
  • Accommodate iterative and Agile delivery without losing architectural coherence 
  • Establish a strong foundation for digital twins and digital continuity 

In short, MBSE replaces a fragmented understanding with a coherent one. By shifting the focus from assembling information to analysing the system as a dynamic whole, it makes decisions clearer and enables swifter action. 

MBSE vs. Enterprise Architecture – what’s the difference? 

As an approach, MBSE is often mentioned alongside or confused with Enterprise Architecture (EA) because both use models to bring structure to a changing, interconnected world. They sit on a continuum, but they don’t do the same job. 

Enterprise Architecture works at the organisational level, the so-called ‘30,000ft view’. It defines the capabilities the business needs, the processes that support them, the information that flows between them and the technology principles that keep everything aligned. EA sets the strategic intent and the architectural constraints within which engineered systems must operate. 

Model Based Systems Engineering works at the system level and, critically, does so visually. It uses graphical models to capture requirements, behaviour, structure and constraints so engineers can see how a system works, how its parts interact and how changes flow across the architecture. MBSE can represent a single engineered system or a “system of systems”, depending on the scale of the environment.  

In plain engineering terms: 

  • EA defines the environment: capabilities, context, constraints.
  • MBSE defines the system: behaviour, architecture, verification.

EA sets the intent; MBSE delivers the model‑based technical design that realises that intent. So, even when a “system of systems” MBSE model approaches EA in scope, it’s still serving a different purpose. Both disciplines tackle the same operational pressures but address them from different vantage points. 

Model Based Systems Engineering in practice 

In practice, MBSE means working from a dynamic system model that brings together the elements that matter most in complex engineering environments. Typically visualised in a dashboard, it provides a traceable, queryable representation of the system as a single point of truth, containing: 

  • Requirements
  • Behaviours and interactions
  • System structure and architecture
  • Constraints and dependencies
  • Lifecycle considerations from concept to decommissioning

The shift from documents to models isn’t cosmetic. Documents age; models evolve. Documents sit in silos; models connect disciplines. Documents tell you what the system was; models show you what the system is — and what it could be as it adapts to new constraints, technologies or missions. 

Most organisations use modelling languages such as SysML and tools like Cameo, Rhapsody or Enterprise Architect. SysML remains the most widely used, giving teams a standardised way to express structure, behaviour and constraints across complex systems. But the tools are only the enablers. The real value lies in the clarity, consistency and shared understanding that modelling brings. 
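
To show what a machine-readable, queryable model buys you, here is a deliberately tiny sketch in plain Python (not SysML) that links requirements to the behaviours that satisfy them and the test cases that verify them, then asks the questions a document-based review would have to answer by hand. All element names are invented; a real model would live in a tool such as Cameo or Enterprise Architect.

from dataclasses import dataclass, field

@dataclass
class Requirement:
    rid: str
    text: str
    satisfied_by: list = field(default_factory=list)   # behaviour names
    verified_by: list = field(default_factory=list)    # test case names

# Toy system model: three requirements with traceability links.
model = [
    Requirement("REQ-001", "The system shall detect loss of sensor input within 500 ms",
                satisfied_by=["MonitorSensorHealth"], verified_by=["TC-014"]),
    Requirement("REQ-002", "The system shall operate on backup power for 30 minutes",
                satisfied_by=["SwitchToBackupPower"], verified_by=[]),
    Requirement("REQ-003", "The operator shall be alerted to any failover event",
                satisfied_by=[], verified_by=["TC-020"]),
]

# Queries that a document-based approach would need a human to perform manually.
unverified = [r.rid for r in model if not r.verified_by]
unallocated = [r.rid for r in model if not r.satisfied_by]

print("Requirements with no verification coverage:", unverified)
print("Requirements not allocated to any behaviour:", unallocated)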

The operational benefits – why MBSE matters in modern engineering

MBSE gives teams a coherent view of how a system behaves and how change in one area affects others – fundamentally, a more honest representation of how systems behave in the real world. That shift enables: 

  • Earlier validation and simulation
  • Clearer communication across disciplines
  • Faster impact analysis
  • Stronger traceability between requirements, design and testing
  • Enhanced collaboration across teams and suppliers
  • Scalability for managing large, multicomponent or “system of systems” architectures

This is why MBSE has become particularly relevant in sectors where systems are large, long-lived and safety or mission critical.  

In defence and aerospace, it supports mission level traceability, interoperability across suppliers and stronger evidence for certification. In automotive, it helps integrate mechanical, electrical and software design in increasingly software defined vehicles. And in digital and critical infrastructure, it provides a way to map dependencies, model resilience and design for long-term adaptability. The common theme being MBSE provides the clarity needed to make confident decisions. 

What good MBSE delivery looks like in practice 

Successful MBSE programmes have less to do with tools and more to do with delivery behaviours. The organisations that get the most value tend to share a few consistent patterns: 

  • Models are treated as living artefacts. They evolve as understanding deepens, rather than being produced once and filed away. 
  • Iteration is normal. Teams model early, test assumptions quickly and refine as they learn, instead of waiting for a single ‘big reveal’. 
  • Commercial and governance frameworks allow change. MBSE only works when contracts, schedules and decision gates accept that things will evolve. 
  • Practitioners lead the work. Systems engineers, architects and domain specialists shape the model, ensuring it reflects real world behaviour rather than abstract theory. 
  • Collaboration is built in. Modelling becomes a shared activity across disciplines, not something done in isolation by a single specialist. 

These principles also shape how CACI deliver MBSE.  

Our teams work iteratively, use models to drive shared understanding and keep architectures traceable as requirements evolve. We focus on the behaviours that make MBSE effective – clarity, adaptability and practitioner-led modelling – because these consistently help programmes navigate complexity and make better decisions. 

Why MBSE is becoming essential 

Recent research finds that the number and intensity of system-level dependencies are rising across every major engineering domain, increasing the likelihood that local failures propagate far beyond their point of origin. The pan-Iberian blackout in April 2025 made this clear: the energy disturbance cascaded across two national grids, disrupting transport, healthcare and communications within minutes.  

In this context, MBSE becomes a core competency rather than a niche specialism. But its value depends on how it is delivered, and by whom.  

A strong MBSE approach provides clarity, traceability and better decisions. It reduces risk. It helps engineering systems evolve with the environment. And in sectors where the stakes are high – defence, automotive, aerospace and critical infrastructure – that combination is not optional, it’s foundational, and increasingly essential if organisations are to stay ahead of the rising fragility built into the systems they depend on. 

To find out how CACI can help your organisation build the resilience needed to operate effectively in an increasingly volatile, interconnected engineering environment, get in touch with our experts today. 

FAQs about Model Based Systems Engineering (MBSE)

What does “model-based” actually mean in Model Based Systems Engineering (MBSE)?

In Model Based Systems Engineering (MBSE), “model-based” means that system information is stored in a structured, machine-readable model rather than free-text documents. This allows relationships, dependencies and constraints to be queried, analysed and validated automatically instead of being inferred manually.

Is Model Based Systems Engineering only suitable for large or complex systems?

No. While MBSE is most visible in large, complex programmes, it can also be valuable for smaller systems where change is frequent or assurance requirements are high. Even lightweight models can reduce ambiguity, improve communication and prevent rework as designs evolve.

How does MBSE support verification and validation activities?

MBSE enables verification and validation by explicitly linking system behaviours and constraints to verification criteria within the model. This allows teams to assess test coverage, identify gaps early and maintain alignment between design intent and evidence as the system changes.

What skills are required to work effectively with Model Based Systems Engineering?

Effective MBSE requires a combination of systems thinking, domain expertise and modelling literacy. While familiarity with languages such as SysML is useful, the most important skills are the ability to reason about system behaviour, understand trade-offs and communicate across disciplines using models as a shared reference.

How does Model Based Systems Engineering improve decision-making?

MBSE improves decision-making by making assumptions, dependencies and impacts explicit. Engineers and stakeholders can explore “what-if” scenarios, assess trade-offs and understand consequences before changes are committed, reducing the risk of late-stage surprises.

Can Model Based Systems Engineering be applied to legacy systems?

Yes. MBSE can be introduced incrementally to legacy environments by modelling critical parts of an existing system rather than attempting a full re-engineering effort. This approach helps organisations gain insight into dependencies, constraints and risks without disrupting ongoing operations.

How does MBSE fit with safety, regulatory and assurance frameworks?

MBSE supports safety and regulatory assurance by providing a structured way to demonstrate traceability from requirements through design to verification evidence. This can simplify audits, improve confidence in compliance claims and reduce the effort required to respond to regulatory change.

What are common misconceptions about Model Based Systems Engineering?

A common misconception is that MBSE is primarily a tooling or documentation exercise. In practice, its effectiveness depends on how models are used to support collaboration, learning and decision-making — not on the level of detail or the sophistication of the tools alone. 

Cloud Cost Optimisation Strategies for 2026: Unlock Actionable Insights

Cloud adoption continues to accelerate across both public and private sectors, and cloud spending has now reached a scale where cost management is a strategic and board-level concern rather than a purely technical issue.

A Gartner study published in late 2024 projected that global public cloud end-user spending would reach approximately USD 723 billion in 2025, underpinned by sustained double-digit growth driven by digital transformation initiatives, large-scale data platforms and accelerating AI adoption.

As organisations enter 2026, cloud is no longer an experimental or discretionary technology choice. It is a core operational dependency underpinning digital services, analytics, AI delivery and mission-critical systems. As a result, cloud costs now represent a material and recurring component of IT, transformation and operational budgets.

At the same time, there is strong and consistent evidence that a significant proportion of cloud spend does not deliver corresponding business value. IDC estimates that 20-30% of all cloud spending is wasted, even in organisations with established cloud platforms and governance practices.

A 2024 cloud efficiency study referenced by Stacklet found that 78 percent of organisations estimate that between 21 and 50 percent of their annual cloud spend is wasted, with many losing more than USD 75,000 per month due to idle resources, inefficient architectures and weak controls.

In 2026, cloud cost optimisation is therefore no longer about reactive cost cutting or short-term savings. It is about financial sustainability, architectural resilience, responsible AI adoption and long-term operational maturity. Organisations that fail to embed cost optimisation into day-to-day cloud operations risk limiting innovation, constraining AI initiatives and eroding confidence at executive and assurance levels.

This guide sets out practical, execution-focused cloud cost optimisation strategies for 2026, combining industry research, FinOps best practice and real-world delivery experience across complex cloud estates.

A practical cloud cost optimisation roadmap for 2026

One of the most common reasons cloud cost optimisation initiatives fail is a lack of sequencing. Organisations often attempt to optimise everything at once, resulting in fragmented effort and limited impact. Successful programmes instead follow a phased approach aligned to FinOps maturity models and operational reality.

Phase 1: Visibility and accountability (weeks 0–4)

The objective of this phase is to understand where cloud spend occurs and who is responsible for it.

Key activities include:

  • defining a consistent, mandatory tagging standard
  • allocating cloud costs to services, teams and business units
  • establishing baseline dashboards, budgets and alerts

Without this foundation, optimisation efforts lack focus and accountability.
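
As a minimal sketch of the allocation step, the snippet below rolls a flat billing export up by a service tag and shows how much spend cannot be attributed at all. The line items and tag keys are invented; provider cost and usage exports carry the same idea at far larger scale.

from collections import defaultdict

# Hypothetical billing export rows: (resource id, cost in GBP, tags).
billing_rows = [
    ("i-0a12", 412.50, {"service": "case-management", "environment": "prod"}),
    ("db-7f",  980.10, {"service": "case-management", "environment": "prod"}),
    ("i-9c44", 150.00, {"environment": "dev"}),                # missing service tag
    ("bucket-3", 67.25, {"service": "analytics"}),
]

by_service = defaultdict(float)
untagged = 0.0
for _, cost, tags in billing_rows:
    service = tags.get("service")
    if service:
        by_service[service] += cost
    else:
        untagged += cost

total = sum(cost for _, cost, _ in billing_rows)
for service, cost in sorted(by_service.items(), key=lambda kv: -kv[1]):
    print(f"{service}: £{cost:,.2f}")
print(f"unattributed: £{untagged:,.2f} ({untagged / total:.0%} of total)")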

Phase 2: Waste removal and early savings (months 1–3)

Once visibility exists, most organisations can realise rapid savings by addressing obvious inefficiencies.

Typical actions include:

  • identifying idle, unused or oversized resources
  • rightsizing the highest-cost services
  • shutting down non-production environments outside working hours

This phase often delivers visible savings within weeks, helping to build organisational momentum.
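
One of the quickest wins, switching non-production environments off outside working hours, is straightforward to automate. The sketch below uses the AWS SDK for Python (boto3) to stop running EC2 instances tagged environment=dev; the tag key, region and schedule are assumptions for the example, and other providers offer equivalent APIs.

import boto3

def stop_dev_instances(region="eu-west-2"):
    """Stop running EC2 instances tagged environment=dev (run from a scheduled job)."""
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    pages = paginator.paginate(Filters=[
        {"Name": "tag:environment", "Values": ["dev"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ])
    instance_ids = [
        inst["InstanceId"]
        for page in pages
        for reservation in page["Reservations"]
        for inst in reservation["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids

if __name__ == "__main__":
    stopped = stop_dev_instances()
    print(f"Stopped {len(stopped)} dev instance(s): {stopped}")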

Phase 3: Structural and architectural optimisation (months 3–9)

This phase addresses systemic inefficiencies that drive recurring cloud cost.

Key activities include:

  • introducing auto-scaling and demand-based architectures
  • applying savings plans and reserved capacity where usage is stable
  • modernising legacy applications that were lifted and shifted without redesign
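
Savings plans and reserved capacity only pay off for the stable portion of demand. The sketch below suggests a commitment level from hypothetical hourly usage by taking a low percentile, leaving peaks to on-demand capacity and auto-scaling. The figures and the 10th-percentile rule are illustrative assumptions, not provider guidance.

# Hypothetical hourly compute usage (normalised units) over one week.
hourly_usage = [38, 40, 41, 39, 37, 52, 75, 90, 88, 64, 45, 40] * 14  # 168 hours

def committed_baseline(usage, percentile=10):
    """Suggest a commitment level at a low percentile of observed hourly usage."""
    ordered = sorted(usage)
    index = max(0, int(len(ordered) * percentile / 100) - 1)
    return ordered[index]

baseline = committed_baseline(hourly_usage)
peak = max(hourly_usage)
print(f"Commit to ~{baseline} units (stable base); leave the remaining "
      f"{peak - baseline} units of peak demand to on-demand/auto-scaling.")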

Phase 4: Prevention, governance and forecasting (ongoing)

Long-term value comes from preventing waste from re-emerging.

This requires:

  • embedding a FinOps operating model
  • automating cost guardrails and policy enforcement
  • forecasting cloud spend based on business demand rather than historical usage
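
Forecasting from business demand rather than history can start as simple unit economics: cost per unit of demand multiplied by the demand forecast. The sketch below does exactly that with invented figures; a fuller model would layer in seasonality, committed discounts and planned optimisation savings.

# Observed last quarter (illustrative figures).
last_quarter_spend = 180_000.0        # GBP
last_quarter_transactions = 12_000_000

cost_per_transaction = last_quarter_spend / last_quarter_transactions

# Business demand forecast for the next four quarters (transactions).
demand_forecast = [13_500_000, 15_000_000, 15_500_000, 17_000_000]

for quarter, transactions in enumerate(demand_forecast, start=1):
    forecast_spend = cost_per_transaction * transactions
    print(f"Q{quarter}: ~£{forecast_spend:,.0f} for {transactions:,} transactions")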

Why cloud cost optimisation matters in 2026

While cloud growth and waste provide the backdrop, several 2026-specific factors have increased the urgency of cost optimisation.

Cloud spend is now structurally embedded

With global cloud spending measured in hundreds of billions of dollars annually, cloud services now represent a permanent operating cost rather than a variable experiment. In 2026, optimisation must be treated as a continuous operational discipline, not a periodic financial exercise.

AI significantly increases cost pressure

AI and advanced analytics workloads are among the fastest-growing contributors to cloud spend. Model training, inference pipelines, vector databases and large-scale data storage require sustained compute, specialised GPUs and high-throughput data movement. Industry analysis reported by TechMonitor highlights AI adoption as a growing driver of cloud overspend when governance is weak.

Visibility and governance remain inconsistent

FinOps Foundation surveys consistently show that more than 40 percent of organisations struggle to accurately attribute cloud spend, particularly across hybrid and multi-cloud estates. Without clear ownership, optimisation initiatives lose traction.

Public sector accountability continues to increase

UK government guidance on cloud usage emphasises transparency, value for money and responsible stewardship of public funds. In 2026, demonstrable control over cloud cost is essential for audit readiness, regulatory compliance and maintaining public trust.

Key cloud cost trends shaping 2026

Across analyst research, FinOps community insights and delivery experience, several structural trends are shaping cloud economics in 2026. These trends explain why cloud costs remain difficult to control, even as tooling, skills and platform maturity improve.

Cloud waste remains stubbornly high

Despite years of investment in cloud platforms, cost visibility tools and FinOps capability, cloud waste remains consistently high. This is not primarily due to technical immaturity, but because cloud operating models still incentivise speed and autonomy over financial discipline. Teams are optimised to deliver features quickly, while the financial impact of architectural decisions often remains abstract or delayed.

In 2026, waste increasingly originates from design-time decisions, such as selecting always-on services for variable workloads, duplicating datasets for convenience, or over-allocating resources to avoid performance risk. This shifts optimisation from a purely operational activity to a design and governance challenge, where cost awareness must be embedded earlier in the delivery lifecycle.

AI and data platforms are redefining what “expensive” means in cloud

Historically, cloud cost growth was driven by general-purpose compute and storage. In 2026, the cost profile will be increasingly shaped by specialised, high-performance services. GPU-backed workloads, vector databases, real-time analytics engines and large-scale data pipelines now dominate spend growth, particularly in organisations scaling AI beyond experimentation.

This trend is significant because these workloads behave differently from traditional applications. They are data-intensive and highly sensitive to architectural choices, meaning small design inefficiencies can have disproportionate cost impact. As a result, organisations are finding that traditional optimisation levers are less effective unless they are complemented by AI-aware financial governance and forecasting models.

FinOps is shifting from insight to intervention

FinOps adoption has moved beyond dashboards and retrospective reporting. In 2026, leading organisations will be using FinOps as an active control mechanism, not just an analytical function. This includes embedding financial signals into delivery pipelines, using cost data to inform architectural trade-offs, and aligning spend decisions with business priorities in near real time.

This shift reflects a broader recognition that cost is a first-class operational metric, alongside reliability, security and performance. As FinOps matures, its value increasingly depends on organisational influence and integration, rather than tooling sophistication alone. The challenge for many organisations is no longer visibility but turning insight into enforceable decisions without slowing delivery.

Multi-cloud complexity is now an economic issue, not just a technical one

Multi-cloud strategies have become standard, driven by resilience, policy, supplier strategy and workload suitability. However, in 2026 the cost implications of multi-cloud are becoming more visible. Differences in pricing models, discount structures, data egress costs and managed services make consistent optimisation across providers difficult.

As a result, organisations are increasingly forced to balance strategic flexibility against economic efficiency. This has elevated the importance of cross-cloud financial normalisation, where spend is compared and governed at a service or capability level rather than by provider. Cost optimisation in multi-cloud environments is therefore becoming a portfolio management challenge, not just a technical exercise.

Public sector collaboration is moving from policy to practice

In the public sector, cloud cost management is evolving from guidance and principle-based frameworks into practical, shared implementation. Departments and agencies are increasingly collaborating on standards for cost transparency, FinOps maturity and data sharing, supported by central initiatives and communities of practice.

This trend reflects growing recognition that cloud cost challenges are systemic, not isolated. By sharing tooling patterns, metrics and governance approaches, public sector organisations aim to reduce duplication, improve comparability and strengthen assurance. In 2026, this collective approach is becoming a key enabler of sustainable cloud adoption, particularly as AI and data workloads expand across government.

These trends manifest in a set of recurring challenges that organisations encounter as cloud estates scale.

Common cloud cost optimisation challenges

Despite growing awareness of cloud economics and wider adoption of FinOps practices, many organisations continue to struggle with the same underlying cost challenges. In 2026, these issues persist not because of a lack of technology, but because cloud cost management is as much an organisational and operating-model problem as it is a technical one.

1. Poor visibility and inconsistent allocation

While most organisations collect cloud cost data, many still lack decision-grade visibility. Costs are often visible at an account or subscription level, but not consistently attributed to business services, products or outcomes. This creates a disconnect between cloud consumption and business value.

In practice, visibility breaks down when tagging standards are inconsistently applied, ownership is unclear, or cost data is interpreted differently by engineering, finance and product teams. In 2026, this challenge is compounded by the rise of shared platforms, managed services and AI pipelines, where multiple teams consume the same underlying resources. Without a common allocation model, cloud spend becomes difficult to explain, challenge or forecast, even when dashboards and detailed receipts exist.

The result is a familiar pattern: cost reports are produced, but they do not meaningfully influence decisions.

2. Idle and over-provisioned resources

Idle and over-provisioned resources remain one of the most visible sources of cloud waste, yet they continue to accumulate in mature environments. This is partly because cloud platforms make it easy to provision capacity quickly, but place relatively little friction on leaving it running indefinitely.

In many organisations, responsibility for decommissioning resources is ambiguous. Development and test environments are created for short-term needs but persist long after projects move on. Capacity is deliberately oversized to reduce perceived performance risk, particularly for customer-facing or data-intensive workloads. Container platforms add another layer of abstraction, where unused capacity is less obvious than in traditional virtual machine estates.

By 2026, the challenge is less about identifying individual idle resources and more about preventing sprawl from becoming the default state of cloud environments.

3. Lift-and-shift migrations

Many organisations still operate a significant proportion of workloads that were migrated to the cloud using lift-and-shift approaches. While this accelerates migration timelines, it often locks in cost inefficiencies that persist for years.

Applications designed for on-premise infrastructure typically assume static capacity, peak sizing and tightly coupled components. When moved unchanged to the cloud, these assumptions translate into always-on resources, limited elasticity and higher baseline costs. Over time, teams compensate by over-provisioning to maintain stability, rather than addressing architectural limitations.

In 2026, the challenge is that these workloads often underpin critical services. Their cost impact is well understood, but the perceived risk and effort of refactoring mean optimisation is repeatedly deferred, even as they consume a disproportionate share of cloud budgets.

4. Limited governance and automation

Cloud environments scale faster than traditional governance models. Where policies, approvals and controls rely on manual processes, they quickly become bottlenecks and are either bypassed or ignored.

In many organisations, governance is still applied after resources are provisioned, rather than embedded into how platforms are built and used. This leads to inconsistent enforcement of standards, reactive clean-up exercises and reliance on individual diligence rather than systemic control.

By 2026, the absence of automation is itself a cost challenge. Without automated guardrails, organisations struggle to maintain consistent financial control as teams, workloads and environments grow. The result is a cycle of periodic optimisation efforts that temporarily reduce spend, only for inefficiencies to re-emerge.

5. AI and data gravity

AI and data-driven workloads introduce a distinct set of cost challenges that differ from traditional application hosting. These workloads are inherently data-intensive, often requiring large datasets to be moved, duplicated or processed repeatedly across environments.

As models evolve and pipelines become more complex, storage volumes grow, GPU utilisation increases and data transfer costs become more material. Data gravity exacerbates this effect, making it difficult to relocate workloads without incurring additional cost or performance penalties. In many cases, teams optimise for experimentation speed rather than cost efficiency, particularly in early AI adoption phases.

In 2026, organisations are finding that AI cost challenges are not caused by individual services, but by end-to-end pipeline design, where small inefficiencies compound across storage, compute and data movement over time.

Why these challenges persist

Taken together, these challenges highlight a common theme: cloud cost optimisation fails when it is treated as a periodic clean-up activity rather than a core operating discipline. Without clear ownership, aligned incentives and embedded governance, inefficiencies naturally re-emerge as cloud estates and AI workloads continue to scale.

Cloud cost optimisation strategies and best practices for 2026

1. Improve tagging, allocation and cost visibility

What to do
Building on the visibility foundation outlined earlier, define a mandatory tagging standard covering application, owner, environment, cost centre, data classification and compliance context.

How to implement

  • enforce tagging using cloud-native policy tools
  • validate tags in CI/CD pipelines
  • auto-remediate missing metadata

What good looks like

  • over 90 percent of cloud spend accurately tagged
  • monthly showback or chargeback reporting
  • clear ownership of top cost drivers

Organisations often establish this capability as part of a broader cloud landing zone or cloud engineering programme.
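
As a simple illustration of the enforcement and auto-remediation steps above, the sketch below audits AWS resources for missing mandatory tags using the Resource Groups Tagging API via boto3. It assumes credentials are already configured, and the REQUIRED_TAGS keys are hypothetical placeholders to be replaced with your own standard; in practice this logic would usually sit inside policy tooling or a scheduled remediation job rather than a standalone script.

```python
"""Minimal sketch: audit AWS resources for missing mandatory tags.
Assumptions: boto3 credentials are configured and REQUIRED_TAGS mirrors the
organisation's own tagging standard (the keys below are illustrative)."""
import boto3

REQUIRED_TAGS = {"application", "owner", "environment", "cost-centre"}  # hypothetical keys

def find_untagged_resources():
    client = boto3.client("resourcegroupstaggingapi")
    paginator = client.get_paginator("get_resources")
    non_compliant = []
    for page in paginator.paginate():
        for mapping in page["ResourceTagMappingList"]:
            present = {tag["Key"].lower() for tag in mapping.get("Tags", [])}
            missing = REQUIRED_TAGS - present
            if missing:
                non_compliant.append((mapping["ResourceARN"], sorted(missing)))
    return non_compliant

if __name__ == "__main__":
    for arn, missing in find_untagged_resources():
        print(f"{arn} is missing tags: {', '.join(missing)}")
```

A report like this is typically only the first step; auto-remediation then applies default tags or notifies the owning team through the same pipeline.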

2. Adopt continuous rightsizing

Rightsizing should be an ongoing operational activity rather than an annual review.

Effective approaches include:

  • monthly utilisation reviews
  • thresholds such as CPU below 30 percent or memory below 40 percent for sustained periods
  • removal of unused snapshots and volumes

This approach consistently delivers savings without service degradation.
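
To make the threshold-based approach concrete, here is a minimal sketch that flags EC2 instances whose 30-day average CPU utilisation falls below the 30 percent guideline mentioned above. It assumes boto3 credentials are configured; memory utilisation is omitted because it requires the CloudWatch agent, and instance pagination is left out for brevity.

```python
"""Minimal sketch: flag EC2 instances with low sustained CPU utilisation."""
from datetime import datetime, timedelta, timezone
import boto3

CPU_THRESHOLD = 30.0   # percent, per the rightsizing guideline above
LOOKBACK_DAYS = 30

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

def underutilised_instances():
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=LOOKBACK_DAYS)
    flagged = []
    for reservation in ec2.describe_instances()["Reservations"]:  # pagination omitted
        for instance in reservation["Instances"]:
            instance_id = instance["InstanceId"]
            stats = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
                StartTime=start,
                EndTime=end,
                Period=86400,            # one datapoint per day
                Statistics=["Average"],
            )
            datapoints = stats["Datapoints"]
            if datapoints:
                average = sum(d["Average"] for d in datapoints) / len(datapoints)
                if average < CPU_THRESHOLD:
                    flagged.append((instance_id, round(average, 1)))
    return flagged
```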

3. Use auto-scaling and demand-based architectures

Auto-scaling ensures capacity aligns with actual demand.

Best practice includes:

  • horizontal scaling for stateless services
  • defined minimum and maximum capacity limits
  • regular load testing
  • automatic shutdown of non-production environments outside business hours

These patterns are commonly implemented during platform migration and modernisation initiatives.
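
As one hedged example of the shutdown pattern above, the sketch below stops running instances tagged as non-production, and could be triggered on an evening and weekend schedule (for example via EventBridge). The 'environment' tag values are hypothetical and would need to match your own tagging standard.

```python
"""Minimal sketch: stop non-production instances outside business hours."""
import boto3

NON_PROD_VALUES = ["dev", "test", "staging"]  # hypothetical tag values

def stop_non_production_instances():
    ec2 = boto3.client("ec2")
    response = ec2.describe_instances(
        Filters=[
            {"Name": "tag:environment", "Values": NON_PROD_VALUES},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instance_ids = [
        instance["InstanceId"]
        for reservation in response["Reservations"]
        for instance in reservation["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids
```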

4. Optimise storage and data lifecycle management

Storage costs grow rapidly, particularly for analytics and AI.

Effective strategies include:

  • tiering infrequently accessed data
  • enforcing retention and lifecycle rules
  • archiving logs
  • reducing unnecessary cross-region transfers

These controls are often embedded within data platform and analytics architectures.
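
As an illustrative example of tiering and retention in practice, the sketch below applies an S3 lifecycle policy that moves log objects to cheaper storage classes and expires them after a year. The bucket name, prefix and day counts are placeholders, not recommendations for any particular estate.

```python
"""Minimal sketch: apply a tiering and retention lifecycle policy to an S3 bucket."""
import boto3

def apply_log_lifecycle(bucket_name: str):
    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket_name,
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "tier-and-expire-logs",
                    "Filter": {"Prefix": "logs/"},   # hypothetical prefix
                    "Status": "Enabled",
                    "Transitions": [
                        {"Days": 30, "StorageClass": "STANDARD_IA"},
                        {"Days": 90, "StorageClass": "GLACIER"},
                    ],
                    "Expiration": {"Days": 365},
                }
            ]
        },
    )
```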

5. Align purchasing models with workload patterns

Savings plans and reserved capacity can reduce long-running workload costs by 30–70 percent when applied correctly.

Best practice includes:

  • committing only once usage patterns stabilise
  • targeting utilisation above 70 percent (see the break-even sketch after this list)
  • reviewing commitments quarterly
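
The break-even logic behind the utilisation target can be sketched in a few lines. The prices below are illustrative placeholders; the point is simply that a commitment only pays for itself once expected utilisation exceeds the break-even implied by the discount.

```python
"""Minimal sketch: compare a one-year commitment against on-demand pricing
at a given expected utilisation (all figures illustrative)."""
def commitment_is_worthwhile(on_demand_hourly: float,
                             committed_hourly: float,
                             expected_utilisation: float) -> bool:
    hours_per_year = 24 * 365
    on_demand_cost = on_demand_hourly * hours_per_year * expected_utilisation
    committed_cost = committed_hourly * hours_per_year  # paid whether used or not
    return committed_cost < on_demand_cost

# A 40 percent discount only pays off above roughly 60 percent utilisation.
print(commitment_is_worthwhile(1.00, 0.60, 0.70))  # True
print(commitment_is_worthwhile(1.00, 0.60, 0.50))  # False
```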

6. Build a mature FinOps operating model

A mature FinOps model includes:

  • a central FinOps capability
  • real-time dashboards
  • shared accountability across engineering, finance and product teams
  • monthly governance reviews
  • demand-based forecasting

Many organisations formalise this capability as a dedicated FinOps and cost optimisation function.

7. Modernise applications to remove architectural waste

Modernisation often delivers greater long-term savings than pricing optimisation alone.

Cloud-native patterns such as containers, serverless and managed services reduce reliance on persistent infrastructure and scale automatically with demand.

8. Optimise AI and advanced analytics workloads

AI workloads require dedicated optimisation strategies.

Effective techniques include:

  • using lower-cost GPU types for development and testing
  • separating training and inference environments
  • tracking cost per inference and cost per model version (see the sketch after this list)
  • pruning unused models and datasets
  • monitoring vector database growth carefully
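
As a small illustration of the tracking metrics mentioned above, the sketch below derives cost per inference for each model version from usage records. The record fields are hypothetical; in practice they would be joined from billing exports and model-serving logs.

```python
"""Minimal sketch: cost-per-inference by model version from usage records."""
from collections import defaultdict

def cost_per_model_version(records):
    # records: iterable of dicts such as
    # {"model_version": "v3", "cost": 12.50, "inference_count": 40000}
    totals = defaultdict(lambda: {"cost": 0.0, "inferences": 0})
    for record in records:
        totals[record["model_version"]]["cost"] += record["cost"]
        totals[record["model_version"]]["inferences"] += record["inference_count"]
    return {
        version: {
            "total_cost": t["cost"],
            "cost_per_inference": t["cost"] / t["inferences"] if t["inferences"] else None,
        }
        for version, t in totals.items()
    }
```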

9. Automate cost guardrails

Automation prevents waste before it accumulates.

Examples include:

  • enforcing tagging automatically
  • shutting down idle environments
  • blocking unapproved high-cost services
  • detecting anomalous spend (illustrated in the sketch after this list)
  • automatically cleaning up unused resources
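
As one simple, hedged illustration of anomalous spend detection, the sketch below flags any daily spend figure that exceeds the trailing average by several standard deviations. Real guardrails usually rely on the cloud provider's native anomaly detection; the window and sigma values here are illustrative.

```python
"""Minimal sketch: flag daily spend that deviates sharply from recent history."""
from statistics import mean, stdev

def spend_anomalies(daily_spend, window=14, sigma=3.0):
    # daily_spend: list of floats, oldest first; returns (index, value) pairs.
    anomalies = []
    for i in range(window, len(daily_spend)):
        history = daily_spend[i - window:i]
        mu, sd = mean(history), stdev(history)
        if sd > 0 and daily_spend[i] > mu + sigma * sd:
            anomalies.append((i, daily_spend[i]))
    return anomalies
```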

Cloud cost optimisation with CACI

In 2026, cloud cost optimisation is about predictability, resilience and sustainable innovation, not reactive cost cutting. CACI supports organisations across the full optimisation lifecycle, from rapid waste reduction to long-term architectural transformation and FinOps maturity.

If your organisation cannot clearly explain who owns cloud spend, why costs fluctuate month-to-month, or how AI growth will be funded sustainably, optimisation opportunities already exist. CACI helps organisations move from reactive cost control to value-driven cloud economics that support growth, innovation and public accountability.

FAQs around cloud cost optimisation strategies

What does a cloud cost optimisation strategy include in 2026?

A cloud cost optimisation strategy in 2026 includes cost visibility, architectural efficiency, governance and forecasting, enabling organisations to control spend while scaling cloud and AI workloads. It focuses on embedding cost awareness into design, delivery and operational decision-making rather than reactive clean-up.

How is cloud cost optimisation different from FinOps?

Cloud cost optimisation focuses on reducing waste and improving efficiency, while FinOps is the operating model that makes those improvements sustainable. FinOps aligns engineering, finance and product teams around shared accountability, governance and forecasting.

When should organisations start optimising cloud costs?

Organisations should start optimising cloud costs as soon as cloud usage begins, not after spend becomes excessive. Early optimisation prevents inefficient patterns becoming embedded and reduces long-term cost growth.

How much can organisations save with cloud cost optimisation?

Most organisations can reduce cloud spend by 20 to 40 percent through effective cost optimisation, depending on estate maturity and governance. Savings are highest where idle resources, over-provisioning and legacy workloads are common.

Why do cloud costs keep increasing even after optimisation?

Cloud costs continue to increase when optimisation focuses on one-off savings rather than ongoing governance and demand-based control. New services, data pipelines and AI workloads often grow faster than financial controls evolve.

How do AI workloads affect cloud cost optimisation?

AI workloads increase cloud costs because they rely on high-performance compute, large datasets and repeated processing, which scale non-linearly. This requires AI-specific cost governance and forecasting to remain sustainable.

Is cloud cost optimisation harder in multi-cloud environments?

Cloud cost optimisation is harder in multi-cloud environments because pricing models, discounts and data transfer costs vary across providers. Organisations increasingly manage costs at a service or portfolio level rather than optimising each cloud independently.

Who should own cloud cost optimisation?

Cloud cost optimisation should be a shared responsibility across engineering, finance and product teams, coordinated by a central FinOps or governance function. This ensures cost decisions align with technical and business priorities.

How often should cloud cost optimisation be reviewed?

Cloud cost optimisation should be reviewed continuously using real-time monitoring, with formal governance reviews conducted monthly. This combination enables early detection of anomalies while supporting strategic oversight.

Top 10 cyber threats facing UK businesses in 2026

The anticipated cyber threats facing UK businesses in 2026 are evolving faster than security teams can adapt. Attackers are using AI to generate convincing phishing attacks, exploit software supply chains, compromise cloud identities and launch highly disruptive ransomware campaigns. 

Recent research consistently highlights the severity of the issue. 

To effectively safeguard your organisation into 2026, understanding how these cyber threats are evolving will be paramount. The key threats to prepare for are expected to be: 

1. AI-powered phishing and social engineering 

Cyber criminals now use generative AI to produce highly convincing phishing emails, cloned voices and deepfake videos. 

According to the National Cyber Security Centre (NCSC), AI will likely continue to “make elements of cyber intrusion operations more effective and efficient, leading to an increase in frequency and intensity of cyber threats.” Approximately £100 million was lost to investment scams driven by deepfake videos in the first half of 2025.

Why it matters:

AI removes spelling errors, improves targeting and creates believable voice calls, making phishing harder to detect.

Actions to take:

  • Enable multi-factor authentication (MFA) across all accounts 
  • Train staff using AI-simulated phishing exercises 
  • Introduce payment verification with multi-person approval 
  • Use real-time email threat scanning. 

2. Ransomware as a service targeting UK SMEs 

Ransomware continues to dominate the UK threat landscape. 

Why it matters:

Ransomware groups now target SMEs because they are less likely to have strong incident response capabilities.

Actions to take:

  • Maintain offline backups 
  • Implement zero-trust identity policies 
  • Create and rehearse a ransomware response plan
  • Block admin rights by default 

3. Software supply chain compromise 

Supply chain attacks are now a priority risk area. 

Why it matters:

Compromising one supplier can affect thousands of UK organisations simultaneously.

Actions to take: 

  • Maintain a third-party risk register 
  • Request Software Bills of Materials (SBOMs) from critical suppliers 
  • Apply continuous dependency scanning 
  • Implement zero trust network segmentation. 

4. Cloud misconfiguration and identity-based attacks 

Cloud adoption has surged across UK organisations, but configuration drift and weak identity controls are leading causes of breaches. 

Why it matters:

Most cloud breaches are preventable with strong identity, configuration and policy controls. 

Actions to take:

  • Adopt secure cloud landing zones 
  • Enforce MFA and conditional access 
  • Use policy-as-code to eliminate misconfigurations 
  • Continuously scan cloud environments. 
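
As one narrow, hedged example of continuous scanning, the sketch below reports S3 buckets that lack a fully enabled public access block, a common source of accidental exposure. It assumes boto3 credentials are configured and covers only this single check; a real scanner or policy-as-code tool would evaluate many more controls.

```python
"""Minimal sketch: report S3 buckets without a fully enabled public access block."""
import boto3
from botocore.exceptions import ClientError

def buckets_without_public_access_block():
    s3 = boto3.client("s3")
    exposed = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            if not all(config.values()):
                exposed.append(name)   # block exists but is only partially enabled
        except ClientError as error:
            if error.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                exposed.append(name)   # no public access block configured at all
            else:
                raise
    return exposed
```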

5. Nation state threats to UK critical infrastructure 

Geopolitical tensions have increased targeting of critical national infrastructure (CNI). 

Why it matters:

Healthcare, energy, transportation and public services remain key targets due to their societal impact.

Actions to take:

  • Implement zero trust across operational technology 
  • Segment networks between IT and OT 
  • Improve visibility with 24/7 threat monitoring 
  • Apply NCSC Cyber Assessment Framework controls. 

6. Deepfake enabled fraud and CEO impersonation

Deepfake technologies are enabling highly sophisticated financial fraud. 

Why it matters:

Deepfakes undermine trust in human-to-human verification processes.

Actions to take: 

  • Introduce strict financial verification processes.
  • Train staff to spot manipulated audio and video.
  • Adopt secure communication channels for executive approvals. 

7. Zero-day exploitation of widely used platforms

Zero-day attacks are escalating in frequency and speed. 

Why it matters:

Complex estates with legacy systems are especially vulnerable.

Actions to take:

  • Prioritise patching for high-risk assets.
  • Monitor for exploitation evidence.
  • Implement virtual patching where possible.
  • Use threat intelligence feeds. 

8. IoT and OT vulnerabilities in connected environments

Manufacturers, utilities, healthcare providers and logistics operations increasingly rely on connected devices. 

Why it matters:

Compromised IoT devices can become pivot points into critical operational systems.

Actions to take:

  • Replace unsupported devices.
  • Apply network segmentation for OT.
  • Block inbound internet access to IoT.
  • Deploy device-level monitoring. 

9. Insider threats amplified by hybrid working

Hybrid and remote work models increase insider risk: 

  • The Ponemon Institute states that insider incidents account for over 25% of data breaches.
  • Misconfigurations, accidental data sharing and shadow IT remain serious concerns. 

Why it matters:

Accidental insider threats are far more common than malicious actors. 

Actions to take:

  • Enforce least privilege access.
  • Use behavioural analytics.
  • Implement secure file sharing and DLP.
  • Train staff on emerging threats.

10. API exploitation and automated attacks 

APIs now underpin modern digital services. 

Why it matters:

APIs expose data, identity and business logic if not securely managed.

Actions to take:

  • Authenticate and authorise every API.
  • Implement rate limiting (see the sketch after this list).
  • Continuously test API endpoints.
  • Apply zero trust principles to API gateways. 
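
To show what rate limiting means in practice, here is a minimal token-bucket sketch of the kind API gateways implement natively. The capacity and refill rate are illustrative; most organisations would configure this in their gateway rather than write it themselves.

```python
"""Minimal sketch: a token-bucket rate limiter for API requests."""
import time

class TokenBucket:
    def __init__(self, capacity: int, refill_per_second: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_second = refill_per_second
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.refill_per_second)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller would typically respond with HTTP 429

# Example: allow roughly 10 requests per second per client.
limiter = TokenBucket(capacity=10, refill_per_second=10)
```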

What has changed in the last year? 

  • Phishing is now AI-powered 
  • Ransomware involves triple extortion and data auctions 
  • Supply chain attacks now target trust models in AI systems 
  • Cloud attacks increasingly abuse identity, APIs and automation 
  • Deepfake fraud has moved from fringe to mainstream 
  • The threat landscape is faster, smarter and more financially motivated. 

An actionable cyber checklist: What UK organisations should do now 

These are the most impactful security actions UK organisations can take in the next 30 days to reduce exposure to cyber threats in 2026: 

Week 1: Strengthen identity and access 

  • Enforce MFA for all users 
  • Audit all admin and privileged accounts 
  • Enable conditional access across cloud platforms 
  • Remove shared accounts where possible 
  • Rotate any high-risk or stale credentials. 

Week 2: Reduce cloud and configuration risk 

  • Run a cloud misconfiguration scan (AWS, Azure, GCP) 
  • Apply baseline cloud landing zone guardrails 
  • Review API authentication and rate limiting 
  • Disable any unused cloud workloads or exposed endpoints 
  • Validate backup integrity and ensure offline copies exist. 

Week 3: Improve ransomware and supply chain resilience 

  • Conduct a ransomware tabletop exercise 
  • Review supplier risk for your top 10 critical vendors 
  • Update incident response playbooks 
  • Request Software Bills of Materials (SBOMs) where relevant 
  • Validate segmentation between IT and OT networks. 

Week 4: Prepare for AI-enabled and deepfake attacks 

  • Deliver an AI phishing simulation across the organisation 
  • Implement voice and video verification checks for senior leadership 
  • Update payment verification and financial approval processes 
  • Train staff to recognise deepfake and social engineering signs 
  • Review your organisation’s readiness against the NCSC Cyber Assessment Framework.

What your board needs to know in 2026 

  • Cyber threats now represent a material business risk, not just IT risk. 
  • AI increases threat volume and reduces detection time. 
  • Cloud identity and configuration security are top failure points. 
  • Regulatory pressure is rising under ICO expectations and NIS2/DORA impacts. 
  • Investment in governance, resilience and people is essential. 

How CACI can help

CACI helps organisations strengthen controls and capabilities through its Network Security and Enterprise Architecture services. Our cloud engineering and implementation services also ensure these controls are embedded from day one.

FAQs around cyber threats facing UK businesses in 2026

What are the biggest cyber threats to UK businesses in 2026?

The biggest threats include AI-powered phishing, ransomware, supply chain compromise, cloud misconfiguration, API exploitation and nation-state activity. These attacks are highly automated and increasingly difficult to detect.

Why are UK SMEs at high risk of cyber attacks?

SMEs often have fewer cyber resources, limited monitoring and weaker controls, making them easier targets for ransomware and phishing. Attackers know SMEs are more likely to pay ransoms or fall for social engineering.

How can UK organisations defend against ransomware?

Defence strategies include MFA everywhere, secure backups, endpoint protection, zero trust principles, patching and rehearsed incident response plans. Aligning cloud governance with best practice significantly reduces risk.

How does AI change cyber threats in 2026?

AI increases attack volume and accuracy. Threat actors use AI to generate phishing content, clone voices, create deepfakes and analyse vulnerabilities faster than before. This reduces detection time and increases breach likelihood.

What does the NCSC recommend for improving cyber resilience?

The NCSC recommends MFA, patching quickly, securing cloud identities, conducting supply chain checks, reviewing backups and following the Cyber Assessment Framework. Businesses should ensure governance, risk and controls are regularly tested.

How to strengthen your network security posture


Strengthening your network security posture is no longer a nice-to-have but a strategic necessity. It may sound time-intensive and lengthy, yet there are some immediate changes that can lead to quick wins. In this blog, we uncover four key steps IT leaders can take to strengthen their network security posture, along with immediate quick wins that can be achieved in doing so.

Four steps to strengthen your network security posture

Security is no longer optional. These four foundational actions will help you reduce risk and build resilience: 

1. Adopt zero trust principles

Zero trust means “never trust, always verify.” Every user and device inside or outside the network must be authenticated and authorised. This approach limits the impact of breaches and is now recommended by the NCSC and leading global providers.  

  • Implement strong authentication for all users and devices.  
  • Segment networks to limit lateral movement.  
  • Continuously monitor for unusual behaviour.  

2. Automate detection and response

Manual processes cannot keep pace with modern threats. Automation can reduce response times by up to 40%, demonstrating its ability to help defenders stay ahead. 

  • Use AI-driven tools for threat detection and alert triage.  
  • Automate patching, backup, and incident response workflows.
  • Regularly test and update automated playbooks.

3. Reduce operational load with managed services

With many IT teams stretched thin, managed network services allow organisations to focus on strategy while experts handle day-to-day operations, monitoring and compliance. 

  • Consider managed firewall, detection and response and vulnerability management services.  
  • Ensure providers offer transparent reporting and clear SLAs.

4. Secure hybrid work

With two-thirds of UK employees working remotely at least part-time, endpoint protection and secure remote access are essential.  

  • Enforce multi-factor authentication for all remote access.  
  • Protect endpoints with up-to-date security software and policies.
  • Educate staff on secure working practices. 

Quick wins: Immediate actions UK IT leaders should take 

Not every improvement requires a major investment or a long-term project. The following actions can quickly reduce risk and strengthen your security posture:  

Enable multi-factor authentication (MFA) 

Multi-factor authentication (MFA) is one of the most effective ways to prevent account compromise, blocking the majority of phishing and credential stuffing attacks.  

  • Enforce MFA for all users, not just administrators.  
  • Use app-based or hardware tokens for stronger protection. 
  • Regularly review and test MFA coverage.  

Read NCSC guidance on MFA  

Patch the basics consistently and quickly

Most breaches exploit known vulnerabilities. Even patching delays of a few days can be costly.

  • Maintain an up-to-date inventory of all assets, including cloud workloads and remote endpoints. 
  • Apply critical patches within 14 days, as recommended by the NCSC.  
  • Automate patch deployment and monitor for failures.

Back up critical data securely and test your restores

Ransomware is only effective if you cannot recover your data. Secure, tested backups are essential.  

  • Use immutable, offsite or cloud-based backups.  
  • Regularly test restores to ensure data integrity.  
  • Protect backup credentials with MFA and restrict access.

Review firewall rules and access controls

Firewall policies can become cluttered over time with unused or overly permissive rules, creating hidden vulnerabilities.  

  • Schedule regular firewall reviews to remove unused or risky rules.  
  • Align policies with current business needs.  
  • Use automated tools to analyse policies for overlaps and compliance gaps.   
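
As a simple, hedged illustration of automated policy analysis, the sketch below finds rules whose source range is already covered by a broader rule on the same port, one common form of overlap. The rule set is hypothetical; a real review would export rules directly from the firewall.

```python
"""Minimal sketch: detect firewall rules shadowed by broader rules on the same port."""
import ipaddress

rules = [  # hypothetical rule set: (rule_id, source_cidr, port)
    ("allow-office", "10.1.0.0/16", 443),
    ("allow-branch", "10.1.2.0/24", 443),   # shadowed by allow-office
    ("allow-vpn", "192.168.0.0/24", 22),
]

def shadowed_rules(rules):
    shadowed = []
    for rule_id, cidr, port in rules:
        network = ipaddress.ip_network(cidr)
        for other_id, other_cidr, other_port in rules:
            other = ipaddress.ip_network(other_cidr)
            if (other_id != rule_id and port == other_port
                    and network != other and network.subnet_of(other)):
                shadowed.append((rule_id, other_id))
    return shadowed

print(shadowed_rules(rules))  # [('allow-branch', 'allow-office')]
```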

Run a tabletop incident response exercise 

Plans are only effective if teams can execute them under pressure. Tabletop exercises simulate real-world incidents, allowing teams to rehearse roles and identify gaps.  

  • Involve both technical and business stakeholders.  
  • Use realistic scenarios tailored to your organisation.
  • Capture lessons learned and update your incident response plan.  

See NCSC’s guidance on incident response exercises 

How CACI can help enhance your network security

CACI has helped UK businesses protect their networks for decades. From network security to data centre solutions and IT consulting, our expertise delivers secure-by-design architectures, automation, and incident readiness for robust network security.  

Download our 2026 Network Security Survival Guide today to learn more about how your organisation can set its network environments up for success. 

How technology makes commercial real estate greener


The property sector is under increasing pressure to deliver on sustainability. Rising energy costs, stricter regulations and growing tenant expectations mean that greener buildings are no longer optional; they’re essential. Technology is at the heart of this transformation, helping owners and investors cut emissions, reduce costs and enhance asset value. Here’s how:

Smart building management systems

Modern building management systems (BMS) integrate heating, ventilation, air conditioning, lighting and power into one intelligent platform. These systems monitor and adjust operations in real time, responding to occupancy and external conditions. Studies show BMS can cut energy use by up to 30% through optimisation and predictive maintenance.

IoT sensors and data analytics

IoT sensors track energy consumption, occupancy and environmental conditions. Combined with analytics, this data helps identify inefficiencies and optimise performance. This supports ESG compliance and reduces waste.

Energy-efficient upgrades

  • LED lighting with smart controls: LEDs use up to 90% less energy than traditional bulbs.
  • AI-controlled HVAC: AI-driven systems can reduce HVAC energy use by 8–19%.
  • Renewable energy integration: Solar panels and heat pumps lower reliance on fossil fuels and cut carbon emissions.

Digital twin and simulation technology

Digital twins create a dynamic, data-driven replica of a building that mirrors real-world conditions in real time. This allows owners to test scenarios before committing to physical changes.

For example, you can simulate the impact of adding solar panels on energy consumption and carbon output, helping you forecast savings and validate ROI before installation.

Green building certifications

Tech-enabled buildings are better positioned for certifications like BREEAM, LEED and WELL, which validate sustainability practices and enhance asset value.

Automation and centralised IT

Automated workflows streamline maintenance and lease administration, reducing labour and energy costs. Centralised IT unifies disconnected systems, such as access control, HVAC and lighting for greater efficiency.

AI and machine learning

AI analyses large datasets to forecast energy demand and recommend retrofits. This enables smarter investment decisions and maximises ROI while reducing environmental impact.

Sustainable construction and circular economy

Sustainability starts with how buildings are designed and built. Digital tools enable low-carbon materials, modular construction and design for reuse, reducing embodied carbon and waste.

Optimising logistics is equally important. CACI’s work with major retailers shows that advanced route planning and transport management can cut supply chain emissions by up to 25%, helping construction projects lower costs and support circular economy goals.

Real-world impact

  • Smart buildings can reduce energy costs by up to 40% through integrated management systems.
  • Examples include The Edge in Amsterdam, which generates more energy than it consumes, and The Crystal in London, which achieved BREEAM Outstanding and LEED Platinum certifications.

Ready to make your buildings greener?

Technology is no longer just about efficiency; it’s about future-proofing your assets and meeting sustainability goals. At CACI, we help real estate leaders harness data, digital tools and smart systems to deliver measurable impact.

Get in touch today to explore how we can support your ESG strategy and make your portfolio greener, smarter and more valuable.

 

7 steps to strong cloud security


The demand for cloud-based offerings has surged following the uptake of hybrid working and evolving customer expectations and digital infrastructure. Businesses that fail to adapt run the risk of being left behind. Understanding the benefits to determine whether cloud adoption is right for you is therefore critical. 

In our previous blogs, we shared the key advantages of cloud adoption and challenges in cloud security. In our final blog of this series, we share integral steps to strengthen your organisation’s cloud security. 

As more businesses adopt cloud technology, primarily to support hybrid working, cybercriminals are focusing their tactics on exploiting vulnerable cloud environments. Over the last year, a report found that 80% of organisations experienced at least one cloud security breach.

This issue has been exacerbated by soaring global demand for tech talent. Globally, demand for cybersecurity professionals runs into the millions, far beyond the number of people currently working in the field. Hiring and training new talent at the pace this demand requires is simply not possible.
 
It’s a vulnerable time for enterprise organisations, and cloud security is the top priority for IT leaders. Here we consider the critical steps you can take now to make your business safer. 

1. Understand your shared responsibility model

Defining and establishing the split of security responsibilities between an organisation and its cloud service provider (CSP) is one of the first steps in creating a successful cloud security strategy. Taking this action will provide more precise direction for your teams and mean that your apps, security, network and compliance teams all have a say in your security approach. This helps to ensure that your security approach considers all angles.

2. Create a data governance framework

Once you’ve defined responsibilities, it’s time to set the rules. Establishing a clear data governance framework that defines who controls data assets and how data is used will provide a streamlined approach to managing and protecting information. Setting the rules is one thing; ensuring they’re carefully followed is another. Employing content control tools and role-based access controls to enforce this framework will help safeguard company data. Ensure your framework is built on a solid foundation by engaging your senior management early in your policy planning. With their input, influence and understanding of the importance of cloud security, you’ll be better equipped to ensure compliance across your business.

3. Opt to automate

In an increasingly hostile threat environment, in-house IT teams are under pressure to manage high numbers of security alerts. It doesn’t have to be this way though. Automating security processes such as cybersecurity monitoring, threat intelligence collection and vendor risk assessments means your team can spend less time analysing every potential threat, reducing admin errors and dedicating more time to innovation and growth activities. 

4. Assess and address your knowledge gaps

Your users can either provide a strong line of defence or open the door to cyber-attacks. Make sure it’s the former by equipping staff and stakeholders who access your cloud systems with the knowledge and tools they need to conduct safe practices, such as training on identifying malware and phishing emails. For more advanced users of your cloud systems, take the time to review capability and experience gaps and consider where upskilling or outsourcing is required to keep your cloud environments safe.

5. Consider adopting a Zero Trust model

Based on the principle of ‘Never Trust, Always Verify’, a Zero Trust approach removes the assumption of trust from the security architecture by requiring authentication for every action, user and device. Adopting a Zero Trust model means always assuming that there’s a breach and securing all access to systems using multi-factor authentication and least privilege. In addition to improving resilience and security posture, this approach can also benefit businesses by enhancing user experiences via Single Sign-On (SSO) enablement, allowing better collaboration between organisations and increased visibility of your user devices and services. However, not all organisations can accommodate a Zero Trust approach. Incompatibility with legacy systems, cost, disruption and vendor lock-in must be balanced with the security advantages of Zero Trust adoption.

6. Perform an in-depth cloud security assessment

Ultimately, the best way to bolster your cloud security is to perform a thorough cloud security audit. Having a clear view of your cloud environments, users, security capabilities and inadequacies will allow you to take the best course of action to protect your business. 

7. Bolster your defences

The most crucial principle of cloud security is that it’s an ongoing process and continuous monitoring is key to keeping your cloud secure. However, in an ever-evolving threat environment, IT and infosec professionals are under increasing pressure to stay ahead of cybercriminals’ sophisticated tactics. 

A robust threat monitoring solution can help ease this pressure and bolster your security defence. Threat monitoring works by continuously collecting, collating and evaluating security data from your network sensors, appliances and endpoint agents to identify patterns indicative of threats. Threat alerts are more accurate with threat monitoring analysing data alongside contextual factors such as IP addresses and URLs. Additionally, traditionally hard-to-detect threats such as unauthorised internal accounts can be identified. 

Businesses can employ myriad options for threat monitoring, from data protection platforms with threat monitoring capabilities to a dedicated threat monitoring solution. However, while implementing threat monitoring is a crucial and necessary step to securing your cloud environments, IT leaders must recognise that a robust security programme comprises a multi-layered approach utilising technology, tools, people and processes. 

Download our Cloud Security Assessment Checklist and discover proven strategies to strengthen your defences in our comprehensive guide.

How effective data foundations and consumer insights drive campaign performance in DTC healthcare and e-commerce


A competitive, complex consumer landscape

Competition has never been more intense in the dynamic and growing consumer health and wellbeing sector. 2025 has seen new market entrants like hair loss treatment company Hair + Me, weight loss services like Juniper and SheMed featuring prominently on social media feeds, and supermarket Morrisons, in partnership with Phlo, moving into the on-demand online healthcare space alongside existing high street giants Boots, Superdrug and Asda.

This new and intense competition also comes with a new reality: increasingly fragmented consumer behaviour that upends traditional marketing assumptions.

Younger age cohorts drive healthcare growth

Our Voice of the Nation (VOTN) survey examining consumer sentiment finds Gen Z and Millennials in the driving seat of the elective healthcare market. Weight-loss treatments like Mounjaro and Ozempic are expected to surge by 40% in 2025 due to these younger age cohorts.

Notably, Gen Z shows equal interest across genders, unlike older age groups where women dominate. Cosmetic treatments are also gaining traction, with well over 10% of Gen Z and female Millennials planning to pay for them, compared to less than 3% among Gen X and Baby Boomers.

While aesthetics is clearly playing a role, other deeper consumer motivations are also emerging.  Notably, survey respondents who consider health a top national issue are significantly more likely to self-fund treatments. Among Gen Z males in this group, 16.2% plan to pay for weight-loss treatments in 2025 — well above the average of 4.9%. And just as importantly, the VOTN data somewhat counterintuitively shows that demand for elective healthcare products and services in general spans both affluent and less affluent groups.

Age-related wellness and health products drive innovation

In short, our VOTN data reveals a complex blend of beauty, wellness, and proactive health management, with younger generations investing in elective healthcare to enhance both how they feel and how they look.

This trend is reflected in the innovation and increasingly digital activation seen in the fertility and female health space relevant to these age cohorts. Period care pioneer Daye is launching a new at-home hormone testing service for a host of biomarkers like reproductive hormones, thyroid function and Vitamin D. Male fertility company testhim, which provides consultations, testicular scans, sperm DNA and other diagnostic testing, is also launching specialist fertility supplement testhim M+ and a groundbreaking online monthly support group.

Complex, demanding consumers require sophisticated, multi-layered segmentation

So, with Gen Z and Millennials increasingly self-funding weight loss, cosmetic treatments and holistic wellness products and services of all kinds, DTC and e-commerce healthcare brands must truly rethink how they engage with this increasingly data-savvy, image-conscious audience, informing integrated campaigns that blend social commerce, influencer marketing, paid advertising, organic and direct marketing content. Our VOTN survey also found that nearly two-thirds of Gen Z consumers (63%) have purchased goods and services via a social media platform like TikTok Shop and Instagram, making this a crucial channel for healthcare businesses to understand and potentially utilise.

But to do that effectively in practice, DTC and e-commerce healthcare brands need more than just surface-level insights. They need robust, layered data foundations that help them target the right consumer with the right kind of message at the right time in the right place. Even with first-party consumer data, it’s a significant challenge. Without it, reaching existing or identifying potential customers is almost impossible for brands.

You can see an example of this in our VOTN survey, which showed that for weight loss treatments, there appears to be greater levels of demand both at the more affluent end of our Acorn segmentation spectrum *and* at the least affluent end, potentially for differing reasons.

This requires integrating geodemographic, behavioural, lifestyle and attitudinal data to move beyond ‘off-the-shelf’ consumer segments towards a deeper understanding of how likely consumers are to engage with specific healthcare products and services, and why – enabling brands to focus spend on the right customers, and to remove disinterested or low-value ones, in a market with such broad appeal.

It’s also only by taking this multi-layered data approach that healthcare brands can build strategic, data-driven campaigns that resonate on a genuinely personal level in the manner desired by younger generations. Critically, this means delivering on the perennial, somewhat paradoxical Gen Z demand for high levels of privacy alongside similarly high levels of personalisation in products and brand messaging.

Turn insights into activation for D2C and e-commerce health campaign success

But as we know, data, in isolation, holds limited value. Its real power is unleashed through activation – the transformation of insight into strategy. And in a world where consumer expectations are rising and attention spans are shrinking, the ability to deliver timely, relevant, and meaningful engagement is an outright competitive advantage. And it can only be achieved through a deep, data-driven understanding of people.

For D2C and e-commerce health brands, this understanding and successful activation requires them to:

  • Identify high-value customer segments for targeted acquisition and retention
  • Predict churn and retention patterns within subscription-based models
  • Inform campaign messaging with real-world consumer behaviours and motivations
  • Develop nuanced personas reflecting not just demographics, but attitudes, values, and lifestyle choices
  • Personalise content across relevant digital channels, from email to in-app experiences
  • Build lookalike audiences for acquisition campaigns on platforms like Meta and Google
  • Optimise digital spend by measuring performance and refining segmentation over time

This is where the transformative power of comprehensive datasets, such as CACI’s Ocean database, which offers over 700 variables at an individual and household level, comes in. Ocean includes everything from financial situation, media consumption and digital behaviours to lifestyle preferences like veganism and exercise to whether consumers have a smart watch or fitness band.

When combined with geodemographic tools like Acorn – segmenting over 1.6 million UK postcodes using more than 800 variables – and supported by bespoke data analysis, brands can unlock a truly multidimensional view of their audiences wherever they are.

This approach allows brands to move beyond generic targeting and into a space where campaigns are not only more relevant but also more respectful of consumer expectations – a win-win for younger cohorts who dislike intrusive and irrelevant brand messaging but demand personalisation nonetheless!

Data insight for a dynamic healthcare future

As healthcare consumers’ expectations evolve, and the consumer health and wellbeing market with them, so must the strategies brands use to engage them. Success for D2C and e-commerce healthcare brands doesn’t just hinge on understanding who consumers are today; it’s about being able to anticipate who they’re becoming as new healthcare technologies, products and devices become available. By identifying and engaging high-lifetime-value customers as early as possible, brands also have a greater chance of capturing markets as they evolve.

The effectiveness of multi-layered segmentation in improving marketing precision now – and as AI becomes more integrated – is well established. CACI’s ability to deliver on this today with our consumer data and bespoke strategic segmentation capabilities ensures brands are future-ready.

Data isn’t just a tool – it’s a strategic asset. Brands that invest in sophisticated segmentation and activation today will be best placed to drive sustainable growth tomorrow.

Speak to our healthcare consumer segmentation specialists today.

A New Face of Health – How Younger Consumers Are Reshaping Location Strategy


From weight loss medication and fertility support to cosmetic and hair loss treatments, the traditional elective healthcare landscape is being radically transformed.

Historically, elective healthcare might have been associated with older demographics or specific medical needs. However, recent insights from CACI’s Voice of the Nation (VOTN) consumer survey reveal a compelling shift: a surge in demand for elective treatments from wellness and image-conscious Gen Z and Millennial consumers.

This fundamental change has profound implications for healthcare providers and their location strategies to ensure they are precisely aligned with the evolving demographics of their target audiences and meet their notably high expectations around convenience, accessibility, and experience. This offers a huge challenge – and opportunity – for healthcare brands to adopt a more agile, data-informed approach to their physical presence.

A complex convergence of health, wellness and beauty consumption

Our Voice of the Nation (VOTN) survey reveals that weight-loss treatments like Mounjaro and Ozempic are projected to grow by 40% in 2025, with Millennials and Gen Z leading the charge. Gen Z shows equal interest among male and female respondents, unlike all other age cohorts, where women predominate.

And while 4.9% of respondents overall say they plan to pay for cosmetic treatments in 2025, this number rises above 10% for Gen Z respondents and female Millennials but remains below 3% for anyone older than 45.

It might be tempting to assume that this trend is purely about aesthetics, and there’s some truth in this. Among those planning to increase their beauty spending in 2025, 14.9% planned to pay for cosmetic treatment against 4.9% generally. Similar jumps were seen for hair loss treatment (10.2% versus 3.3% generally) and weight loss services (14.9% against 6.4% generally).

Yet other issues are also in play.

For Gen Z respondents who considered ‘Health’ as a top three issue facing the UK, there was also a distinct increase in planned treatments: 16.2% of Gen Z males concerned about health were planning weight loss treatments versus 9.7% in general, for example.

In addition, when we look at the Voice of the Nation sentiment data through our in-depth Acorn geodemographic segmentation, demand for all types of healthcare treatments spans both affluent and less affluent groups.

Brought together, the data shows a truly complex mix of motivations, from aesthetics and wellness to proactive health management, across and within different age cohorts. The clear underlying message is that Millennials and Gen Z, with their growing spending power, are seriously invested in elective healthcare treatments that make them feel and look better.

Why Traditional Site Selection Falls Short

Historically, location planning in healthcare has relied on broad demographic assumptions or legacy performance data. But in today’s market, that’s not enough; site selection can no longer be based on general market trends alone. A clinic with the wrong treatment offer, placed in the wrong area – too far from its target audience, or in a location with low footfall – can struggle to gain traction, regardless of the quality of care it offers.

But critically, it’s not just about what these consumers want; it’s also about where they are, something that has changed since Covid-19. Traditional assumptions about most people working from the office every day no longer hold. Our VOTN research found that people now spend an average of only 2.5 days in the office, with Gen Z spending 12.5% more time in the office than Gen X and Baby Boomers.

Younger generations are more likely to live in urban centres, commute via public transport, and expect services to fit seamlessly into their daily routines. This is reflected in our VOTN data which finds Gen Z less likely to have products delivered to home and far more likely to have purchases delivered to a pick-up/drop off point like a locker or local convenience store (39% versus 23% in general) or delivered to a convenient location like their office (27% versus 14% in general). 

And as we’ve already noted younger generations are also more likely to pay for treatments where the NHS is not offering what they need or on the timescale they want it.

What’s needed is a more granular, predictive approach to location choice: one that considers not just who your customers are, but how they move, spend, and engage with products and services.

A data-driven bespoke approach to Location Strategy

Healthcare treatment providers looking for physical locations that will help them match this complexity of demand and grow their business should be looking to modern location analytics that combine powerful human behavioural and geographical insights. Rather than taking an ‘off-the-shelf’ approach that can obscure what’s really going on with the new healthcare consumer, a bespoke approach allows you to:

  • Map journey-time decay to understand how far patients are willing to travel for different services
  • Define profile catchments using lifestyle segmentation like our CACI Acorn that can leverage over 700 economic, behavioural and social variables to identify over-represented consumer types in particular areas
  • Deeply understand who your ideal customers and their locations are through powerful location intelligence tools like InSite
  • Overlay footfall and spend data to assess the commercial viability of potential sites
  • Unlock ‘white space’ by using location analytics such as provided by our Location Dynamics to identify areas with unmet demand and minimal competition
  • Rank postcode sectors by indicators like Private Medical Insurance coverage or self-pay propensity

The future of healthcare location strategy

The rise of younger, self-directed healthcare consumers who value their physical and mental wellbeing is clearly not a passing trend – it’s a structural shift. To stay relevant, providers must meet these audiences where they are – both physically and emotionally, including as they ‘age’ into other services like fertility, hair loss treatments and joint replacements.

To truly capture this growing segment, providers must harness advanced data to pinpoint and predict high-potential areas where these new consumers live, work, and spend. This ensures that every new clinic, every re-evaluated existing site, is positioned for maximum impact, catering directly to the evolving needs and preferences of a generation redefining elective health. That means rethinking location strategy as a core part of business planning, not just an operational detail. With the right data and tools, healthcare brands can build a physical location footprint that’s not only efficient for today, but also future-proof.

To find out how we can help you find the healthcare consumers best suited to your services, get in touch with us.

The 9 biggest challenges in cloud security


The demand for cloud-based offerings and cloud adoption has accelerated, with the importance of flexibility and agility now being realised. Without adapting, businesses risk being left behind. What are the benefits, however, and how do you know if it’s the right solution for you? 

We shared the key advantages of cloud adoption in our previous blog. This time around, we identify the biggest challenges of cloud security. 

Cloud adoption has become increasingly important in recent years, with 64% of all enterprises now regarding cloud security as a pressing security discipline. Despite its integral role, more than half of all enterprises find securing cloud environments to be more complex than securing on-premises environments. 

As cybercriminals increasingly target cloud environments, the pressure is on for IT leaders to protect their businesses. Here, we explore the most pressing threats to cloud security you should take note of. 

Limited visibility

The traditionally used tools for gaining complete network visibility are ineffective for cloud environments as cloud-based resources are located outside the corporate network and run on infrastructure the company doesn’t own. Furthermore, most organisations lack a complete view of their cloud footprint. You can’t protect what you can’t see, so having a handle on the entirety of your cloud estate is crucial. 

Lack of cloud security architecture and strategy

The rush to migrate data and systems to the cloud meant that organisations were operational before thoroughly assessing and mitigating the new threats they’d been exposed to. The result is that robust security systems and strategies are not in place to protect infrastructure. 

Unclear accountability

Pre-cloud, security was firmly in the hands of security teams. In public and hybrid cloud settings, however, responsibility for cloud security is split between cloud service providers and users, with responsibility for security tasks differing depending on the cloud service model and provider. Without a standard shared responsibility model, addressing vulnerabilities effectively is challenging as businesses struggle to grapple with their responsibilities. This not only obfuscates incident response, but increases the likelihood of risks and misconfigurations. 

Misconfigured cloud services

Misconfiguration of cloud services can cause data to be publicly exposed, manipulated or even deleted. It occurs when a user or admin fails to set up a cloud platform’s security setting properly. For example, keeping default security and access management settings for sensitive data, giving unauthorised individuals access or leaving confidential data accessible without authorisation are all common misconfigurations. Human error is always a risk, but it can be easily mitigated with the right processes. 

Data loss

Data loss is one of the most complex risks to predict, so taking steps to protect against it is vital. The most common types of data loss are: 

  • Data alteration – when data is changed and cannot be reverted to the previous state. 
  • Storage outage – access to data is lost due to issues with your cloud service provider. 
  • Loss of authorisation – when information is inaccessible due to a lack of encryption keys or other credentials. 
  • Data deletion – data is accidentally or purposefully erased, and no backups are available to restore information. 

While regular back-ups will help avoid data loss, backing up large amounts of company data can be costly and complicated. Nonetheless, ransomware attacks swelled by 126% earlier this year, reiterating the necessity for businesses to conduct regular data backups.  

Malware

Malware can take many forms, including DoS (denial of service) attacks, hyperjacking, hypervisor infections and exploiting live migration. Left undetected, malware can rapidly spread through your system and open doors to even more serious threats. That’s why multiple security layers are required to protect your environment. 

Insider threats

While images of disgruntled employees may spring to mind, malicious intent is not the most common cause of insider threat security incidents. Worryingly, the frequency of insider-led incidents is on the rise. According to a report published this year, nearly half of the organisations surveyed noticed an increase in the frequency of their insider threats. Associated costs rose by 109% between 2018 and 2024, posing serious financial risks to affected organisations. 

Compliance concerns

While some industries are more heavily regulated than others, you’ll likely need to know where your data is stored, who has access to it, how it’s being processed and what you’re doing to protect it. This can become more complicated in the cloud. Furthermore, your cloud provider may be required to hold specific compliance credentials. 

Failure to follow the regulations can result in substantial legal, financial and reputational repercussions. Therefore, it’s critical to handle your regulatory requirements, ensure good governance is in place and keep your business compliant. 

API vulnerabilities

Cloud applications typically interact via APIs (application programming interfaces). However, insecure external APIs can provide a gateway, allowing threat actors to launch DoS attacks and code injections to access company data. 

In 2020, Gartner predicted API attacks would become the most frequent attack vector by 2022. With over half of all enterprises reporting an increase in direct attacks to compromise infrastructure as of 2025, this prediction has become a reality. Addressing API vulnerabilities will therefore be a chief priority for IT leaders in 2025 and beyond. 

Check out our comprehensive guide to cloud security for more insights on overcoming these challenges and safeguarding your business against evolving threats.

Cloud innovation trends: Why optimisation must come first


In the race to modernise, many businesses make a critical mistake: innovating before optimising their cloud infrastructure. It’s an easy trap to fall into – new technologies promise speed, agility and competitive advantage. However, without a solid foundation, those promises can quickly unravel.

So, what difference will optimisation make to cloud innovation? How do complex hybrid environments affect optimisation and what are the repercussions of innovating too soon?

Why optimisation should come first

Cloud optimisation isn’t just a technical exercise – it’s a strategic imperative. Before you invest in AI-driven tools, advanced analytics or multi-cloud deployments, you need to ensure your existing environment is efficient, secure and cost-effective. Otherwise, innovation becomes a gamble rather than a growth driver.

How the complexity of hybrid environments affects optimisation

Modern IT landscapes are rarely simple. Most organisations operate in hybrid environments, combining:

  • Cloud-native workloads
  • Semi-native applications
  • Containerised services
  • Legacy systems migrated via IaaS.

This mix introduces complexity that can quietly erode ROI and performance. Without optimisation, you risk inefficiencies that undermine every future initiative.

Common pitfalls of innovating too soon

When businesses rush to innovate without first optimising, they often encounter:

Duplicated workloads

Hybrid setups frequently lead to duplication of environments or services, especially when containerised and legacy systems overlap with cloud-native tools. This consumes bandwidth and burdens IT and DevOps teams with managing multiple versions of the same workload.

Latency issues

Poor workload distribution across cloud environments increases latency, slowing response times and masking compliance or security issues. For customer-facing applications, this can directly impact user experience and brand reputation.

Security gaps

Unoptimised containerised and legacy workloads are vulnerable to governance and compliance risks. Differences in data storage and flow between environments complicate tracking, while unresolved legacy issues can carry over post-migration.

Mounting costs

With up to 30% of cloud spend wasted, inefficiencies inflate monitoring and security costs, draining budgets that could fund innovation.
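To make that waste tangible, the hypothetical snippet below uses the AWS SDK for Python (boto3) to flag unattached storage volumes, one common source of silent spend. The region is an illustrative assumption, and equivalent checks exist for other cloud providers.

```python
# Minimal sketch: flag unattached EBS volumes, a common source of silent cloud spend.
# Assumes boto3 is installed and AWS credentials are configured; the region is illustrative.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

unattached = []
paginator = ec2.get_paginator("describe_volumes")
for page in paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}]):
    unattached.extend(page["Volumes"])  # "available" means the volume is not attached to anything

total_gib = sum(volume["Size"] for volume in unattached)
print(f"{len(unattached)} unattached volumes totalling {total_gib} GiB of billable, unused storage")
```

Feeding simple checks like this into a regular review cycle is one way to turn auditing into a repeatable habit rather than a one-off exercise.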

Why this matters now

Cloud strategies are under pressure to deliver more – faster, cheaper and greener. Without optimisation, businesses risk inefficiency, higher costs and vulnerabilities that stall progress. In an industry where every second counts, building on shaky ground isn’t just risky, it’s expensive.

How to get started

Before chasing the next big trend in cloud innovation, take time to:

  • Audit your current architecture: Maintain visibility by understanding what’s running, where and why.
  • Identify duplicated workloads and inefficiencies: Determine which services or resources are draining budgets.
  • Align resources with business priorities: Ensure any spending on cloud innovation drives value for the business.
  • Implement governance and security best practices: Establishing best practices early on will ensure that innovation is scaled effectively.

This foundation ensures innovation is sustainable, not just a short-term fix.

The CACI approach: Building a cloud that enables innovation

Ready to build a cloud foundation that enables innovation?

Don’t leave your cloud strategy to chance. Our specialist cloud architects and optimisation experts have helped leading organisations modernise, streamline and unlock innovation without compromise. Contact us today to start your cloud optimisation journey.

Is your attitudinal segmentation delivering the value you need?


As attitudinal segmentations are usually based on surveys of a smaller sub-group, rather than on data that can easily be applied to the customers on your database, bridging an attitudinal segmentation onto your customer base is rarely straightforward. However, it is a great way to provide a consistent customer experience.

So, what is attitudinal segmentation and what considerations should an organisation have when it comes to their approach for bridging an attitudinal segmentation?

What is attitudinal segmentation & how to bridge an attitudinal segmentation

Attitudinal segmentations are typically created using data from quantitative surveys. They can be a powerful tool for delivering rich insights into customer and prospect mindsets and provide a valuable framework for organisations to engage customers effectively through an in-depth understanding of their needs, attitudes and motivations.

Being able to treat customers consistently throughout the marketing funnel helps to establish a relationship with them and deliver resonating messages that will drive increased engagement. Once someone becomes a customer, they will expect to see the same messages that originally struck a chord with them reflected and developed in their ongoing journey with you.

The economic and social disruption since the pandemic has permanently changed consumers and their expectations of brands, so ensuring your online messaging aligns with these changes is increasingly important. We consistently see organisations that are personalising messaging for their customers increasing their market share, net promoter scores, return on investment and profitability. With this in mind, being able to make your attitudinal segmentation actionable on your database should be a key part of your customer engagement strategy.

Key questions to address the challenges of bridging an attitudinal segmentation onto your customer base

There are no two ways about it – data is key to tackling this challenge and making it actionable. To achieve this, you should ask the following five questions to get started:

  • Who created the segments, and where? Were the segments created by your organisation or by a media/research partner? This determines whether you can get to the raw data and the level of granularity you can obtain.
  • What data is there? Do you have access to responder-level data, tables by segment or Pen Portraits? The data you can reach will determine the method of bridging that can be used.
  • Were questions only posed to your customer base or to the wider population? What types of questions were asked and were they personal to the organisation or more generalised? This can impact the resulting solution.
  • Are there any behavioural traits reported within the data that were part of the same survey? Wider data beyond pure attitudes can be helpful to model this back to the database.
  • Were any demographic questions asked or was postcode captured? This can help the process of creating the link between segments and customer base.

While bridging an attitudinal segmentation can be challenging, these questions will help identify how simple or complex the solution will be.

Key techniques for bridging attitudinal segmentation

Depending on the granularity of the data your organisation has access to, the following techniques can be leveraged:

  • Responder-level data: As this is the most granular form of data, it produces the most accurate results. Techniques here include modelling each segment using a mix of the responder data and CACI’s own data, scoring the model against the customer database and then validating it against the responder panel (a simplified sketch follows this list).
  • Tables by segment: We can compare each customer’s results to the segment averages based on a combination of multiple data points. Validation, through profiling and sense-checking the segment distribution, is key.
  • Pen Portraits: Here we would use a rules-based approach to recreate segments from high-level views of each segment, blending whatever information is available to bridge the data. As before, the final step of validation is key to ensuring the solution’s accuracy.
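As a simplified illustration of the responder-level approach above, the sketch below trains a model on panel data and scores it against a customer base. The file names, proxy variables and model choice are purely illustrative assumptions rather than CACI’s actual methodology.

```python
# Minimal sketch: model segments from responder-level survey data, then score the customer base.
# File names, proxy variables and the model choice are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

responders = pd.read_csv("survey_responders.csv")  # panel data including an assigned 'segment' column
customers = pd.read_csv("customer_base.csv")       # same proxy variables, no segment yet

proxy_vars = ["age_band", "household_income_band", "urbanicity", "tenure_years"]

X_train = pd.get_dummies(responders[proxy_vars])
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, responders["segment"])

X_customers = pd.get_dummies(customers[proxy_vars]).reindex(columns=X_train.columns, fill_value=0)
customers["predicted_segment"] = model.predict(X_customers)

# Validate by comparing the predicted distribution with the panel's segment distribution.
print(customers["predicted_segment"].value_counts(normalize=True))
```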

If raw data is inaccessible or unavailable, the following alternative methods can support the process:

  • Adding golden questions to market panels: This will provide more demographic and behaviour traits which support the bridging process.
  • Surveying the whole customer base with golden questions: Responses can often be skewed to particular segments, however, and some consumers may be more inclined to answer than others.

Considerations at the start of an attitudinal segmentation journey

Including key customer traits

When beginning an attitudinal segmentation, our first recommended consideration would be to include some key customer traits. Including additional questions such as demographic markers (postcode, gender and age band) will support segmentation mapping on to the database.

Cross-team engagement

Cross-team engagement will be invaluable to ensure the segmentation meets goals and drives value. This will help flesh out what the segmentation will be used for now and in the future, as well as gauging what you need from the segmentation and building it accordingly. It is also important for securing buy-in as early as possible, so that teams are engaged when the solution is rolled out. 

Backing segmentations with research

Another solution would be to build the segments first and then use research to enhance them with attitudinal values. This approach can work well, with one of its benefits being that focus groups can be run to bring the segments to life, rather than using attitudes to drive the segmentation itself. 

Ultimately, it is about finding the right balance that works for your organisation based on wants and needs. Attitudinal segmentations can bring excellent insights but are limited in their applications across a database. Fundamentally, it is a process of ensuring that through engaging the whole organisation, your solution is optimised to meet strategic aims.

How CACI can help

CACI is in a unique position, with a UK-wide dataset on all adults encompassing over 800 variables that we can use to profile and create proxy variables to support a successful bridging exercise. We solve the challenges associated with bridging attitudinal segmentations for leading organisations many times each year. 

To learn more about getting the most out of your segmentation and how CACI can support you through this journey, get in touch and we can discuss your challenges in more detail.

The top 6 business benefits of cloud adoption


Cloud adoption is no longer seen merely as a means of storage, but as a foundation for intelligent business capabilities. Businesses that have adopted the cloud are able to reap benefits far beyond cost savings, enhancing operational flexibility, enabling faster disaster recovery and much more. In the first blog of our cloud security series, we explore the key advantages of cloud adoption. 

Flexibility

Cloud infrastructure is the key to operational agility, allowing you to scale up or down to suit your bandwidth needs. The pay-as-you-go model offered by most cloud service providers (CSPs) also means that you pay for usage rather than a set monthly fee, making IT spending a more manageable operational expense. The ability to scale resources according to demand also ensures performance remains optimal during peak times and eliminates waste during quieter periods. 

Reduced cost

Kind to your cash flow, cloud computing cuts out high upfront hardware costs. The availability of the aforementioned pay-as-you-go models can significantly reduce spend, not to mention the savings from reduced resources, lower energy consumption and fewer delays.  

Disaster recovery

From natural disasters to power outages and software bugs, if your data is backed up in the cloud, it is at a reduced risk from system failure as the servers are typically located far from your office locations. You can recover data from anywhere to minimise downtime by logging into your cloud provider’s storage portal over the internet. 

Accessibility

We’ve all heard that the office is dead. Workers want the ability to work anytime, anywhere. With cloud (and an internet connection), they can. The cloud enables workforces to be distributed through secure access to data and applications from any location, which is critical in today’s hybrid working world. 

Greater collaboration

Cloud infrastructure makes collaboration a simple process, changing the parameters of how and where teams can work. The cloud can drastically improve workplace productivity, from online video calls to sharing files and co-authoring documents in real-time. It offers a centralised, secure and real-time working environment that bolsters communication and helps streamline workflows. These cloud-native applications are designed to make our lives more efficient through greater collaboration.  

Strategic value

Ultimately, businesses that have adopted the cloud typically experience greater cost efficiencies, faster speed to market and enhanced service levels. Adopting the cloud not only reimagines business models and builds resilience but also enables organisations to be agile and innovative. For example, adopting DevOps methodologies can be an essential element for businesses looking to get ahead of their competitors. 

But what about security? Earlier this year, a reported 61% of organisations felt security and compliance were their primary barriers to cloud adoption. Rushed adoption and the resulting lacklustre security have only intensified these concerns as cybercriminals increasingly target cloud environments. 

Download our comprehensive guide to cloud security and start securing your cloud today.

Why Hybrid Cloud Infrastructure is Here to Stay


Hybrid cloud isn’t just a transitional phase – it’s the reality for most businesses. While the promise of cloud-native infrastructure is appealing, the complexity of legacy systems, on-prem dependencies and non-cloud-native workloads means hybrid cloud infrastructure is often the most feasible and flexible option. However, it doesn’t come without its challenges.

So, what does your business need to know to future-proof your hybrid cloud infrastructure? How can the complexities of a hybrid technology stack be navigated with the help of a trusted data partner?

Hybrid cloud isn’t going anywhere (and why that’s okay)

Most businesses aren’t ready (or suited) for full cloud-native infrastructure. This is why the flexibility of hybrid cloud infrastructure, especially for workloads that perform better outside of cloud-native environments, can be especially beneficial.

Beyond flexibility, some of the compelling reasons to retain hybrid setups include:

  • Feasibility of full migration
  • Performance of certain workloads
  • Configurability of services.

In essence, hybrid isn’t a compromise; it can be a strategic advantage. Many businesses find that hybrid infrastructure gives them the best of both worlds: the scalability of cloud with the control and compliance of on-prem. When done intentionally, hybrid can reduce costs and improve efficiency.

Addressing the “lift and shift trap” & hidden complexity

Despite the promise of hybrid cloud infrastructure, the “lift and shift” concept and other hidden complexities should not be ignored. Amidst the rush to move on-prem workloads to the cloud without rearchitecting them, “lift and shift” often replicates inefficiencies, leading to higher infrastructure costs without the expected savings in maintenance or total cost of ownership (TCO).

Instead of reducing costs, businesses may find themselves paying premiums for cloud infrastructure while still managing the same maintenance overhead. Without a strategic approach, cloud migration can become a costly exercise in replication.

Furthermore, maintaining a hybrid stack introduces networking and security challenges. Data must pass through multiple domains, increasing latency, management overhead and the risk of data loss. Hybrid environments also often require more complex connectivity and governance, which can strain IT resources and reduce security posture.

Making hybrid cloud infrastructure work for innovation & transformation

Intentionality is key in the realm of innovation and transformation within hybrid cloud infrastructure. Hybrid may be here to stay, but it should be a strategic and practical choice for businesses, not a default. Businesses must assess which workloads belong where, understand the trade-offs and build a roadmap that balances performance, cost and security. With the right strategy, hybrid can deliver the flexibility, performance and cost-efficiency needed to support innovation and transformation.

The CACI Approach

With deep expertise across on-prem, cloud-hosted and cloud-native environments, CACI brings clarity to complexity, helping clients navigate and make intentional decisions about their hybrid cloud infrastructure. From rearchitecting legacy workloads and systems to optimising cloud-native deployments and scaling new digital services, we work with businesses to build hybrid strategies that unlock innovation, reduce TCO and accelerate transformation.

Whether you’re modernising infrastructure, improving security posture or enabling new digital services, CACI ensures your hybrid environment is not just functional and maintained, but optimised for the future.


With the right partner, hybrid doesn’t have to be complex – it can be your competitive edge. Contact us today to find out more.

How CACI helped Merry Hill assess the benefits of an M&S refurbishment


Merry Hill is one of the largest regional malls in the UK, encompassing over 200 shops, including major flagships Primark, M&S and Next. Sovereign Centros from CBRE were appointed asset managers of the former Intu asset in 2022 and have since expanded the retail, F&B and leisure offering, with recent high-profile openings including Hollywood Bowl and national debuts for Harvey Norman and XF Gym.

When Merry Hill chose to invest in renovating the M&S flagship store, they needed to quantify the impact changes would have on performance. This required a robust simulation of the future turnover and resulting footfall. In this blog, we uncover the steps that CACI took to help Merry Hill understand the impact of refurbishing M&S and gain investors’ approval to execute it.

How CACI evidenced outcomes of refurbishing Merry Hill’s M&S

CACI compiled a report covering an overview of M&S’ current performance, the impact of a refurbishment on the retailer’s turnover and the cross-shopping potential it could bring across Merry Hill. The report also considered factors such as benchmark centre sales where M&S had already been upgraded, annual trips to Merry Hill should the refurbishment not take place, and potential customer loss to Bullring & Grand Central mall where a new M&S was due to open.

The data sources included in CACI’s report were:

  • Transactional Spend Data: Derived from real-world debit card spend data from multiple sources, Transactional Spend Data is a fully consented view of spending patterns. It offers granularity into how different groups interact and how customers engage through an analysis of spend by product category.
  • Acorn: CACI’s consumer segmentation model combines geography with a variety of demographics and lifestyle data sources, grouping the entire population into 6 Categories, 18 Groups and 62 Types. It supplies insights into the role that demographics plays in impacting the performance of a location and helps identify key users of a site.
  • Location Dynamics: CACI’s machine learning tool predicts the retail, grocery and leisure catchments of over 6,000 destinations in the UK. It considers underlying population and spend, competitive landscapes and accessibility to each destination to model overlapping catchments. In this context, Location Dynamics was used to predict the centre’s performance, and overlap with Birmingham city centre, allowing for a comparison to actual sales to understand where and how the centre could grow turnover.
  • Brand Dimensions: CACI’s benchmarking tool tracks the performance of 300 major brands over time. In this instance, it examined M&S spend performance nationally and at benchmarked locations.

What value would refurbishing Merry Hill’s M&S bring?

With M&S having been at Merry Hill for three decades, investing in a refurbishment would solidify its continued commitment to the centre.

Increase in average spend, dwell time & turnover

CACI uncovered that benchmark centres with a refurbished M&S store have seen average spend per head increase by 2.2%, which could help generate a substantial uplift in turnover at Merry Hill. With M&S accounting for 11% of centre floorspace at Merry Hill, improving its appearance could lift the ambience of the rest of the centre and contribute to an uplift in dwell time, retail spend and catering for the wider centre. Refurbishing Merry Hill’s M&S would also accelerate turnover at both the store and across the centre, as refurbishment is cited as a key factor for increasing sales. 

Appealing to younger, more affluent demographic

Our research has shown that refurbished stores tend to attract younger, more affluent shoppers. While Merry Hill’s diverse shopper profile of Executive Wealth, Mature Money and Steady Neighbourhoods Acorn groups is well aligned to key shoppers for M&S, these key groups have all underperformed versus catchment expectation. A refurbished M&S would appeal to these underperforming visitors. 

A reported 82% of M&S shoppers also go on to spend in other stores at Merry Hill. Therefore, the new footfall that a refurbished M&S would attract would benefit other tenants in the centre.

Sales growth from new & existing shoppers

Within this project, we were able to quantify the number of new Merry Hill visitors that would be generated as a result of the refurbished M&S, factoring in their potential spend in M&S and their spill-over expenditure across the wider centre. 

Graeme Jones, Executive Director at Sovereign Centros from CBRE: “M&S has been a big part of Merry Hill for several decades, so any decision about their future is one that needed to be made with real consideration of the potential impact on the destination. When we decided that we wanted them to introduce their latest shop fit, while consolidating from two units into one to create new opportunities, we started to create a proposal for M&S that would make the best possible case for a significant investment commitment. The data and insight from CACI was a crucial element of that business case, emphasising the rationale from a visitor, brand, and landlord perspective. It helped achieve a positive outcome for all parties, and the new M&S store is already beating commercial targets, and has had a big impact on Merry Hill and its visitor numbers.”

Ellie Brettell, Senior Property Consultant at CACI: “We’re increasingly being asked to support decisions like this one, where significant investment is involved and multiple parties need reassurance that the right choice is being made. Our objective, data-driven approach helps provide that clarity. Our contribution to this fantastic deal for Merry Hill was possible because of our expertise working for brands and owners of places – we understand the goals and potential impacts on both sides and can therefore create a report that rationalises a decision for all parties. Our evidence base made it clear that this deal would create positive outcomes for everyone involved, so naturally we’re proud that our work has helped to deliver such tangible success.”

How CACI can help

The insights provided through CACI’s report instilled both internal and external stakeholders with the necessary confidence to make significant investments in the refurbished M&S. To learn more about our products and data available from key partners to generate a single view of the UK property market, contact us today.

Why Security and Compliance Must Be Built into Your Cloud Strategy from Day One


Cloud computing continues to be the engine of digital transformation for organisations across the UK. It enables agility, scalability and innovation, but it also introduces new risks. As cloud adoption accelerates, many IT leaders are discovering that overlooking security and compliance early in the journey can have serious consequences. 

For IT Directors, Digital Transformation Leads, Heads of Innovation and CTOs, embedding security and compliance from the outset is no longer a technical preference – it’s a strategic necessity. 

Cloud security & compliance: More than just technical checkboxes

Security and compliance are often treated as items to be ticked off once workloads are live, but this reactive approach can leave organisations exposed. From GDPR violations to data breaches and operational downtime, the risks of neglecting these areas are significant. 

Regulatory frameworks are becoming more complex and digital sovereignty is increasingly under scrutiny. If sensitive data is stored in the wrong region or accessed without proper controls, the fallout can be severe – both financially and reputationally. Security and compliance must be considered as foundational elements of cloud architecture, not optional extras. 

How cloud security & compliance gets overlooked in the rush to innovate

In many cases, cloud security failures aren’t the result of negligence – they’re the by-product of speed. Teams move quickly to deploy new services, often bypassing governance in favour of agility. This can lead to misconfigured resources, overly permissive access controls and a lack of visibility into where data resides and who can access it. 

Shadow IT is another common issue. When departments provision their own cloud tools without central oversight, it becomes difficult to enforce consistent security policies. Over time, this decentralised approach creates a fragmented environment that’s hard to monitor and even harder to secure. 

Architecting for security from the start

A secure cloud environment begins with a well-defined architecture. At CACI, we use frameworks like the AWS Well-Architected Framework and Microsoft’s Cloud Adoption Framework to guide organisations in building resilient, compliant cloud foundations. These frameworks are informed by thousands of real-world deployments and help define what “good” looks like in cloud security. 

Whether migrating legacy workloads, building cloud-native applications or operating in a hybrid model, the architecture must reflect the unique risks and requirements of each scenario. Security isn’t one-size-fits-all: it must be tailored to the workload, the data and the business context. 

Shift left: Embedding security into the development lifecycle

One of the most effective ways to reduce risk is to integrate security early in the development process – a practice known as “shifting left.” By embedding security into CI/CD pipelines, teams can identify vulnerabilities before workloads reach production, reducing rework and accelerating delivery. 

This proactive approach ensures that workloads are secure by design, not just secure by default. It also fosters a culture of shared responsibility, where developers, architects and security teams collaborate from the beginning rather than retrofitting controls later.
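As a hedged illustration of shifting left, a check like the one below could run inside a CI/CD pipeline and fail the build before an overly permissive rule ever reaches production. The rule structure is a hypothetical stand-in for whatever your pipeline actually emits, such as parsed infrastructure-as-code plan output.

```python
# Minimal sketch: a CI-stage check that blocks overly permissive security-group rules.
# The 'proposed_rules' structure is a hypothetical stand-in for parsed plan output.
SENSITIVE_PORTS = {22, 3389}  # SSH and RDP

proposed_rules = [
    {"name": "web-ingress", "port": 443, "cidr": "0.0.0.0/0"},
    {"name": "admin-ssh", "port": 22, "cidr": "0.0.0.0/0"},  # should be caught before production
]

def violations(rules):
    """Return rules that expose sensitive ports to the whole internet."""
    return [r for r in rules if r["port"] in SENSITIVE_PORTS and r["cidr"] == "0.0.0.0/0"]

def test_no_public_admin_access():
    assert violations(proposed_rules) == [], "Admin ports must not be open to 0.0.0.0/0"

if __name__ == "__main__":
    for rule in violations(proposed_rules):
        print(f"Blocked: {rule['name']} exposes port {rule['port']} to {rule['cidr']}")
```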

Defence in depth & limiting blast radius

Modern cloud threats require layered protection. Defence in depth introduces multiple safeguards across the environment, so if one control fails, others remain intact. This approach is particularly important in multi-cloud or hybrid environments, where complexity can increase exposure. 

Equally critical is the concept of limiting blast radius, which ensures that if one asset is compromised, it doesn’t jeopardise the entire environment. Segmenting workloads, applying fine-grained access controls and enforcing least privilege principles all help contain threats and reduce lateral movement. 

Even small missteps like sharing credentials or resetting machines without proper controls can introduce vulnerabilities. Architectural discipline is key to maintaining a secure posture. 

Landing Zone Accelerators: Secure foundations at speed

For organisations looking to move quickly without compromising security, Landing Zone Accelerators (LZAs) offer a fast-track to secure cloud environments. These pre-configured environments provide guardrails, segmentation and automated policy enforcement from day one. 

Rather than granting broad permissions to “just get things working,” LZAs encourage incremental, secure buildouts that maintain architectural integrity. They help teams avoid the temptation to open everything up and instead focus on building with security embedded throughout. 

Cloud security & compliance are continuous disciplines

Security and compliance aren’t one-time tasks – they’re ongoing disciplines. Cloud environments are dynamic, with new workloads, users and integrations added regularly. Each change introduces potential risk, which is why continuous monitoring, automated patching and regular reviews are essential. 

Tools like AWS Security Hub, GuardDuty and Inspector can help maintain visibility and enforce policies across the workload lifecycle. However, tools alone aren’t enough: organisations need a strategy that combines automation with governance and cultural alignment.
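To make the monitoring point concrete, a small scheduled job along these lines could pull active, high-severity findings from AWS Security Hub for triage. It is a sketch only, assuming boto3, suitable permissions and Security Hub enabled; other platforms offer equivalent APIs.

```python
# Minimal sketch: list active critical and high findings from AWS Security Hub.
# Assumes boto3 is installed, credentials are configured and Security Hub is enabled; the region is illustrative.
import boto3

securityhub = boto3.client("securityhub", region_name="eu-west-2")

filters = {
    "SeverityLabel": [
        {"Value": "CRITICAL", "Comparison": "EQUALS"},
        {"Value": "HIGH", "Comparison": "EQUALS"},
    ],
    "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
}

findings = securityhub.get_findings(Filters=filters, MaxResults=50)["Findings"]
for finding in findings:
    print(f"{finding['Severity']['Label']:8} {finding['Title']}")
```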

The CACI approach: Secure by design, resilient by default

At CACI, we help organisations build secure, scalable cloud environments that support long-term growth. Our approach is grounded in architectural best practices, automation and real-world experience. We start by understanding your current environment, identifying risks and designing frameworks that embed security and compliance from the outset. 

We don’t just implement tools; we build strategies. From governance frameworks to workload segmentation and continuous optimisation, we provide the support needed to stay secure, compliant and resilient in a fast-moving digital landscape. 

Want to explore how your organisation can build a secure cloud foundation that enables innovation? 
Speak to our cloud architecture specialists today. 

Crafting a Network Automation strategy aligned with C‑Suite goals


In the first blog of this two-part series, we explored the business impact of network automation and how to build a compelling case for investment. In this follow-up, we focus on practical strategies to keep the C‑suite engaged and the common mistakes to avoid when shaping your automation roadmap.

How to keep C-Suite interested

Long-term network automation strategies will only be successful if the C-suite has consistent buy-in on its implementation and maintenance. This can be achieved through:   

  • Providing progress updates: Sharing network automation progress updates with C-suite staff will help quantify its impact on the business and keep momentum high in terms of maintaining it. 
  • Highlighting ROI for the business: Cost reductions, increased capacity or resources and overall performance are all of high interest to C-suite staff. Ensuring the C-suite is aware of how network automation affects these will be critical. 
  • Demonstrating alignment with the business’ strategic goals: Highlighting the ways in which network automation consistently aligns with the business’ strategic goals will help C-suite staff visualise the long-term business outcomes. 
  • Adapting to changes: C-suite members’ business priorities are likely to change over time. Remaining flexible and willing to re-align to changing priorities as needed will ensure long-term success of network automation within the business.
  • Adhering to Environmental, Social and Governance (ESG) priorities: Despite the technical nature of network automation, there is increasing pressure on C-suite members to ensure the wider organisation drives energy efficiencies, leverages sustainable hardware, optimises access and aligns to governance standards.  
  • Futureproofing via AI: For C-suite members, AI is more than just embracing technology and maintaining a competitive advantage. AI-readiness means meeting customers’ evolving expectations, navigating operational complexities with ease and automating at scale. 

It is often the case that organisations’ focus on network automation, while well intended, results in them biting off more than they can chew rather than tackling more tactical, low-hanging fruit. Although that tactical work has an immediate impact, it can be less visible to senior executives. In general, network automation should initially target two key areas for immediate impact:  

  1. Improve the consistency of network deployment  
  2. Reduce noise within network operations.  

6 common mistakes to avoid when developing a network automation strategy

Some of the common mistakes we see that work against these two key aims include:

Trying to do too much too soon 

The key to winning over detractors with any automation is incremental consistency over widespread adoption. We often find that small, tactical, lower-level automations with well-scoped outcomes for low-hanging fruit can dramatically improve the overall consistency of deployment for that element and kickstart the incremental flywheel of trust. This is because lower-level engineers and operations staff see the immediate benefit of automation and begin to organically adopt these approaches within other higher-value, business-impacting tasks. 

Successfully adopted and maintained automation efforts nearly always look like bottom-up, grassroots endeavours: buy-in is earned through adoption and proven time-efficiency or consistency outcomes, recognised by the engineering and operations staff closest to the network, who then advocate for the approach to their peers and on to the wider organisation. Quantifiable results that prove IT’s ability to deliver are key to achieving grassroots adoption which flows up the organisational hierarchy, rather than trying to mandate this as a top-down approach. Human psychology is as big a factor in network automation’s success as technical prowess, given the personal resistance many engineers will have to automation as something that could affect their wellbeing or circumstances.  

Focusing on the wrong use cases (selection bias)

Use cases which resonate with the business context faced by your organisation are pivotal in creating network automations that are immediately impactful and reap actual business rewards. Executive-led automation efforts can focus too intently on senior IT leaders’ specific issues, which may be perceived as higher impact but are often more niche and smaller in scale than the more commodity, but far more widespread, issues seen by engineering and deployment teams.   

Network automation should focus on the daily toil rather than the aspirational state. For example, more dividends will be yielded by automating a firewall rule request process which several of your engineers unknowingly gatekeep as a bottleneck to your application development and implementation projects than by, say, automating network configuration backups, which will likely already be catered for by a disaster recovery process, no matter how human-intensive that may be.   
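As a purely illustrative example of tackling that daily toil, a first increment might be a pre-check that validates firewall rule requests against policy before an engineer ever touches the ticket. The request format and policy below are invented for illustration, not a real system.

```python
# Minimal sketch: automated pre-validation of a firewall rule request.
# The request format and policy are hypothetical illustrations, not a real API.
import ipaddress

BLOCKED_PORTS = {23, 445}                              # e.g. telnet and SMB are never allowed
INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8")]   # illustrative internal ranges

def validate_request(request):
    """Return a list of policy problems; an empty list means the request can proceed."""
    problems = []
    if request["dest_port"] in BLOCKED_PORTS:
        problems.append(f"Port {request['dest_port']} is blocked by policy")
    source = ipaddress.ip_address(request["source_ip"])
    if not any(source in net for net in INTERNAL_NETS) and request["dest_port"] != 443:
        problems.append("External sources may only reach port 443")
    return problems

print(validate_request({"source_ip": "203.0.113.10", "dest_port": 445}))
```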

Tool-led strategy adoption

Network automation is a complex area of abstractions and principles built atop chains of other abstractions or fundamentals. For this reason, it can be tempting to lean on the lowest common denominator within the field – often the “lingua franca” of the tooling and framework buzzwords such as Terraform, Ansible, IaC, YAML, YANG and so on.   

While countless competing network automation tools exist, this doesn’t always mean they’re developed for, or relevant to, your business’ specific issues. It’s also worth being mindful of “resume-driven development” here: while the “new shiny” might look great to your engineering and architecture teams, it doesn’t always mean it’s best for your business context, budget or other regulatory constraints.   

Automation in isolation of process review and improvement

There’s a reason “garbage in, garbage out” is a phrase: automating the garbage so it moves faster doesn’t make it go away. Automation often lives in the space between process and technology, so improvements in one can feed back into the other. Automation more often drives improvements to existing business processes as it is introduced than it simply accelerates processes that were perfect all along.   

The mere act of undergoing an automation journey can also be an exponential value-add, focusing attention on and improving business processes which would otherwise not have been explored. This ensures a double win: the business process itself is optimised, and its reach is extended into the network and IT plane, speeding it up and improving its efficiency. This virtuous flywheel can often become a force-multiplier that tremendously benefits the organisation for relatively little upfront effort. 

Targeting only one component within Environmental, Social and Governance (ESG) priorities

Environmental, Social and Governance (ESG) priorities are meant to be holistic rather than siloed, and network automation can address each component if carefully designed. Organisations may accidentally place too much emphasis on optimising one of the three components, however. To avoid this, the focus should remain on all-encompassing initiatives that enable reliable network access, enforce governance best practices and encourage operational efficiencies.

Ignoring AI limitations in design, blind spots or scalability

Network automation strategies can face limitations when integrating AI if the design inhibits workflows and ultimately decision-making, if blind spots arise through siloed or inaccurate data or if future planning hasn’t been considered. Futureproofing AI is critical for organisations to avoid wasted resources, costly errors and shaky foundations in the future. 

How can CACI help?

CACI’s expert team comprises multidisciplinary IT, networking, infrastructure, consulting and automation engineers with extensive experience in network automation. We can support and consult on every aspect of your organisation’s network, from its architecture, design and deployment through to cloud architecture adoption and deployment, as well as maintaining an optimised managed network service. 

To learn more about the impact of network automation and how to sell its value to the C-suite, please read our e-book “How to sell the value of network automation to the C-suite”. You can also get in touch with the team here.  

 

Network Automation in 2025: How it drives competitive advantage


This blog kicks off a two‑part series on the business value of network automation and how to win C‑suite buy‑in. Part two will share proven tactics for sustaining executive engagement and highlight common pitfalls to avoid when building your automation strategy.

Why is network automation critical for businesses in 2025?

Network automation orchestrates how you plan, deploy and operate network services across data centres, clouds and the edge. Done well, it lifts service reliability, reduces change risk and compresses time‑to‑value by removing repetitive, manual tasks that are prone to error. The business case has only strengthened in the AI era, as AI‑assisted operations and modern application traffic put new pressure on network scale and agility. Recent global studies show leaders expect automation to underpin this shift, with 60% planning AI‑enabled predictive network automation across domains within two years.

Adoption is accelerating. Gartner forecasts that by 2026, 30% of enterprises will automate more than half of their network activities, up from under 10% in mid‑2023. This trend reflects how Infrastructure & Operations teams are using analytics, AIOps and intelligent automation to boost resilience and service velocity. At the same time, market evidence still shows significant headroom. Independent community surveys and analyst research indicate many organisations have automated less than half of day‑to‑day network tasks, citing skills, organisational and technology barriers as the top obstacles.

The ROI picture is also clearer than ever. Prior research from EMA found that around half of data‑centre network automation projects achieved ROI within two years, and more recent enterprise networking studies highlight how a modernised, automated network directly improves customer experience, employee productivity and revenue growth. Meanwhile, Cisco’s 2025 networking research quantifies the cost of inaction: 77% of organisations report major outages over the last two years, with the impact of a single severe disruption extrapolated to $160B globally, underscoring the value of automation for risk reduction.  

How to create a successful business case

Step 1: Lead with evidence 

According to an article by Enconnex, the weakest link in data operations tends to be humans, with human error accounting for ~80% of all outages. Existing pipelines in businesses tend to operate sequentially and manually, increasing the probability of human error through the involvement of multiple individuals in the chain of events.   

Step 2: Outline a strategic software development process  

Ensuring each step of the operational process, from integration to delivery, is tested and accounted for, and outlining this in a cohesive plan for the C-suite, will help earn their trust. Developing a process flow that outlines a long-term strategy and what the business will achieve through network automation will further encourage this crucial buy-in. A visualisation tool or platform to convey this can significantly enhance their understanding. 

Step 3: Stage a production deployment in a test environment 

Unlike application testing, network testing is often difficult because the network itself doesn’t exist in isolation and is nearly always the lowest level of the technical stack. This makes performing tests complex. While the applications within a development or pre-production environment are often considered non-production, the underlying network to these application test environments is nearly always considered “production”, in that it must work in a production-like, always-on, fault-free state for the applications atop it to be tested and fulfil their function. Replicating complex enterprise, data centre or even cloud networks often comes at a price, and organisations can typically only duplicate or approximate small proportions of their network estate. As a result, staging looks more like unit testing in software development: making small but incremental gains and applying them exponentially to the production network being automated.   

While many organisations may opt for a waterfall, agile or other project management approach, we nearly always find that an agile-like, iterative, unit-tested approach to developing network automations – such as scripts, runbooks, playbooks and modules – is more beneficial in pushing automation both into the organisation and into wider adoption than any other approach.  
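A hedged sketch of that unit-testing mindset: render a device configuration from a template and assert the properties that matter before it reaches even a staging device. The template, variables and assertions below are invented for illustration.

```python
# Minimal sketch: unit-test a rendered network configuration before it reaches any device.
# The template, variables and required lines are illustrative assumptions.
from jinja2 import Template

TEMPLATE = """\
hostname {{ hostname }}
ntp server {{ ntp_server }}
{% for vlan in vlans %}
vlan {{ vlan.id }}
 name {{ vlan.name }}
{% endfor %}
"""

def render(variables):
    return Template(TEMPLATE).render(**variables)

def test_rendered_config_contains_required_lines():
    config = render({
        "hostname": "edge-sw-01",
        "ntp_server": "10.0.0.1",
        "vlans": [{"id": 10, "name": "users"}, {"id": 20, "name": "voice"}],
    })
    assert "hostname edge-sw-01" in config
    assert "ntp server 10.0.0.1" in config
    assert "vlan 10" in config and "vlan 20" in config

if __name__ == "__main__":
    test_rendered_config_contains_required_lines()
    print("Config template checks passed")
```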

Step 4: Prove that benefits will be reaped through the staged production 

One of the benefits of modern network engineering is the ability to leverage the commoditisation of the vertically integrated network hardware stack that the industry has undergone over the last decade. It is now easier – and cheaper – than ever before to spin up a virtual machine, container or other VNF/NFV equivalent of a production router, switch, firewall, proxy or other network device that will look, feel, act and fail in the same way as its production network equivalent. When combined with software development approaches like CI/CD pipelines for deployment and rapid prototyping of network automation code, this can be a winning combination for rapidly pre-testing activities within ephemeral container-like staging environments and maintaining dedicated staging areas which look like production. 

How can CACI help?

CACI’s team comprises multidisciplinary IT, networking, infrastructure, consulting and automation engineers with extensive experience in network automation. We can support and consult on every aspect of your organisation’s network, from its architecture, design and deployment through to cloud architecture adoption and deployment, as well as maintaining an optimised managed network service. 

To learn more about the impact of network automation and how to sell its value to the C-suite, please read our e-book “How to sell the value of network automation to the C-suite”. You can also get in touch with the team here.

 

Top network automation trends in 2025


Network automation has become increasingly prevalent in enterprises and IT organisations over the years, with its growth showing no signs of slowing down.  

In fact, as of 2025, the Network Automation Market size is estimated at USD 31.02 billion (GBP 23.30 billion), expected to reach USD 84.69 billion (GBP 63.60 billion) by 2029. By 2028, a growth rate of nearly 30% is predicted in this sector in the UK. Within CACI, we are seeing a higher demand for network automation than ever before, supporting our clients in NetDevOps, platform engineering and network observability.  

So, how is the network automation space evolving, and what are the top network automation trends that are steering the direction of the market in 2025? 

Hyperautomation

With the increasing complexity of networks that has come with the proliferation of devices, an ever-growing volume of data and the adoption of emerging technologies in enterprises and organisations, manual network management practices have become increasingly difficult to uphold. This is where hyperautomation has been proving itself to be vital for operational resilience into 2025.  

As an advanced approach that integrates artificial intelligence (AI), machine learning (ML), robotic process automation (RPA), process mining and other automation technologies, hyperautomation streamlines complex network operations by not only automating repetitive tasks, but the overall decision-making process. This augments central log management systems such as SIEM and SOAR with functions to establish operationally resilient business processes that increase productivity and decrease human involvement. Protocols such as gNMI and gRPC for streaming telemetry and the increased adoption of service mesh and overlay networking mean that network telemetry and event logging are now growing to a state where no one human can adequately “parse the logs” for an event. Therefore, the time is ripe for AI and ML to push business value through AIOps practices to help find the ubiquitous “needle” in the ever-growing haystack. In the network realm, this not only includes automating devices, but orchestrating workflows across multi-domain and vendor environments that AI helps make possible.  

Through the ability to analyse real-time network data, patterns or issues, AI helps networks transform intelligently. Enterprises shifting towards hyperautomation this year will find themselves improving their security and operational efficiency, reducing their operational overhead and margin of human error and bolstering their network’s resilience and responsiveness. When combined with ITSM tooling such as ServiceNow for self-service delivery, hyperautomation can truly transcend the IT infrastructure silo and enter the realm of business by achieving wins in business process automation (BPA) to push the enterprise into true digital transformation.  

Increasing dependence on Network Source of Truth (NSoT)

With an increasing importance placed on agility, scalability and security in network operations, NSoT is proving to be indispensable in 2025, achieving everything the CMDB hoped for and more.  

As a centralised repository of network-related data that manages IP addresses (IPAM), devices and network configurations and supplies a single source of truth from these, NSoT has been revolutionising network infrastructure management and orchestration by addressing challenges brought on by complex modern networks to ensure that operational teams can continue to understand their infrastructure.

It also ensures that data is not siloed across an organisation and that managing network objects and devices can be done easily and efficiently, while also promoting accurate data sharing via data modelling with YAML and YANG and open integration via API into other BSS, OSS and NMS systems.  

Enterprises and organisations that leverage the benefits of centralising their network information through NSoT this year will gain a clearer, more comprehensive view of their network, generating more efficient and effective overall network operations. Not to mention, many NSoT repositories are much more well-refined than their CMDB predecessors, and some – such as NetBox – are truly a joy to use in daily Day 2 operations life compared to the clunky ITSMs of old. 

Adoption of Network as a Service (NaaS)


Network as a Service (NaaS) has been altering the management and deployment of networking infrastructure in 2025. With the rise of digital transformation and cloud adoption in businesses, this cloud-based service model enables on-demand access and the utilisation of networking resources, allowing enterprises and organisations to supply scalable, flexible solutions that meet ever-changing business demands.  

As the concept gains popularity, service providers have begun offering a range of NaaS solutions, from basic connectivity services such as virtual private networks (VPNs) and wide area networks (WANs) to the more advanced offerings of software-defined networking (SDN) and network functions virtualisation (NFV). Instances where AI-powered NaaS is possible offer even faster onboarding, more effective operations and enhanced connectivity, all of which can be automated at scale. 

These technologies have empowered businesses to streamline their network management, enhance performance and lower costs. NaaS also has its place at the table against its aaS siblings (IaaS, PaaS and SaaS), pushing the previously immovable, static-driven domain of network provisioning into a much more dynamic, elastic and OpEx-driven capability for modern enterprise and service providers alike. 

Network functions virtualisation (NFV) and software-defined networking (SDN)

A symbiotic relationship between network functions virtualisation (NFV), software-defined networking (SDN) and network automation is proving to be instrumental in bolstering agility, responsiveness and intelligent network infrastructure as the year progresses. As many network vendors like to opine, “MPLS is dead, long live SD-WAN” – which, while not 100% factually correct (we still see demand in the SP space for MPLS and MPLS-like technologies such as PCEP and SR), is certainly directionally correct across our client base in finance, telco, media, utilities and, increasingly, government and the public sector.  

NFV enables the decoupling of hardware from software, as well as the deployment of network services without physical infrastructure constraints. SDN, on the other hand, centralises network control through programmable software, allowing for the dynamic, automated configuration of network resources. Together, they streamline operations and ensure advanced technologies will be deployed effectively, such as AI-driven analytics and intent-based networking (IBN).  

For some of our clients we’re seeing increased adoption of NFV via network virtual appliances (NVAs) deployed into public cloud environments like Azure and AWS, as well as an increasing trend towards packet fabric brokers such as Equinix Fabric and Megaport MVE to create internet exchange (IX), cloud exchange (CX) and related gateway-like functionality. As the enterprise trend towards multicloud grows, a whole gamut of software-defined cloud interconnects (SDCI) is emerging to stitch together all the XaaS components that modern enterprises require. 

Intent-based networking (IBN)

As businesses continue to lean into establishing efficient, prompt and precise best practices for network automation, intent-based networking (IBN) has been an up-and-coming approach to implement. This follows wider initiatives in the network industry to push “up the stack”, with overlay networking technologies such as SD-WAN, service mesh and cloud native supplanting traditional underlay network approaches in enterprise application provision. 

Given the inefficiencies that can come with traditional networks and manual input, IBN has come to network operations teams’ rescue by defining business objectives in a high-level, abstract manner and ensuring the network can automatically configure and optimise itself to meet those objectives.

Network operations teams that can devote more time and effort to strategic activities instead of labour-intensive manual configuration will notice significant improvements in overall network agility, reductions in time-to-delivery and better alignment with the wider organisation’s goals. IBN also brings intelligence and self-healing capabilities to networks: when changes or anomalies are detected, it enables the network to adapt automatically while maintaining the desired outcome, bolstering network reliability and minimising downtime. 

As more organisations realise the benefits of implementing this approach, the rise of intent-based networking is expected to continue, reshaping the network industry as we know it. The SDx revolution is truly here to stay, and the move of influence of the network up the stack will only increase as reliance on interconnection of all aspects becomes the norm. 

How can CACI support your network automation journey? 

CACI is adept at a plethora of IT, networking and cloud technologies. Our trained cohort of network automation engineers and consultants are ready and willing to share their industry knowledge to benefit your unique network automation requirements. 

From NSoT through CI/CD, version control, observability, operational state verification, network programming and orchestration, our expert consulting engineers have architected, designed, built and automated some of the UK’s largest enterprise, service provider and data centre networks, with our deep heritage in network engineering spanning over 25 years. 

Take a look at Network Automation and NetDevOps at CACI to learn more about some of the technologies, frameworks, protocols and capabilities we have, from YAML, YANG, Python, Go, Terraform, IaC, API, REST, Batfish, Git, NetBox and beyond. 

To find out more about enhancing your network automation journey, get in touch with us today.  

SASE, SSE, ZTNA — why remote-access VPNs aren’t enough anymore 


Call it Secure Access Service Edge (SASE), call it Security Service Edge (SSE), call it Zero Trust Network Access (ZTNA), even call it the Service Edge – whatever the label, modern secure access looks nothing like the SSL/IPsec VPNs you’ve used for years. That’s because the application landscape has changed: apps live in multiple clouds, SaaS dominates, teams are distributed, and users expect fast, secure access from anywhere. VPNs were designed for a world where the data centre was the centre of everything. That world is gone. 

From “castle and moat” to cloud-native access 

Historically, enterprises kept most apps on-prem and routed remote users through a small number of VPN concentrators. That model tolerated wasteful backhaul, brittle firewall changes, and long change cycles because traffic and users were predictable. When remote work went mainstream, the limitations became obvious: VPN concentrators saturated, latency spiked, and IT teams were buried in firewall change tickets and routing problems. 

SASE/SSE/ZTNA solve that by making access app-centric instead of network-centric. Instead of extending a user into your LAN (Layer-3 network extension), ZTNA authenticates and authorises each user-to-app session and only opens the exact access required. The heavy lifting is done in cloud PoPs close to the user or at app locations, reducing latency, avoiding backhaul, and enabling consistent policy enforcement across cloud, on-prem and branch. 

What actually changes 

  • Performance — traffic to SaaS or cloud apps exits locally (closest PoP), not via an overloaded corporate gateway. That reduces latency and frees WAN circuits. 
  • Security — micro-segmentation and per-session access reduce lateral movement; policies are applied at the application layer, not by blunt network tunnels. 
  • Scale & resilience — providers run global PoPs and elastic control planes; you gain capacity without building a global VPN fabric. 
  • Operational simplicity — fewer firewall rule churns, fewer emergency change requests, and a centralised policy model that spans clouds and branches. 

Why it matters in practice 

SASE is not just “VPN in the cloud.” It’s a new architecture: orchestration + control plane + distributed enforcement. It transforms remote access from a brittle network extension into an auditable, programmable security service that aligns with modern app architectures and business needs. 

Practical migration advice

Move in phases. Start with low-risk SaaS apps and pilot ZTNA connectors close to your cloud workloads. Run hybrid models during migration: keep legacy VPNs for stateful or non-cloudable apps while shifting web and SaaS traffic to SSE. Test legacy application behaviour (authentication, session stickiness, IP expectations) early — those are the usual blockers. Use PoVs to validate user experience, telemetry and failover behaviour before a full rollout. 

How CACI can help you transition to SASE and SSE

Making the move from legacy VPNs to modern secure access isn’t just a technology shift — it’s an architectural transformation. At CACI, we specialise in designing and deploying SASE and SSE solutions that fit your business model, application landscape and security posture. From initial assessments and phased migration planning to PoC validation and full-scale rollout, our experts ensure performance, resilience and compliance at every stage. Whether you need ZTNA for SaaS, hybrid models for legacy apps or global PoPs for distributed teams, we’ll help you build a secure access strategy that scales with your future.

Ready to start your transition? Get in touch with CACI today to discuss your secure access roadmap.

How the Network Source of Truth is replacing the CMDB

Modern networks are dynamic: multi-vendor, multi-cloud, API-driven and constantly changing. The old configuration-management playbook – manual discovery, Excel exports and a static CMDB – can’t keep up. The result is stale data, fragile automation, slow incident response and a risk that compliance requirements remain theoretical rather than operational. 

A Network Source of Truth (NSoT) solves this by becoming the canonical, machine-readable representation of your network estate: devices, topology, configurations, policies and relationships. Unlike a traditional CMDB, an NSoT is designed to be updated continuously by automated collectors and to be consumed directly by automation pipelines, orchestration systems and analytics engines. This is not “one more database” — it’s the operational spine for an automated, auditable network. 
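
As an illustration of "machine-readable and consumed directly", the Python sketch below pulls active devices from a NetBox REST API so an automation pipeline can act on them. The URL and token are placeholders for your own deployment; treat it as a sketch under those assumptions rather than a drop-in script.

  # Sketch: reading device records from a NetBox NSoT over its REST API.
  # NETBOX_URL and TOKEN are placeholders for your own deployment.
  import requests

  NETBOX_URL = "https://netbox.example.internal"   # hypothetical
  TOKEN = "changeme"                                # hypothetical

  def active_devices():
      """Yield (name, primary IP) for every active device held in the NSoT."""
      url = f"{NETBOX_URL}/api/dcim/devices/?status=active"
      headers = {"Authorization": f"Token {TOKEN}"}
      while url:
          page = requests.get(url, headers=headers, timeout=10).json()
          for device in page["results"]:
              ip = (device.get("primary_ip") or {}).get("address")
              yield device["name"], ip
          url = page.get("next")   # NetBox paginates; follow until exhausted

  # Downstream automation (config rendering, state checks, monitoring onboarding)
  # consumes this one canonical view instead of a manually maintained spreadsheet.
  for name, ip in active_devices():
      print(name, ip)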

Diagram: a traditional CMDB connecting data sources, services and business outcomes, shown alongside the circular DevOps lifecycle, illustrating how the CMDB is becoming less relevant to the DevOps journey. 

Out with the CMDB, in with the Source of Truth 

The CMDB was built for a world of physical assets: servers, printers and desktops. It struggles with today’s logical constructs: nested virtualisation, container overlays, service meshes and sidecar proxies. Its rigid data model and legacy structure make it a poor fit for modern IT. 

That rigidity has opened the door to a series of contenders in the space, largely grouped together under the umbrella of “Source of Truth”. Some notable examples in the NetDevOps and DevOps spaces include:  

  • NetBox – An open-source NSoT platform that models network infrastructure and integrates with notable automation tools to gain accurate, real-time data  
  • Ansible – An open-source automation engine supporting IT functions including configuration management, application deployment and orchestration  
  • MAAS – An open-source Metal as a Service solution offering self-service provisioning of operating systems onto physical servers, bringing public cloud-style workflows to bare metal. 

Instead of CMDBs, many organisations are now turning to Source of Truth practices, typically built around a repository or database that stores configuration data for the organisation’s IT environment.  

Source of Truth is a DevOps practice 

The key “why” behind all this can be easily summarised when contrasting the strengths and weaknesses of the CMDB against the NSoT further. In short, the Source of Truth is a DevOps practice that seeks to simplify configuration management by listing all configuration items and their relationships in a single location. This one version of truth can then be used for deployment automation, infrastructure management and much more. 

Another key attribute of the SoT is its use of structured, data-driven models such as YANG, which integrate naturally with widely used DevOps data formats such as YAML and JSON, enabling frictionless flow from the ITSM process to the intended infrastructure outcome.  
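
A small sketch of that flow, assuming PyYAML is installed and using an illustrative record shape: the same structured intent (the kind of record a YANG model would constrain) is rendered as human-friendly YAML for review and as machine-friendly JSON for the next tool in the pipeline.

  # Sketch: structured intent moving between YAML and JSON without loss.
  # Requires PyYAML (pip install pyyaml). The record shape is illustrative only.
  import json
  import yaml

  intent = {   # the kind of structured record a YANG model would constrain
      "device": "edge-rtr-01",
      "interfaces": [
          {"name": "GigabitEthernet0/0", "description": "Uplink to core",
           "enabled": True, "ipv4": "192.0.2.1/30"},
      ],
  }

  as_yaml = yaml.safe_dump(intent, sort_keys=False)   # human-friendly view for review or an ITSM record
  as_json = json.dumps(intent, indent=2)              # machine-friendly payload for an API call
  print(as_yaml)
  print(as_json)

  # Round-trip check: what was reviewed as YAML is exactly what the automation consumes as JSON.
  assert yaml.safe_load(as_yaml) == json.loads(as_json)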

Integration in the age of disaggregation 

Increasingly, we see IT departments stretched by ITIL-based approaches and ITSM systems that were designed for singular, homogeneous deployments of network infrastructure within the confines of the on-premises data centre, and which cannot cope as ever more of the application workload estate migrates off-premises into today’s public cloud PaaS, SaaS and hybrid cloud models.

As Network Consultants and Deployment Engineers, we see first-hand the issues that CMDB-based approaches create and frustrations throughout. Contrast this with a NSoT-led approach, where we might instead see the ability to: 

  • Simplify configuration management: By using a single source of truth, organisations can avoid the complexity and cost of managing multiple CMDBs across their hybrid IT network, compute, storage and application estate. 
  • Improve collaboration: Using a central repository for configuration data helps improve collaboration between development and operations teams (hence the name DevOps). 
  • Enable automation: With a centralised source of configuration data, it becomes easier to automate repetitive tasks such as deployment and testing, freeing valuable development and operations time from undifferentiated heavy lifting. 
  • Facilitate auditing and compliance: A centralised repository of configuration data also makes it easier to track changes and ensure compliance with IT security standards such as SOC2, HIPAA, NIST, PCI-DSS, CESG and DORA (see the sketch after this list). 
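
As a sketch of the auditing and compliance point above, the Python snippet below compares intended values held in a source of truth against values observed on the network and reports drift. Both data structures are illustrative; in practice the "observed" side would come from automated collectors or a tool such as Batfish.

  # Sketch: audit and compliance by diffing intended state (from the SoT) against observed state.
  # Both dictionaries are illustrative; observed state would normally come from collectors.

  intended = {   # what the Source of Truth says should be configured
      "edge-rtr-01": {"ntp_servers": ["10.0.0.1", "10.0.0.2"], "snmp_location": "LON-DC1"},
      "edge-rtr-02": {"ntp_servers": ["10.0.0.1", "10.0.0.2"], "snmp_location": "MAN-DC2"},
  }

  observed = {   # what the network actually reports
      "edge-rtr-01": {"ntp_servers": ["10.0.0.1"], "snmp_location": "LON-DC1"},
      "edge-rtr-02": {"ntp_servers": ["10.0.0.1", "10.0.0.2"], "snmp_location": "MAN-DC2"},
  }

  def drift_report(intended, observed):
      """Return (device, setting, expected, actual) tuples wherever reality differs from intent."""
      findings = []
      for device, expected in intended.items():
          actual = observed.get(device, {})
          for setting, value in expected.items():
              if actual.get(setting) != value:
                  findings.append((device, setting, value, actual.get(setting)))
      return findings

  for device, setting, expected, actual in drift_report(intended, observed):
      print(f"DRIFT {device}: {setting} expected {expected}, found {actual}")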

How CACI can help bolster your configuration management journey

Along with a strong heritage in Network Infrastructure Engineering and Consulting, we have a capable team of ITSM Consultants available to help with your CMDB migration programmes, spanning the spectrum from service design, through project and programme management, to data and solution architecture.  

Let us help you unlock the value of the CI data you already hold and bring you closer to true application observability, rather than plain asset visibility. 

Why Cloud-native telco networks must rethink their OSS/BSS in 2025

The telecommunications industry is steadily moving towards the public cloud for mission-critical backend systems, particularly Operational Support Systems (OSS) and Business Support Systems (BSS). These platforms underpin the business and revenue models of modern telcos. With pioneers such as Totogi and the rise of cloud-native architectures, the management plane of a telco network is increasingly interacting with cloud service provider offerings.

So, what is driving this rethink and how can telcos stay ahead?

Pressure to maximise revenue through increased agility

Legacy, monolithic OSS/BSS stacks are struggling to keep pace with growing service diversity (3G, 4G, 5G, edge and IoT), rising customer expectations and competitive pressure from MVNOs and hyperscalers. Agility is now the key differentiator. Telcos need to launch, adapt and monetise services quickly, something traditional systems cannot deliver.

Disaggregation and open APIs

The old vertically integrated model is giving way to disaggregated architectures powered by open APIs. This shift matters because vendor lock-in is no longer sustainable in a cloud-first world. Composable OSS/BSS enables faster innovation and easier integration with third-party ecosystems, while standards such as TM Forum Open APIs are accelerating interoperability and reducing time to market.
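
As an illustration of how standard APIs reduce integration friction, here is a hedged Python sketch that queries product offerings over a TMF620 (Product Catalog Management) style endpoint. The base URL and credentials are placeholders, and the exact path and version will depend on the implementation you are integrating with.

  # Sketch: consuming a TM Forum Open API (TMF620 Product Catalog style) over REST.
  # Base URL, credentials and API version are placeholders for your own environment.
  import requests

  BASE_URL = "https://api.telco.example/productCatalogManagement/v4"   # hypothetical host
  HEADERS = {"Authorization": "Bearer <token>"}                         # placeholder auth

  def list_product_offerings(status="Active"):
      """Fetch product offerings so a partner or internal system can compose services."""
      resp = requests.get(
          f"{BASE_URL}/productOffering",
          params={"lifecycleStatus": status},
          headers=HEADERS,
          timeout=10,
      )
      resp.raise_for_status()
      return resp.json()   # a list of productOffering resources

  for offering in list_product_offerings():
      print(offering.get("name"), offering.get("lifecycleStatus"))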

Automation and intelligence

Managing sprawling, hybrid networks with manual processes is no longer viable. Operators are adopting advanced analytics and automation for predictive maintenance and anomaly detection, network automation to reduce operational overhead and smarter orchestration to optimise performance and resource allocation.
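
A minimal example of the anomaly-detection idea, using nothing more than a rolling mean and standard deviation over a metric series. Production systems would use richer models and streaming telemetry, but the principle of flagging outliers automatically rather than by eye is the same.

  # Sketch: flag anomalous samples in a telemetry series using a simple z-score test.
  # The series below is synthetic; real input would be streamed interface or KPI metrics.
  from statistics import mean, stdev

  def anomalies(samples, window=12, threshold=3.0):
      """Yield (index, value) where a sample sits more than `threshold` standard
      deviations from the mean of the preceding `window` samples."""
      for i in range(window, len(samples)):
          history = samples[i - window:i]
          mu, sigma = mean(history), stdev(history)
          if sigma and abs(samples[i] - mu) > threshold * sigma:
              yield i, samples[i]

  link_errors_per_min = [2, 3, 2, 4, 3, 2, 3, 2, 3, 4, 2, 3, 95, 3, 2]
  for index, value in anomalies(link_errors_per_min):
      print(f"anomaly at sample {index}: {value}")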

Cloud-native OSS/BSS

Cloud-native principles such as microservices, containerisation and orchestration are transforming telco operations. These approaches enable elastic scalability for unpredictable demand, lower total cost of ownership through pay-as-you-go models and faster feature deployment without disruptive upgrades.

Monetising the network with data

Telcos hold vast amounts of data but need modern analytics to unlock its value. This includes dynamic pricing and personalised offers, churn prediction and retention strategies, and real-time policy enforcement for fair usage and quality of service.

How CACI can support your move towards a connected industry 

We help telcos modernise OSS/BSS without costly rip-and-replace programmes. Our expertise in cloud-native architectures, open API integration and network automation enables operators to modernise the network for agility, monetise assets through data-driven insights and reduce costs while improving resilience.

With a strong track record in telecoms and enterprise transformation, we can help you future-proof your network and unlock new revenue streams. Get in touch today.

How to regain control of cloud sprawl and hidden costs

Cloud computing has become the backbone of digital transformation for organisations across the UK and beyond. As cloud adoption accelerates, however, many IT leaders are facing a new challenge: cloud sprawl. Understanding what cloud sprawl is, why it happens and, crucially, how to prevent it, is now essential for IT Directors, Digital Transformation Leads, Heads of Innovation and CTOs who want to control costs, reduce risk and unlock the full value of their cloud investments. 

What is cloud sprawl?

Cloud sprawl happens when cloud resources, such as applications, services and infrastructure, grow unchecked across an organisation. It usually starts with the best of intentions: teams want to move quickly, so they create new environments and services as they go. Over time, this leads to a patchwork of workloads, platforms and tools, many of which are underused, duplicated or simply forgotten.

Why is cloud sprawl a problem?

Cloud sprawl can quietly drain your budget, increase security risks and complicate everyday operations. Some of the most common issues include:

  • Rising costs: Idle or underused resources, redundant SaaS subscriptions and forgotten cloud instances all add up. Industry analysts estimate that up to 30% of cloud spend is wasted due to sprawl
  • Security and compliance risks: Untracked assets can become vulnerabilities, especially if they aren’t patched or monitored. Data may be stored in regions without proper regulatory controls. 
  • Operational complexity: IT teams are stretched thin managing a maze of platforms, permissions and integration points. 

How does cloud sprawl happen?

Cloud sprawl is rarely intentional. More often it is the by-product of rapid digital transformation, decentralised decision-making and the ease with which anyone can now provision infrastructure at the click of a button. Common causes include:

  • Multiple teams or departments adopting cloud independently, often with different providers or platforms. 
  • Lack of governance or clear policies around provisioning, tagging and decommissioning resources. 
  • Shadow IT, where business units bypass central IT to get things done quickly. 
  • Mergers, acquisitions or legacy migrations that bring in new cloud estates with little integration.

How to prevent cloud sprawl: practical steps

Preventing cloud sprawl doesn’t require a complete IT overhaul, but it does demand clearer oversight and smarter consolidation. To start regaining control, consider:

1. Conducting a cloud inventory 

A comprehensive inventory is the foundation for effective management. Begin by auditing your current cloud landscape: which apps and services are active, who owns them and what value they deliver.  
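
By way of illustration, and assuming an AWS estate with boto3 installed and credentials configured, the sketch below lists EC2 instances together with their owner tags so unowned resources stand out; equivalent calls exist for other providers.

  # Sketch: a first-pass cloud inventory on AWS, surfacing instances with no owner tag.
  # Assumes boto3 is installed and credentials/region are configured in the environment.
  import boto3

  ec2 = boto3.client("ec2")

  def inventory():
      """Yield (instance id, state, owner tag or None) for every instance in the region."""
      paginator = ec2.get_paginator("describe_instances")
      for page in paginator.paginate():
          for reservation in page["Reservations"]:
              for instance in reservation["Instances"]:
                  tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                  yield instance["InstanceId"], instance["State"]["Name"], tags.get("Owner")

  for instance_id, state, owner in inventory():
      flag = "" if owner else "  <-- no Owner tag"
      print(f"{instance_id} {state} owner={owner}{flag}")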

2. Establishing cloud governance policies  

Good governance is the backbone of cloud control. Set clear rules for cloud procurement, usage and approval. Define who can spin up resources and under what conditions. Standardise on approved tools and platforms to reduce duplication.  

3. Consolidating and standardising 

Where teams are using similar tools, consolidate onto a single platform. For example, unify file-sharing or collaboration tools across departments to reduce complexity and simplify cost management. 

4. Implementing monitoring and alerts 

Visibility is essential for preventing waste. Use cloud management tools to monitor spend, detect idle resources and track usage trends, and set automated alerts to flag anomalies or unexpected spikes in usage.  
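
As one example of what such monitoring can look like on AWS, and again assuming boto3 with configured credentials, this sketch pulls a fortnight of average CPU utilisation from CloudWatch and flags instances that look idle. The 5% threshold is an illustrative choice, not a recommendation.

  # Sketch: flag potentially idle EC2 instances from their recent average CPU utilisation.
  # Assumes boto3 is installed and credentials/region are configured; thresholds are illustrative.
  from datetime import datetime, timedelta, timezone
  import boto3

  cloudwatch = boto3.client("cloudwatch")

  def looks_idle(instance_id, days=14, threshold=5.0):
      """Return True when average CPU over the period never exceeds the threshold (%)."""
      end = datetime.now(timezone.utc)
      stats = cloudwatch.get_metric_statistics(
          Namespace="AWS/EC2",
          MetricName="CPUUtilization",
          Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
          StartTime=end - timedelta(days=days),
          EndTime=end,
          Period=86400,              # one datapoint per day
          Statistics=["Average"],
      )
      datapoints = stats["Datapoints"]
      return bool(datapoints) and all(dp["Average"] < threshold for dp in datapoints)

  if looks_idle("i-0123456789abcdef0"):   # hypothetical instance id
      print("Candidate for right-sizing or decommissioning")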

5. Educating and aligning your teams 

Most cloud sprawl happens with good intentions. Equip your teams with guidance on approved tools and platforms and make it easy for them to do the right thing. Regular training and communication help reduce shadow IT. 

6. Reviewing and optimising regularly 

Cloud environments are dynamic and require ongoing attention. By scheduling regular reviews, you can identify and decommission unused resources, right-size workloads, and renegotiate contracts where needed. Leveraging best practices such as the AWS Well-Architected Framework can help ensure your cloud setup remains secure, efficient, and cost-effective. The savings you unlock through optimisation can be reinvested to fuel your next wave of innovation. 

7. Embedding security and compliance from the start 

Every new cloud resource is a potential risk if not properly secured. Build security and compliance into your provisioning process, not as an afterthought. Automate patching, monitoring, and reporting to maintain a secure posture, and implement preventive and detective guardrails to enforce policies and catch misconfigurations early. Ensure you have clear visibility into where sensitive data resides and who has access to it, so you can act quickly if issues arise.
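
A small example of a detective guardrail on AWS, assuming boto3 and permission to read bucket settings: it checks every S3 bucket for a public access block and reports any bucket where one is missing, the kind of check you would run on a schedule or wire into provisioning pipelines.

  # Sketch: a detective guardrail that reports S3 buckets without a public access block.
  # Assumes boto3 is installed, credentials are configured and the caller may read bucket settings.
  import boto3
  from botocore.exceptions import ClientError

  s3 = boto3.client("s3")

  def buckets_missing_public_access_block():
      """Yield the names of buckets that have no public access block configuration at all."""
      for bucket in s3.list_buckets()["Buckets"]:
          name = bucket["Name"]
          try:
              s3.get_public_access_block(Bucket=name)
          except ClientError as err:
              if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                  yield name
              else:
                  raise   # permissions or transient errors should surface, not be hidden

  for name in buckets_missing_public_access_block():
      print(f"Guardrail finding: bucket {name} has no public access block configured")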

The CACI approach: practical, proven and partnership-led

At CACI, we see cloud as an enabler, not an end in itself. Our approach is grounded in practical experience, helping organisations regain control, reduce waste and build a foundation for sustainable innovation. 

We start by understanding your current environment, mapping out where sprawl and hidden costs are lurking. We then work with you to design governance frameworks, implement visibility tools and optimise your workloads. Our partnerships with leading cloud providers mean we can offer best-in-class solutions tailored to your needs. 

We recognise that cloud is never “done” but is an ongoing journey. We provide ongoing support, regular reviews and continuous optimisation, so you can focus on what matters: innovation.

Want to explore how your organisation can reduce cloud waste and regain control? 

Speak to our cloud optimisation specialists today.