SASE & SSEs are not just another remote access VPN

Call it Secure Access Service Edge (SASE), call it Security Service Edge (SSE), call it Zero Trust Network Access (ZTNA), even call it the Service Edge. You might be forgiven for thinking talk of VPN is everywhere at the moment, and for wondering why everyone is cloudwashing what you’ve known as remote access VPN for many years.

The answer, bluntly, is that they aren’t: the modern business application landscape has vastly changed, and remote access needs to change with it.

Deep dive into the history of SASE & SSEs 

There was a time when all your business applications would be on-premises (on-prem) and you’d sometimes need to grant employees access from home. Let’s say you’re a typical enterprise shop with:

  • A few on-premises data centres from various UK managed services providers (MSPs), or even colocation (Colo) providers 
  • A few thousand employees scattered across the UK 
  • A hundred or so business applications 
  • A few internet gateway data centres with web proxies, reverse proxies and some diverse Direct Internet Access (DIA) connectivity 
  • Two hundred or so branch/office locations across various UK towns, cities and the odd factory/out-of-town location
  • A toe dipped into public cloud providers such as Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP)

Your environment might’ve looked a little like this: 

Dissecting the Current Mode of Operation (CMO) 

Most of your users were in the office most of the time, and when they needed to, they would use their VendorCo VPN client to dial into https://vpn.yourco.com, which would non-deterministically load balance between the Manchester VPN concentrator and the London VPN concentrator, with complete disregard for where the user was geographically located (“Look, it was too hard to explain to the MSP what Anycast DNS is, OK?”). Their productivity files (Word documents, Excel spreadsheets, etc.) were mostly still on-prem, on file servers in their usual office locations; nearly all of the remaining 80% of the business support system (BSS)/business applications they needed to perform their roles were also in your BSS data centres.

You were running some proof of concept (PoC) work with the public cloud/cloud service providers (CSPs) but weren’t too sure whether it would really take off in Big Enterprise or suit your needs. Most of your business apps had a good heritage and came from a time before React/Vue/Node JavaScript frameworks existed, and largely before anyone thought a web UI could be used for anything useful. You’ve got many fat clients, Oracle and IBM middleware layers, and people generally accept that your ERP application looks like someone threw up a paint can and flunked out of data input class in college (“Checkboxes? Dropdowns? Multi-selects? Has this vendor never heard of input validation?”).

Add in some forward (web) proxies, because Facebook won’t read itself (and you have some legitimate web applications your employees need to get to). Sprinkle on some reverse proxies, because over the years you’ve found suppliers, systems integrators (SIs) and partners all need access to certain applications hosted in your BSS/backend data centres that weren’t natively set up to be internet-accessible. Sadly, these apps didn’t generally have a notion of an API, so you’ve had to set up a reverse proxy or two in your internet gateway data centre to allow inbound machine-to-machine (M2M) access to them from the internet.

Most of your staff thought the remote access VPN was clunky, slow and cumbersome to launch (“Ever noticed https://internal-intranet doesn’t load on the VPN if you left Internet Explorer open before you took your laptop home for the day, off the VPN?”), but because they rarely used it, they would quietly tolerate the issues it posed. You thought it was nice and secure because you only had two entry points into your network via the WAN. 

COVID-19 Mode of Operation

Things rapidly change during lockdown. Suddenly, most of your users are firmly working from home and struggling to use your remote access VPN because: 

  • It doesn’t have the bandwidth 
  • It’s slow to load (your Scottish users are going via London; your Watford users are going via Manchester; you lament not putting in Anycast DNS or Geo-based DNS GSLB) 
  • It’s backhauling everything through two overloaded internet gateways (which have their DIA and MPLS pipes in a constant state of “on fire” in terms of network capacity utilisation) 
  • It’s not allowed through half the MSP firewalls/ACLs you have on your on-prem environment (“I thought the VPN was 10.99.0.0/24; when did they change it to 10.98.0.0/23 on us? It’ll take weeks to raise these ITIL firewall change requests with MSPCo to get it done!”) 
  • Your less-technical staff don’t realise they have to enable it to access some on-prem applications and disable it to access the applications you’ve quickly lift-and-shifted to public cloud.

In short, it’s not really working out for you. But can you blame it? Remote access SSL/IPsec VPN comes from an era before distributed cloud computing existed; it’s a moat that expects a castle with everything inside. And because the applications of its era operated at the lower layers of the OSI model, it has to give your client a network-level (Layer 3) IP address and effectively act as an extension of your corporate WAN, “just in case” the business application in question needs that functionality to work.

It’s a solution no longer fit for the current application landscape of public cloud, commodified applications and geo-distributed workloads. 

The 80/20 has flipped: more of your business applications now find themselves outside your Network Edge castle, beyond the security afforded by your moat. The drawbridge is firmly down, and the people are fleeing your castle-and-moat WAN architectures.

Assessing the Future Mode of Operation (FMO)

Let’s level-set a bit: you’re still a Big Enterprise, so digital transformation isn’t going to be quick. You’ve also still got legacy and on-premises workloads, such as mainframes, that can’t move to the public cloud for several sensible financial, compliance and business reasons. However, you’ve now started to (more aggressively than you’d like, thanks to the COVID-19 pandemic):

  • Move commodified applications (i.e. all the apps that were never specific to YourCo PLC in the first place: collaboration, document hosting, ERP, CRM, etc.) to the public cloud.
  • Typically, as a software as a service (SaaS) offering (in which case it may as well not exist to your WAN/public internet; “You only need a web browser and internet connection to access it”). 
  • Sometimes, as something cobbled together with a hybrid of SaaS and platform as a service (PaaS) because of a niche workload requirement you have (but it’s still likely to bias for at least the front door of the app to be “Just access with your web browser and internet”, even if the backend/M2M interaction doesn’t). 
  • Deploy SASE connectors (e.g. Zscaler ZPA App Connectors) as close to the applications as possible, and in multiples (unlike the SSL-VPN concentrators, of which you only had a few)
  • Deploy SASE connectors into your branch offices 
  • Deploy SASE connectors into (or accept the native internet front door of) your public cloud-hosted apps

You’ve come to terms with the fact that most of your staff work well remotely, and that a “full-time return to office” is unlikely to happen. Your staff are much happier using the SASE (e.g. Zscaler, Cato Networks, Cloudflare, Netskope), and, unexpectedly, your DIA and MPLS pipes are actually quieter as a result.

Your security posture has improved, and you’ve found that you have fewer perimeter breaches, though you’re not too sure why. You also find that users report the same legacy applications (untouched since pre-pandemic) perform better from home than they did over the remote access VPN.

So, what’s the momentous change that’s happened? Why are most things so much better when, underneath, it’s all still just SSL tunnels?

SSE is not an OSI layer 3 network extender

To understand the performance and security gains that an SSE with ZTNA brings to the table, we need to compare what a SASE/SSE is doing versus what a traditional remote access VPN is doing. It’s perhaps easier to see this in a comparable format: 

By far the main difference in the SASE model is the cloud-based LAN. It’s the “magic” that stitches together multiple SSL tunnels, so the solution efficiently uses only the tunnels required for a given end-to-end flow, rather than “wasting” internet gateway DIA/MPLS bandwidth on a flow that may not be to or from an on-prem system or location. This is also what supplies the main benefits of SASE over remote access VPN highlighted above: the cloud-based LAN acts as the “piggy in the middle” of any given application flow, and can therefore enforce security, bandwidth and other controls at higher layers of the OSI model than a Layer 3-constrained remote access VPN can.
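To make the idea concrete, here’s a deliberately simplified Python sketch of that flow-steering decision. The PoP coordinates, app names and connector labels are illustrative assumptions, not any vendor’s actual logic:

```python
# Toy model of SASE flow steering: the cloud "LAN" picks the PoP nearest
# the user, then either breaks out locally to the internet (SaaS/web) or
# stitches the user's SSL tunnel to the app connector's SSL tunnel
# (private app). All names and coordinates below are illustrative.

POPS = {"london": (51.5, -0.1), "manchester": (53.5, -2.2)}
PRIVATE_APPS = {"erp.internal": "manchester-dc-connector"}

def nearest_pop(user_lat: float, user_lon: float) -> str:
    # A crude squared-distance metric is fine for a sketch
    return min(POPS, key=lambda p: (POPS[p][0] - user_lat) ** 2 +
                                   (POPS[p][1] - user_lon) ** 2)

def steer_flow(user_lat: float, user_lon: float, dest: str) -> str:
    pop = nearest_pop(user_lat, user_lon)
    if dest in PRIVATE_APPS:
        # Stitch user tunnel <-> connector tunnel inside the cloud LAN
        return f"{pop} -> {PRIVATE_APPS[dest]}"
    # SaaS/web destination: inspect at the PoP, then break out locally
    return f"{pop} -> internet breakout"

print(steer_flow(55.9, -3.2, "erp.internal"))      # Scottish user
print(steer_flow(51.7, -0.4, "mail.saas.example")) # Watford user
```

Note how neither flow is forced through a fixed concentrator: the Scottish user lands on the nearer PoP and only the private-app flow consumes a tunnel towards the data centre.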

The business application world is doing much the same thing: the cloud is the “world’s computer”, with multiple attachment points/PoPs regionally close to the user, treating the user rather than the app as the centre of the universe. Seen that way, SASE is a natural fit. In much the same way the public cloud uses global points of presence (PoPs) to reduce application latency and serve content as close to the user as possible, a SASE uses the closest “VPN concentrator” to a given user. The OpEx-driven model allows a SASE provider to do this cost-effectively for you as a single customer, whereas building your own “globe-spanning cloud LAN” would cost significantly more than you could likely afford. Using the SASE provider’s reach and expertise therefore almost always makes more sense than a more cumbersome remote access VPN.

SASE is doing to the enterprise WAN what SD-WAN did to the enterprise MPLS network, or what the enterprise MPLS network did to the enterprise leased-line mesh that preceded it: abstracting away point-to-point SSL tunnels into a fabric of dynamically run, point-to-multipoint SSL tunnel flows that are created and destroyed on demand.

How CACI can support your move to SSE or SASE

At CACI Network Services, we’ve seen countless customer environments – from heritage, through legacy, into microservices modernity – and innately understand how to architect, deploy and optimise a variety of SSE and SASE secure access solutions.

Get in touch and let us help you untangle the complex web of secure access into your Network Edge and demystify the web of Zero Trust for your WAN. 

Cost optimisation: The new capacity management

Capacity management has been an IT Service Management (ITSM) staple for years, often historically associated with practices such as Just In Time (JIT) hardware provision to achieve the network, storage or compute low watermarks that sustain Service Level Agreements (SLAs). However, as the move to commodified, on-demand workloads prevails – enabled by Cloud and DevOps provisioning practices – capacity management becomes a less optimal practice for sustaining the delicate balance of cost versus performance for an IT system.

With the increased adoption of Cloud native giving rise to OpEx, rental-based workload hosting over CapEx and ownership-based workload hosting, we’re seeing a new contender for managing performance against cost in the IT Services space: cost optimisation. 

What are the benefits of cost optimisation?

Cost optimisation is the process of identifying and reducing sources of wasteful spending, underutilisation or low return in the IT budget. The goal is to reduce costs while reinvesting in new technology to speed up business growth or improve margins. Unlike capacity management, cost optimisation focuses on system architecture improvements and modifications, tweaking compute, storage and network levers to achieve its goal.

In the Cloud context, cost optimisation often looks like a FinOps team, process or mandate, tasked with the following to achieve cost reductions: 

  • Identify and reduce mismanaged or excess resources
    • e.g. Identify Azure VM Scale Sets with overprovisioned and unused Azure Virtual Machine members
  • Take advantage of advance purchase discounting options where lifecycle is prolonged
    • e.g. Purchase AWS EC2 Reserved Instance pricing for a three-year term to achieve a >60% cost saving against on-demand EC2 pricing
  • Take advantage of Cloud-complementary licensing schemes for IaaS
    • e.g. Utilise Azure Hybrid Use Benefit (HUB) to achieve a ~70% cost saving on an Azure SQL instance using existing on-premises Microsoft SQL Server licensing
  • Right-size compute, storage or network workloads to specific requirements
    • e.g. Swap out an underutilised, low-IO Azure Premium SSD for an Azure Standard SSD
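As a rough illustration of two of the checks above, the following Python sketch flags under-utilised instances for right-sizing and computes the percentage saved by reserving a steady workload. The prices and the 20% CPU threshold are assumed figures for illustration, not real cloud list prices:

```python
# Two FinOps checks as a sketch: right-sizing candidates and reservation
# savings. Prices and thresholds below are illustrative assumptions.

ON_DEMAND_HOURLY = 0.10   # assumed on-demand price, $/hour
RESERVED_HOURLY = 0.04    # assumed 3-year reserved price, $/hour

def rightsize_candidates(instances: dict, cpu_threshold: float = 20.0) -> list:
    """Return instance names whose average CPU sits below the threshold."""
    return [name for name, avg_cpu in instances.items() if avg_cpu < cpu_threshold]

def reservation_saving_pct(on_demand: float = ON_DEMAND_HOURLY,
                           reserved: float = RESERVED_HOURLY) -> float:
    """Percentage saved by committing to the reserved rate."""
    return round((1 - reserved / on_demand) * 100, 1)

fleet = {"web-01": 55.0, "web-02": 8.5, "batch-01": 3.0}  # avg CPU %
print(rightsize_candidates(fleet))   # ['web-02', 'batch-01']
print(reservation_saving_pct())      # 60.0 (% saved vs on-demand)
```

Real FinOps tooling of course pulls these metrics from cloud billing and monitoring APIs rather than hard-coded dictionaries, but the decision logic is the same shape.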

Much of this runs contrary to traditional capacity management practice, as the main vehicle for achieving these savings is often to rearchitect elements of how the IaaS or PaaS components operate in order to improve upper-layer application workloads.

Ideally, you want your Cloud infrastructure costs to go flat or increase only marginally as your client or installed application workload base grows, but if your costs rise faster than – or as quickly as – you onboard customers, you may have a problem. Cost optimisation can therefore be a great way to build a scalable, modern infrastructure that meets the demands of your workloads without going over budget.

Shifting away from capacity management  

Capacity management has traditionally focused on ensuring that IT systems have the resources they need to perform optimally, against a backdrop of on-premises IT where bare metal was king and OpEx was reserved purely for software licensing concerns. However, this approach is no longer sufficient in today’s complex IT environments, where ephemeral workloads and infrastructure elasticity make resources difficult to keep track of.

By shifting from capacity management to cost optimisation, organisations can better align their increasingly limited IT spending with business priorities. This approach involves identifying areas where costs can be reduced without sacrificing the important Observability Pillars of Performance or Reliability. 

Best practices for achieving cost optimisation

Being successful with cost optimisation initiatives requires a cultural shift – as enabled via DevOps and Agile Project Management practices – towards these best practices: 

Align initiatives with business priorities 

Cost optimisation initiatives should be aligned with the business’s overall priorities to avoid “penny-wise and pound-foolish” behaviours (e.g. skimming pennies off AWS EBS disk sizes when an exponentially more cost-effective AWS S3 bucket would be a better fit for the object storage challenge instead).

Identify and right-size over-provisioned resources  

Use the right tools (such as Zesty, Spot, nOps and Harness) to identify, optimise, scale down and turn off overprovisioned infrastructure, including via automated optimisation.

Be accountable for infrastructure costs using chargeback, recharge or show-back 

Cost accountability must be clearly articulated and factored into the RACI for required stakeholders to be aware of their business unit’s impact on overall IT infrastructure spend.  

Take action to optimise spend

Much like “security by design”, spend by design should be an upfront factor in the IT infrastructure or network design process from Day 0 design rather than a Day 2 operations afterthought. 

Use the right tool for the right job

When architecting a system, care should be given to tooling selection and “on-premises bias”. For instance, AWS S3 is a fantastically cost-efficient choice for static asset (image/CSS/etc.) storage. It has no on-premises equivalent, so it might otherwise be approximated with a more costly solution – such as an AWS EBS or AWS EFS network-based file store served via AWS EC2 – achieving the same result at a much higher cost.
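A back-of-envelope comparison makes the point. The per-GB and VM figures below are assumptions for illustration only; real cloud pricing varies by region, tier and usage:

```python
# Illustrative monthly cost of storing 500 GB of static assets on an
# object store versus a block volume that needs a VM to serve it.
# All prices below are assumed figures, not current AWS list prices.

S3_GB_MONTH = 0.023   # assumed object storage $/GB-month
EBS_GB_MONTH = 0.08   # assumed block storage $/GB-month
EC2_MONTHLY = 30.0    # assumed cost of the VM fronting the volume

def monthly_cost_s3(gb: float) -> float:
    return gb * S3_GB_MONTH

def monthly_cost_ebs(gb: float) -> float:
    # The block volume is useless without a VM to serve files from it
    return gb * EBS_GB_MONTH + EC2_MONTHLY

gb = 500
print(round(monthly_cost_s3(gb), 2))   # 11.5
print(round(monthly_cost_ebs(gb), 2))  # 70.0
```

Even with generous assumptions for the VM, the “on-premises shaped” design costs several times more for the same outcome.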

Overall, cost optimisation isn’t a one-shot activity. It is a continuous process that enables optimal business operations and garners cost efficiency. Cloud waste reduction should always be the goal, freeing up funds to finance growth in your desired USPs and differentiators.

Continuing the efficiency 

Gartner predicts Cloud spending to grow to almost $600 billion in 2023, with earlier trends showing the Cloud TAM (Total Addressable Market) having increased year-on-year at a rate of 20-30%. In 2026, Public Cloud expenditure is estimated to be as much as half of all enterprise IT budget spending. Unlike on-premises infrastructure, which can be amortised and deprecated over time, Cloud spend tends to be ongoing operational expenditure (OpEx) and can often increase over time for a given workload as its use or client base grows. 

If you want those flat or diminished infrastructure costs, you must integrate cloud optimisation into your business processes and daily workflow. Cloud optimisation isn’t a “one and done” endeavour. To be truly mature in cloud optimisation, we would advocate for FinOps practices to be ingrained into the CI/CD pipelines and ITSM governance gating processes – such as Design Board, CAB and ITDB – treating cost as a first-class citizen amongst the other competing factors in system operation and design.

Cost optimisation should exist to answer questions like these, returning value to the business:

  • How much does each product feature or enterprise application cost to operate over time? 
  • What is the unit cost of infrastructure compared to gained functionality? 
  • How high is our utilisation cost per customer (for a SaaS Company) or application (for an enterprise) per end user? 
  • Can we reduce the performance tiers of compute, storage or network components to cheaper variants without reducing perceived SLA and OLA commitments? 
  • Do our development workloads run on sensibly sized, smaller or less performant development-grade IaaS or PaaS components?
  • Can we proof of concept (PoC) under-provisioning against enterprise application workload system requirements to measure the actual impact – if any – on the SLA and OLAs our end users are signed up to? 
  • Can we utilise economies of scale to make more infrastructure components cost less on aggregate?
    • Think along the lines of “Domino’s Promo-Codeconomics” – adding a 49p dip to your Domino’s order to push it over £50 so the “20% off £50” promo code takes effect
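The promo-code arithmetic can be sketched in a few lines; the figures mirror the Domino’s example above, and the cloud analogue is committing slightly more usage to reach a volume-discount tier:

```python
# "Promo-Codeconomics": a small extra spend that unlocks a threshold
# discount can lower the total bill. Figures follow the Domino's example.

def total_with_promo(basket: float, threshold: float = 50.00,
                     discount: float = 0.20) -> float:
    """Apply the discount only once the basket crosses the threshold."""
    if basket >= threshold:
        return round(basket * (1 - discount), 2)
    return round(basket, 2)

order = 49.60
print(total_with_promo(order))         # 49.6  (no discount applied)
print(total_with_promo(order + 0.49))  # 40.07 (the 49p dip pays for itself)
```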

How CACI can help you finesse your FinOps

CACI Network Services is well versed in using cost optimisation techniques to provide the best value for countless types of enterprise and application workload architectures, through our strong heritage in Network Infrastructure Engineering and Consulting. 

Get in touch and let our experts work alongside your FinOps teams to tame your Cloud bills and impart our cloud optimisation know-how and learnings from industry. 

Steps financial companies must take to achieve DORA & NIS2 compliance

To achieve DORA & NIS2 compliance, financial companies must prioritise the protection of sensitive financial data and infrastructure security. The critical steps that companies are strongly advised to take to reach compliance are as follows: 

Step 1: Perform a gap analysis and maturity assessment 

To effectively navigate DORA requirements, it is imperative for your company to conduct a thorough assessment of your current digital resilience and operational practices. This assessment should involve evaluating your governance structure, internal practices, maturity level and the complexity of your operations. By identifying gaps and areas of non-compliance in line with DORA requirements, you can lay the groundwork for targeted improvements and strategic alignment.  

Step 2: Bring the right people and talent together

Assemble a capable team with the necessary skills to oversee the implementation of DORA and drive operational resilience within your company. As outlined in DORA, the team should comprise senior and third-party risk managers, communications leads, ICT risk managers, internal auditors, and media and crisis managers.

Step 3: Understand DORA requirements 

Ensure your team understands the five pillars of DORA and review your company’s risk management framework, policies, controls and risk assessment activities in line with DORA’s requirements.  

Step 4: Reshape your digital and operational resilience strategies 

Revise and enhance your digital and operational resilience strategies to align with the principles and focus areas highlighted in DORA while also considering emerging technologies and evolving threats.  

Step 5: Implement DORA requirements  

Select a comprehensive framework that eases the systematic implementation of DORA’s requirements. This framework should accurately identify all obligations and translate gap analysis results into specific tasks. By breaking down the implementation process into sub-projects aligned with each pillar of DORA, you can specifically address its requirements. Remember to stay flexible in your implementation approach, as additional rules in the form of regulatory technical standards (RTS) will be introduced within the two-year implementation window.  

Step 6: Prepare for the future UK DORA-equivalent legislation 

The UK government has hinted that they will legislate for a UK equivalent of DORA in the next parliamentary year. Ensure your team remains proactive and up to date on the latest news so you can be prepared to adapt operations and compliance practices to meet any forthcoming requirements. 

How can CACI help? 

With over 20 years’ experience in helping deliver effective IT and security strategies to financial companies, CACI can help you navigate the changes and challenges brought on by DORA. Our experienced security and compliance experts can bolster your understanding of your network assets, help you conduct maturity assessments, address compliance gaps regarding the fulfilment of DORA implementation requirements, and much more.  

To learn more, please read our recent whitepaper “Compliance with DORA and NIS2: Essential steps for UK financial companies”, which explores the impact of DORA and NIS2 on financial companies in the UK, key considerations for senior management and best practices for achieving compliance. You can also get in touch with the team here.

How the Network Source of Truth is replacing the CMDB

For years, the Configuration Management Database (CMDB) has been an integral part of IT service management (ITSM) for organisations. It has been the go-to tool for managing the configuration items (CIs) of an organisation’s IT environment, including hardware, software and the relationships between them. Indeed, this is to the extent that most people raising change requests call them “CIs” without necessarily knowing what that stands for. But no longer.

More recently, the rise of DevOps practices such as Network Source of Truth (NSoT), service bus, event-driven architecture and continuous integration/continuous deployment (CI/CD) has led to a decline in the use of the CMDB and IT Infrastructure Library (ITIL) practices. In this blog, we’ll explore why the CMDB is becoming less relevant as organisations mature their DevOps journey and contrast some of the disadvantages of the CMDB compared to the NSoT.

Out with the CMDB, in with the Source of Truth 

The traditional CMDB model, as mentioned above, is used to manage configuration items, track changes and support key ITIL processes such as incident management, problem management and capacity management. Unfortunately, it also dates from a time when many of these were directly correlated with physical assets – the baremetal server, the office printer, the desktop PC – and it doesn’t deal well with the logical or conceptual models prevalent in modern IT workloads: nested virtualisation, container network layers, sidecar proxies and so on.

The CMDB’s rigid data model and legacy data structure have opened the door to a series of contenders within the space, largely grouped together under the umbrella of “Source of Truth”, with notable examples across the NetDevOps and DevOps spaces.

Instead of CMDBs, many organisations are now turning to Source of Truth practices. This is often a repository or database used to store configuration data for an organisation’s IT environment. 

Source of Truth is a DevOps practice 

The key “why” behind all this can be easily summarised when contrasting the strengths and weaknesses of the CMDB against the NSoT further. In short, the Source of Truth is a DevOps practice that seeks to simplify configuration management by listing all configuration items and their relationships in a single location. This one version of truth can then be used for deployment automation, infrastructure management and much more.  

Another key attribute of the SoT is its use of data-driven, structured data models such as YANG, which integrate naturally with widely used DevOps data formats such as YAML and JSON for a frictionless flow between the ITSM process and the intended infrastructure outcome.
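As a minimal sketch of the idea (using plain JSON rather than a real NSoT tool or YANG model; the device record and config template here are hypothetical), a single structured record can drive configuration rendering:

```python
# Minimal Source of Truth sketch: one structured record per device is
# the single place automation reads from when rendering configuration.
# Real NSoT tools expose similar models via APIs; this record is invented.

import json

SOT_RECORD = json.loads("""
{
  "hostname": "edge-rtr-01",
  "interfaces": [
    {"name": "ge-0/0/0", "ipv4": "10.10.1.1/30", "description": "uplink"},
    {"name": "ge-0/0/1", "ipv4": "10.10.2.1/24", "description": "lan"}
  ]
}
""")

def render_config(record: dict) -> str:
    """Render a device configuration from the structured SoT record."""
    lines = [f"hostname {record['hostname']}"]
    for intf in record["interfaces"]:
        lines += [f"interface {intf['name']}",
                  f" description {intf['description']}",
                  f" ip address {intf['ipv4']}"]
    return "\n".join(lines)

print(render_config(SOT_RECORD))
```

Because the record, not the device, is authoritative, the same data can feed deployment, documentation and compliance checks without drifting copies.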

Integration in the age of disaggregation 

Increasingly, we see IT departments stretched by ITIL-based approaches and ITSM systems that were designed for singular, homogeneous deployments of IT network infrastructure within the confines of the on-premises data centre – and that are unable to cope as increasing amounts of the application workload estate migrate off-premises into the public cloud PaaS, SaaS and hybrid cloud models of today. As network consultants and deployment engineers, we see first-hand the issues and frustrations that CMDB-based approaches create. Contrast this with an NSoT-led approach, where we might instead see the ability to:

  • Simplify configuration management: By using a single source of truth, organisations can avoid the complexity and cost of managing multiple CMDBs across their hybrid IT network, compute, storage and application estate. 
  • Improve collaboration: Using a central repository for configuration data helps improve collaboration between development and operations teams (hence the name DevOps).
  • Enable automation: With a centralised source of configuration data, it becomes easier to automate repetitive tasks such as deployment and testing, freeing up valuable development and operations resource time away from undifferentiated heavy lifting tasks. 
  • Facilitate auditing and compliance: A centralised repository of configuration data also makes it easier to track changes and ensure compliance with IT security standards such as SOC2, HIPAA, NIST, PCI-DSS, CESG and DORA. 
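The auditing benefit in particular can be sketched as a simple diff of intended state (from the Source of Truth) against deployed state; both dictionaries below are illustrative:

```python
# Drift audit sketch: compare what the Source of Truth says should be
# deployed against what actually is, and report the disagreements.
# Both state dictionaries are invented for illustration.

intended = {"ntp_server": "10.0.0.1", "snmp_community": "ops-ro", "syslog": "10.0.0.5"}
actual   = {"ntp_server": "10.0.0.1", "snmp_community": "public",  "syslog": "10.0.0.5"}

def drift(intended: dict, actual: dict) -> dict:
    """Keys where deployed state disagrees with the Source of Truth."""
    return {k: (intended.get(k), actual.get(k))
            for k in intended.keys() | actual.keys()
            if intended.get(k) != actual.get(k)}

print(drift(intended, actual))  # {'snmp_community': ('ops-ro', 'public')}
```

Run periodically from a pipeline, a check like this turns compliance evidence into a by-product of normal operations rather than a manual audit exercise.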


How CACI can help bolster your configuration management journey

Alongside our strong heritage in Network Infrastructure Engineering and Consulting, we have a deep bench of ITSM consultants available to help with your CMDB migration programmes – across the spectrum from service design, through project and programme management, to data and solution architecture.

Let us help, and see how we can unlock the value of the CI data you have to bring you closer to true application observability, rather than plain asset visibility.

DORA & NIS2: Key considerations for senior management

In our increasingly digital world, safeguarding the digital infrastructure and information systems that uphold financial companies is now critical. Two key regulatory frameworks, DORA and NIS2, have emerged as essential regulations designed to enhance the protection of financial companies’ operations and systems.

My first blog of the four-part DORA and NIS2 blog series introduced the new financial regulations in-depth. In the second blog, I explained how these new regulations will impact UK financial companies. This blog will explore the key considerations around DORA and NIS2 for senior management.

In light of DORA and NIS2 taking effect, it is integral that senior stakeholders within financial companies are aware of the considerations that must be taken to effectively comply with these regulations and adhere to them accordingly. A few of the key considerations for senior management to be aware of are as follows:  

Navigate the cost of compliance 

It is important for senior management within certain financial companies to consider that complying with regulations may accrue significant financial costs. This is particularly likely in small and medium-sized enterprises (SMEs). Becoming digitally resilient and implementing the necessary measures to meet DORA requirements may require a hefty investment in technology, resources and expertise. This may, however, prove small in comparison to the cost of a breach, incoming fine, loss of reputation or even customers.  

Carefully assess maturity and capabilities 

The maturity and complexity of a financial company’s governance and internal practices will affect the challenges it faces in complying with DORA. Companies with lower maturity profiles may need to invest more resources and effort to meet DORA’s requirements. At every maturity level, it is vital for senior management to conduct thorough evaluations of the current state, identify any existing gaps and allocate the appropriate resources for compliance.  

Turning requirements into actions can be complicated

DORA introduces new compliance obligations and expectations for financial companies. It requires them to embed digital resilience throughout their operations, develop a Digital Resilience Strategy, implement a Digital Resilience Framework and address areas such as operational resilience testing, threat intelligence sharing and third-party risk management. Senior management must prepare themselves for the likely challenging undertaking of understanding the specific requirements and translating them into actionable steps across the wider business.  

Ensure third-party service providers’ compliance

Financial companies often rely on third-party ICT service providers to support their operations. DORA also applies to these service providers, imposing additional compliance obligations and oversight requirements. Therefore, it is critical for senior management to verify that third-party providers adhere to the prescribed standards and align with DORA’s requirements, which may involve renegotiating contracts or conducting due diligence to ensure compliance.  

Adhere to the compliance timeline 

While the European Parliament has approved DORA, its requirements only apply from early 2025. Conducting a thorough gap assessment, developing a roadmap and implementing the necessary changes can be time-intensive, particularly given the complexity of the requirements and the potential need for significant operational adjustments. Therefore, senior management must plan compliance efforts and resources accordingly to align with the designated timeframe.

How can CACI help? 

With over 20 years’ experience in helping deliver effective IT and security strategies to financial companies, CACI can help you navigate the changes and challenges brought on by DORA. Our experienced security and compliance experts can bolster your understanding of your network assets, help you conduct maturity assessments, address compliance gaps regarding the fulfilment of DORA implementation requirements, and much more.  

For more information, please read our recent whitepaper “Compliance with DORA and NIS2: Essential steps for UK financial companies”, which explores the impact of DORA and NIS2 on financial companies in the UK, key considerations for senior management and best practices for achieving compliance. You can also get in touch with the team here.

Free NetDevOps training to learn network automation with Cisco U


Cisco have recently complemented their various Training and Learning Platforms (including Cisco Digital Learning, Cisco Learning Network and Cisco Live) with a new user-friendly offering: Cisco U. While some of the content is pricey, we’ve found some completely free-of-charge network automation courses that we think you should know about.

What are the Cisco U pricing plans?

Unlike other Cisco training platforms, no special user account is required – just use the same Cisco Connection Online (CCO) account you currently use as a NetEng to log in to the main Cisco Portal for activities such as Cisco Certification Tracker, Cisco Software Downloads and Cisco TAC Case Manager.

Cisco U currently has three Pricing Plans: 

  1. Cisco U Free – £0 per year 
  2. Cisco U Essentials – ~£1,200 per year (or 15 Cisco Learning Credits) 
  3. Cisco U All Access – ~£3,800 per year (or 48 Cisco Learning Credits) 

Generally, the difference between the subscription levels is around the specialism of content available. For more interactive and live tutorial content, higher subscription levels are required. As a summary of the differences: 

Note: Cisco Learning Credits (CLC) are prepaid vouchers that you may already have an allowance of if you act as a Cisco VAR (value-added reseller) or have a large enough Cisco contract. 

What free content is available?

Although some of those prices may seem a little steep for individuals, there is a great deal of free content available within the network automation realm, especially for those looking to learn about topics such as: 

  • Practices: NetDevOps, pipelines, pull requests, version control, CI/CD 
  • Data Structures: YANG, YAML, JSON, XML 
  • Coding: Python, PIP, PyPI, Netmiko, pyATS, Genie, XPRESSO, Bash 
  • Tooling: Ansible, Terraform, Cisco NSO, Vim, Linux, API 
  • Protocols: gNMI, RESTCONF, NETCONF, REST API 
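To give a flavour of the data-structure topics above, here is a minimal, standard-library-only Python sketch (the interface record and its field names are illustrative, not taken from any real device or YANG model) showing how the same payload moves between JSON, as a RESTCONF response might deliver it, and XML, as NETCONF would frame it:

```python
import json
import xml.etree.ElementTree as ET

# Illustrative interface record, shaped like a RESTCONF GET response (JSON).
raw = '{"interface": {"name": "GigabitEthernet1", "enabled": true, "mtu": 1500}}'

# JSON -> Python dict: the usual first step in a NetDevOps script.
intf = json.loads(raw)["interface"]

# Python dict -> XML: roughly how the same payload is framed over NETCONF.
root = ET.Element("interface")
for key, value in intf.items():
    child = ET.SubElement(root, key)
    # XML has no boolean type, so render True/False as "true"/"false".
    child.text = str(value).lower() if isinstance(value, bool) else str(value)

xml_payload = ET.tostring(root, encoding="unicode")
print(xml_payload)
```

Once you are comfortable round-tripping structured data like this, the protocol courses (RESTCONF, NETCONF, gNMI) become far easier to follow, since each is essentially a different transport for the same underlying models.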

Here’s a curated summary of some of our favourite, completely free-of-charge courses: 


What does the future of Cisco Learning look like? 

This looks like a promising step in consolidating all the disparate Cisco Learning platforms, systems and content into one centralised, easily-searchable and visually-appealing place. Sure, the pricing may seem steep at present – and as an individual general network engineering learner, you’d get better bang for your buck going via CBT Nuggets, INE, Pluralsight or Udemy – but for free network automation-specific content, Cisco U is a surefire winner for anyone wanting to break into the world of NetDevOps. 

How CACI can supplement your network automation efforts

In the midst of a network automation initiative and struggling to find the right NetDevOps-qualified professionals to drive your latest automation, dashboard or observability project forward? Let us help: see how your business can fully utilise our talented NetDevOps NRE, SRE, developer, automation and coding experts to cut through your engineering backlogs. 

How do DORA and NIS2 impact UK financial companies?


In our increasingly digital world, safeguarding the digital infrastructure and information systems that uphold financial companies is now critical. Two key regulatory frameworks, DORA and NIS2, have emerged as essential regulations designed to enhance the protection of financial companies’ operations and systems.

In the first of our series of blogs, we introduced the topic of DORA and NIS2 and explained the new financial regulations. Here I will be exploring how these regulations will impact UK financial companies.

DORA applies to a range of financial institutions including banks, investment companies, payment service providers and critical third-party service providers that operate within the EU. UK-based operators that service the EU market must therefore comply with DORA and NIS2.  

Companies that fall under this scope will be impacted in the following ways: 

Broader compliance requirements 

UK-based financial companies that service the EU must comply with the new requirements set out by DORA and NIS2, which are intended to improve operational resilience and cybersecurity. These requirements include third-party security management, supply chain risk, vulnerability disclosure practices, risk management measures, incident reporting and more. The resulting stiffened regulatory oversight and supervision means UK companies must reassess their operational processes and reporting mechanisms and develop a risk management framework.

Harmonising cybersecurity measures

NIS2 aims to harmonise cybersecurity measures across the EU, including UK operators that service the EU market, to maintain a consistent level of cybersecurity and resilience. This harmonisation will align UK companies with the cybersecurity standards and practices of other EU member states. UK financial companies may need to create incident response plans or revisit their existing reporting mechanisms to adhere to this.  

Standardising and strengthening operational resilience

DORA prioritises the maturity of cyber, operational and technology resiliency in financial companies. It consolidates regulatory initiatives and aligns with the Bank of England, Prudential Regulation Authority (PRA) and Financial Conduct Authority (FCA) requirements, strengthening operational resilience in the financial services sector. UK financial companies will need to create extensive testing programmes to guarantee the resilience of their systems and perform gap analyses to align with DORA’s latest requirements.

Varying impact on distinct types of firms 

DORA’s impact will vary based on the size and maturity of financial companies. For example, established multinational banks with existing operational resilience strategies may face minor impact. On the other hand, smaller banks, fintech companies, insurance firms, fund management firms and wealth management firms may require substantial strategy changes and a redistribution of resources to meet DORA’s requirements.  

Promote information sharing 

DORA encourages a collaborative culture among financial companies by promoting the exchange of cyber-threat information and intelligence. This proactive approach strengthens the overall resilience of the financial sector.  

Impact on ICT third-party service providers

DORA not only applies to regulated financial companies, but also has implications for the ICT third-party service providers that support them. Providers of cloud computing services, software, data analytics and data centres must comply with DORA, ensuring a level playing field for all. UK financial companies must align with each ICT service partner to assess and document any potential associated risk and ensure their contracts include all key elements.  

Incident reporting & response management

DORA mandates UK financial companies to report any major ICT-related incidents to local authorities. It also stipulates the reporting of any cyber threats on a voluntary basis, and to inform customers of incidents. With this in mind, UK financial companies will need to revisit their supplier contracts to ensure they meet all incident response requirements including identifying and recording all incidents, reporting to regulators within designated timeframes and pursuing remediation action.  

Impact of NIS2 on US financial companies 

While NIS2 is a regulation specific to EU member states, its impact can still be felt in financial companies across the US. Compliance with regulations in the US is overseen by agencies including the Securities and Exchange Commission (SEC) and the Consumer Financial Protection Bureau (CFPB). NIS2’s implementation means that certain US companies operating within the EU, or conducting business with EU member states, will need to align their cybersecurity and information security practices to ensure NIS2 compliance is maintained. Beyond being mandatory for those in scope, compliance is strongly encouraged for financial companies that wish to retain their customers’ and investors’ trust.

How can CACI help?

With over 20 years’ experience in helping deliver effective IT and security strategies to financial companies, CACI can help you navigate the changes and challenges brought on by DORA. Our experienced security and compliance experts can bolster your understanding of your network assets, help you conduct maturity assessments, address compliance gaps regarding the fulfilment of DORA implementation requirements, and much more.  

For further insight into the impact of DORA and NIS2 on financial companies in the UK, key considerations for senior management and best practices for achieving compliance, please read our whitepaper “Compliance with DORA and NIS2: Essential steps for UK financial companies”. You can also get in touch with the team here.

Why Cloud-native telco networks must rethink their OSS/BSS


With businesses looking to cut costs and increase efficiencies, it should be no surprise that the telecommunications industry is (slowly) moving towards the public Cloud to run some of its mission-critical backend systems, chiefly the Operational Support Systems (OSS) and Business Support Systems (BSS) which underpin the business and revenue-generation model of a modern telco. With pioneers such as Totogi, the management plane of a modern telco network is bound to interact with some form of Cloud Service Provider (CSP) offering. So, what major pressure points are telco networks facing, and how are they overcoming them? 

Pressure to maximise revenue through increased agility

Increased customer demand, competition and inefficient legacy monolithic OSS/BSS systems mean that telco operators are struggling to maximise their Revenue Per User (RPU) in a world of constantly-growing network access technologies (3G, 4G, 5G, Edge, IoT) and increased consumer bandwidth, service appetite and choice between the newer MVNOs (Mobile Virtual Network Operators) in the market. Agility – for Service Provision, Service Operation and Charging Model changes alike – is increasingly becoming the de facto differentiator between telcos in increasingly saturated marketspaces. 

Where the telco’s geographic footprint once reigned supreme, the public Cloud provider is now increasingly encroaching on its traditional business. This is evidenced by innovative offerings such as Microsoft Azure Private 5G Core and AWS Private 5G starting to displace the decades-old practice of costly, inflexible, vertically-integrated network vendor solutions in favour of more Cloud-native alternatives. 

Additionally, the number and complexity of nodes and technology types present are pushing operations away from the old, human-fettered “pets” approach towards a “cattle” approach, with operator-assistance tooling such as ML, AIOps and Network Automation allowing the same – or in many cases, reduced – headcount to run an ever-growing network boundary. 

Selling more data with data

Telcos often possess much of the data (analytics) that they need to empower them to sell more data (allowance) to their subscriber base, but are often not empowered with agile or Cloud-native tooling or frameworks to launch retention-focused offerings such as the Totogi churn prediction service or provide dynamic Fair Usage Policy management based on actual usage. For telcos moving towards the “as a service” offering spectrum – such as charging as a service, 5G RAN core as a service and others – Cloud-native approaches and tooling are required alongside more modern OSS/BSS Cloud-led approaches to fully exploit this. 

To do this, telecommunications providers must focus on the following: 

  • Modernising the network 
  • Monetising the network 
  • Reducing the TCO of the network 

Cloud-native is key to all three, particularly with regard to unlocking small customer wins that otherwise couldn’t bear the CapEx-intensive, slow-delivery approaches of older monolithic cores within older OSS/BSS packages and the like. Likewise, monetisation of assets becomes easier when data can be processed on an ad-hoc, sub-hourly basis with compute billed accordingly, akin to Functions as a Service (FaaS) compute. 

NFV, VNF, CNF, vRAN, AI… oh my!

While discussions around artificial intelligence (AI) such as GPT-4 are new, related improvements to telco networks such as these are very much not, and have been in the industry zeitgeist for many years: 

  • Network Function Virtualisation (NFV) 
  • Virtual Network Function (VNF) 
  • Cloud-native Network Function (CNF) 
  • Virtual Radio Access Network (vRAN) 

“Carrier Grade” and “Cloud” have not previously been mentioned in the same sentence; not because the latter isn’t capable of the former, but because “Carrier Grade” has always effectively meant large, monolithic applications and network operating systems deployed on large vertically-scalable, vertically-integrated platforms – such as the Network Chassis or VLR/HLR Controller. Internally, while similar in concept to the Twelve-Factor App, the proprietary nature of the internal northbound and southbound communications buses has limited the innovation and integration possible with these devices. Added to this, the cost to vertically scale (read: buy a new, much bigger unit) is prohibitive for new entrants in the space. 

As the IT industry has abstracted and minified compute and programming into containers, IaC and orchestration (Kubernetes et al), the telco industry has effectively followed its IT peer: with the abstraction and componentisation of telco network functions into NFV, VNFs and CNFs, the rise of the vRAN and “RAN as a Service” has begun. Many of these underpin the public Cloud providers’ telco 5G offerings, and are only possible because the wider telco industry is reaching a level of maturity with Cloud and related coding discipline that it had previously eschewed. 
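To make the parallel concrete: a CNF is packaged and scheduled like any other Cloud-native workload. A minimal, purely illustrative Kubernetes manifest (the function, image name and resource figures are hypothetical) might look like this:

```yaml
# Illustrative only: a containerised network function deployed like any app
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-upf          # hypothetical 5G User Plane Function
spec:
  replicas: 3                # horizontal scaling, not a bigger chassis
  selector:
    matchLabels:
      app: example-upf
  template:
    metadata:
      labels:
        app: example-upf
    spec:
      containers:
      - name: upf
        image: registry.example.com/cnf/upf:1.0   # hypothetical image
        resources:
          requests:
            cpu: "2"
            memory: 4Gi
```

The point is the operational model: capacity becomes a replica-count change in version-controlled configuration, rather than a forklift hardware upgrade.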

How CACI can support your move towards a connected industry 

We see similar trends across our enterprise and public sector clients with the rise of IoT networks, Industry 4.0 and the like. Previously monolithic approaches, both in hardware and software, are giving way to more agile, nimble Cloud-led approaches, tools, techniques and “as a service” offerings. Telcos can now leverage the benefits of public Clouds without having to rip and replace their current systems. This enables scalability, flexibility, cost-effectiveness and access to AI capabilities and machine learning prediction models that can produce powerful insights from world-class analytical products via open API-based services. 

CACI Network Services has a rich heritage in the telco sector; get in touch to see how we can help your telco, MVNO or OSS/BSS product fully exploit the agility and opportunity that Cloud can provide to increase the return on your RPU. 

What are DORA and NIS2?


This blog is the first of a four-blog series which will be exploring many aspects of the newly implemented DORA and NIS2 financial regulations.

In our increasingly digital world, safeguarding the digital infrastructure and information systems that uphold financial companies is now critical. Two key regulatory frameworks, DORA and NIS2, have emerged as essential regulations designed to enhance the protection of financial companies’ operations and systems.

What is DORA? 

DORA, or the Digital Operational Resilience Act, aims to enhance the operational resilience of the financial sector in the context of digitalisation. It is part of the Digital Finance package, which includes measures to enable and support digital finance while mitigating associated risks. Effective from 16th January 2023, the regulation applies to all financial services companies and their external vendors within the EU market, mandating compliance by 17th January 2025.

DORA is underpinned by five pillars that address various aspects of digital operational resilience, including:  

  1. ICT risk management: Effectively managing ICT and security risks, including establishing resilient ICT systems, identifying risks, implementing protection measures, promptly detecting anomalies and having robust business continuity plans in line with recognised standards and best practices.

  2. ICT-related incident reporting: Establishing and implementing a management process to monitor and log ICT-related incidents. It involves classifying incidents based on regulation-defined criteria and reporting major incidents to competent authorities.
     
  3. Digital operational resilience testing: Financial entities periodically testing ICT risk management capabilities and functions for preparedness and identification of weaknesses, deficiencies or gaps and the prompt implementation of corrective measures.
     
  4. ICT third-party risk: Managing risks associated with third-party service providers, including assessing the criticality of third-party providers, outlining key elements in contracts, conducting due diligence and managing risks.

  5. Information sharing: Emphasising the exchange of cyber threat information and intelligence between entities within trusted financial communities. The objective is to raise awareness of new cyber threats, share reliable data protection solutions and optimise operational resilience tactics. 

What is NIS2? 

Network and Information Security Directive 2 (NIS2) aims to establish a higher level of cybersecurity and resilience within the EU, strengthen incident response capabilities and eliminate divergences in cybersecurity. It entered into force on 16th January 2023, and EU members must transpose its measures into national law by 17th October 2024.  

With the company’s maturity and current market conditions taken into consideration, companies must prioritise the following areas to safeguard their infrastructure and effectively comply with NIS2 regulations:  

  1. Providing sufficient training and resources: Employees must be up to date on their cybersecurity knowledge by receiving adequate training and resources that will promote a security-conscious culture across the organisation.

  2. Streamlining incident reporting: Efficient processes for reporting and managing security incidents must be established, including prevention, detection and response measures.

  3. Strengthening overall security posture: Companies can improve their security posture by carrying out proper security controls, technologies and best practices.
     
  4. Funding for cybersecurity: Companies can enhance their cybersecurity measures and protect critical assets by supplying sufficient resources and funding. 

How can CACI help? 

With over 20 years’ experience in helping deliver effective IT and security strategies to financial companies, CACI can help you navigate the changes and challenges brought on by DORA. Our experienced security and compliance experts can bolster your understanding of your network assets, help you conduct maturity assessments, address compliance gaps regarding the fulfilment of DORA implementation requirements, and much more. 

To learn more about the impact of DORA and NIS2 on financial companies in the UK, key considerations for senior management and best practices for achieving compliance, please read our whitepaper “Compliance with DORA and NIS2: Essential steps for UK financial companies”. You can also get in touch with the team here.

How CACI helped optimise utilisation of storage capacity across the MoD estate


BACKGROUND

The MoD is the UK’s second largest landowner, in possession of 1.1m acres across the UK (2% of the country’s landmass). Land usage and management is crucial to the MoD’s operation, and CELLA is paramount in carrying out these critical activities. In 2020, the Warehousing and Distribution Optimisation (WDO) team began a programme to better understand and manage the estate. 

The WDO needed a central repository that would supply a consolidated view of the wider MoD landscape to bolster planning and decision-making. The solution needed to offer evidence and operational data for the MoD’s ability to function, with a high standard of consistency, accuracy and analytical reporting. This was crucial to the WDO’s objective of increasing efficiency and using reliable evidence to manage efforts, storage capabilities and limitations. 

THE CHALLENGE

The MoD struggled to easily understand the space it maintains and the attributes of its various storage locations. This restricted their planning and decision-making ability for military operations or civil requirements, particularly storing PPE reserves during the Covid-19 pandemic. Warehousing and other storage facilities were managed on an individual basis without a centralised database or management system, and data storage was often highly siloed.  

The WDO decided upon a bespoke solution delivery approach, as there was no readily available COTS tool that met their complex requirements. The system needed to prompt users (via email) to provide updates, and the frequency of updates needed to be configurable according to need (e.g. data for short-term storage facilities should be updated more often than longer-term facilities.) 

The data held on the solution needed to be comprehensive, including everything from basic facility type to state of repair and security factors. 

THE SOLUTION

We worked with the WDO to optimise and classify their raw data and understand usage and users’ needs. Using Mood’s no-code software, we rapidly deployed an integrated solution that put relevant information at the fingertips of decision-makers. 

One-click/two-click navigation provided an optimal user experience and supported the MoD’s goal of promoting self-sufficiency. Standardised terms and references enabled users to search the entire system, additional permissions-based access could be granted for those users who require it, and automated e-mail reminders encouraged efficient action. 

The Mood platform we delivered allows the input data to be filtered and combined in multiple ways to supply answers to operational and planning challenges, such as readiness for military or civilian emergency operations. Functional aspects such as the frequency of automated prompts are configurable to meet local conditions. Data can be exported for secondary reporting, e.g. using Power BI tools.  

The frequency of email reminders can be configured on a site-by-site basis to ensure data is updated for sites where storage requirements change regularly while keeping effort low for sites used for long-term storage. 

THE RESULTS

Over 300 buildings are now listed within this tool, and the WDO users are working towards achieving comprehensive estate coverage. The WDO’s understanding of their operational assets and storage buildings has improved, helping them decide whether particular locations or sites are right for particular actions. This ultimately allows them to optimise their estate. Since delivering the first phase of this programme, we have extended the CELLA capability by adding: 

  • Self-sufficiency: As the solution was being handed over to a central support desk, we ensured that wherever possible, users could maintain the solution themselves and not rely on CACI. 
  • User-specific information: Users’ home screens were updated to showcase information immediately, allowing them to focus on what needed to be done. 
  • Streamlined administration: Users can now access all the information they need about storage sites in one place without having to telephone individual sites and speak to multiple people. Stored items can be located and, just as importantly, storage opportunities can be identified quickly. 

THE FUTURE

The CELLA Data Management Tool (CDMT) team are now working their way around the country to load further sites and hardstandings into the tool. This will give the MoD greater insight into their estate of buildings that is expanding in the face of global instability. 

Since transferring to Joint Support in April of this year, the CELLA team have secured multiple storage successes, which will already see savings to the MOD of several million pounds over the next few years. As CELLA continues to mature and understanding of its potential grows, these figures will undoubtedly continue to rise.

Wing Commander Duncan Serjeant, Ministry of Defence (MoD) Joint Support

Read the full customer story here >>