This blog is the first of a two-part series that will uncover the value network automation can bring to a business and how to persuade the C-suite of that value. Part two explores strategies for keeping the C-suite interested in pursuing network automation and mistakes to avoid when developing strategies.
Why is network automation critical in a business?
Network automation allows you to automate the planning, deployment and optimisation of your business’ operations, networks and services. It is a game changer because it enhances the efficiency, reliability and capacity of your business’ management of its network infrastructure. This minimises the risk of human error, maximises scalability and helps you maintain a competitive edge in the market.
With an increasing focus on digital services and data connectivity, ensuring that network automation becomes commonplace in a business has become paramount to long-term operational success. The importance and prevalence of network automation in businesses has skyrocketed in recent years, despite a reported 77% of technology professionals seeing room for improvement in their data centre network automation strategies.
This, coupled with 45% of organisations expecting their data centre network automation investments to earn an ROI within two years, stresses the need for businesses to get the C-suite on board and invest in a network automation strategy. But how do you go about effectively and strategically selling the value of network automation to the C-suite?
How to create a successful business case
Step 1: Lead with evidence
According to an article by Enconnex, the weakest link in data operations tends to be humans, with human error accounting for ~80% of all outages. Existing pipelines in businesses tend to operate sequentially and manually, increasing the probability of human error through the involvement of multiple individuals in the chain of events.
Step 2: Outline a strategic software development process
Ensuring each step of the operational process, from integration to delivery, is tested and accounted for, and outlining this in a cohesive plan for the C-suite, will help earn their trust. Developing a process flow that outlines a long-term strategy and what the business will achieve through network automation will further encourage this crucial buy-in. A visualisation tool or platform to convey this can significantly enhance their understanding.
Step 3: Stage a production deployment in a test environment
Unlike application testing, network testing is often difficult because the network doesn’t exist in isolation and is nearly always the lowest level of the technical stack. This makes performing tests complex. While the applications within a development or pre-production environment are often considered non-production, the underlying network to these application test environments is nearly always considered “production”, in that it must work in a production-like, always-on, fault-free state for the applications atop it to be tested and fulfil their function. Replicating complex enterprise, data centre or even cloud networks often comes at a price, and organisations can typically only duplicate or approximate small proportions of their network estate. As a result, staging looks more like unit testing in software development: making small but incremental gains and progressively applying them to the production network being automated.
While many organisations may opt for a waterfall, agile or other project management approach, we nearly always find that an agile-like, iterative, unit-tested approach to developing network automations, such as scripts, runbooks, playbooks and modules, is more beneficial in pushing automation into the organisation and into wider adoption than any other approach.
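To make this concrete, below is a minimal sketch of what a unit test for one small automation artefact might look like, using Python and pytest. The helper function and its naming convention are hypothetical illustrations, not taken from any specific toolchain.

```python
# test_interface_description.py - a minimal pytest sketch for one small
# automation artefact. build_description() and its naming convention are
# hypothetical illustrations.

def build_description(role: str, peer: str, circuit_id: str) -> str:
    """Render a standardised interface description from structured inputs."""
    return f"{role.upper()}:{peer}:{circuit_id}"

def test_description_is_uppercased_and_delimited():
    assert build_description("core", "ldn-sw-01", "CCT-1042") == "CORE:ldn-sw-01:CCT-1042"

def test_description_contains_no_spaces():
    # Some network operating systems truncate or mangle descriptions with spaces.
    assert " " not in build_description("edge", "man-fw-02", "CCT-2001")
```

Each small, passing test of this kind becomes one of the incremental gains described above, promoted towards the production estate only once it has proven itself in staging.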
Step 4: Prove that benefits will be reaped through the staged production
One of the benefits of modern network engineering is the ability to leverage the commoditisation of the vertically integrated network hardware stack that the industry has undergone over the last decade. It is now easier – and cheaper – than ever before to spin up a virtual machine, container or other VNF/NFV equivalent of a production router, switch, firewall, proxy or other network device that will look, feel, act and fail in the same way as its production equivalent. When combined with software development approaches like CI/CD pipelines for deployment and rapid prototyping of network automation code, this can be a winning combination, making it possible to rapidly pre-test activities within ephemeral container-like staging environments and to maintain dedicated staging areas that look like production.
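As a hedged sketch of such a pre-test, the snippet below uses the open-source Netmiko library to run a read-only check against a virtual lab device before a change is promoted. The device address, credentials and expected interface list are placeholder assumptions, and a real pipeline would run this as a CI job against an ephemeral, containerised image of the production platform.

```python
# precheck.py - a read-only pre-change validation against a staged lab device.
# Host, credentials and the expected interfaces are placeholder assumptions.
from netmiko import ConnectHandler

LAB_DEVICE = {
    "device_type": "cisco_ios",  # a virtual image of the production platform
    "host": "192.0.2.10",
    "username": "labuser",
    "password": "labpass",
}
EXPECTED_UP = {"GigabitEthernet1", "GigabitEthernet2"}

with ConnectHandler(**LAB_DEVICE) as conn:
    output = conn.send_command("show ip interface brief")

# Naive parse of the summary table: an interface is healthy if both the
# status and line-protocol columns read "up".
up = {line.split()[0] for line in output.splitlines()[1:]
      if line.strip() and line.split()[-2:] == ["up", "up"]}

missing = EXPECTED_UP - up
assert not missing, f"pre-check failed, interfaces down: {missing}"
print("Pre-change validation passed - safe to promote the change.")
```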
How can CACI help?
CACI’s Network Services team comprises multidisciplinary IT, network infrastructure, consulting and automation engineers with extensive experience in network automation. We can support and consult on every aspect of your organisation’s network, from its architecture, design and deployment through to cloud architecture adoption and deployment, as well as maintaining an optimised managed network service.
As the year comes to an end, the networking space is showing no signs of slowing down. Networking is continuing to show remarkable advances, marked by emerging technologies such as AI and network-specific LLMs, with changing business demands that are paving the way for a more secure and connected future.
Businesses and industries that recognise the power of adopting these evolving networking technologies and best practices in improving their performance will set themselves up for unparalleled future growth, solution scalability and competitiveness. Those that don’t are increasingly getting left behind.
So, what are the main networking trends that we have seen in 2024?
Rollout of 5G
As 5G becomes readily available and increasingly adopted, it has been recognised as one of the most significant trends of the year so far, thanks to its unmatched speed and capacity. Additionally, unlike its 4G and 3G predecessors, the availability of industrialised, private 5G offerings – acting as a more capable, longer-reaching alternative to Wi-Fi in specific building scenarios – is leading the global 5G services market towards a projected annual growth rate of 59.4% by 2030.
Network services have had to accommodate the increased bandwidth and low latency that have come with the rollout of 5G, ensuring a smooth, responsive user experience and the ability to connect even more devices within a small area without compromising on performance. These capabilities have augmented the likes of IoT devices and virtual reality (VR) applications, which require rapid data transfer and real-time communication. We expect trends such as VR and augmented reality (AR) – exemplified by the Apple Vision Pro – to accelerate networking’s dependence not only on bandwidth (speed), but also on latency (lag); the latter has often been neglected by many enterprise networking technologies.
5G is ultimately positioning itself to strengthen the economy and help transform businesses through speed and interconnectedness.
Edge computing migration
Despite its industry presence for years now, edge computing has been gaining prominence in 2024 as a means of helping organisations process data closer to its sources or users, at the edge of the network. What’s old is in many ways new again, with the content delivery network (CDN) coming back to the fore as a primary on-ramp into public cloud and other aggregated network ecosystems and walled gardens. Both edge and CDN minimise latency and enhance real-time processing capabilities in ways that are not possible purely via the public cloud. By processing data at the edge of the network, the strain on network bandwidth is also alleviated.
Edge computing will continue influencing network architecture design and redefining the parameters of data processing with the development of smart cities, IoT and AI-powered applications that rely on data processing, with businesses strongly encouraged to migrate workloads to edge computing.
Multi-cloud networking and environments
With an increased demand for flexible and scalable solutions in networking, multi-cloud networking (MCN) and environments have become essential for businesses to keep up. Multi-cloud networking and environments consist of many tools and solutions that enable networking and connectivity across cloud environments. They mitigate the limitations that come with using traditional network architecture by allowing for seamless integration across multiple cloud environments.
The key challenge we see in our customer base with multi-cloud networking is the sheer amount of complexity and the same-but-different solutions within constructs such as cloud networking, underlay networking and overlay networking. Many customers have multi-cloud through necessity rather than strategy – for instance, using Microsoft Cloud for Office 365 collaboration alongside AWS for developer-led public cloud, and likely a smattering of other PaaS and SaaS cloud offerings. We’re increasingly seeing the rise of cloud exchange gateways as an alternative to internet exchange (IX), bringing the same complexity of IX management – such as peering management, route policy and the like – down from the ISP domain and into the enterprise domain.
By 2031, the global market size of multi-cloud networking is projected to reach $19.9 billion USD (£15.7 billion), growing at a rate of 23.3%. Businesses that embrace multi-cloud networking and environments will find themselves connecting and managing workloads across diverse cloud environments and establishing a secure, high-performance network: one that carries out operations as efficiently as possible, flows data steadily between clouds to reduce data silos, optimises data transmission speeds for faster response times and improves customer experiences by evolving along with users.
AI networking
Of all the trends unfolding in the networking space this year, AI is proving to be a substantial one. Networking solutions have become increasingly reliant upon artificial intelligence (AI) for optimisation, maintenance and analytical purposes. AI networking has also bolstered capabilities within industries like network services to develop robust and efficient networks that will continue to support operations.
Trends such as network observability and network telemetry mean the volume of logs, traces and metrics that must be analysed is becoming untenable for any one human. AIOps is becoming a necessity to augment overworked and often under-tooled network operations staff in delivering, maintaining and optimising increasingly agile, complex and demanding enterprise networks.
By continuously influencing how networking infrastructure is built and integrating into network automation tools to enhance decision-making and analysis, AI is proving to be a game changer when it comes to networking. We’re finding several amazing use cases where an AI tool, such as GPT, enables us to grok an API with a contextually specific use case, or quickly glean through pages of troubleshooting documentation to find the exact nuance of a bug, CVE or PSIRT we’re in the midst of fixing or coding around.
To learn more about the impact of AI on networking through 2024, take a look at our blog on the top network automation trends.
Intent-based networking (IBN)
Intent-based networking (IBN) has been a groundbreaking networking advancement thanks to its ability to use automation and artificial intelligence (AI) to simplify network management. This technology has rapidly grown in popularity for networking-oriented businesses, as it allows administrators to define a network’s intent and automatically translate and implement these intentions across the wider network infrastructure to optimise its performance, security and scalability.
IBN eliminates the need for manual configuration, often a requirement of traditional networks, through automated processing based on real-time analytics, ultimately improving efficiency while decreasing the margin of error and revolutionising the ways in which businesses can streamline their network management.
While still not mature, the concepts of IBN are finding their way into mainstream NMS, OSS – and increasingly even ITSM – products, and matching the “as a service” patterns application development teams are used to from the public cloud world.
How CACI can support your networking journey
At CACI, our trained cohort of network automation engineers, network reliability engineers (NREs) and consultants are well versed in a plethora of IT, networking and cloud technologies, ready and willing to share their industry knowledge to benefit your unique networking requirements.
We act as a trusted advisor, helping your organisation drive better experiences through more effective use of technology and business processes. From NSoT through CI/CD, version control, observability, operational state verification, network programming and orchestration, our in-house consulting engineers have architected, designed, built and automated some of the UK’s largest enterprise, service provider and data centre networks, with a deep heritage in network engineering spanning over 20 years.
Take a look at Network Automation and NetDevOps at CACI to learn more about some of the technologies, frameworks, protocols and capabilities we have, from YAML, YANG, Python, Go, Terraform, IaC, API, REST, Batfish, Git, NetBox and beyond.
Digitisation of Joint Service Manuals (JSM) for Defence Equipment and Support
Defence Equipment & Support (DE&S) are the procurement arm of the UK Ministry of Defence. They have a pivotal role in fulfilling equipment requests from across the Front-Line Commands, Executive Agencies and At Arm’s Length Bodies such as the Submarine Delivery Agency (SDA). Their remit ranges from straightforward equipment procurement to the development of new technologies and ensuring the UK Armed Forces can maintain availability and readiness for a fleet of over 400 different platforms.
DE&S summarised the overall task as “to develop the Joint Service Manual (JSM) concept and codify the Receipt, Inspection, Issue, Storage & Maintenance (RIISM) Service Category”. CACI’s main task was to digitise the JSMs by bringing them into the “COMPASS for Land” digitised group of capabilities. In fact, CACI were able to go beyond digital transformation of the RIISM manual by adding three other important manuals.
DE&S prioritised a solution that not only digitised JSMs but also facilitated improved compliance and included interactive features to enhance suppliers’ understanding of and adherence to JSMs, making the process easier for them.
THE CHALLENGE
The commercial documentation was complex, lengthy and didn’t always keep pace with the evolution of processes over time. Because of this:
Compliance wasn’t high enough.
Interpretations of the commercial documentation sometimes resulted in incorrect actions.
DE&S needed a better way to support all actors in the procurement processes to save time, reduce individual differing interpretations, and improve compliance overall.
DE&S required a digitised version of its current paper-based product. This would ensure that JSM information could be found, searched, accessed, and condensed for ease of absorption, with clear explanations. The search function was important, aiming to make it easier to look across several JSMs for information and links.
THE SOLUTION
CACI created digitised JSMs with a flexible search facility, explanations of roles and responsibilities, and relevant dependencies involved in delivering items for DE&S. The solution enables searching across multiple JSMs, for instance a search for “quality” can be set to bring back all quality references in all JSMs. A user can bookmark favourite sections for repeat reference and can make suggestions in the solution for future enhancements of the functionality.
Mood was employed for the document digitisation aspect of the project. From a delivery perspective, this was an example of the CACI Mood team working alongside colleagues from other suppliers and within the Defence industry in a single delivery team under the overall management of Equinox, DE&S’s private sector programme delivery partner. This type of “Rainbow Team” approach worked well. Not only is Mood easy to integrate within a wider process that uses other software tools, but bringing different suppliers together into one team with a single leadership reduces barriers in communications and speeds up delivery.
Bringing the JSMs into the overall Mood Compass for Land solution brought extra benefits of a pre-existing sign-in apparatus, admin functions and feedback loops.
THE RESULTS
Users are reporting:
It’s much easier to find the instructions they need.
They have confidence that these are up to date.
Fewer issues relating to process are arising.
Efficiency is increasing.
Communications between parties in an end-to-end process have been improved.
Agreement is reached on actions faster, and with less debate.
In addition to the day-to-day operational benefits, the new digitised JSMs are supporting highly beneficial business analysis and root cause identification of areas for improvement.
THE FUTURE
CACI will continue to digitise JSMs as and when they are prioritised for action and will continue to make enhancements as required.
Although this case study focuses on a Defence context, the challenges outlined here are replicated across many industries and operations where adherence to instructions is critical, sometimes even for the preservation of life and limb, yet the quantity, complexity and changing nature of those instructions militate against compliance.
*Compass for Land is a Mood software solution that digitised the Common Support Model
Defence Digital provide personnel across the Ministry of Defence, around the globe, with the core IT services vital to their roles. Whether on the front line or in the back office, enabling mobilisation, modernisation and transformation, Defence Digital are the digital lynchpin in the MOD’s operations.
Many teams from all commands across the MOD, as well as industry partners, need to create architectural models of operations and processes, or to create interactive solutions that aid their people in coordinated, efficient and safe working. These needs can be short- or long-term, planned well in advance or urgent and unplanned, and can involve anywhere from one person to over a thousand users.
Defence Digital needed a software product and dedicated support that would be flexible enough to meet these multiple requirements, was scalable, and instantly available without commercial delays or constraints.
THE CHALLENGE
A solution was required that could be easily learned and deployed swiftly, that enabled rapid building of models and operational solutions, yet was technically sophisticated enough to tackle a wide range of tasks.
The MOD required a service that would deliver:
Flexible functionality in a single platform.
Speed in deployment and training of users.
A responsive support function.
The opportunity to influence the future development of the software in partnership with the supplier.
Build and maintenance of the IT infrastructure to support the software platform.
The software platform needed to:
Be a no-code / low-code software platform.
Give the ability to build architectures from which stakeholders could gain business insights.
Be architecture framework agnostic.
Deliver the ability to create digitised operating frameworks.
Enable analysis and presentation.
THE SOLUTION
Mood software is a great fit and CACI have proudly supported Defence Digital for around 8 years.
CACI agreed an enterprise licensing approach that means anyone in the MOD can request a Mood licence online or by telephone and be given access to a new Mood repository the same day. Training is provided promptly on request, and a short course is all that’s needed to start working productively with the software.
The support service provides help and guidance, making sure users get the most benefit from Mood, and CACI run regular user forums to help the MOD Mood User community share great ideas.
Many people in the MOD have become expert creators of material in Mood, and, because of the excellence of the presentation layer, many of the solutions built with it have hundreds of regular users who view and work with outcomes rather than building in Mood.
Mood Business Architect (MBA) software provides a no-code/low-code Enterprise Architecture tool for developing and maintaining models. The product is extremely flexible and enables users to define data structures and relationships as required to model their problem space. The software utilises a SQL Server database, and network hosting enables multiple architects to access and contribute to the model. A powerful permissions model within the MBA tool enables administrators to protect and restrict access as applicable.
Once developed, models can be shared with a wider stakeholder base via Mood Active Enterprise (MAE). Models are presented in a web browser and tools are available to make the user experience fully interactive, for instance, providing opportunities to update data, apply filters, drill down/up into lower or higher levels, etc.
THE RESULTS
There are between 60 and 100 individual repositories built in Mood at any one time, all supported through the Managed Service. Here are some of them:
GEAR, the Guide to Engineering Activities and Reviews, is a mandated source of guidance to the defence engineering community. Built originally by contractors and now maintained by the MOD personnel using Mood software, it replaces an unwieldy set of previous materials with fully digitised guidance, with unlimited user access. Widely and frequently used, GEAR has around 22,000 logins per year.
DLF, the Defence Logistics Framework, is a one-stop shop for defence logistics policy, digitising a comprehensive set of documents for the first time and supporting re-authoring. DLF has over 52,000 logins a year.
Maritime, Air and Land Defence Frameworks are all Mood-based high-level capability models of the domains. These provide a vital overview and breakdown of defence capabilities in their respective domains.
The reference frameworks save significant time for staff officers new in post and ensure consistency is maintained within the FLCs.
Support Chain Information Service Architectural Repository (Formerly LNECA: Logistics Net Enabled Capability Architecture) holds information on all of the logistics systems and has been in operation since 2008. It’s continuously updated and developed and is the intelligence source for briefings to senior managers. If deleted, it would have to be re-built as it’s vital to strategic and operational thinking.
THE FUTURE
The Managed Service (MS) provided to Defence Digital continues to support users throughout the MOD, giving users access to licensing and support via the MS desk. This provision underpins a wide range of operational capability through the Ministry and provides continued value through the delivery of comprehensive support and advice for enterprise architects and users of Mood applications.
As the Mood software platform develops, so do the capabilities it supports across the MOD, providing new functionality and performance so that MOD personnel can use the tool to drive operational efficiency in their respective areas of interest. The Mood Managed Service is a solution for Enterprise Architecture at the MOD and continues to drive collaboration throughout the organisation, providing the ability to model core business functions and processes, as well as their interdependencies MOD-wide.
As the MOD continues to modernise, the importance of such a function is clear and CACI remain committed to delivering a robust and capable Managed Service that continues to demonstrate Mood’s capability as the Enterprise Architecture tool of choice.
The Managed Service is excellent. It provides a means for anybody in the MOD to take advantage of the flexibility of Mood in the development of online applications or architectural models. In this it is second to none. In addition, the CACI team provide the support and management of the service to ensure that the end user has a trouble-free experience.
The responsiveness of this service to requests, issues and queries, in general, is quick, efficient and effective. The software is kept up to date in a controlled and risk-free way. This does not mean that there aren’t issues, but these are generally dealt with quickly and, in the main, transparently. Our experience of the Service in the development of Lighthouse (Defence Exercise Programme) and Lantern (Observation Capture and Analysis) for JW and Astral (Collective Training Objectives Management) for Air has enabled us to deliver solutions that are effective and adaptive to changing user needs, offering significant cost-effective benefits to their end user communities.
Alan Payne, Managing Director, Agilient International
The Submarine Delivery Agency (SDA) is part of Defence Equipment & Support (DE&S). Its purpose is to procure and project-manage the construction of future Royal Navy submarines and to support those in service, working with Navy Command and the Defence Nuclear Organisation (DNO). The In-Service Management (ISM) team sits within the SDA.
As part of the In-Service Management (ISM) team’s quality assurance function, periodic engineering audits are performed to ensure that processes are correctly followed when delivering equipment parts. During these audits, non-conformances may be identified which require attention, resulting in actions which must be tracked to completion.
ISM required a new capability to automate the management of this work and improve governance.
THE CHALLENGE
Equipment failure, with associated potential safety issues, could occur because of a failure to track non-conformance actions.
Experience was being lost as staff are normally moved to new posts every two years.
Lessons from previous audits were not always applied due to limited information accessibility.
Efficiency needed improvement. Previous tools used to manage audit work (e.g. Excel and SharePoint) required significant overheads to track and manage the audit calendar.
ISM wanted a tool that would secure the audit process and better support operations by decreasing the probability of actions being missed or delayed. Easy access to previous audit outcomes would help preserve team knowledge.
The solution needed to be self-sufficient in that all details of the item being audited could be input to the tool, and the audit team assigned. In addition, ISM looked for a significant reduction in elapsed time to complete each audit.
THE SOLUTION
The SDA chose CACI’s Mood Software to underpin their solution because of how well it lends itself to extending capabilities through the addition of new modules. COMPASS Submarines was initially developed to provide management of documented business processes, and CACI were able to weave in a new audit module that would avoid users needing to log in to separate software tools.
The new tool digitises the recording of audit details such as non-conformance findings and related actions. This is underpinned by a workflow, with alert emails triggered by activities like adding or updating audits or a non-conformance needing to be acted upon.
Scheduled emails act as reminders, for example when an audit is due. This is a successful instance of Mood software’s ability to be customised using JavaScript to deliver extra functionality to the end solution.
THE RESULTS
Efficiency is improved through system-driven working rather than reliance on personnel knowledge and human-driven processes:
Strengthened governance, as there’s auditable evidence that findings are being captured and tracked.
Reduced likelihood of recurring issues.
Management overhead surrounding audits has been significantly lowered, allowing a reduction in FTE dedicated to the tasks.
Retention of knowledge is improved, as outcomes of latest and previous audits are readily available.
THE FUTURE
The audit module is available to other parts of Defence; however, its value as an engineering audit compliance tool isn’t limited to a Defence context. We’ll be exploring new uses and are actively looking at extending the solution design to be relevant to other types of audits, such as the complete range of ISO standards.
We’re proud to share that the audit module was recognised with an award at the SDA Improvement Awards in November 2022.
Network automation has become increasingly prevalent in enterprises and IT organisations over the years, with its growth showing no signs of slowing down.
So, how is the network automation space evolving, and what are the top network automation trends that are steering the direction of the market in 2024?
Hyperautomation
With the increasing complexity of networks that has come with the proliferation of devices, an ever-growing volume of data and the adoption of emerging technologies in enterprises and organisations, manual network management practices have become increasingly difficult to uphold. This is where hyperautomation has been proving itself to be vital for operational resilience into 2024.
As an advanced approach that integrates artificial intelligence (AI), machine learning (ML), robotic process automation (RPA), process mining and other automation technologies, hyperautomation streamlines complex network operations by not only automating repetitive tasks, but the overall decision-making process. This augments central log management systems such as SIEM and SOAR with functions to establish operationally resilient business processes that increase productivity and decrease human involvement. Protocols such as gNMI and gRPC for streaming telemetry and the increased adoption of service mesh and overlay networking mean that network telemetry and event logging are now growing to a state where no one human can adequately “parse the logs” for an event. Therefore, the time is ripe for AI and ML to push business value through AIOps practices to help find the ubiquitous “needle” in the ever-growing haystack.
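To make the streaming telemetry point concrete, here is a minimal sketch using the open-source pygnmi library to read OpenConfig interface counters over gNMI. The target address, credentials and YANG path are placeholder assumptions, and the device must expose a gNMI endpoint for this to work; a production AIOps pipeline would subscribe to telemetry streams and feed a time-series store rather than print to a terminal.

```python
# gnmi_counters.py - reading interface counters over gNMI with pygnmi.
# Target, credentials and the OpenConfig path are placeholder assumptions.
from pygnmi.client import gNMIclient

TARGET = ("192.0.2.10", 6030)  # a device exposing a gNMI endpoint
PATH = ["/interfaces/interface[name=Ethernet1]/state/counters"]

with gNMIclient(target=TARGET, username="admin", password="admin",
                insecure=True) as gc:
    result = gc.get(path=PATH)

# Each notification carries structured updates; in an AIOps pipeline this
# data would be streamed onwards for ML-driven anomaly detection.
for notification in result.get("notification", []):
    for update in notification.get("update", []):
        print(update["path"], update["val"])
```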
Enterprises shifting towards hyperautomation this year will find themselves improving their security and operational efficiency, reducing their operational overhead and margin of human error and bolstering their network’s resilience and responsiveness. When combined with ITSM tooling such as ServiceNow for self-service delivery, hyperautomation can truly transcend the IT infrastructure silo and enter the realm of business by achieving wins in business process automation (BPA) to push the enterprise into true digital transformation.
Increasing dependence on Network Source of Truth (NSoT)
With an increasing importance placed on agility, scalability and security in network operations, NSoT is proving to be indispensable in 2024, achieving everything the CMDB hoped for and more.
As a centralised repository of network-related data that manages IP addresses (IPAM), devices and network configurations, NSoT supplies a single source of truth for all of them. It has been revolutionising network infrastructure management and orchestration by addressing the challenges brought on by complex modern networks, ensuring that operational teams can continue to understand their infrastructure. It also ensures that data is not siloed across an organisation and that network objects and devices can be managed easily and efficiently, while promoting accurate data sharing via data modelling with YAML and YANG and open integration via API into other BSS, OSS and NMS systems.
Enterprises and organisations that leverage the benefits of centralising their network information through NSoT this year will gain a clearer, more comprehensive view of their network, generating more efficient and effective overall network operations. Not to mention, many NSoT repositories are far more refined than their CMDB predecessors, and some – such as NetBox – are truly a joy to use in daily Day 2 operations compared to the clunky ITSMs of old.
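As a brief illustration of that open API integration, the sketch below uses the pynetbox client to pull device records from a NetBox instance; the URL, token and site slug are placeholders for whatever your own NSoT deployment uses.

```python
# nsot_inventory.py - reading device inventory from NetBox as an NSoT.
# The URL, API token and site slug are placeholders for your deployment.
import pynetbox

nb = pynetbox.api("https://netbox.example.com", token="REPLACE-ME")

# Pull every active device at one site. The same records could just as
# easily seed config templates, monitoring tools or CI pre-checks.
for device in nb.dcim.devices.filter(site="lon1", status="active"):
    print(device.name, device.device_type, device.primary_ip)
```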
Adoption of Network as a Service (NaaS)
Network as a Service (NaaS) has been altering the management and deployment of networking infrastructure in 2024. With the rise of digital transformation and cloud adoption in businesses, this cloud-based service model enables on-demand access and the utilisation of networking resources, allowing enterprises and organisations to supply scalable, flexible solutions that meet ever-changing business demands.
As the concept gains popularity, service providers have begun offering a range of NaaS solutions, from basic connectivity services such as virtual private networks (VPNs) and wide area networks (WANs) to the more advanced offerings of software-defined networking (SDN) and network functions virtualisation (NFV).
These technologies have empowered businesses to streamline their network management, enhance performance and lower costs. NaaS also has its place at the table alongside its aaS siblings (IaaS, PaaS and SaaS), pushing the previously immovable, static domain of network provisioning into a much more dynamic, elastic and OpEx-driven capability for modern enterprises and service providers alike.
Network functions virtualisation (NFV) and software-defined networking (SDN)
A symbiotic relationship between network functions virtualisation (NFV), software-defined networking (SDN) and network automation is proving instrumental in bolstering agility, responsiveness and intelligent network infrastructure as the year unfolds. As many network vendors are fond of opining, “MPLS is dead, long live SD-WAN” – which, while not 100% factually correct (we still see demand in the SP space for MPLS and MPLS-like technologies such as PCEP and SR), is certainly directionally correct in our client base across finance, telco, media, utilities and, increasingly, government and public sectors.
NFV enables the decoupling of hardware from software, as well as the deployment of network services without physical infrastructure constraints. SDN, on the other hand, centralises network control through programmable software, allowing for the dynamic, automated configuration of network resources. Together, they streamline operations and ensure advanced technologies will be deployed effectively, such as AI-driven analytics and intent-based networking (IBN).
We’re seeing increased adoption of NFV via network virtual appliances (NVAs) deployed into public cloud environments like Azure and AWS for some of our clients, as well as an increasing trend towards packet fabric brokers such as Equinix Fabric and Megaport MVE to create internet exchange (IX), cloud exchange (CX) and related gateway-like functionality. As the enterprise trend towards multi-cloud grows, a whole gamut of software-defined cloud interconnects (SDCI) is emerging to stitch together all the XaaS components that modern enterprises require.
Intent-based networking (IBN)
As businesses continue to lean into establishing efficient, prompt and precise best practices for network automation, intent-based networking (IBN) has been an up-and-coming approach to implement. This follows wider initiatives in the network industry to push “up the stack”, with overlay networking technologies such as SD-WAN, service mesh and cloud native supplanting traditional underlay network approaches in enterprise application provision.
With the inefficiencies that can come with traditional networks and manual input, IBN has come to network operations teams’ rescue by defining business objectives in a high-level, abstract manner so that the network can automatically configure and optimise itself to meet those objectives. Network operations teams that can devote more time and effort to strategic activities, rather than labour-intensive manual configuration, will notice significant improvements in overall network agility, reductions in time-to-delivery and better alignment with the wider organisation’s goals. IBN also brings intelligence and self-healing capabilities to networks: when changes or anomalies are detected, the network automatically adapts to address them while maintaining the desired outcome, bolstering network reliability and minimising downtime.
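The closed-loop idea at the heart of IBN can be illustrated with a toy reconciliation sketch: declared intent is compared against observed state, and any drift produces remediation actions. All names and values below are invented for illustration; real IBN platforms run this loop continuously against live telemetry and a policy engine.

```python
# intent_loop.py - a toy reconciliation loop in the spirit of IBN.
# Intent, observed state and actions are invented for illustration only.

INTENT = {"vlan10": {"state": "up", "mtu": 9000}}

def observe() -> dict:
    """Stand-in for telemetry collection; returns the current network state."""
    return {"vlan10": {"state": "down", "mtu": 9000}}

def reconcile(intent: dict, observed: dict) -> list[str]:
    """Produce a remediation action for every drift from declared intent."""
    actions = []
    for obj, desired in intent.items():
        actual = observed.get(obj, {})
        actions += [f"set {obj} {key} -> {value}"
                    for key, value in desired.items()
                    if actual.get(key) != value]
    return actions

for action in reconcile(INTENT, observe()):
    print("remediating:", action)  # a real platform would push this change
```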
As more organisations realise the benefits of implementing this approach, the rise of intent-based networking is expected to continue, reshaping the network industry as we know it. The SDx revolution is truly here to stay, and the move of influence of the network up the stack will only increase as reliance on interconnection of all aspects becomes the norm.
How can CACI support your network automation journey?
CACI is adept at a plethora of IT, networking and cloud technologies. Our trained cohort of network automation engineers and consultants are ready and willing to share their industry knowledge to benefit your unique network automation requirements.
From NSoT through CI/CD, version control, observability, operational state verification, network programming and orchestration, our expert consulting engineers have architected, designed, built and automated some of the UK’s largest enterprise, service provider and data centre networks, with our deep heritage in network engineering spanning over 20 years.
Take a look at Network Automation and NetDevOps at CACI to learn more about some of the technologies, frameworks, protocols and capabilities we have, from YAML, YANG, Python, Go, Terraform, IaC, API, REST, Batfish, Git, NetBox and beyond.
Digital forensics is a branch of forensic science that focuses on the recovery and investigation of digital devices, data and electronic evidence. With over 90% of crimes now having a digital element associated with them, digital forensics plays a pivotal role in delivering justice within criminal investigations, from the scene of the crime to the courtroom.
So, what does digital forensics entail? What makes it integral for businesses, and how are digital forensics processes carried out? What skills must one possess to pursue a role in this industry?
What does digital forensics entail?
Digital forensics encompasses the identification, extraction and interpretation of electronic evidence from digital devices such as computers, laptops, smartphones, tablets and even network infrastructure. By examining the data on these devices, digital forensics experts can supply insights and an understanding of the events that occurred, the actions taken and the individuals involved. Within an organisation, digital forensics can be used to identify and investigate cybersecurity and physical security incidents, as well as fraud, intellectual property theft, insider threats/bad leavers, sexual misconduct and embezzlement.
Why is digital forensics integral to businesses?
Digital forensics is vital for businesses as it safeguards against data security discrepancies. Since businesses typically have an influx of digital data from financial records to customer data and intellectual property, the use of digital forensics to investigate identified issues helps them avoid financial losses and reputational damage by identifying and investigating cyber enabled or dependent crimes and securing their information.
Data preservation
Digital forensics plays a crucial role in preserving and presenting evidence for legal proceedings. When crime(s) involving digital devices occur, law enforcement agencies and businesses must gather relevant evidence for legal purposes, such as criminal prosecutions or civil litigation. Digital forensics experts follow policy and procedure documentation to ensure the integrity, preservation and authentication of electronic evidence. They create forensic copies of digital devices using validated methods, document the chain of custody and use advanced techniques to extract and analyse data without altering its original state. This aspect is vital, especially in situations where data is regularly updated or extracted from various sources. This also ensures that the evidence collected is admissible in court and can effectively support legal actions.
Digital forensics process deep dive
During an investigation, digital forensic techniques are applied to collect, preserve, and analyse digital evidence in a manner that ensures its integrity and admissibility in a court of law. With computer type devices, this involves using forensic software and hardware tools to create a digital forensic image of the device or media being examined. This image is a bit-by-bit copy of the original data, which allows investigators to work with the evidence without altering or compromising the original source. The forensic image is then processed and analysed in a controlled environment using forensic software and techniques to search for meaningful information that can be used as evidence.
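One simple way to see the integrity principle at work is hash verification: a cryptographic digest recorded at acquisition time can be recomputed later to demonstrate that the image has not changed. The sketch below shows the idea with SHA-256; the file path and reference digest are placeholders, and practitioner tools perform the equivalent check within validated workflows.

```python
# verify_image.py - hash-based integrity checking of a forensic image.
# The image path and the acquisition digest are placeholders.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so arbitrarily large images can be hashed."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

ACQUISITION_HASH = "0" * 64  # digest recorded when the image was acquired
if sha256_of("evidence/disk01.dd") == ACQUISITION_HASH:
    print("Integrity verified: the image is unchanged since acquisition.")
else:
    print("HASH MISMATCH: the image differs from the acquired original.")
```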
In criminal cases, the digital forensics process has succeeded in identifying, apprehending and prosecuting criminals across a wide range of both cyber-enabled and cyber-dependent offences. In civil litigation, digital forensics can be used in intellectual property disputes and employee misconduct investigations, and to support or challenge contractual claims.
While the digital forensics process may be unique to specific scenarios, it typically consists of the following steps:
Step 1: Collection and recovery
The digital forensic process begins with the collection and recovery of information through advanced technological methods to extract and store data from computer systems, mobile devices and other storage media. Recovering such a vast scope of information can be fundamental to understanding the root cause of any digital incident, whether it’s a security breach, fraud or other cybercrime.
Step 2: Examination and analysis
Once the evidence is recovered, digital forensics experts process the data using a range of tools before thoroughly analysing it. Techniques used during analysis include file carving, registry analysis, database analysis, timeline investigation, hash comparison, filtering and keyword searching to identify relevant information that may support or refute a hypothesis or allegation. This can involve linking digital evidence between devices or people, or with physical evidence and other forms of non-digital evidence, to create a comprehensive picture of the events under investigation. Digital forensics experts may need to work on a live or dead system – working live from a laptop or connecting a hard drive to a lab computer – to decide which pieces of data are relevant to the investigation. The examination will result in a report or reports produced to address the points to prove defined within the digital evidence strategy, with any data of significance presented evidentially for use in criminal or civil proceedings.
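As a much-simplified illustration of the keyword searching technique mentioned above, the sketch below scans a raw image for a list of terms and reports their byte offsets. Real forensic suites layer indexing, multiple text encodings, carving and reporting on top of this basic idea, and the path and keywords here are placeholders.

```python
# keyword_scan.py - a simplified keyword search across a raw image.
# The image path and keyword list are placeholders; real tools also handle
# indexing, multiple encodings and deleted-space carving.

KEYWORDS = [b"invoice", b"password"]

def scan(path: str, keywords: list[bytes]) -> list[tuple[bytes, int]]:
    """Return (keyword, byte offset) pairs for every hit in the image."""
    data = open(path, "rb").read()  # fine for a sketch; stream in practice
    hits = []
    for word in keywords:
        offset = data.find(word)
        while offset != -1:
            hits.append((word, offset))
            offset = data.find(word, offset + 1)
    return hits

for word, offset in scan("evidence/disk01.dd", KEYWORDS):
    print(f"{word.decode()} found at byte offset {offset}")
```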
Step 3: Reporting and documentation
The reporting process is tightly controlled by the Forensic Science Regulator and ISO 17025, ensuring that the status of compliance (to those standards) of work conducted is appropriately declared and the findings of the examination cannot be misinterpreted. Reporting can come in many forms, ranging from simple to complex, in line with criminal standard reporting formats.
Types of digital evidence
Communications
Communications can occur in a wide range of mediums, from traditional emails and text messages to app-based, in-game, encrypted and secure communication channels.
Recovered communication data can be invaluable in establishing a suspect’s intentions, activities, connections between involved parties and potential evidence of illegal activities. Metadata relating to recovered communication data can be used during analysis to inform the investigation. Email headers, for example, can contain valuable metadata that can establish the authenticity and integrity of the communication. They can also supply information about the sender and recipient email addresses, the date and time of transmission and details of the email servers involved in the delivery process, enabling investigators to define timelines and track communication flow. Attachments within emails can also give away clues about illegal activities, which can help prove a criminal’s motive, intent or even their involvement in the event in question. App-based communications often contain media, links to other content or individuals of relevance, and location data.
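As a small example of inspecting that header metadata, Python’s standard email library can expose the fields described above; the .eml file path below is a placeholder, and real casework would rely on validated forensic tooling rather than an ad-hoc script.

```python
# headers.py - extracting basic email metadata with the standard library.
# The .eml path is a placeholder; casework would use validated tooling.
from email import policy
from email.parser import BytesParser

with open("exhibit/message.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

print("From:", msg["From"])
print("To:  ", msg["To"])
print("Date:", msg["Date"])
# Each Received header records one hop through a mail server, helping an
# investigator reconstruct the delivery path and build a timeline.
for hop in msg.get_all("Received", []):
    print("Hop: ", hop)
```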
Internet activity
Internet activity can be recovered from a wide range of browsers and is often extremely valuable in determining intent – for example, recovered ‘search terms’ entered into a search engine by an individual of relevance to the investigation. Internet records can be used alongside other activity conducted on the suspect’s device when investigating a time period of relevance during ‘timeline’ analysis.
Application data
Mobile devices utilise software applications, or ‘apps’, to enable the user of the device to perform a wide range of different functions. Recovered application data is often used during investigations for evidential purposes.
Logs
Logs are automated records of computer processes, user activities or communication transactions generated by computer and mobile devices. They can be compelling evidence, detailing who accessed a specific system and what actions were taken.
Media
Videos and images are another significant type of evidence that can be used to identify and prove the physical presence of an individual at a specific location at a given time, corroborating their involvement in the event in question. Metadata recovered from media is examined during analysis.
Archives
Archives involve storing offline copies or backups of databases, files or even websites. This is a practical way of retrieving lost information, which can be crucial in a digital forensics investigation.
Each of these types of evidence features its own unique characteristics and functions and contributes significantly to the realm of digital forensics, aiding experts in piecing together the digital aspects of investigations and solving cases.
What challenges commonly arise in digital forensics?
Adapting along with ever-changing technology
Devices, operating systems and security are constantly changing, significantly complicating the field of digital forensics. With Windows, macOS, Linux, iOS and Android being the main operating systems used across consumer computer and mobile devices, forensics experts must innately understand each operating system’s structure and functions to effectively extract and interpret digital evidence.
Encryption and password protection
Encryption is a widely used security measure that maintains data privacy and integrity. While these techniques effectively safeguard sensitive information, they can hinder investigations when authorities require access to relevant data. Encryption renders data decipherable only with the correct encryption key or password; without these credentials, accessing the encrypted data can become impossible.
Privacy concerns
Digital forensics experts must always consider privacy while performing their work. Not only is their professional credibility at stake, but also the fundamental rights of individuals, as any breaches can lead to legal complications and reputational damage. As a result, forensics investigators must take care to access only information specifically pertinent to the investigation in question and to ensure that non-relevant personal data is not intruded upon.
Establishing data authenticity and reliability
Since electronic data can be easily altered or destroyed, its authenticity and reliability can be difficult to establish, resulting in complications during court proceedings. Despite forensics professionals’ best efforts, there is always a chance that evidence could be disallowed by the court if certain legal criteria are not met.
Emerging trends in digital forensics
The integration of artificial intelligence (AI), machine learning (ML) and blockchain technologies, coupled with a rise in mobile device forensics, is transforming digital forensics as we know it. These advancements will bolster forensics experts’ capabilities in visualising and interacting with complex digital crime scenes, significantly enhancing their ability to gather crucial evidence and reconstruct events accurately.
Artificial intelligence (AI) and machine learning (ML) integration
Artificial intelligence (AI) and machine learning (ML) integration will continue to revolutionise the ways in which digital forensics experts can investigate and analyse data and evidence. Through AI-powered algorithms, experts can rapidly process large volumes of data to significantly reduce the time needed to prepare for investigations. AI and ML algorithms can also be used to identify patterns within the data that may not have been picked up during traditional, manual analysis. These algorithms can also automatically categorise and prioritise evidence to help forensics analysts assess the relevance and potential significance of collected data. Automating this process saves analysts considerable time, ensuring their focus remains on the most essential elements of the investigation. While AI can aid the investigation process, it is important to stress that digital forensic experts must never use material identified by AI as being of potential relevance within evidential reports without first reviewing and verifying it.
Implementation of blockchain technology
Blockchain’s characteristics – immutability, transparency and decentralisation – make it ideal for ensuring the security and integrity of digital evidence. With digital evidence traditionally stored and managed by centralised systems or authorities, potential vulnerabilities and risks emerge, as the evidence can be tampered with or manipulated, compromising the integrity of the investigation. Implementing blockchain technology creates a decentralised and distributed ledger system that addresses these concerns. Blockchain acts as an immutable and tamper-proof record that stores all forensic activities, including the collection, analysis and preservation of digital evidence, ensuring that any changes made to evidence will be easily detected and providing increased trustworthiness to the investigation process.
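The tamper-evidence property can be demonstrated with a toy hash chain, in which each record embeds the digest of the record before it, so altering any earlier entry invalidates everything that follows. This is a teaching sketch only, not a production evidence ledger.

```python
# chain.py - a toy hash chain showing why tampering is detectable.
# A teaching sketch only, not a production evidence ledger.
import hashlib
import json

def digest(prev: str, event: str) -> str:
    return hashlib.sha256(json.dumps([prev, event]).encode()).hexdigest()

def make_record(prev_hash: str, event: str) -> dict:
    return {"prev": prev_hash, "event": event, "hash": digest(prev_hash, event)}

def verify(chain: list[dict]) -> bool:
    """Re-derive every digest and check each record points at its parent."""
    for i, rec in enumerate(chain):
        if rec["hash"] != digest(rec["prev"], rec["event"]):
            return False
        if i > 0 and rec["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_record("genesis", "image acquired")]
chain.append(make_record(chain[-1]["hash"], "analysis started"))
print(verify(chain))            # True
chain[0]["event"] = "tampered"  # any edit to history breaks the chain
print(verify(chain))            # False
```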
Rise of specialised mobile device forensics
Mobile device forensics has become increasingly prominent due to the widespread usage of mobile devices. It is a sub-branch of digital forensics that focuses on the recovery of data or information from mobile devices, employing advanced tactics and approaches to analyse data. The ubiquity of these devices has only increased the importance of this specialism.
Certification and career opportunities
Digital forensics experts’ innate software understanding, coupled with access to sophisticated tools and technology, allows them to analyse and report on data effectively. These experts understand technology, computer systems and data structures to a degree that guarantees secure collection of data evidence. Their roles are critical in corporate environments, where they may be tasked with examining malware, breaches or damage to identify attackers and help organisations prevent incidents of this nature from recurring.
Digital forensics professionals can pursue a range of classroom and online courses that cover a variety of aspects and specialisms of the field. While some organisations may task digital forensics experts with broader responsibilities, a thorough understanding of software underpins all of these roles. A typical day could include:
Handling exhibits, data and materials to avoid contamination or corruption.
Disassembling and examining computers or hardware for non-volatile data storage.
Acquiring and processing data in line with defined digital forensic strategies.
Reviewing processed data and analysing material(s) of relevance.
Creating formal reports with evidence to support investigations.
Roles within Digital Forensics Units include, but are not limited to:
Digital Forensic Technician
Digital Forensics Investigator
Senior Digital Forensics Investigator
Digital Forensics Manager
Quality Manager
Technical Manager
Quality Technician/Assistant
Consultant
How can CACI help?
CACI can supply comprehensive digital forensic services that encompass computer and mobile device examination and scene support for law enforcement, commercial and civil investigations.
To ensure compliance with the Forensic Science Regulator’s Code of Practice, and to assure the quality of all digital forensics investigation and proficiency testing services, the United Kingdom Accreditation Service (UKAS) has granted CACI accreditation to ISO/IEC 17025:2017.
UKAS Recommendation Details: Accreditation Scope: ISO/IEC 17025:2017 with compliance to ILAC G19:06/2022 and Forensic Science Regulator Code of Practice Version 1.
Capture and preservation of data from computers and digital storage devices HDDs, SSDs, M.2 memory devices, memory cards and USB flash devices – Using FTK Imager, EnCase Imager and Tableau T356789iu.
Capture, preservation, processing and analysis of data from Mobile Devices, SIM cards and Memory Cards – Using Cellebrite 4PC, Cellebrite Physical Analyser, MSAB XRY, MSAB XAMN and Magnet Axiom.
CACI Ltd has also been recommended for accreditation to ISO/IEC 17043:2023. This recommendation is for proficiency testing schemes relating to the acquisition, processing and analysis of computer and mobile devices.
In addition, CACI’s Digital Forensics Lab holds certification from the British Standards Institution (BSI) to ISO 27001 for the provision of Digital Forensic Science Services.
To learn more about our Digital Forensic Proficiency schemes or to book a demonstration, contact us today.
CACI is delighted to announce that its Digital Forensics Laboratory has been granted accreditation by the United Kingdom Accreditation Service (UKAS) to ISO/IEC 17025:2017 with compliance to the Forensic Science Regulator Code of Practice and ILAC G19. This accreditation signifies CACI’s commitment to providing compliant, quality assured digital forensics services to support Law Enforcement related industries.
This achievement is particularly significant in light of the Forensic Science Regulator Act 2021, which came into effect on 2nd October. The Act introduced a statutory requirement of compliance with the new ‘Code’ for forensic science activities provided to the UK Criminal Justice System. The new ‘Code’ is crucial for ensuring the admissibility and robustness of evidence, and includes the requirement for accreditation to ISO/IEC 17025:2017 for the digital forensic science activities provided by CACI from within their laboratory.
CACI’s Digital Forensics laboratory, situated in Northallerton, North Yorkshire, has been designed to match the capabilities of law enforcement digital forensic laboratories. This enables CACI to integrate seamlessly with their clients, minimising the impact of outsourcing digital forensic investigation services and maximising the benefit for the client. The team behind the laboratory consists of highly skilled professionals with extensive experience in the digital forensics field within Law Enforcement investigations.
Richard Cockerill, Operations Director of CACI’s Digital Forensics Laboratory, expressed his team’s excitement about the accreditation, highlighting their dedication and expertise. He further emphasised CACI’s ability to deliver high-quality digital forensic investigation services to the UK criminal justice system.
“CACI looks forward to expanding its support for both existing and new law enforcement clients. This achievement highlights the dedication and expertise of our digital forensics team. With our robust capabilities and specialist expertise, CACI is well-positioned to deliver high-quality digital forensic investigation services to the UK criminal justice system and related industries. This accreditation from UKAS is a significant milestone in our development and ongoing commitment to excellence.”
Having secured ISO/IEC 17025:2017 accreditation, CACI is now actively expanding its support for both existing and new law enforcement clients.
UKAS Accreditation Details:
ISO/IEC 17025:2017 with compliance to Forensic Science Regulator Code of Practice and ILAC G19:06/2022
Mobile type devices: Acquisition, Processing and Analysis
Computer type devices: Acquisition and Preservation
Network Automation and NetDevOps are hot topics in the network engineering world right now but, as with many new concepts, it can be hard to separate the meaning from the noise in the quest to achieve optimal efficiency and agility in network operations.
A useful starting point is to first define what network automation is not:
Network automation is not just automated configuration generation or inventory gathering
It is not just using the same network management system (NMS) as today but faster
It is not just performing patching and OS upgrades faster, or network engineers suddenly becoming software developers
Network automation is not going to work in isolation from changes to lifecycle and deployment processes, nor is it a magic toolbox of all-encompassing applications, frameworks and code.
At CACI, we view network automation as both a technology and a business transformation. It is as much a cultural shift from legacy deployment and operations processes as it is a set of tools and technology to bring speed, agility and consistency to your network operations. Infrastructure is changing fast and, with Gartner reporting that 80% of enterprises will close their traditional data centres by 2025, the only constant in networking is that change will persist at an ever-faster clip.
So, how does Network Automation work? What differentiates network automation from NetDevOps? What difference can it make to modern IT operations, and which best practices, technologies and tools should you be aware of to successfully begin your network automation journey?
How does Network Automation work?
Network automation applies learnings from DevOps in the software development world to low-level network infrastructure, using software tools to automate network provisioning and operations. This includes techniques such as the following (a minimal pre/post-change validation sketch follows the list):
Anomaly detection
Pre/post-change validation
Topology mapping
Fault remediation
Compliance checks
Templated configuration
Firmware upgrades
Software qualification
Inventory reporting.
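To make the pre/post-change validation idea concrete, here is a minimal sketch using Netmiko in Python – the device details, credentials and choice of “show” commands are illustrative assumptions, not a prescription:

from netmiko import ConnectHandler  # pip install netmiko

# Hypothetical device details, for illustration only
device = {
    "device_type": "cisco_ios",
    "host": "192.0.2.10",
    "username": "automation",
    "password": "secret",
}

def snapshot(conn):
    # Capture the state we care about before and after a change
    return {
        "interfaces": conn.send_command("show ip interface brief"),
        "routing": conn.send_command("show ip route summary"),
    }

with ConnectHandler(**device) as conn:
    before = snapshot(conn)
    # ... apply the change here, e.g. conn.send_config_set([...]) ...
    after = snapshot(conn)

for check in before:
    print(check, "unchanged" if before[check] == after[check] else "CHANGED - review")

The same pattern scales out: swap the “show” commands for structured getters and the diff becomes machine-parseable rather than eyeballed.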
In understanding how these differ from traditional network engineering approaches, it is important to consider the drivers for network automation in the post-cloud era – specifically virtualisation, containerisation, public cloud and DevOps. These technologies and approaches are more highly scaled and ephemeral than traditional IT Infrastructure, and are not compatible with legacy network engineering practices like:
Using traditional methodology to manage infrastructure as “pets” rather than “cattle”
Box-by-box manual login, typing CLI commands, copy-pasting into an SSH session, etc.
“Snowflake networks” which don’t follow consistent design patterns
Network automation aims to change all this, but to do so, must overcome some obstacles:
Cross-domain skills are required in both networking and coding
Some network vendors do not supply good API or streaming telemetry support
Screen scraping CLIs can be unreliable as CLI output differs even between products of the same device family.
Cultural resistance to changes in both tooling and practice
Lack of buy-in or sponsorship from the executive level can compound these behaviours.
What differentiates network automation from NetDevOps?
You may also have heard of “NetDevOps” and be wondering how – or if – this differs from network automation. Within CACI, we see the following key differences:
We often see our clients use a blend of both in practice as they go through the automation adoption curve into the automation maturity path, from ad-hoc automation, through structured automation, into orchestration and beyond:
What difference can network automation make to modern IT operations?
Network automation aims to deliver a myriad of business efficiencies to IT operations. This has proven to be transformational across our wide and varied client base, with improvements demonstrated in the following ways:
Increased efficiency
Much of networking is repetition in differing flavours – reusing the same routing protocol, switching architecture, edge topology or campus deployment. A network engineer is often repeating a task they’ve done several times before, with only slight functional variations. Network automation saves time and costs by making processes more flexible and agile, and force-multiplying the efforts of a network engineering task into multiple concurrent outputs.
Reduced errors
Networking can be monotonous, and monotony combined with legacy deployment methodology can cause repetition of the same error. Network automation reduces these errors – particularly in repetitive tasks – to lower the chances of reoccurrence. When combined with baked-in, systems-led consistency checking, many common – but easily-avoidable – errors can be mitigated.
Greater standardisation
Networks are perhaps uniquely both the most and least standardised element of the IT stack. While it is easy to have a clean “whiteboard architecture” for higher-level concerns such as application development, the network must often deal with the physical constraints of the real world, which, if you’ve ever tried to travel to a destination you’ve not been to before, can be messy, confusing and nonsensical. Network automation ensures the starting point for a network deployment is consistent and encourages system-level thinking across an IT network estate over project deployment-led unique “snowflake” topologies.
Improved security
Increased security often comes as a by-product of the standardisation and increased efficiency that network automation brings. Most security exploits take advantage of inconsistency or a lack of adherence to best practice – ultimately, “holes” left in a network (often accidentally) through rushing or not spotting a potential backdoor, open port, misconfiguration or insecure protocol. When combined with modern observability approaches such as streaming telemetry and AIOps, network automation can help enforce high levels of security practice and hardening across an IT estate.
Cost savings
Given its position at the base of the tech stack, the network is often a costly proposition – vertically-integrated network vendors, costly telco circuit connectivity, expensive physical-world hosting and colocation costs, and so on – and is often a “get it right first time” endeavour that can be cost-prohibitive to change once live and in service. Network automation encourages cost savings by creating right-first-time, flexible network topologies and by performing design validation that can minimise the equipment, licensing, ports and feature sets required to run a desired network state.
Improved scalability
With consumer and enterprise expectations of scale now set by the world’s leading web scalers, the enterprise increasingly expects to scale every level of the IT stack to larger and more seamless sizes, topologies and use cases. Network automation aids in achieving this by enforcing consistency, modularisation, standardisation and repeatability in network operations.
Faster service delivery
IT service delivery is increasingly moving away from being ticket-led to self-service, with the lower-level infrastructure elements expected to be delivered much faster than the traditional six-to-eight-week lag times of old. As telco infrastructure moves through a similar self-service revolution, so too does the enterprise network require the ability for self-service, catalogue-driven turn-up and modularised deployment. Network automation enables this by optimising network performance to the required parameters of newer services and applications in the modern enterprise.
What are the best practices for network automation?
Network automation is as much a cultural transformation as it is a technology transformation. Much as DevOps disrupted traditional ITIL and waterfall approaches, NetDevOps similarly disrupts current network engineering practices. We find the following best practices to be beneficial when moving towards network automation:
Choose one thing initially to automate
Pivot around either your biggest pain point or most repetitive task
Don’t try to take on too much at once. Network automation is about lots of small, repeated, well-implemented gains which instil confidence in the wider business
People love automation, but they don’t want to be automated. The biggest barrier to adoption will be keeping colleagues and stakeholders on-side with your efforts by showing the reward they provide to them and to the wider business.
Choose tooling carefully
Stay away from the “latest shiny” and pick open, well-used tools with large libraries of pre-canned vendor, protocol and topology integrations, and human-readable configuration and deployment languages
Maintain your specific business context during tool selection
Think ahead for talent acquisition and retention – writing a custom Golang provisioning application might be handy today, but you could struggle to get others involved if the author decides to leave the business.
Optimise for code reusability
Use version control from day one – Git, together with platforms such as GitHub and Azure DevOps – and encourage or even mandate its use
Advocate for sharing the functions, modules, routines and snippets written within code, runbooks, IaC and state files via scrapbooks and sandpits. The productivity flywheel spins ever faster within NetDevOps as more “we’ve done that before” code and practice accelerates the development of newer, more complex routines, IaC runbooks and functions
Code should be written with reuse and future considerations in mind. While it may be tempting to “save ten minutes” by not functionising, modularising or structuring code, this will catch up with you in the future (see the sketch after this list).
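As a trivial sketch of that point – a hypothetical VLAN-rendering snippet written once as a function, which every future runbook can then reuse and unit test, rather than re-typing the loop inline:

from typing import Iterable, List, Tuple

def render_vlan_config(vlans: Iterable[Tuple[int, str]]) -> List[str]:
    # Return IOS-style VLAN configuration lines for (id, name) pairs
    lines: List[str] = []
    for vlan_id, name in vlans:
        lines.append(f"vlan {vlan_id}")
        lines.append(f" name {name}")
    return lines

print("\n".join(render_vlan_config([(10, "users"), (20, "voice")])))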
Use templating for configuration generation
Templating programmatically generates device configuration from a disaggregated, vendor-neutral template (written in a templating language such as Jinja2 or Mako), which is combined with data (such as specific VLANs, IP addresses or FQDNs) to produce the vendor-specific syntax (such as Cisco IOS, Arista EOS or Juniper Junos) for the network device (a minimal sketch follows this list)
The act of creating the templates has an added by-product of forcing you to perform design validation. If your design document doesn’t have a section covering something you need template syntax for, it could well be due for an up-issue
Templates become a common language for network intent that are readable by all network engineers regardless of their individual network vendor and technology background, aiding in time to onboard new staff and ensuring shared understanding of business context around the IT network.
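As a minimal sketch of the templating workflow – assuming Jinja2 is installed and using a made-up interface template and data set:

from jinja2 import Template  # pip install jinja2

# Vendor-neutral template: the design intent, minus the data
TEMPLATE = Template("""\
interface {{ interface }}
 description {{ description }}
 ip address {{ ip }} {{ mask }}
 no shutdown""")

# The data: the per-device, per-port specifics
data = {
    "interface": "GigabitEthernet0/1",
    "description": "Uplink to core",
    "ip": "192.0.2.1",
    "mask": "255.255.255.252",
}

# Render the vendor-specific (Cisco IOS-style) syntax from the two
print(TEMPLATE.render(**data))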
Which tools, frameworks and languages enable network automation?
There are a myriad of network automation tools, frameworks, languages and technologies available today. Deciphering these can be confusing, but a good starting point is categorising the distinct types of network automation tooling available:
Network Configuration and Change Management (NCCM)
Enable patching, compliance and deployment (rollout)
Often align to network management systems (NMS) or BSS/OSS (Telco space)
Abstract network device box-by-box logic into estate-wide, policy-driven control
Often align to industry frameworks and controls (SOC2, HIPAA, CIS, PCI/DSS)
Intent-Based Networking Systems (IBNS)
Translate business intent through to underlying network configuration and policy
Are starting to become the “new NMS”
It would be impractical to list every tool, framework and language available today, but the following are among those we most often see within our client base. Our current favourites can be seen in What are the most useful NetDevOps Tools in 2023?:
Tools
Terraform – An open-source automation and orchestration tool capable of building cloud, network and IT infrastructure from input Infrastructure as Code (IaC) written in HCL (HashiCorp Configuration Language), which defines all attributes of the device and configuration blueprint required. Terraform is highly flexible and has a vast array of pre-built modules and providers for most network engineering concerns via the Terraform Registry.
Ansible – An open-source automation and orchestration tool typically used to configure the device itself rather than provision the underlying bare-metal or cloud infrastructure the cloud, network or IT device sits atop, based on input IaC written in YAML that defines the attributes and device configuration required. Ansible is versatile and has a large cache of pre-built playbooks and integrations for network engineering concerns via Ansible Galaxy.
NetBox – The ubiquitous, open-source IP Address Management (IPAM) and Data Centre Infrastructure Management (DCIM) tool, which acts as the Network Source of Truth (NSoT), holding a more detailed view of network devices, topology and state than could be achieved via alternative approaches such as spreadsheets or a CMDB. NetBox is highly customisable, with a rich plugin ecosystem and data models that can be adapted to business-specific topologies via custom fields (a short query sketch follows this list).
Git – The de facto version control system, which is the underlying application that powers GitHub and GitLab and supplies a mechanism to store IaC, configuration and code artefacts in a distributed, consistent and version-controlled manner. Git is pivotal in enabling the controlled collaboration on network automation activities across a distributed workforce while maintaining the compliance and controls required within the enterprise environment.
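As a flavour of NetBox as an NSoT, here is a minimal query sketch using the pynetbox client – the instance URL, API token and site slug are assumptions for illustration:

import pynetbox  # pip install pynetbox

# Hypothetical NetBox instance and API token
nb = pynetbox.api("https://netbox.example.com", token="0123456789abcdef")

# Ask the Network Source of Truth, not a spreadsheet: all active
# devices at an assumed site, with their primary IP addresses
for device in nb.dcim.devices.filter(site="london-dc1", status="active"):
    print(device.name, device.primary_ip)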
Frameworks
Robot Framework: A generic test automation framework that allows network automation code and IaC runbooks to be run through acceptance testing and test-driven development (TDD) via keyword-driven tests, with a tabular format for test result representation. It is often used in conjunction with tools such as pyATS, Genie, Cisco NSO and Juniper NITA.
PEP guidelines: Short for Python Enhancement Proposals (PEP), these are to Python what RFCs are to network engineering, and provide prescriptive advice on setting out, using, structuring and interacting with Python scripts. The most commonly known of these is PEP 8 – Style Guide for Python Code.
Cisco NADM: The Cisco Network Automation Delivery Model (NADM) is a guide on how to build an organisation within a business around an automation practice, addressing the human aspect as well as the tooling, daily practices, procedures, operations and capabilities that a network automation practice needs to gain traction in an enterprise IT landscape.
Languages
Python: The de facto network automation coding language, used as the underlying programming language in tools such as NetBox, Nornir, Batfish, SuzieQ, Netmiko, Scrapli, Aerleon and NAPALM, and popularised by the extensive network engineering-focused libraries on PyPI. Python is the Swiss army knife of NetDevOps, able to turn its hand to anything from ad-hoc scripting tasks through to full-blown web application development using Flask or API hosting using FastAPI (a short NAPALM-based sketch follows this list).
Golang: An up-and-coming programming language that benefits over Python in speed (via its compiled approach), parallel execution, built-in testing and concurrency capabilities. On the downside, it has a significantly steeper learning curve than Python for new entrants into the realm of development, and has far fewer network engineering library components available to use.
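To illustrate why Python has earned that de facto status, here is a minimal NAPALM sketch that retrieves vendor-neutral device facts – the hostname and credentials are assumptions:

from napalm import get_network_driver  # pip install napalm

# Hypothetical Cisco IOS device
driver = get_network_driver("ios")
device = driver(hostname="192.0.2.10", username="automation", password="secret")

device.open()
facts = device.get_facts()  # normalised facts dictionary, same keys per vendor
print(facts["hostname"], facts["os_version"], facts["uptime"])
device.close()

Swap “ios” for “eos” or “junos” and the same code runs against Arista or Juniper equipment – the normalisation is the point.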
What does the future of network automation look like?
The demand for network automation and NetDevOps professionals is undoubtedly on the rise – a trend that we at CACI expect to continue as budgetary pressures from the macroeconomic climate accelerate and trends like artificial intelligence (AI) challenge the status quo, pushing businesses to deliver seamless, scalable network fabrics with greater expectation of self-service and less tolerance of outage, delay or error. We see more of our clients moving up the automation maturity path towards frictionless and autonomous network estates, and expect this to accelerate over the coming years as ancillary trends such as NaaS (Network as a Service), SDN (Software-Defined Networking) and NetDevOps embed the NetEng team firmly into the platform engineering teams of tomorrow.
How can CACI help you on your network automation journey?
With our proven track record, CACI Network Services is adept at a plethora of IT, networking and cloud technologies. Our trained cohort of high-calibre network automation engineers and consultants is ready and willing to share their industry knowledge to benefit your unique network automation and NetDevOps requirements. As a trusted advisor, we ensure every team member is equipped with the necessary network engineering knowledge from vendors such as Cisco, Arista and Juniper, along with NetDevOps knowledge in areas such as Python for application development, NetBox for IPAM and NSoT, Git for version control, YAML for CI/CD pipeline deployment and more.
Our in-house experts have architected, designed, built and automated some of the UK’s largest enterprise, service provider and data centre networks, with our deep heritage in network engineering spanning over 20 years across a variety of ISP, enterprise, cloud and telco environments for industries ranging from government and utilities to finance and media.
Get in touch with us today to discuss more about your network automation and NetDevOps requirements to optimise your business IT network for today and beyond.
There was once a time when the Internet consisted of only a few top-level domains (TLDs) – .com, .net, .org and a few others – but not anymore. TLD-List reports there are now over 3,745 domain extensions and growing, with some brands even having their own organisation extensions such as .barclays and .bbc in use for careers sites, product pages and more.
As well as their obvious and well-documented impact on SEO, did you know TLDs can also negatively impact your website performance – particularly load times and time to first byte (TTFB)?
In this blog, we’ll uncover how we re-platformed some of our web assets to overcome a performance issue we didn’t even know we were having.
Background on the roots of DNS
DNS – or Domain Name System – is an often-overlooked part of the Internet today, but is very much a key building block for the rest of the web. Before we can discuss why differing TLDs impact performance, it is useful to have some grounding in DNS concepts first.
At its heart, DNS is a tree hierarchy – with an invisible dot as the mandatory root. At a base level, the main job of DNS is to resolve a domain such as www.google.com to an IPv4 or IPv6 address such as 8.8.8.8, which TCP/IP then uses to start the packet connection process.
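You can watch this resolution step in isolation with a minimal sketch using only Python’s standard library – it asks the system resolver what a browser would ask before opening a TCP connection on port 443:

import socket

# Collect the unique IPv4/IPv6 addresses www.google.com resolves to
addresses = {info[4][0] for info in socket.getaddrinfo("www.google.com", 443)}
print(sorted(addresses))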
Let’s explore this with the well-known subdomain calendar.google.com. Firstly, although you don’t type this with the suffixed dot (.), it is very much there. Don’t believe us? Try this in a command prompt:
ping calendar.google.com.
Here’s what we get:
ping calendar.google.com.
Pinging calendar.google.com [142.250.180.14] with 32 bytes of data:
Reply from 142.250.180.14: bytes=32 time=7ms TTL=119
Ping statistics for 142.250.180.14:
Packets: Sent = 1, Received = 1, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 7ms, Maximum = 7ms, Average = 7ms
ProTip: Ever tried to ping or nslookup something external like website.com, only to have the result come back as website.com.internal.corp.acmeco.local on you? Same difference – Windows commands like nslookup assume everything is a subdomain of your DNS Suffix Search List if you don’t suffix a “.” (dot) at the end. Most applications, like web browsers, automatically do this in the background for you, much the same as they silently add :443 to the end of HTTPS websites.
If we deconstruct calendar.google.com and process it right-to-left from the perspective of the DNS tree hierarchy, it’s composed of the following (a short parsing sketch follows the list):
Root – . (invisibly added in)
Top Level Domain (TLD) – com
Domain – google
Subdomain – calendar
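Processed right-to-left programmatically, that decomposition looks like this trivial sketch (which deliberately ignores multi-label TLDs such as .co.uk):

fqdn = "calendar.google.com."  # the trailing dot is the (usually invisible) root

labels = fqdn.rstrip(".").split(".")[::-1]  # right-to-left: com, google, calendar
for level, label in zip(["Top Level Domain", "Domain", "Subdomain"], labels):
    print(f"{level}: {label}")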
The hierarchy is important because each layer signposts where the IP address lookup should go next down the tree. Name servers (or “NS”) act as the signposts for domain name to IP address resolution. There are two main “types” of name server (DNS purists will come for us – we know, we know…):
Root Zone (Internet Bodies)
Run by organisations such as IANA and ISC, for the good of the wider Internet itself
DNS Zone (Registrars and Organisations)
Run by domain registrars such as Namecheap, ISPs and CDNs such as Cloudflare, or end-user organisations themselves – in the organisation’s own interest
This distinction is important when it comes to the level of detail signposted by a given name server at any given level of the DNS hierarchy.
Function of a name server
The level of a name server within the hierarchy determines the amount of information it returns. Normally, the higher the level (towards the root), the less information it gives – it is likely only to signpost to another name server that holds the detailed IP information. Let’s see this in practice with the same calendar.google.com example and walk through the hierarchy:
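One way to watch the signposting is with the dnspython library – a minimal sketch that simply lists the name servers at each zone cut via a recursive resolver, rather than performing true iterative resolution:

import dns.resolver  # pip install dnspython

# List the "signposts" (NS records) at each level of the hierarchy,
# then ask for the final answer itself
for zone in [".", "com.", "google.com."]:
    nameservers = dns.resolver.resolve(zone, "NS")
    print(zone, "->", ", ".join(sorted(str(ns) for ns in nameservers)))

answer = dns.resolver.resolve("calendar.google.com.", "A")
print("calendar.google.com. ->", answer[0])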
As can be seen, the function of each upper level of the hierarchy is to point to another signpost lower down the hierarchy that knows the IP address answer. The reason for this is scaling – it wouldn’t scale to have a flat hierarchy of all 628.5 million registered domain names on the web, let alone the many hundreds or thousands of subdomains each of those could have.
In practice, the client rarely probes the actual authoritative DNS server for a given DNS zone; usually the DNS recursion chain is such that a client resolves against a caching DNS server run by its downstream ISP, CDN, cloud or other DNS provider. Caching plays a role here in offloading lookup stress from the DNS ecosystem; the above example is simplified to show the concept in isolation from the impact of DNS optimisations such as caching and Anycast.
The famous thirteen
You may have been around the Interwebs long enough to have heard of the infamous “13” Root Servers which effectively act as the name server phonebook for the original TLDs such as .com and .net, and are currently:
As it turns out, these aren’t the only name servers in town; there is an ever-increasing set of TLD name servers to support the ever-growing number of TLDs proliferating today, as recorded in the IANA Root Zone Database.
What’s more, these aren’t all created equally – as it turns out, not everyone is as passionate about DNS performance optimisation as internet society organisations like the IETF are.
Sometimes, Root DNS is three quarters of the response time
We were looking into why the response times of some parts of our previous job board, cacins.careers, were so bad, and had just moved from cURL commands to the ReqBin REST testing tool when we spotted the above. Assuming no DNS caching in play, of the 668ms response time, DNS resolution took up a shocking 509ms – or 76% – of the entire HTTP GET process.
So, the journey began. Using the IANA Root Zone Database, we undertook the following process to confirm whether it was indeed just us:
Collate a bunch of TLDs operated by differing TLD Managers
Measure the time of this lookup in Windows using powershell “Measure-Command { nslookup domain.tld. root.tld.server.tld. }”
e.g. for fast.com, this command was: powershell “Measure-Command { nslookup fast.com. a.gtld-servers.net. }” (a Python equivalent is sketched after this list)
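The same measurement can be reproduced in Python with dnspython – a rough sketch that times a single UDP query sent straight to one of the .com TLD servers (a.gtld-servers.net, 192.5.6.30); the reply is a referral rather than a final answer, but it is the round-trip time we are interested in:

import time
import dns.message
import dns.query  # pip install dnspython

query = dns.message.make_query("fast.com.", "A")

start = time.perf_counter()
dns.query.udp(query, "192.5.6.30", timeout=5)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"a.gtld-servers.net answered in {elapsed_ms:.1f} ms")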
Here are our quasi-scientific results:
We only picked the fast domain because we thought it would be most likely to be registered across a selection of some technology-related TLDs and not give a false-positive with a NXDOMAIN response. Also, maybe we want you to think we’re really fast at doing things.
Clearly, this isn’t accounting for the following:
DNS Anycast effect on root name server choice and availability
DNS Anycast routing effect on geographic location
DNS Recursion Chain effect on lookup client (an ISP, CDN, Cloud or dedicated DNS Name Server)
DNS caching of lookups (an ISP, CDN, Cloud or other downstream DNS resolver will cache responses for a time)
Appeasing the DNS Gods with your latest incantation.
However, it does give some food for thought on the impact that a seemingly invisible part of every web app request – the DNS lookup – can have, and how such a low-level component such as choice of TLD can affect this. Turns out it’s not just SEO that can be impacted by TLD choice.
How CACI can help you
At CACI, our Network Services team are well-practised in a variety of DNS, network and cloud systems. If you have any enquiries about how we can assist with your infrastructure issues, get in touch – we can help.