How AI is rewriting the rules of network engineering


AI is coming for your network… but not as you expect

Seasoned IT professionals are no strangers to technology transformations and weathering the storms associated with them. Artificial Intelligence (AI), however, presents unique challenges to your network. Everyone is talking about the changes AI will bring to your work, but few are talking about the changes AI application workloads bring to the design, architecture and operations of your network.

What changes are coming to network engineering and automation due to AI?

The advent of AI means that, now more than ever, the architecture, design and operational excellence of your network matter. Network automation is coming to the fore to deal with the demands AI places on networks, including: 

  • High-throughput transactions facilitated via features such as RoCE Adaptive Routing (AR) 
  • Parallelised datagram transmission through AI network protocols such as RoCE, InfiniBand and other RDMA-based approaches 
  • Dense port connectivity to interconnect the numerous distributed GPU and TPU processors required for generative AI (GenAI) training and model processing 
  • Lossless packet transmission to optimise LLM training runs, avoiding costly retransmissions and the training-data corruption that packet loss can cause (a hedged configuration sketch follows this list) 
  • Extreme bandwidth utilisation from bursty elephant flows, which can burst up to the line rate of the connected NICs. 
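
As a hedged illustration of the lossless-Ethernet requirement above, the sketch below uses the Netmiko library to push Priority Flow Control (PFC) settings to the GPU-facing ports of a switch. The device details, port names and CLI lines are illustrative assumptions, not a definitive implementation; the exact commands vary by vendor and platform.

```python
# Hedged sketch: pushing lossless-Ethernet (PFC) settings for RoCE traffic
# via network automation. Device details, port names and CLI lines are
# illustrative assumptions; consult your platform's documentation.
from netmiko import ConnectHandler

SWITCH = {
    "device_type": "cisco_nxos",   # assumed platform
    "host": "10.0.0.1",            # hypothetical management address
    "username": "automation",
    "password": "example-only",    # use a secrets manager in practice
}

GPU_FABRIC_PORTS = ["Ethernet1/1", "Ethernet1/2"]  # assumed GPU-facing ports

def enable_lossless(ports):
    """Apply PFC to each GPU-facing port (commands are illustrative)."""
    commands = []
    for port in ports:
        commands += [
            f"interface {port}",
            "priority-flow-control mode on",  # pause rather than drop
            "exit",
        ]
    with ConnectHandler(**SWITCH) as conn:
        return conn.send_config_set(commands)

if __name__ == "__main__":
    print(enable_lossless(GPU_FABRIC_PORTS))
```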

AI workloads – GPT-style LLMs, ML training and inference – place different demands on your network than traditional IT workloads. Legacy ITSM approaches also won’t cut it for AI-enabled business applications. Your network isn’t just routers, switches, firewalls and cables – it’s the 24/7 backbone of your organisation’s competitive advantage. 

This is FCoE (Fibre Channel over Ethernet) all over again; only this time it’s not going away – AI is here to stay. Humans working through ITIL processes don’t run 24/7 at 100% capacity the way AI does, which is where automation comes in – specifically, network automation facilitated through expert NetDevOps practices and tooling. 

How CACI can help

Embracing the power of automation will lead to a robust and agile network infrastructure for your organisation. With over 20 years of experience across all aspects of network engineering – data centre, service provider, hybrid cloud and beyond – and complementary offerings in delivery assurance and DevOps, CACI has designed, built and automated networks for some of the UK’s most successful companies in financial services, telecommunications, utilities, government and the public sector. 

Our renowned network automation and NetDevOps services revolutionise your network infrastructure by leveraging the advanced technologies AI workloads require. From configuration management to network monitoring and troubleshooting through observability, we streamline your operations, improve efficiency and maximise your network performance. 

A few of the many benefits of CACI’s network automation services include:

  • Automating network provisioning and troubleshooting: Eliminating manual network provisioning and expediting network troubleshooting through assisted alarm and event correlation 
  • Enhancing network understanding and management: Codifying an understanding of the network topology in a structured data format (see the sketch after this list) and integrating network provisioning workflows into IT Service Management (ITSM) tooling 
  • Improving efficiency and cost-effectiveness: Reducing the risk of network deployment mistakes and rework, and minimising costs through a modular approach to network configuration 
  • Optimising resource utilisation and talent management: Increasing ROI through reuse of codified “Network Functions as Code” and retaining in-demand network engineering talent through use of modern network deployment working practices. 
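
As a small, hedged illustration of the “network topology as structured data” idea above, the snippet below models a topology as plain Python data and renders a minimal configuration stub per device from that single source of truth. All device names, fields and rendered commands are hypothetical.

```python
# Hedged sketch: codifying network topology as structured data, then
# generating per-device configuration stubs from that single source of
# truth. Device names, fields and the rendered commands are hypothetical.
TOPOLOGY = {
    "leaf-01": {"uplinks": ["spine-01"], "asn": 65001},
    "leaf-02": {"uplinks": ["spine-01"], "asn": 65002},
    "spine-01": {"uplinks": [], "asn": 65100},
}

def render_bgp_stub(name: str, attrs: dict) -> str:
    """Render a minimal BGP configuration stub for one device."""
    lines = [f"hostname {name}", f"router bgp {attrs['asn']}"]
    for peer in attrs["uplinks"]:
        lines.append(f"  neighbor {peer} remote-as {TOPOLOGY[peer]['asn']}")
    return "\n".join(lines)

for device, attrs in TOPOLOGY.items():
    print(render_bgp_stub(device, attrs), end="\n\n")
```

Because the topology lives in one structured document, the same data can drive provisioning, validation and the ITSM integrations mentioned above.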

Don’t let your network get left behind by the AI network revolution. Contact CACI today to navigate AI and get your network ready for the AI-enabled, LLM-led, ML-fed future. 

Digital Twin: Seeing the Future


Predicting what’s coming next and understanding how best to respond is the kind of challenge organisations struggle with all the time. As the world becomes less predictable and ever-changing technology transforms operations, historical data becomes harder to extrapolate. And even if you can make reasonable assumptions about future changes, judging how they will impact the various aspects of your business is even more problematic.

Decision makers need another tool in their arsenal to help them build effective strategies that can guide big changes and investments. They need to combine an understanding of their setup with realistic projections of how external and internal changes could have an impact. A Digital Twin built with predictive models can meet both needs, giving highly relevant and reliable data that can guide your future course.

The Defence Fuels Prototype

Using Mood Software and in collaboration with the MOD’s Defence Fuels Transformation, CACI built a digital twin focused on fuel movement within an air station. With it, we aimed to understand the present but also, crucially, to predict the near future and test further-reaching changes.

We used two kinds of predictive model that can learn from actual behaviour. For immediate projections, we implemented machine learning models that used a small sample of historical data concerning requirements for refuelling vehicles given a certain demand, allowing an ‘early warning system’ to be created.
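
As a hedged, minimal sketch of that kind of early-warning model (not the actual Defence Fuels implementation), the snippet below fits a regression on a small synthetic sample relating daily fuel demand to refuelling-vehicle requirements, then flags forecast days where the prediction exceeds the available fleet. All figures and names are illustrative assumptions.

```python
# Hedged sketch of an 'early warning system': predict refuelling-vehicle
# requirements from demand and flag shortfalls. The data is synthetic and
# illustrative; the real Defence Fuels models are more sophisticated.
import numpy as np
from sklearn.linear_model import LinearRegression

# Small historical sample: daily fuel demand (litres) -> vehicles required.
demand = np.array([[5000], [8000], [12000], [15000], [20000]])
vehicles_needed = np.array([2, 3, 4, 5, 7])

model = LinearRegression().fit(demand, vehicles_needed)

AVAILABLE_FLEET = 5  # assumed number of serviceable refuellers

for forecast in (9000, 14000, 22000):
    predicted = model.predict([[forecast]])[0]
    status = "SHORTFALL - early warning" if predicted > AVAILABLE_FLEET else "ok"
    print(f"demand={forecast:>6}L -> ~{predicted:.1f} vehicles [{status}]")
```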

However, we knew that the real value came in understanding what’s further ahead, where there is a higher risk of the wrong decision seriously impacting the success of operations. We adapted and integrated an existing Defence Fuels Enterprise simulation model, Fuel Supply Analysis Model (FSAM), to allow the testing of how a unit would operate given changes to the configuration of refuelling vehicles.

Functions were coded in a regular programming language to mimic the structural model and the kinds of behaviour that are evidenced through the data pipeline. As a result, we are able to change these functions and easily understand what the corresponding changes would be in the real world.
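
To illustrate the shape of such functions (a hedged sketch, not the FSAM code itself), the example below simulates a day of refuelling under a configurable vehicle fleet, so changing the configuration shows the corresponding change in outcomes. All capacities and demand figures are assumptions.

```python
# Hedged sketch of a behaviour-mimicking function: how many sorties can a
# given refuelling-vehicle configuration serve in a day? The parameters
# are illustrative assumptions, not values from FSAM.
def simulate_day(num_vehicles: int, loads_per_vehicle: int, demand_sorties: int) -> dict:
    """Return served/unserved sorties for one simulated day."""
    capacity = num_vehicles * loads_per_vehicle
    served = min(capacity, demand_sorties)
    return {"capacity": capacity, "served": served, "unserved": demand_sorties - served}

# Test alternative fleet configurations against the same demand.
for fleet in (3, 4, 5):
    print(fleet, "vehicles ->", simulate_day(fleet, loads_per_vehicle=6, demand_sorties=26))
```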

This allows decision makers to test alternative solutions with simulation models calibrated against existing data. Models informed by practical realities enable testing with greater speed and confidence, so you can see likely outcomes before committing to any change.

What does this mean for me?

Digital Twins are extremely flexible pieces of technology that can be built to suit all kinds of organisations. They are currently in use in factories, defence, retail and healthcare. Because they adapt to real-world assets and online systems alike, it’s hard to think of any area they couldn’t be applied to.

Pairing a digital representation of your operations, processes and systems with predictive and simulation models allows substantial de-risking of decision making. You can predict what will happen if your resourcing situation changes, and plan accordingly; you can also understand the impact of sweeping structural changes. The resulting data has been proven against real-world decisions, making it truly reliable.

Time magazine has predicted that Digital Twins will ‘shape the future’ of multiple industries going forward and I think it’s hard to argue with that.

If you’re looking for more on what Digital Twin might be able to do for you, read ‘Defence Fuels – Digital Twin’. In this white paper we show how we’re using Digital Twin to make improvements worth millions of pounds.

For more on Mood Software and how it can be your organisation’s digital operating model, visit the product page.

How ethical is machine learning?


We all want tech to help us build a better world: Artificial Intelligence’s use in healthcare, fighting human trafficking and achieving gender equity are great examples of where this is already happening. But there are always going to be broader ethical considerations – and as AI gets more invisibly woven into our lives, these are going to become harder to untangle.

What’s often forgotten is that AI doesn’t just impact our future – it’s fuelled by our past. Machine learning, one variety of AI, learns from previous data to make autonomous decisions in the present. However, which parts of our existing data we wish to use as well as how and when we want to apply them is highly contentious – and it’s likely to stay that way.

A new frontier – or the old Wild West?

For much of human history, decisions were made that did not reflect current ideals or even norms. Far from changing the future for the better, AI runs the risk of mirroring the past. A computer program used by a US court for risk assessment proved to be highly racially biased, probably because minority ethnic groups are overrepresented in US prisons and therefore also in the data it was drawing conclusions from.

This demonstrates two dangers: repeating our biases without question, and inappropriate use of technology in the first place. Supposedly improved systems are still being developed and utilised in this area, with ramifications for real human freedom and safety. Whatever efficiencies the technology offers, human judgement is always going to have its place.

The ethics of language modelling, a specific form of machine learning, are increasingly up for debate. At its most basic it provides the predictive texting on your phone, using past data to guess what’s needed after your prompt. On a larger scale, complex language models are used in natural language processing (NLP) applications, applying algorithms to create text that reads like real human writing. We already see these in chatbots – with results that can range from the useful to the irritating to the outright dangerous.
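
At that most basic level, “using past data to guess what’s needed” can be shown in a few lines. The toy bigram predictor below is a hedged illustration of the idea behind simple predictive text, not how production systems work – modern NLP models are neural networks trained at vastly greater scale.

```python
# Hedged toy illustration of predictive text: a bigram model that suggests
# the word most often seen after a given word in its training text. Real
# language models (e.g. GPT-3) are neural and vastly larger.
from collections import Counter, defaultdict

corpus = "the network is fast the network is reliable the model is biased".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1  # count which word follows which

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word`, if one was seen."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "?"

print(predict_next("network"))  # -> 'is'
print(predict_next("is"))       # tied counts resolve arbitrarily
```

The example also hints at the ethical point: whatever biases exist in the training text are exactly what the model will reproduce.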

At the moment, when we’re interacting with a chatbot we probably know it – in most instances the language is still a little too stilted to pass as a real human. But as language modelling technology improves and becomes less distinguishable from real text, the bigger opportunities – and issues – are only going to be exacerbated.

Where does the data come from?

GPT-3, created by OpenAI, is the most powerful language model yet: from just a small amount of input, it can generate a vast range, and amount, of highly realistic text – from code to news reports to apparent dialogue. According to its developers ‘Over 300 applications are delivering GPT-3–powered search, conversation, text completion and other advanced AI features’.
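
For context, at the time of writing GPT-3 was reachable through OpenAI’s completions API roughly as in the hedged sketch below; the engine name and parameters are illustrative assumptions, and the interface has since evolved.

```python
# Hedged sketch: generating text with GPT-3 via OpenAI's legacy completions
# API (as it stood at the time of writing; the interface has since changed).
# The engine name, prompt and parameters are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; never hard-code real keys

response = openai.Completion.create(
    engine="davinci",   # an assumed GPT-3 engine name
    prompt="Write a short news-style paragraph about renewable energy:",
    max_tokens=64,
    temperature=0.7,    # higher values give more varied output
)

print(response.choices[0].text.strip())
```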

And yet MIT’s Technology Review described it as based on ‘the cesspits of the internet’. Drawing indiscriminately on online publications, including social media, it’s been frequently shown to spout racism and sexism as soon as it’s prompted to do so. Ironically, with no moral code or filter of its own, it is perhaps the most accurate reflection we have of our society’s state of mind. It, and models like it, are increasingly fuelling what we read and interact with online.

Human language published on the internet, fuelled by algorithms that encourage extremes of opinion and reward anger, has already created enormous divisions in society, spreading misinformation that literally claims lives. Language models that generate new text indiscriminately and parrot back our worst instincts could well be an accelerant.

The words we use

Language is more than a reflection of our past; it shapes our perception of reality. For instance, the Native American Hopi language doesn’t treat time in terms of ‘chunks’ like minutes or hours. Instead its speakers talk, and arguably think, of time as an unbroken stream that cannot be wasted. Similar examples span every difference in language, grammar and sentence structure – each both influencing and influenced by our modes of thinking.

The language we use has enormous value. If it’s being automatically generated and propagated everywhere, shaping our world view and how we respond to it, it needs to be done responsibly, fairly and honestly. Different perspectives, cultures, languages and dialects must be included to ensure that the world we’re building is as inclusive, open and truthful as possible. Otherwise the alternative perspectives and cultural variety they offer could become a thing of the past.

What are the risks? And what can we do about them?


Language and tech are already hard to regulate, and the massive financial investment required to create language models means it’s currently being done by just a few large businesses that now have access to even more power. Without relying on human writers, they could potentially operate thousands of sites that flood the internet with automatically written content. Language models can then learn which characteristics result in viral spread, generate more of the same, and repeat the loop at massive quantity and speed.

Individual use can also lead to difficult questions. A developer used GPT-3 to create a ‘deadbot’ – a chatbot based on his deceased fiancée that perfectly mimicked her. The idea of chatbots that can pass as real, live people might be thrilling to some and terrifying to others, but it’s hard not to feel squeamish about a case like that.

Ultimately, it is the responsibility of developers and businesses everywhere to consider their actions and the future impact of what they create. Hopefully, positive steps are being made. Meta – previously known as Facebook – has taken the unparalleled step of making its new language model completely accessible to any developer, along with details about how it was trained and built. According to Meta AI’s managing director, ‘We strongly believe that the ability for others to scrutinize your work is an important part of research. We really invite that collaboration.’

The opportunities for AI are vast, especially where it complements and augments human progress toward a better, more equal and opportunity-filled world. But the horror stories are not to be dismissed. As with every technological development, it’s about whose hands it’s put in – and who they intend to benefit.

To find out more about our capabilities in this area, check out our DevSecOps page.


What can a Digital Twin do for you?


Meaningfully improving your organisation’s operations sometimes requires more than just tinkering: it can require substantial change to bring everything up to scratch. But the risks of getting it wrong, especially for mission critical solutions depended on by multiple parties, frequently turn decision makers off. What if you could trial that change, with reliable predictions and the potential to model different scenarios, before pushing the button?

CACI’s Digital Twin offers just that capability. Based on an idea that’s breaking new ground everywhere from businesses like BMW to government agencies like NASA, it gives decision makers a highly accurate view into the future. Working as a real-time digital counterpart of any system, it can be used to simulate potential situations on the current set-up, or to model the impact of future alterations.

Producing realistic data (that’s been shown to match the effects of actual decisions once they’ve been undertaken), this technology massively reduces risk across an organisation. Scenario planning is accelerated, with enhanced complexity, resulting in better alignment between decision makers.

What are Digital Twins doing right now?

Beyond physical assets like wind turbines and water distribution networks, Digital Twins are now being broadly used for business operations, and federated to tackle larger problems like the control of a ‘smart city’. They’re also being used for micro-instances of highly risky situations, allowing surgeons to practise heart surgery and engineers to build quicker, more effective prototypes of fighter jets.

Recently, Anglo American used this technology to create a twin of its Quellaveco mine, where ‘digital mining specialists can perform predictive tests that help reduce safety risks, optimise the use of resources and improve the performance of production equipment’. Interest is also growing in this tech’s potential use within retail, where instability on both the supply and demand sides has been causing havoc since the pandemic.

This technology allows such businesses to take control of their resources, systems and physical spaces, while trialling the impact of future situations before they come to pass. In a world where instability is the new norm, Digital Twins supersede reliance on historical data. They also allow better insight and analysis into current processes for quicker improvements, and overall give an unparalleled level of transparency.


Where does Mood come in?

Mood Software is CACI’s proprietary data visualisation tool and has a record of success in enabling stakeholders to better understand their complex organisations. Mood is crucial to CACI’s Digital Twin solution as it integrates systems to create a single working model for management and planning. It enables collaborative planning, modelling and testing, bringing together stakeholders so they can work to the same goals.

Making effective decisions requires optimal access to data – and the future is the one area where we don’t have it. But with Digital Twin technology, you are able to chart your own path and make decisions with an enhanced level of insight.

If you’re looking for more on what Digital Twin might be able to do for you, read ‘Defence Fuels – Digital Twin’. In this white paper we show how we’re using Digital Twin to make improvements worth millions of pounds.

How to create a successful M&A IT integration strategy


From entering new markets to growing market share, mergers and acquisitions (M&As) can bring big business benefits. However, making the decision to acquire or merge is the easy part of the process. What comes next is likely to bring disruption and difficulty. In research reported by the Harvard Business Review, the failure rate of acquisitions is astonishingly high – between 70 and 90 per cent – with integration issues often highlighted as the most likely cause.

While the impact of M&A affects every element of an organisation, the blending of technical assets and resulting patchwork of IT systems can present significant technical challenges for IT leaders. Here, we explore the most common problems and how to navigate them to achieve a smooth and successful IT transition.

Get the full picture

Mapping the route of your IT transition is crucial to keeping your team focused throughout the process. But you need to be clear about your starting point. That’s why conducting a census of the entire IT infrastructure – from hardware and software to network systems, as well as enterprise and corporate platforms – should be the first step in your IT transition.
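
As a hedged sketch of what that census might look like in practice, the snippet below consolidates asset exports from both organisations into a single register and summarises the combined estate. The file names and columns are hypothetical.

```python
# Hedged sketch: consolidating IT asset exports from two organisations
# into one register. File names and column names are hypothetical.
import pandas as pd

acquirer = pd.read_csv("acquirer_assets.csv")   # assumed columns: name, category
acquired = pd.read_csv("acquired_assets.csv")

acquirer["source"] = "acquirer"
acquired["source"] = "acquired"
register = pd.concat([acquirer, acquired], ignore_index=True)

# Quick view of the combined estate: asset counts per category and source.
print(register.groupby(["category", "source"]).size().unstack(fill_value=0))
```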

Gather requirements & identify gaps

Knowing what you’ve got is the first step; knowing what you haven’t is the next. Technology underpins every element of your business, so you should examine each corporate function and business unit through an IT lens. Which services underpin each function? How will an integration affect them? What opportunities are there to optimise? Finding the answers to these questions will help you to identify and address your most glaring gaps.

Seize opportunities to modernise

M&A provide the opportunity for IT leaders to re-evaluate and update their environments, so it’s important to look at where you can modernise rather than merge. This will ensure you gain maximum value from the process. For example, shifting to cloud infrastructure can enable your in-house team to focus on performance optimisation whilst also achieving cost savings and enhanced security. Similarly, automating routine or manual tasks using AI or machine learning can ease the burden on overwhelmed IT teams.

Implement strong governance

If you’re fusing two IT departments, you need to embed good governance early on. Start by assessing your current GRC (Governance, Risk and Compliance) maturity. A holistic view will enable you to target gaps effectively and ensure greater transparency of your processes. In addition to bringing certainty and consistency across your team, taking this crucial step will also help you to tackle any compliance and security shortfalls that may result from merging with the acquired business.

Clean up your data

Managing data migration can be a complex process during a merger and acquisition. It’s likely that data will be scattered across various systems, services, and applications. Duplicate data may also be an issue. This makes it difficult to gain an updated single customer view, limiting your ability to track sales and marketing effectiveness. The lack of visibility can also have a negative impact on customer experience. For example, having two disparate CRM systems may result in two sales representatives contacting a single customer, causing frustration and portraying your organisation as disorganised. There’s also a significant financial and reputational risk if data from the merged business isn’t managed securely. With all this in mind, it’s clear that developing an effective strategy and management process should be a key step in planning your IT transition.
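
To make the single customer view concrete, the hedged sketch below merges records from two hypothetical CRM exports and de-duplicates them on a normalised email address. The column names and survivorship rule are assumptions for illustration.

```python
# Hedged sketch: building a single customer view from two CRM exports by
# de-duplicating on a normalised email address. Columns are hypothetical.
import pandas as pd

crm_a = pd.DataFrame({
    "name": ["Ada Lovelace", "Alan Turing"],
    "email": ["ada@example.com", "ALAN@EXAMPLE.COM"],
})
crm_b = pd.DataFrame({
    "name": ["Alan Turing", "Grace Hopper"],
    "email": ["alan@example.com", "grace@example.com"],
})

customers = pd.concat([crm_a, crm_b], ignore_index=True)
customers["email_key"] = customers["email"].str.strip().str.lower()

# Keep the first record per normalised email; a real pipeline would apply
# survivorship rules (most recent, most complete, and so on).
single_view = customers.drop_duplicates(subset="email_key", keep="first")
print(single_view[["name", "email"]])
```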

Lead with communication

Change can be scary, and uncertainty is the enemy of productivity. That’s why communication is key to a successful merger and acquisition: ensuring a frequent flow of information helps to combat uncertainty. However, IT leaders should also be mindful of creating opportunities for employees to share ideas and concerns.

If you are merging two IT departments, it is important to understand the cultural differences between the two businesses and where issues may arise. This will help you to develop an effective strategy for bringing the two teams together. Championing collaboration and knowledge sharing, meanwhile, will go a long way towards helping you achieve the goal of the M&A process – a better, stronger, more cohesive business.

How we can help

From assessing your existing IT infrastructure to cloud migration, data management and driving efficiencies through automation, we can support you at every step of your IT transition.

Transitioning your IT following M&A? Contact our expert team today.

The mitigation of unwanted bias in algorithms


Unwanted bias is prevalent in many current Machine Learning and Artificial Intelligence algorithms utilised by small and large enterprises alike. The reason for prefixing bias with “unwanted” is that bias is too often considered an inherently bad thing in AI/ML, when in fact this is not always the case. Bias itself (without the negative implication) is what these algorithms rely on to do their job – without it, what information could they use to categorise data? But that does not mean all bias is equal.

Dangerous Reasoning

Comment sections beneath articles and social media posts are plagued with people justifying the racial bias within ML/AI by appeal to light reflection and saliency. This dangerous reasoning might account for, perhaps, a very small percentage of basic computer vision programs, but not for the ML/AI algorithms in frequent use. The datasets these rely on are created by humans, so prejudice in equals prejudice out: the data, and the training that follows, play a major part in creating bias. Nor does this justification explain the multitude of other negative biases within algorithms, such as age and location bias in bank-loan decisions, or gender bias in similarly image-based algorithms.

Microsoft, Zoom, Twitter, and More

Tay

In March 2016, Microsoft released its brand-new Twitter AI, Tay. Within 16 hours of launch, Tay was shut down.

Tay was designed to tweet like a teenage American girl and to learn new language and terms from the Twitter users interacting with her. In the 16 hours it was live, Tay went from being polite and pleased to meet everyone to posting over 96,000 tweets, most of them reprehensible – ranging from antisemitic threats to racism and general death threats. Most of these weren’t the AI’s own compositions; they exploited a “repeat after me” feature implemented by Microsoft, which, without a strong filter, led to many of the abhorrent posts. Tay did also tweet some of her own “thoughts”, which were likewise offensive.

Tay demonstrates the need for a set of guidelines that should be followed, or a direct line of responsibility and ownership of issues that arise from the poor implementation of an AI/ML algorithm.

Even in that short window, many people saw and influenced Tay’s vocabulary. Microsoft could have paused Tay’s tweets as soon as the bot’s functionality was being abused.

Zoom & Twitter

Twitter user Colin Madland posted a tweet about Zoom cropping out his colleague’s head when using a virtual background. Zoom’s virtual-background detection struggles to detect black faces compared with its accuracy in detecting white faces, or objects closer to what it takes to be a white face, like the globe in the background of one of his example images.

After sharing his discovery, he then noticed that Twitter was cropping the image in most mobile previews to show his face over his colleague’s, even after he flipped the image. Following this discovery, people started testing a multitude of different examples, mainly gender- and race-based ones. Twitter’s preview algorithm would pick males over females, and white faces over black faces.

Exam Monitoring

Recently, due to Coronavirus, it has become more prevalent for institutions like universities to use face recognition in exam software that aims to ensure you’re not cheating. Some consider it invasive and discriminatory, and it has recently caused controversy over poor recognition of people of colour.

To ensure ExamSoft’s test-monitoring software doesn’t raise red flags, some candidates were told to sit directly in front of a light source. With many more people facing this issue due to the current Coronavirus pandemic, this is yet another common hurdle that needs to be solved urgently in the realm of ML and AI.

Wrongfully Imprisoned

On 24th June 2020, the New York Times reported on Robert Julian-Borchak Williams, who had been wrongfully imprisoned because of an algorithm. Mr Williams had received a call from the Detroit Police Department, which he initially believed to be a prank. However, just an hour later, Mr Williams was arrested.

The felony warrant was for a theft committed at an upmarket store in Detroit, which Mr. Williams and his wife had checked out when it first opened.

This issue may be one of the first known accounts of wrongful conviction from a poorly made facial recognition match, but it certainly wasn’t the last.

Trustworthy AI According to the AI HLEG

According to the AI HLEG (the High-Level Expert Group on Artificial Intelligence, created by the EU Commission), there are three key factors that contribute to a trustworthy AI:

  1. It should be lawful, complying with all applicable laws and regulations;
  2. It should be ethical, ensuring adherence to ethical principles and values; and
  3. It should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm.

These rules would need to be enforced throughout the algorithm’s lifecycle, because ongoing learning alters outputs in ways that could cause the system to violate these key factors. The intervals at which you re-evaluate the algorithm would ideally be determined by the volume of supervised and unsupervised learning it undergoes over a given timescale.

If you are creating a model, whether to evaluate credit scores or perform facial recognition, its trustworthiness should be evaluated. There are currently no laws governing this maintenance and assurance – it is down to the company, or model owner, to assure lawfulness.

How Can a Company/Individual Combat This?

By following a pre-decided set of guidelines continuously and confidently, you can ensure that you, as a company or individual, are actively combatting unwanted bias. It pays to stay ahead of the curve on upcoming technology while simultaneously thinking about the potential ethical and legal issues.

By using an algorithm with these shortfalls, you will inevitably repeat mistakes that have already been made. There are a few steps you can go through to ensure your algorithm doesn’t carry the aforementioned biases (a hedged testing sketch follows the list):

  1. Train – your algorithm to the best of your ability with a reliable dataset.
  2. Test – thoroughly to ensure there is no unwanted bias in the algorithm.
  3. Assess – the test results to work out the next steps that need to be taken.
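
As a hedged example of the “Test” step, the snippet below computes a simple disparate-impact ratio – the rate of favourable outcomes for one group divided by the rate for another – on hypothetical model outputs. A common rule of thumb (the “four-fifths rule”) treats a ratio below 0.8 as a red flag; this is one crude check among many, not a complete fairness audit.

```python
# Hedged sketch: a crude fairness check on model outputs. Computes the
# disparate-impact ratio between two groups; all data is hypothetical.
def favourable_rate(outcomes):
    """Fraction of favourable (1) outcomes in a list of 0/1 predictions."""
    return sum(outcomes) / len(outcomes)

# Hypothetical model predictions (1 = loan approved), split by group.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # e.g. majority group
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # e.g. minority group

ratio = favourable_rate(group_b) / favourable_rate(group_a)
print(f"Disparate impact ratio: {ratio:.2f}")

if ratio < 0.8:  # the four-fifths rule of thumb
    print("Warning: potential unwanted bias - investigate before deployment.")
```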

Companies that utilise algorithms, or that are pioneering new tech, need to consider any potential new ethical and legal issues, to assure that no one is harmed down the line.

“We can only see a short distance ahead, but we can see plenty there that needs to be done.”

– A. Turing