Unlocking the power of Digital Twins with Mood: Your ultimate platform for organisational excellence


In today’s rapidly evolving business landscape, organisations are seeking innovative ways to enhance efficiency, streamline operations, and drive strategic growth. One of the most transformative concepts to emerge in recent years is the Digital Twin of an Organisation (DTO). This powerful paradigm allows businesses to create a virtual replica of their entire enterprise, enabling real-time analysis, simulation, and optimisation. Among the wide range of tools available, Mood stands out as the unparalleled enabler for creating a comprehensive Digital Twin, offering unmatched capabilities. 

What is a DTO? 

A DTO is a dynamic, virtual representation of the business, encompassing its processes, systems, capabilities, assets, and data. This digital counterpart draws on real-time information, allowing businesses to monitor performance, predict outcomes, and make informed decisions. By leveraging a DTO, organisations can visualise their entire operation, identify inefficiencies, test scenarios, and implement changes with confidence, all without disrupting actual operations. 

The Mood advantage: A unique proposition 

Mood offers a unique and comprehensive suite of capabilities for creating and managing a DTO, making it the game-changer organisations require: 

  • Holistic Integration: with a whole-systems approach, Mood sits at the centre of your ecosystem, mapping a wide range of enterprise systems and data sources and ensuring that your DTO is a true reflection of your organisation, enabling evidence-based decision-making. From ERP and CRM systems to IoT devices and data warehouses, Mood consolidates information from disparate sources into a unified, coherent model.
  • Dynamic Visualisation: With Mood, you can visualise complex processes and structures in an intuitive, user-friendly interface. This dynamic visualisation capability allows stakeholders to easily comprehend intricate relationships and dependencies within the organisation, facilitating data-driven decision-making.
  • Monitoring and Analysis: Mood enables continuous monitoring of organisational performance through real-time data feeds. This ensures that your DTO is up to date, providing accurate insights and enabling proactive management of potential issues before they escalate.
  • Simulation and Scenario Planning: One of Mood’s standout features is its ability to run simulations and scenario analyses. Whether you’re considering a process change, a new strategy, or a potential disruption, Mood allows you to model these scenarios and assess their impact on the organisation, helping you make data-driven decisions with confidence.
  • Scalability and Flexibility: As your organisation grows and evolves, Mood grows with you. Its scalable meta-modelling and flexible customisation options ensure that your DTO remains relevant and aligned with your business needs, regardless of size or complexity.
  • Robust Security: Mood prioritises the security of your data, employing encryption and access control mechanisms to safeguard sensitive information. This ensures that your DTO remains secure and compliant with industry regulations. 

Real-world applications and benefits 

The adoption of Mood as your DTO brings tangible benefits across various aspects of your organisation: 

  • Enhanced Operational Efficiency: By visualising and analysing processes in context, you can identify bottlenecks, optimise resource allocation, and streamline operations, leading to significant cost savings and productivity improvements.
  • Informed Strategic Planning: Mood’s powerful query capabilities enable you to test different strategies and initiatives in a risk-free environment, providing valuable insights that guide strategic planning and execution.
  • Proactive Risk Management: With monitoring and analytics, Mood helps you anticipate and mitigate risks, ensuring business continuity and resilience in the face of disruptions.
  • Improved Collaboration: Mood’s intuitive visualisation fosters better collaboration among departments and stakeholders, ensuring that everyone is aligned and working towards common goals. 

See our case studies for the myriad ways in which Mood has been used, including the Defence Fuels Enterprise digital twin. 

Conclusion 

In an era where digital transformation is not just an option but a necessity, the DTO stands out as a vital tool for achieving business excellence. Mood emerges as the unparalleled enabler for this transformative journey, offering an unmatched suite of capabilities that empower organisations to create, manage, and leverage their Digital Twins effectively. 

No other platform combines the holistic integration, dynamic visualisation, powerful analytics, scalability, and robust security that Mood provides. By choosing Mood, you are not just adopting a software tool; you are embracing a comprehensive solution that equips your organisation to thrive in the digital age. 

Unlock the full potential of your organisation with Mood – the ultimate platform for creating and harnessing the power of your Digital Twin of an Organisation. 

Top network automation trends in 2024


Network automation has become increasingly prevalent in enterprises and IT organisations over the years, with its growth showing no signs of slowing down.  

In fact, as of 2024, the Network Automation Market size is estimated at USD 25.16 billion (GBP 19.78 billion), expected to reach USD 60.59 billion (GBP 47.65 billion) by 2029. By 2028, a growth rate of 20% is predicted in this sector in the UK. Within CACI, we are seeing a higher demand for network automation than ever before, supporting our clients in NetDevOps, platform engineering and network observability. 
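As a quick sanity check on those figures, the implied compound annual growth rate over 2024–2029 can be derived directly from the numbers above (a back-of-the-envelope calculation, not taken from the cited forecast):

```python
# Back-of-the-envelope check of the implied compound annual growth rate (CAGR)
# from the quoted market-size estimates. Figures are USD billions, 2024 -> 2029.
start, end, years = 25.16, 60.59, 5

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~19.2%, in line with the ~20% UK prediction
```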

So, how is the network automation space evolving, and what are the top network automation trends that are steering the direction of the market in 2024?  

Hyperautomation

With the increasing complexity of networks that has come with the proliferation of devices, an ever-growing volume of data and the adoption of emerging technologies in enterprises and organisations, manual network management practices have become increasingly difficult to uphold. This is where hyperautomation has been proving itself to be vital for operational resilience into 2024. 

As an advanced approach that integrates artificial intelligence (AI), machine learning (ML), robotic process automation (RPA), process mining and other automation technologies, hyperautomation streamlines complex network operations by automating not only repetitive tasks but also the overall decision-making process. This augments central log management systems such as SIEM and SOAR with functions to establish operationally resilient business processes that increase productivity and decrease human involvement. Protocols such as gNMI and gRPC for streaming telemetry and the increased adoption of service mesh and overlay networking mean that network telemetry and event logging are now growing to a state where no one human can adequately “parse the logs” for an event. Therefore, the time is ripe for AI and ML to push business value through AIOps practices to help find the ubiquitous “needle” in the ever-growing haystack. 
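To make the telemetry point concrete, here is a minimal sketch of polling OpenConfig interface state over gNMI with the open-source pygnmi library; the target address, credentials and path are illustrative assumptions rather than a reference to any specific platform, and a production feed would use gNMI subscriptions for true streaming:

```python
# Minimal gNMI poll of interface state, a starting point for the kind of
# telemetry feed that AIOps tooling consumes. Target and credentials are examples.
from pygnmi.client import gNMIclient

TARGET = ("192.0.2.1", 57400)  # hypothetical device address and gNMI port

with gNMIclient(target=TARGET, username="admin", password="admin", insecure=True) as gc:
    result = gc.get(path=["openconfig-interfaces:interfaces"])
    # The reply is nested JSON; downstream analytics would consume a continuous
    # subscription rather than the one-shot get shown here.
    for notification in result.get("notification", []):
        for update in notification.get("update", []):
            print(update["path"])
```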

Enterprises shifting towards hyperautomation this year will find themselves improving their security and operational efficiency, reducing their operational overhead and margin of human error and bolstering their network’s resilience and responsiveness. When combined with ITSM tooling such as ServiceNow for self-service delivery, hyperautomation can truly transcend the IT infrastructure silo and enter the realm of business by achieving wins in business process automation (BPA) to push the enterprise into true digital transformation. 

Increasing dependence on Network Source of Truth (NSoT)

With an increasing importance placed on agility, scalability and security in network operations, NSoT is proving to be indispensable in 2024, achieving everything the CMDB hoped for and more. 

As a centralised repository of network-related data that manages IP addresses (IPAM), devices and network configurations and supplies a single source of truth from these, NSoT has been revolutionising network infrastructure management and orchestration by addressing challenges brought on by complex modern networks to ensure that operational teams can continue to understand their infrastructure. It also ensures that data is not siloed across an organisation and that managing network objects and devices can be done easily and efficiently, while also promoting accurate data sharing via data modelling with YAML and YANG and open integration via API into other BSS, OSS and NMS systems.  
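As a flavour of that open API integration, here is a minimal sketch querying a NetBox instance with pynetbox, NetBox’s Python client; the URL, token, site slug and tag are placeholder values:

```python
# Query a NetBox NSoT for devices and prefixes via its REST API.
# URL, token, site slug and tag are placeholders; pynetbox is NetBox's client.
import pynetbox

nb = pynetbox.api("https://netbox.example.com", token="YOUR_API_TOKEN")

# Pull every active device at a given site.
for device in nb.dcim.devices.filter(site="lon-dc1", status="active"):
    print(device.name, device.device_type.model)

# IPAM: list prefixes tagged for a hypothetical automation workflow.
for prefix in nb.ipam.prefixes.filter(tag="netdevops"):
    print(prefix.prefix, prefix.description)
```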

Enterprises and organisations that leverage the benefits of centralising their network information through NSoT this year will gain a clearer, more comprehensive view of their network, generating more efficient and effective overall network operations. Not to mention, many NSoT repositories are far better refined than their CMDB predecessors, and some – such as NetBox – are truly a joy to use in day-to-day Day 2 operations compared to the clunky ITSMs of old. 

Adoption of Network as a Service (NaaS)

Network as a Service (NaaS) has been altering the management and deployment of networking infrastructure in 2024. With the rise of digital transformation and cloud adoption in businesses, this cloud-based service model enables on-demand access and the utilisation of networking resources, allowing enterprises and organisations to supply scalable, flexible solutions that meet ever-changing business demands. 

As the concept gains popularity, service providers have begun offering a range of NaaS solutions, from basic connectivity services such as virtual private networks (VPNs) and wide area networks (WANs) to the more advanced offerings of software-defined networking (SDN) and network functions virtualisation (NFV).  

These technologies have empowered businesses to streamline their network management, enhance performance and lower costs. NaaS also has its place at the table against its aaS siblings (IaaS, PaaS and SaaS), pushing the previously immovable, static-driven domain of network provisioning into a much more dynamic, elastic and OpEx-driven capability for modern enterprise and service providers alike. 

Network functions virtualisation (NFV) and software-defined networking (SDN)

A symbiotic relationship between network functions virtualisation (NFV), software-defined networking (SDN) and network automation is proving to be instrumental in bolstering agility, responsiveness and intelligent network infrastructure as the year is underway. As is often opined by many network vendors, “MPLS is dead, long live SD-WAN” – which, while not 100% factually correct (we still see demand in the SP space for MPLS and MPLS-like technologies such as PCEP and SR), is certainly directionally correct in our client base across finance, telco, media, utilities and, increasingly, government and public sectors. 

NFV enables the decoupling of hardware from software, as well as the deployment of network services without physical infrastructure constraints. SDN, on the other hand, centralises network control through programmable software, allowing for the dynamic, automated configuration of network resources. Together, they streamline operations and ensure advanced technologies will be deployed effectively, such as AI-driven analytics and intent-based networking (IBN). 

We’re seeing increased adoption of NFV via network virtual appliances (NVA) deployed into public cloud environments like Azure and AWS for some of our clients, as well as an increasing trend towards packet fabric brokers such as Equinix Fabric and Megaport MVE to create internet exchange (IX), cloud exchange (CX) and related gateway-like functionality. As the enterprise trend towards multicloud grows, a whole gamut of software-defined cloud interconnects (SDCI) is emerging to stitch together all the XaaS components that modern enterprises require. 

Intent-based networking (IBN)

As businesses continue to lean into establishing efficient, prompt and precise best practices when it comes to network automation, intent-based networking (IBN) has been an up-and-coming approach to implement. This follows wider initiatives in the network industry to push “up the stack”, with overlay networking technologies such as SD-WAN, service mesh and cloud native supplanting traditional underlay network approaches in enterprise application provision. 

With the inefficiencies that can come with traditional networks and manual input, IBN has come to network operations teams’ rescue by defining business objectives in a high-level, abstract manner that ensures the network can automatically configure and optimise itself to meet those objectives. Network operations teams that can devote more time and effort to strategic activities versus labour-intensive manual configurations will notice significant improvements in overall network agility, reductions in time-to-delivery and better alignment with the wider organisation’s goals. IBN also brings intelligence and self-healing capabilities to networks: when changes or anomalies are detected, the network automatically adapts to address them while maintaining the desired outcome, bolstering network reliability and minimising downtime. 
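Conceptually, IBN is a closed loop: declare the desired outcome, observe the actual state, reconcile the difference. The sketch below illustrates that loop in the abstract; the intent schema, measurement and remediation functions are invented for illustration and are not drawn from any particular IBN product:

```python
# Conceptual IBN reconciliation loop: intent in, observed state compared,
# corrective action triggered. Schema and functions are illustrative only.
from dataclasses import dataclass

@dataclass
class Intent:
    name: str
    max_latency_ms: float  # the business-level objective

def observe_latency(intent: Intent) -> float:
    """Stand-in for real telemetry (e.g. streaming gNMI or active probes)."""
    return 12.5  # hypothetical measurement

def remediate(intent: Intent) -> None:
    """Stand-in for automated reconfiguration (e.g. steering traffic)."""
    print(f"[{intent.name}] re-optimising paths to meet intent")

def reconcile(intent: Intent) -> None:
    measured = observe_latency(intent)
    if measured > intent.max_latency_ms:
        remediate(intent)  # self-healing: the network adapts itself
    else:
        print(f"[{intent.name}] intent satisfied ({measured} ms)")

reconcile(Intent(name="payments-to-core", max_latency_ms=10.0))
```

Real IBN systems run this loop continuously against a formal intent model rather than a single check, but the declare-observe-reconcile shape is the same.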

As more organisations realise the benefits of implementing this approach, the rise of intent-based networking is expected to continue, reshaping the network industry as we know it. The SDx revolution is truly here to stay, and the move of influence of the network up the stack will only increase as reliance on interconnection of all aspects becomes the norm. 

How can CACI support your network automation journey? 

CACI is adept at a plethora of IT, networking and cloud technologies. Our trained cohort of network automation engineers and consultants are ready and willing to share their industry knowledge to benefit your unique network automation requirements. 

From NSoT through CI/CD, version control, observability, operational state verification, network programming and orchestration, our expert consulting engineers have architected, designed, built and automated some of the UK’s largest enterprise, service provider and data centre networks, with our deep heritage in network engineering spanning over 20 years. 

Take a look at Network Automation and NetDevOps at CACI to learn more about some of the technologies, frameworks, protocols and capabilities we have, from YAML, YANG, Python, Go, Terraform, IaC, API, REST, Batfish, Git, NetBox and beyond. 

To find out more about enhancing your network automation journey, get in touch with us today.  

Digital Twin: Seeing the Future


Predicting what’s coming next and understanding how best to respond is the kind of challenge organisations struggle with all the time. As the world becomes less predictable and ever-changing technology transforms operations, historical data becomes harder to extrapolate. And even if you can make reasonable assumptions about future changes, predicting how they will impact the various aspects of your business is even more problematic.

Decision makers need another tool in their arsenal to help them build effective strategies that can guide big changes and investments. They need to combine an understanding of their setup with realistic projections of how external and internal changes could have an impact. A Digital Twin built with predictive models can combine these needs, giving highly relevant and reliable data that can guide your future course.

The Defence Fuels Prototype

Using Mood Software and in collaboration with the MOD’s Defence Fuels Transformation, CACI built a digital twin focused on fuel movement within an air station. With it we aimed to understand the present but also, crucially, to predict the near future and test further-reaching changes.

We used two kinds of predictive model that can learn from actual behaviour. For immediate projections, we implemented machine learning models that used a small sample of historical data concerning requirements for refuelling vehicles given a certain demand, allowing an ‘early warning system’ to be created.
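As an illustration of this kind of early-warning model (a sketch only, not the actual Defence Fuels implementation; the data, fleet size and threshold are invented), a simple regression over historical demand can flag days where forecast refuelling-vehicle requirements exceed available capacity:

```python
# Illustrative early-warning sketch: regress vehicle requirements on demand,
# then flag forecasts that exceed capacity. All figures are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

# Historical sample: daily fuel demand (m^3) vs refuelling vehicles required.
demand = np.array([[20], [35], [50], [65], [80]])
vehicles_needed = np.array([2, 3, 5, 6, 8])

model = LinearRegression().fit(demand, vehicles_needed)

AVAILABLE_VEHICLES = 6  # hypothetical fleet size
for forecast_demand in (40, 75, 90):
    predicted = model.predict([[forecast_demand]])[0]
    if predicted > AVAILABLE_VEHICLES:
        print(f"demand {forecast_demand}: WARN, ~{predicted:.1f} vehicles needed")
    else:
        print(f"demand {forecast_demand}: ok (~{predicted:.1f})")
```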

However, we knew that the real value came in understanding what’s further ahead, where there is a higher risk of the wrong decision seriously impacting the success of operations. We adapted and integrated an existing Defence Fuels Enterprise simulation model, Fuel Supply Analysis Model (FSAM), to allow the testing of how a unit would operate given changes to the configuration of refuelling vehicles.

Functions were coded in a regular programming language to mimic the structural model and the kinds of behaviour evidenced through the data pipeline. As a result, we are able to make changes to these functions to easily understand what the corresponding changes would be in the real world.

This allows decision makers to test alternative solutions with simulation models calibrated against existing data. Models informed by practical realities enable testing with greater speed and confidence, so you have likely outcomes before committing to any change.
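In the same spirit, a calibrated what-if function lets you vary the fleet configuration and compare outcomes before committing to change. The sketch below is a toy stand-in for a simulation model like FSAM, with all capacities, trip counts and demand figures invented:

```python
# Toy what-if simulation: vary the refuelling-vehicle configuration and compare
# how much daily demand goes unserved. A stand-in for a model like FSAM;
# all numbers are invented for illustration.
def unserved_demand(n_vehicles: int, vehicle_capacity_m3: float,
                    trips_per_day: int, daily_demand_m3: float) -> float:
    deliverable = n_vehicles * vehicle_capacity_m3 * trips_per_day
    return max(0.0, daily_demand_m3 - deliverable)

# Test alternative fleet configurations against the same demand profile.
for n in (3, 4, 5):
    shortfall = unserved_demand(n, vehicle_capacity_m3=9.0,
                                trips_per_day=4, daily_demand_m3=150.0)
    print(f"{n} vehicles: {shortfall:.0f} m^3 unserved")
```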

 

What does this mean for me?

Digital Twins are extremely flexible pieces of technology that can be built to suit all kinds of organisations. They are currently in use in factories, defence, retail and healthcare. Adaptable to both real-world assets and online systems, it’s hard to think of any area they couldn’t be applied to.

Pairing a digital representation of your operations, processes and systems with predictive and simulation models allows substantial de-risking of decision making. You can predict what will happen if your resourcing situation changes, and plan accordingly; you can also understand the impact of sweeping structural changes. The resulting data has been proven against real-world decisions, making it truly reliable.

Time magazine has predicted that Digital Twins will ‘shape the future’ of multiple industries going forward and I think it’s hard to argue with that.

If you’re looking for more on what Digital Twin might be able to do for you, read ‘Defence Fuels – Digital Twin’. In this white paper we show how we’re using Digital Twin to make improvements worth millions of pounds.

For more on Mood Software and how it can be your organisation’s digital operating model, visit the product page.

What can a Digital Twin do for you?


Meaningfully improving your organisation’s operations sometimes requires more than just tinkering: it can require substantial change to bring everything up to scratch. But the risks of getting it wrong, especially for mission critical solutions depended on by multiple parties, frequently turn decision makers off. What if you could trial that change, with reliable predictions and the potential to model different scenarios, before pushing the button?

CACI’s Digital Twin offers just that capability. Based on an idea that’s breaking new ground everywhere from businesses like BMW to government agencies like NASA, it gives decision makers a highly accurate view into the future. Working as a real-time digital counterpart of any system, it can be used to simulate potential situations on the current set-up, or model the impact of future alterations.

Producing realistic data (that’s been shown to match the effects of actual decisions once they’ve been undertaken), this technology massively reduces risk across an organisation. Scenario planning is accelerated, with enhanced complexity, resulting in better alignment between decision makers.

What are Digital Twins doing right now?

Having started with physical assets like wind turbines and water distribution networks, Digital Twins are now being broadly used for business operations, and federated to tackle larger problems, like the control of a ‘smart city’. They’re also being used for micro-instances of highly risky situations, allowing surgeons to practise heart surgery, and to build quicker, more effective prototypes of fighter jets.

Recently, Anglo American used this technology to create a twin of its Quellaveco mine; ‘digital mining specialists can perform predictive tests that help reduce safety risks, optimise the use of resources and improve the performance of production equipment’. Interest is increasingly growing in this tech’s potential use within retail, where instability from both the supply and demand sides has been causing havoc since the pandemic.

This technology allows such businesses to take control of their resources, systems and physical spaces, while trialling the impact of future situations before they come to pass. In a world where instability is the new norm, Digital Twins supersede reliance on historical data. They also allow better insight and analysis into current processes for quicker improvements, and overall give an unparalleled level of transparency.


Where does Mood come in?

Mood Software is CACI’s proprietary data visualisation tool and has a record of success in enabling stakeholders to better understand their complex organisations. Mood is crucial to CACI’s Digital Twin solution as it integrates systems to create a single working model for management and planning. It enables collaborative planning, modelling and testing, bringing together stakeholders so they can work to the same goals.

Making effective decisions requires optimal access to data – and the future is one area where we don’t have that. But with Digital Twin technology, you are able to draw your own path and make decisions with an enhanced level of insight.

If you’re looking for more on what Digital Twin might be able to do for you, read ‘Defence Fuels – Digital Twin’. In this white paper we show how we’re using Digital Twin to make improvements worth millions of pounds.

How to create a successful M&A IT integration strategy


From entering new markets to growing market share, mergers and acquisitions (M&As) can bring big business benefits. However, making the decision to acquire or merge is the easy part of the process. What comes next is likely to bring disruption and difficulty. In research reported by the Harvard Business Review, the failure rate of acquisitions is astonishingly high – between 70 and 90 per cent – with integration issues often highlighted as the most likely cause.

While the impact of M&A affects every element of an organisation, the blending of technical assets and resulting patchwork of IT systems can present significant technical challenges for IT leaders. Here, we explore the most common problems and how to navigate them to achieve a smooth and successful IT transition.

Get the full picture

Mapping the route of your IT transition is crucial to keeping your team focused throughout the process. But you need to be clear about your starting point. That’s why conducting a census of the entire IT infrastructure – from hardware and software to network systems, as well as enterprise and corporate platforms – should be the first step in your IT transition.

Gather requirements & identify gaps

Knowing what you’ve got is the first step; knowing what you haven’t is the next. Technology underpins every element of your business, so you should examine each corporate function and business unit through an IT lens. What services impact each function? How will an integration impact them? What opportunities are there to optimise? Finding the answers to these questions will help you to identify and address your most glaring gaps.

Seize opportunities to modernise

M&As provide the opportunity for IT leaders to re-evaluate and update their environments, so it’s important to look at where you can modernise rather than merge. This will ensure you gain maximum value from the process. For example, shifting to cloud infrastructure can enable your in-house team to focus on performance optimisation whilst also achieving cost savings and enhanced security. Similarly, automating routine or manual tasks using AI or machine learning can ease the burden on overwhelmed IT teams.

Implement strong governance

If you’re fusing two IT departments, you need to embed good governance early on. Start by assessing your current GRC (Governance, Risk and Compliance) maturity. A holistic view will enable you to target gaps effectively and ensure greater transparency of your processes. In addition to bringing certainty and consistency across your team, taking this crucial step will also help you to tackle any compliance and security shortfalls that may result from merging with the acquired business.

Clean up your data

Managing data migration can be a complex process during a merger and acquisition. It’s likely that data will be scattered across various systems, services, and applications. Duplicate data may also be an issue. This makes it difficult to gain an updated single customer view, limiting your ability to track sales and marketing effectiveness. The lack of visibility can also have a negative impact on customer experience. For example, having two disparate CRM systems may result in two sales representatives contacting a single customer, causing frustration and portraying your organisation as disorganised. There’s also a significant financial and reputational risk if data from the merged business isn’t managed securely. With all this in mind, it’s clear that developing an effective strategy and management process should be a key step in planning your IT transition.
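To make the duplicate-data point concrete, here is a minimal sketch (with invented column names and records for two hypothetical CRM exports) of collapsing duplicate customer records into a single view using pandas:

```python
# Merge two hypothetical CRM exports and collapse duplicate customers on a
# normalised email key, a common first step towards a single customer view.
import pandas as pd

crm_a = pd.DataFrame({"email": ["a@x.com", "B@Y.com"], "name": ["Ann", "Bob"]})
crm_b = pd.DataFrame({"email": ["b@y.com", "c@z.com"], "name": ["Bob", "Cat"]})

combined = pd.concat([crm_a, crm_b], ignore_index=True)
combined["email_key"] = combined["email"].str.strip().str.lower()

# Keep the first record per customer; a real pipeline would apply survivorship
# rules (most recent, most complete) rather than a blind 'first wins'.
single_view = combined.drop_duplicates(subset="email_key", keep="first")
print(single_view)
```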

Lead with communication

Change can be scary, and uncertainty is the enemy of productivity. That’s why communication is key to a successful merger and acquisition. Ensuring a frequent flow of information can help to combat this. However, IT leaders should also be mindful of creating opportunities for employees to share ideas and concerns.

If you are merging two IT departments, it is important to understand the cultural differences between the two businesses and where issues may arise. This will help you to develop an effective strategy for bringing the two teams together. Championing collaboration and knowledge sharing will go a long way to helping you achieve the goal of the M&A process – a better, stronger, more cohesive business.

How we can help

From assessing your existing IT infrastructure to cloud migration, data management and driving efficiencies through automation, we can support you at every step of your IT transition.

Transitioning your IT following M&A? Contact our expert team today.

7 signs that your company needs to outsource IT


From reducing costs to meeting tight project deadlines and accessing specialist expertise, there are many advantages that come with outsourcing IT, but when does outsourcing offer the most benefit to businesses? We asked Brian Robertson, Resource Manager at CACI, to reveal the common signs that indicate a business would be better with an outsourced IT solution.

1. Your IT costs are high

Are budget worries keeping you up at night? Cost control is the most obvious reason for businesses outsourcing IT. Indeed, a 2020 study by Whitelane Research found that 71% of UK businesses said that cost reduction was the main driver for outsourcing IT. But is outsourcing really cost-effective?

“Just having a couple of IT specialists on your payroll can really rack up costs,” says Brian. It’s not just high salaries and the cost of employee benefits that are a concern. Companies that opt to run in-house IT departments also face the costs of purchasing, maintaining, and upgrading hardware as well as purchasing the software they need. “With outsourcing, these fixed costs become flexible, allowing you greater control of your budget,” says Brian.

2. You have skills gaps

The severe shortage in tech skills has long been a challenge for businesses, but as Brian explains, “The pandemic put organisations across every industry on a fast-speed trajectory to digitalisation.” He adds, “now, the focus is to keep that momentum going, but we’re seeing that many of our clients are looking for very specific expertise in a fiercely competitive and increasingly expensive marketplace.”

With recent research by ManpowerGroup finding that 69% of employers globally are struggling to find workers with the right blend of technical and interpersonal skills, it’s clear that many businesses are fighting a losing battle. “This is where working with a trusted IT outsourcing partner can prove to be a strategic move,” says Brian. “A good outsourcer will always assess their client’s requirements holistically – matching skills and experience as well as cultural fit with end goals.”

3. Your IT infrastructure is outdated

“IT infrastructure is a vital component in every business, but it can become a huge drain on productivity, not to mention a growing security risk, if not invested in,” warns Brian. He adds, “However, upgrading an outdated infrastructure is a resource investment that many lean IT departments can ill afford, creating a stalemate that prevents a business from maintaining competitive advantage.”

Therefore, if a business is struggling to maintain and manage its day-to-day IT operations, outsourcing may provide a practical solution. In addition to unlocking access to the latest and greatest tech, working with a reliable IT outsourcing partner will ensure your IT operations are optimised for enhanced performance, releasing your in-house staff to focus their efforts on achieving your business objectives.

4. Your business is vulnerable to security threats

Cyber security breaches are increasing. According to a survey released by GOV.UK last year, 46% of UK businesses and charities reported a cyber attack during the year, with 33% of those claiming they experienced a cyber breach at least once a week in 2020 – up from 22% in 2017.

The growing sophistication of cybercrime puts immense pressure on in-house teams as they struggle to stay on top of critical security practices such as 24/7 network monitoring whilst also maintaining the myriad security systems they have in place. As Brian warns, “When it comes to cyber security, it’s not just a case of having the right technology in place, you need round-the-clock specialists that have the experience and expertise to utilise those tools and prevent potential threats before they become a problem.”

The global shortage of professionals with the right security skill sets is an additional challenge for businesses as they struggle to recruit and retain the specialists they need. Partnering with a trusted IT outsourcer can provide a cost-effective and reliable solution, as outsourcing removes vulnerabilities by ensuring a business’s security defences are ‘always on’.

5. Compliance is a concern

While cyber security is one concern, ensuring regulatory compliance is another, particularly in heavily regulated industries such as financial services. Failure to comply can lead to reputational damage and hefty fines, but to ensure compliance, organisations must have the capability to implement, maintain, monitor, and accurately report on IT infrastructure and security processes. As Brian explains, a partnership with a reliable IT outsourcer can offer significant value to a business that is under pressure to maintain compliance, “As well as providing the necessary resources and expertise to ensure compliance, an outsourcing partner will keep abreast of regulation changes, so your business is always one step ahead.”

6. You need flexibility

When you’re embarking on a new project, getting the right people with the right skill sets in place can be a difficult task. While upskilling your existing team members can be beneficial, inexperience coupled with a limited bandwidth can pose major risks to your project delivery as well as have a negative impact on your day-to-day operations. These problems are more acute if your delivery deadline is tight.

“Hiring new talent in-house is an option, but often it’s not the best one if a project is short-term or requires a range of specialist skill sets,” explains Brian. In these instances, partnering with an IT outsourcer can provide the most strategic, timely and cost-effective route forward because solutions are tailored to your specific needs. “Clients also gain from the insights and expertise of an experienced team – with the added benefit of elasticity to adapt if requirements change,” says Brian.

7. You need niche expertise

More budget-friendly than hiring a team of in-house specialists, and more reliable than challenging your existing team, outsourcing IT is often the most effective option when it comes to delivering projects that require niche expertise such as cyber security. Brian also highlights the benefit of introducing an outside perspective, “One of the most overlooked benefits of outsourcing is that businesses don’t just get access to specific skills and knowledge, they get to tap into a whole wealth of experience.”

“That’s why it’s so important to look for an IT outsourcing partner that has a proven record of proficiency and delivering results. Knowing what’s worked before, how to handle specific challenges and what pitfalls to avoid is truly invaluable in finding the solution that’s really going to work for your business.”

Looking for a reliable IT outsourcing partner? Share your requirements with our expert team today.

Space weather – Enhanced weather forecasting systems for the Met Office


Overview:

Space weather describes conditions in space that can interact with the Earth’s upper atmosphere and magnetic field and disrupt our infrastructure and technology including power grids, radio communications and satellite operations including GPS.

The Met Office owns the UK space weather risk on behalf of the Department for Business, Energy and Industrial Strategy (BEIS) and, as part of the National Risk Assessment mitigation strategy, delivers space weather services to government, critical national infrastructure (CNI) providers, international partners such as ESA, PECASUS and KNMI, and the public.

Challenge:

CACI Information Intelligence Group were asked to undertake this project to enhance the existing Met Office space weather forecast systems and the services they deliver to customers. These enhancements included implementing new scientific forecasting models, incorporating new data sources and a full migration of the system into the AWS cloud while maintaining continuous operations.

Space weather is on the UK’s national risk register as a high-impact, high-likelihood event, and as such this system needs to be secure and have 24/7/365 high availability.

Key Issues:

• Significant amounts of data in different formats and of differing quality from NASA, NOAA and BGS were handled as disparate external sources, were costly to maintain and could not be easily updated.
• Complex scientific models, developed by different domain experts over time and written in varying technologies, were difficult to run as components of a production service.
• Consumers were interested less in the complex data outputs of the models and more in what the results meant for them in their own domain, such as power or communications.

Approach:

CACI follow a disciplined Agile methodology agreed with the Met Office teams with whom we work. For this project, where we needed to rapidly stand up a new data science environment and undertake a cloud migration, our engineering teams followed the Scrum framework; we also have experience using Kanban and SAFe in other situations.

The Space Weather team consisted of 8 core people (data engineers, software engineers) and 3 rapid response resources (business analysts and software engineers).

In a complex situation, with just a high-level brief to work from, we adopted a highly condensed form of ‘discovery / alpha / beta’ in agreement with the customer. With access to existing data sources, models and staff, all members of the team initially ran rapid discovery activities against the three key challenges, refining requirements to give a prioritised backlog of user stories and tasks.

In a series of sprints, we proposed and implemented appropriate solutions in each of these areas. We adjusted the delivery approach as we went to fit the customer’s needs: streamlining our sprint planning meetings by holding interim backlog grooming sessions and by regularly standing up demonstrations of the work we had developed. Successfully using agile principles and evolving agile techniques in this way meant that development velocity remained high despite complex requirements and geographically distributed teams.

We also agreed best-in-class tooling with the customer for engineering a cloud-based data pipeline and models – including MongoDB, Java, Spring, Apache Camel and AWS (Lambda, SQS, SNS, S3, API Gateway, Fargate, CloudWatch, EC2) – and for front-end development (Angular, HighchartsJS). Using this approach, the team have recently designed and implemented an improved platform for building and deploying scientific models into operations, delivering an enterprise-ready service in close collaboration with a wide range of scientists, academics and organisations.
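The production pipeline itself was engineered with the Java, Spring, Apache Camel and AWS tooling listed above. Purely to illustrate the ingest pattern involved (an event notification fanned out to storage and a queue feeding downstream models), here is a simplified sketch of such a handler in Python with boto3; the bucket and queue names are invented:

```python
# Simplified ingest pattern: a Lambda triggered by a new-data notification
# stores the payload in S3 and fans it out via SQS for downstream models.
# Bucket/queue names are invented; the real service used the stack above.
import json
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

BUCKET = "spaceweather-ingest-example"  # hypothetical
QUEUE_URL = "https://sqs.eu-west-2.amazonaws.com/123456789012/model-feed"

def handler(event, context):
    for record in event.get("Records", []):
        payload = record["body"]
        key = f"raw/{record['messageId']}.json"
        s3.put_object(Bucket=BUCKET, Key=key, Body=payload)
        # Notify the model-execution stage that fresh data has landed.
        sqs.send_message(QueueUrl=QUEUE_URL,
                         MessageBody=json.dumps({"s3_key": key}))
    return {"ingested": len(event.get("Records", []))}
```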

What CACI provided:
• A production-scale data pipeline capable of being configured to ingest a wide variety of data formats. This includes the original sources external to the Met Office, and also a number of internal sources including complex scientific models, the ‘supercomputer’ results and forecaster analyses.
• A set of robust scientific models running as a service on AWS, such as the OVATION Aurora nowcast and forecast.
• Front-end applications that allow the customer to perform qualitative analysis and predict space weather events, providing alerts, warnings and forecasts to a diverse range of customers to allow them to take mitigating actions relevant to their domain.
• A capability for transitioning complex scientific models into an operational environment, in close collaboration with Space Weather scientists and other expert users.

Outcome:

• A single, cloud-hosted data pipeline handling 50+ large, disparate, real-time data sets from a wide variety of sources, making a robust and extensible service that reliably and efficiently feeds a productionised set of cloud-hosted models, an automated alerting system and multiple clients directly.
• This service is now consumed 24/7/365 by the Met Office Space Weather Operations Centre and other consumers. It allows Met Office users to make informed operational decisions using specific graphs displaying geospatial and space weather data, e.g. predictions of coronal mass ejections and geomagnetic activity, and allows a range of consumers to more readily interpret space weather impacts, e.g. interruptions to power grids, GPS and (for the MOD) over-the-horizon communications.

Contact Us

If you have any questions or want to learn more, get in touch today.

The mitigation of unwanted bias in algorithms


Unwanted bias is prevalent in many current machine learning and artificial intelligence algorithms utilised by small and large enterprises alike. The reason for prefixing bias with “unwanted” is that bias is too often considered to be a bad thing in AI/ML, when in fact this is not always the case. Bias itself (without the negative implication) is what these algorithms rely on to do their job; without it, what information could they use to categorise data? But that does not mean all bias is equal.

Dangerous Reasoning

Comment sections across articles and social media posts are plagued with people justifying racial bias within ML/AI by appealing to light reflection and saliency. This dangerous reasoning might account for, perhaps, a very small percentage of basic computer vision programs, but not for frequently utilised ML/AI algorithms. The datasets utilised by these are created by humans, therefore prejudice in equals prejudice out. The data going in, and the training thereafter, play a major part in creating bias. The light-reflection justification also fails to explain a multitude of other negative biases within algorithms, such as age and location bias when applying for a bank loan, or gender bias in similar imagery-based algorithms.

Microsoft, Zoom, Twitter, and More

Tay

In March 2016, Microsoft released its brand-new Twitter AI, Tay. Within 16 hours after the launch, Tay was shut down.

Tay was designed to tweet similarly to a teenage American girl, and to learn new language and terms from the Twitter users interacting with her. Within the 16 hours it was live, Tay went from being polite and pleased to meet everyone to posting over 96,000 tweets, most of which were reprehensible, ranging from anti-Semitic threats to racism and general death threats. Most of these weren’t the AI’s own tweets but came from a “repeat after me” feature implemented by Microsoft which, without a strong filter, led to many of these abhorrent posts. Tay did also tweet some of her own “thoughts”, which were also offensive.

Tay demonstrates the need for a set of guidelines that should be followed, or a direct line of responsibility and ownership of issues that arise from the poor implementation of an AI/ML algorithm.

Tay was live for an extensive period, during which many people saw and influenced Tay’s dictionary. Microsoft could have quickly paused Tay’s tweets as soon as the bot’s functionality was abused.

Zoom & Twitter

Twitter user Colin Madland posted a tweet regarding an issue with Zoom cropping his colleague’s head when using a virtual background. Zoom’s virtual background detection struggles to detect black faces compared with its accuracy when detecting a white face, or objects closer to what it thinks a white face looks like, such as a globe in the background of one of the images.

After sharing his discovery, he then noticed that Twitter was cropping the image in most mobile previews to show his face over his colleague’s, even after flipping the image. Following this discovery, people started testing a multitude of different examples, mainly gender- and race-based. Twitter’s preview algorithm would choose males over females, and white faces over black faces.

Exam Monitoring

Recently, due to coronavirus, it has become more prevalent for institutions like universities to utilise facial recognition in exam software that aims to ensure you’re not cheating. Some consider it invasive and discriminatory, and recently it has caused controversy through poor recognition of people of colour.

To ensure ExamSoft’s test monitoring software doesn’t raise red flags, people were told to sit directly in front of a light source. With many facing this issue more often due to the coronavirus pandemic, this is yet another common hurdle that needs to be solved urgently in the realm of ML and AI.

Wrongfully Imprisoned

On 24th June 2020, the New York Times reported on Robert Julian-Borchak Williams, who had been wrongfully arrested because of an algorithm. Mr Williams had received a call from the Detroit Police Department, which he initially believed to be a prank. However, just an hour later, Mr Williams was arrested.

The felony warrant was for a theft committed at an upmarket store in Detroit, which Mr Williams and his wife had checked out when it first opened.

This may be one of the first known accounts of a wrongful arrest resulting from a poor facial recognition match, but it certainly wasn’t the last.

Trustworthy AI According to the AI HLEG

There are three key factors that contribute to a trustworthy AI according to the AI HLEG (High-Level Expert Group on Artificial Intelligence, created by the EU Commission). These are:

  1. It should be lawful, complying with all applicable laws and regulations;
  2. It should be ethical, ensuring adherence to ethical principles and values; and
  3. It should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm.

These rules need to be enforced throughout the algorithm’s lifecycle, since continued learning can alter outputs in ways that conflict with these key factors. How often you re-evaluate the algorithm should ideally be determined by the volume of supervised and unsupervised learning it undergoes over a given timescale.

If you are creating a model, whether it’s to evaluate credit scores or perform facial recognition, its trustworthiness should be evaluated. There are no current laws governing this maintenance and assurance – it is down to the company, or model owner, to assure lawfulness.

How Can a Company/Individual Combat This?

By following a pre-decided set of guidelines continuously and confidently, you can ensure that you, as a company/individual, are actively combatting unwanted bias. It is recommended to stay ahead of the curve in upcoming technology, whilst simultaneously thinking about potential issues with ethics and legality.

By using an algorithm with these shortfalls, you will inevitably repeat mistakes that have already been made. There are a few steps you can go through to ensure your algorithm doesn’t exhibit the aforementioned biases (a minimal fairness check is sketched after this list):

  1. Train – your algorithm to the best of your ability with a reliable dataset.
  2. Test – thoroughly to ensure there is no unwanted bias in the algorithm.
  3. Assess – the test results to determine the next steps that need to be taken.
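As a minimal example of the “Test” step, assuming a binary classifier and two groups of interest, you can compare positive-prediction rates across groups (a demographic parity check); the data and tolerance below are invented:

```python
# Minimal 'Test' step: demographic parity difference between two groups.
# Predictions and group labels are invented; a large gap flags unwanted bias.
import numpy as np

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # model decisions
group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rate_a = preds[group == "a"].mean()
rate_b = preds[group == "b"].mean()
gap = abs(rate_a - rate_b)

print(f"positive rate A={rate_a:.2f}, B={rate_b:.2f}, gap={gap:.2f}")
if gap > 0.1:  # illustrative tolerance; choose per context and regulation
    print("Assess: investigate training data and features before deployment")
```

Demographic parity is only one of several fairness metrics; which one is appropriate depends on the domain and the harm being guarded against.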

Companies that utilise algorithms, or even pioneer new tech, need to consider any potential new issues with ethics and legality, to ensure no one is harmed.

“We can only see a short distance ahead, but we can see plenty there that needs to be done.”

– A. Turing