Relocating UK Public Sector to the Cloud
A recent guidance paper published by The Commission for Smart Government urges the UK Government to take action towards transforming public services into intrinsically digital services. The Commission advises the government to move all services to the cloud by 2023.
It is clear from the paper that strong leadership and digital understanding amongst decision makers is incredibly important. This is something I noted when writing this post on defining a cloud strategy for public sector organisations. The cloud strategy should set out how technology supports and delivers the overall organisational goals.
If implemented correctly, cloud computing can maximise security and business benefits, automating and streamlining many tasks that are currently manual and slow. Published by the National Cyber Security Centre in November 2020, the Security Benefits of Good Cloud Service whitepaper provides some great pointers that should be incorporated into any cloud migration strategy.
This article discusses how to achieve a common cloud infrastructure, focusing on brownfield environments where local government, and other public sector organisations like the NHS, need to address some of the challenges below.
- IT is rarely seen as delivering value to end users, citizens, patients, etc. Budgets are often being reduced while IT is asked to deliver more, faster. In general, people have higher expectations of technology and digital services. Smartphones are now just called phones. Internet-era companies like Amazon, Google, and Netflix provide instant access to products, services, and content. Consumer expectations have shifted and the bar has been raised for public services.
- IT staff are under pressure to maintain infrastructure hardware and software. There are more vulnerabilities being exposed, and more targeted cyber attacks, than ever before, which means constant security patching and fire-fighting. Ideally this pressure would also drive more systems to be architecturally reviewed and improved, but the reality is that most IT teams are still reacting. Running data centres comes with an incredible operational burden.
- Understanding new technologies well enough to implement them confidently requires time and experience. There are more options than ever for infrastructure: on-premises, in the cloud, at the edge, and managed services such as Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). Furthermore, applications are no longer just monolithic or 3-tier; they are becoming containerised, packaged, hybrid, and managed, as with Software-as-a-Service (SaaS). IT teams are expected to maintain and securely join up all these different services whilst repurposing existing investments in supporting software and technical knowledge.
- Business models are changing at pace; successful organisations are able to react quickly and make use of data to predict and understand their customers and consumers. The emergence of smart cities and smart hospitals can improve public services and enable cost savings, but these need to be delivered on a strong digital foundation with fast, reliable connectivity. This approach requires joined-up systems that share a secure, scalable, and resilient platform. Ideally, applications and data should be abstracted from the underlying infrastructure in a way that allows them to securely move or be redeployed with the same policies and user experience, regardless of the hardware or provider. Legacy hardware and older systems are mostly disjointed, built in silos, with single points of failure and either non-existent or expensive business continuity models.
- Innovation typically takes longer when the risk extends beyond monetary value. The ideas of agile development and fail-fast experimentation will naturally be challenged more for public-facing services. A 999 operator locating a specialist hospital for an ambulance response unit cannot afford unpredictability or instability because developers and engineers were failing fast. Neither can a family dependent on a welfare payment system. In environments where services are stable and reliable there is less appetite for change, even when other areas of the organisation are crying out for fast and flexible delivery.
Greater economic and technical benefits can be achieved at scale. Hyperscalers have access to cheaper commodity hardware and renewable energy sources. They are able to invest more in physical security and auditing. Infrastructure operations that are stood up and duplicated thousands of times over across the UK by individual public sector organisations can shift to the utility-based model of the cloud, freeing IT staff from fire-fighting and allowing them to focus on delivering quality digital services at speed.
There are seven widely accepted cloud migration strategies, commonly known as the 7 R’s. These are listed below with a particular focus on relocate. Whilst a brand new startup might go straight into a cloud-native architecture by deploying applications through microservices, organisations with existing customers and users have additional considerations. Migrating to the cloud will in most cases use more than one of the options below. Implementing the correct migration strategy for existing environments, alongside new cloud-native services, can reduce the desire for people to use shadow IT. Finding the right balance is about understanding the trade-off between risk, cost, time, and the core organisational drivers mentioned earlier.
- Retire. No longer needed – shut it down. Don’t know what it is – shut it down. This is a very real option for infrastructure teams hosting large numbers of Virtual Machines. VM sprawl that has built up over the years could surprise you; a simple inventory audit, like the sketch after this list, is a good starting point.
- Retain. Leaving on-premises. This doesn’t necessarily mean doing nothing. For the most part your existing applications should run in the cloud. The need for some applications to run closer to where data is generated and consumed has driven edge computing forward. Hardware advancements in areas like Hyper-Converged Infrastructure (HCI) enable high-performance computing in small single-socket footprints, or hardware that withstands higher operating temperatures for locations away from data centre cooling. The key is to maintain that common underlying infrastructure, enabling service deployment in the cloud or at the edge with consistent operations and technologies.
- Repurchase. For example changing an on-premises and self-maintained application to a SaaS alternative. This could be the same product in SaaS form, or a competitor. The main technical consideration now becomes connectivity and how the application is accessed. Focus is generally shifted away from the overall architecture of the application itself, and more into transitioning or onboarding users and importing data.
- Rehost. Changing a Virtual Machine to run on a different hypervisor. This could be a VMware or Hyper-V VM converted to run on a cloud provider’s hypervisor as a particular instance type. This can be relatively straightforward for small numbers of Virtual Machines, but consider other dependencies that will need building out, such as networking, security, load balancing, backups, and Disaster Recovery. Although not huge, this potential change in architecture adds more time, complexity, and risk as the size of the environment grows.
- Replatform. Tweaking elements of an application to run as a cloud service. This is often shifting from self-hosted to managed services, such as migrating a database from a VM with an Operating System to a managed database service. Replatform is a common approach for like-for-like infrastructure services such as databases and storage.
- Refactor. The big bang. Rearchitecting an entire application to run as a cloud-native app. This normally means rewriting source code from scratch using a microservices architecture or a serverless, function-based deployment. Infrastructure is deployed and maintained as code and can be stateless and portable. A desirable end state for modern applications.
- Relocate. Moving applications and Virtual Machines to a hyperscaler or cloud provider without changing network settings, dependencies, or the underlying VM file format and hypervisor. This results in a seamless transition without business disruption.
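To give a flavour of the audit that uncovers retirement candidates, here is a minimal Python sketch. It works over a hypothetical inventory export; the file name, column names, and thresholds are all assumptions, and an equivalent report could come from tools like RVTools or PowerCLI.

```python
import csv
from datetime import datetime, timedelta

# Hypothetical inventory export with columns: Name, PowerState, LastPowerOn, OwnerTag
INVENTORY_CSV = "vm_inventory.csv"
STALE_AFTER = timedelta(days=180)  # flag VMs untouched for six months

def retire_candidates(path):
    """Flag powered-off VMs that are stale or have no recorded owner."""
    candidates = []
    with open(path, newline="") as f:
        for vm in csv.DictReader(f):
            last_on = datetime.fromisoformat(vm["LastPowerOn"]) if vm["LastPowerOn"] else None
            stale = last_on is None or datetime.now() - last_on > STALE_AFTER
            unowned = not vm["OwnerTag"].strip()
            if vm["PowerState"] == "poweredOff" and (stale or unowned):
                candidates.append(vm["Name"])
    return candidates

if __name__ == "__main__":
    for name in retire_candidates(INVENTORY_CSV):
        print(f"Review for retirement: {name}")
```

Even a first pass like this often surfaces machines nobody can account for, which can then be reviewed and shut down before any migration effort is costed.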
Why Relocate Virtual Machines?
Relocating Virtual Machines is a great ‘lift-and-shift’ method for moving applications into the cloud. To get the most value out of this migration strategy it can be combined with one or more of the other approaches: generally replatforming some of the larger infrastructure components like database and file storage, or refactoring a part of an application, such as a component that is problematic, one that will provide a commercial or functional benefit, or one that improves the end user experience. By auditing the whole infrastructure and applying this blueprint we can strike the right balance between moving to the cloud and protecting existing services.
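As a rough illustration of how such an audit might map workloads to the 7 R’s, the sketch below applies a few first-pass rules to each application record. The attributes and thresholds are entirely hypothetical and would be replaced by an organisation’s own assessment criteria.

```python
from dataclasses import dataclass

@dataclass
class App:
    name: str
    in_use: bool          # still serving users?
    saas_available: bool  # credible SaaS alternative exists?
    vm_count: int         # size of the estate behind the app
    strategic: bool       # earmarked for investment and refactoring?

def suggest_strategy(app: App) -> str:
    """Very simplified first-pass mapping of an application to a migration strategy."""
    if not app.in_use:
        return "Retire"
    if app.saas_available:
        return "Repurchase"
    if app.strategic:
        return "Refactor"
    if app.vm_count > 50:
        return "Relocate"  # large estates move wholesale to avoid per-VM conversion risk
    return "Rehost"

print(suggest_strategy(App("planning-portal", in_use=True, saas_available=False,
                           vm_count=120, strategic=False)))  # -> Relocate
```

In practice the decision is rarely this mechanical, but encoding even coarse rules makes the trade-offs explicit and gives stakeholders something concrete to challenge.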
For existing VMware customers, VMware workloads can be moved to AWS (VMware Cloud on AWS), Azure (Azure VMware Solution), Google Cloud (Google Cloud VMware Engine), as well as IBM Cloud, Oracle Cloud, and UK-based VMware Cloud Provider Partners without changing the workload format or network settings. This provides the following benefits:
- Standardised software stack – A Software-Defined Data Centre (SDDC) that can be deployed across commodity hardware in public and private clouds or at the edge, creating a common software-based cloud infrastructure.
- Complete managed service – The hardware and software stack is managed from the infrastructure down, removing the operational overhead of patching, maintenance, troubleshooting, and failure remediation. Data centre tasks become automated workflows, allowing for on-demand scaling of compute and storage.
- Operational continuity – Retain skills and investment for managing applications and supporting software (backups, monitoring, security, etc.). This allows solution replacement and application refactoring to take place at a gradual pace and with lower risk, for example when contracts expire.
- Full data control – Everything from the Virtual Machine up is managed by the customer: security policies, data location (UK), and VM and application configuration, providing the best of both worlds. Cloud security guardrails can be implemented to standardise and enforce policies and prevent insecure configurations; a simple illustration follows this list. These same policies can extend into native cloud services and across different cloud providers using CloudHealth Secure State.
- Sensible transformation – Although the on-demand, subscription-based nature of many cloud services means a longer-term switch from capex investment to opex expenditure, dedicated hardware lease arrangements in solutions like those listed above can potentially be billed as capital costs. This gives finance teams time to adapt and change, along with the wider business culture and processes.
- Hybrid applications – Running applications that make use of native cloud services in conjunction with existing components, such as Virtual Machines and containers, supports a gradual refactoring process and de-risks the overall project.
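To show what a guardrail can look like in practice, here is a minimal policy-as-code sketch in Python. The rules and VM attributes are invented for illustration; a real deployment would use the policy engine built into the platform or a product like CloudHealth Secure State.

```python
# Each rule inspects a VM configuration dictionary and returns a
# violation message, or None if the configuration is compliant.
RULES = [
    lambda vm: "Data must stay in the UK" if vm.get("region") not in ("uk-south", "uk-west") else None,
    lambda vm: "Disk encryption required" if not vm.get("encrypted", False) else None,
    lambda vm: "Public IP not allowed" if vm.get("public_ip") else None,
]

def check_guardrails(vm):
    """Return all policy violations for a single VM configuration."""
    return [msg for rule in RULES if (msg := rule(vm)) is not None]

vm = {"name": "nhs-app-01", "region": "eu-west-1", "encrypted": True, "public_ip": False}
for violation in check_guardrails(vm):
    print(f"{vm['name']}: {violation}")  # -> nhs-app-01: Data must stay in the UK
```

Running checks like these continuously, rather than only at deployment time, is what turns a written policy into an enforced guardrail.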
To read more about the information available from the Government Digital Service and other UK sources see Helping Public Sector Organisations Define Cloud Strategy.
If you’re interested in seeing VMware workloads relocated to public cloud check out The Complete Guide to VMware Hybrid Cloud.
Featured image by Scott Webb on Unsplash