On Friday 3rd April, the development team came together for the first step of the migration to AWS. As there was a little global pandemic happening, we met virtually: the conversation was facilitated via MS Teams, with aww used to sketch out diagrams and notes.

It's been known for some time now that the current cloud provider (Microsoft Azure) is not suited to what we need or want. The speed at which infrastructure is created and destroyed is a problem, features we consider must-haves are missing, and the documentation and support are poor, to say the least.

We've been given approval to use AWS in place of Azure, as it's more mature, has the features we need, and is a leader in the space.

The plan for this session was to understand what we currently have in Azure, along with its dependencies. This gives us some transparency on what would be required in AWS to provide the same sort of features, as well as some appreciation of the size of the task at hand.

What we currently have

The following is what was drawn up during the session.

Apologies for the low-resolution image! aww didn't export the file in high resolution.

Below is a description of the current Azure system.

  • The applications are currently deployed to 2 virtual machines (blue/green) per environment (dev, staging, production)

    • These app servers host the Docker containers for TIS and run the Apache service that reverse-proxies requests to those containers (a sketch of the vhost config follows this list)

    • Various other environments have additional VMs for branch-based testing (Pink)

  • Storing the data for these applications is a single database server (per environment) hosting MySQL. This server also runs Maxwell's daemon as a Docker container to provide CDC (change data capture) forwarding to RabbitMQ; see the Maxwell sketch after this list

  • MongoDB is also used to store information for the ESR integration system. This is currently a three-node Docker setup on one VM, with the idea of moving to 3 separate VMs (see the replica-set sketch after this list)

  • There are a number of Elasticsearch instances of different versions

  • RabbitMQ runs on a cluster of 3 VMs per environment, deployed as containers, with another container hosting the management web console

  • We have a build server that currently hosts Jenkins, SonarQube and Metabase

  • An N3 bridge, hosted by IT, which allows us to connect to the wider NHS network (ESR)

  • A jumpbox (bastion server) to allow SSH connectivity

  • A VM hosting monitoring tools such as Grafana and Prometheus

  • An integration environment (single VM) used to spin up ESR and test it end-to-end (E2E)
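
For reference, the Apache reverse proxying mentioned above boils down to a vhost along these lines. This is a minimal sketch rather than our actual config: the hostname and container port are illustrative, and it assumes mod_proxy and mod_proxy_http are enabled.

    # Minimal sketch - hostname and port are illustrative, not our real values
    # Assumes mod_proxy and mod_proxy_http are loaded
    <VirtualHost *:80>
        ServerName tis.example.nhs.uk
        ProxyPreserveHost On
        ProxyPass        / http://127.0.0.1:8080/
        ProxyPassReverse / http://127.0.0.1:8080/
    </VirtualHost>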
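
The Maxwell's daemon container on the database server amounts to something like the run command below. Again a sketch, not our real setup: the hostnames, credentials and exchange name are placeholders, and it assumes MySQL already has row-based binlogs enabled and a user with replication privileges.

    # Sketch only - hosts, credentials and exchange name are placeholders
    # Requires binlog_format=ROW on the MySQL server
    docker run -d --name maxwell zendesk/maxwell bin/maxwell \
      --host=mysql.internal --user=maxwell --password='***' \
      --producer=rabbitmq \
      --rabbitmq_host=rabbitmq.internal \
      --rabbitmq_exchange=maxwell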
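
On the MongoDB point, moving from three containers on one VM to three separate VMs is largely the same replica-set setup with different hostnames. A rough sketch, assuming three nodes that can resolve each other by name (all names illustrative):

    # Each node runs mongod with the same replica set name
    docker run -d --name mongo1 --network tis mongo:4 --replSet rs0
    # ... likewise mongo2 and mongo3, then initiate once from any node:
    docker exec -it mongo1 mongo --eval 'rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "mongo1:27017" },
        { _id: 1, host: "mongo2:27017" },
        { _id: 2, host: "mongo3:27017" }
      ]
    })'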

Applications/Services

The following is a list of managed services used in Azure as well as other HEE applications

  • Azure VM’s

  • Azure Blob store

  • Azure container registry

  • MS SQL for the NDW

  • Managed disks (VMs)

  • Data disk snapshots

  • TIS (Profile, Reference, TCS, Admins UI, Generic Upload, GMC connect, Keycloak, Notifications, Reval, Concerns, User management, Assessments, Service status)

  • ESR Integration (Inbound data reader, reconciliation, app record generator, data writer, inbound data writer, notification generator, audit, neo audit)

Downstream

Various downstream systems/products

  • A number of ETLs, mainly for the NDW

  • GMC (requires whitelisting)

  • NDW

Going forward

Questions/unknowns/concerns
