Overview
In order to provide a list of doctors for revalidation administrators to work with, we rely on a nightly bulk update of all connected doctors from the GMC.
The GMC provide a SOAP web service, GetDoctorsForDB (where DB stands for “Designated Body”), that returns the details of all doctors currently connected to a given designated body. This web service is used to collect data from the GMC and save it to a DocumentDB (where DB here stands for “database”) database.
...
The tis-revalidation-recommendation service (github link) is a Spring Boot application running as three load-balanced tasks in ECS. This service manages a scheduled job that kicks off the overnight sync process at midnight.
The cron expressions that determine the start time of the job are stored in Parameter Store for preprod and prod:
preprod: /tis/revalidation/preprod/recommendation/cron/nightlysync
prod: /tis/revalidation/prod/recommendation/cron/nightlysync
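As a rough illustration, the scheduled job could be wired up with Spring's `@Scheduled` support along these lines. This is a sketch only: the property name `nightly-sync-cron` is an assumption, and the real service resolves its cron expression from the Parameter Store paths above; note also that with three load-balanced tasks, some mechanism would be needed so that only one task actually fires the job.

```java
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Sketch only: the property name below is hypothetical; the real cron
// expression lives at the Parameter Store paths listed above.
@Component
public class NightlySyncJob {

    // "0 0 0 * * *" in Spring's six-field cron syntax fires at midnight.
    @Scheduled(cron = "${nightly-sync-cron}")
    public void run() {
        // 1. disconnect all doctors, then
        // 2. publish the gmcsync.requested message to RabbitMQ
    }
}
```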
When the scheduled job in tis-revalidation-recommendation (see 1.) has started, the first operation is to effectively disconnect all doctors by removing their designated bodies and setting the existsInGmc flag to false. As the GMC only send us connected doctors, this approach ensures that doctors that were disconnected externally show up as disconnected in our system.
This has the effect of “hiding” all of the doctors from the recommendations and current connections lists, so doctors cannot be worked on at this time.
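The “disconnect everything first” step can be illustrated in plain Java. The class and field names here are illustrative stand-ins; the real service performs a bulk update against the doctor documents in DocumentDB rather than iterating in memory.

```java
import java.util.List;

public class DisconnectSketch {

    // Illustrative stand-in for a doctor document; field names are assumptions.
    static class Doctor {
        String gmcNumber;
        String designatedBody;
        boolean existsInGmc;

        Doctor(String gmcNumber, String designatedBody, boolean existsInGmc) {
            this.gmcNumber = gmcNumber;
            this.designatedBody = designatedBody;
            this.existsInGmc = existsInGmc;
        }
    }

    // Mirrors the bulk update: every doctor loses its designated body and is
    // flagged as not existing in the GMC until the sync re-confirms it.
    static void disconnectAll(List<Doctor> doctors) {
        for (Doctor d : doctors) {
            d.designatedBody = null;
            d.existsInGmc = false;
        }
    }

    public static void main(String[] args) {
        List<Doctor> doctors = List.of(new Doctor("1234567", "ABC", true));
        disconnectAll(doctors);
        Doctor d = doctors.get(0);
        System.out.println(d.designatedBody + " " + d.existsInGmc); // null false
    }
}
```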
Note 1: these changes will be propagated to Elasticsearch via the CDC process at this point.
Note 2: The tis-revalidation-recommendation service serves double duty as the “revalidation doctor service” - this is an old architecture decision that hasn’t yet been seriously re-examined.

Once all of the doctors in our system have been “disconnected” as described above, a message is published to the main reval exchange in our RabbitMQ instance (hosted using AmazonMQ). The routing details are as follows:
```
exchange: reval.exchange
queue: reval.queue.gmcsync.requested.gmcclient
routingKey: reval.gmcsync.requested
```
This message is consumed by the tis-gmc-client (github link) service, a Spring Boot application running as a single task in ECS.

The tis-gmc-client service then sends a GetDoctorsForDB SOAP request to the GMC for each Designated Body Code stored in the following Parameter Store locations for each environment:
preprod: tis-revalidation-preprod-gmc-designated-bodies-codes
prod: tis-revalidation-prod-gmc-designated-bodies-codes
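The per-designated-body fan-out can be sketched as a simple loop. This assumes the Parameter Store value is a comma-separated list of codes (the actual delimiter is not stated here), and stands in a hypothetical `getDoctorsForDB` client as a function so the sketch stays self-contained.

```java
import java.util.List;
import java.util.function.Function;

public class DbcFanOutSketch {

    // For each designated body code in the (assumed comma-separated) parameter
    // value, call the hypothetical SOAP client and count the doctors returned.
    static int syncAll(String dbcParameter, Function<String, List<String>> getDoctorsForDB) {
        int total = 0;
        for (String dbc : dbcParameter.split(",")) {
            total += getDoctorsForDB.apply(dbc.trim()).size();
        }
        return total;
    }

    public static void main(String[] args) {
        // Stubbed client: pretend every designated body has two connected doctors.
        int total = syncAll("DBC1, DBC2", dbc -> List.of("doctorA", "doctorB"));
        System.out.println(total); // 4
    }
}
```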
The GMC’s GetDoctorsForDB endpoint returns a body of XML data containing the details of all the doctors currently connected to the given DB. This is processed into DTOs (Data Transfer Objects), and the details of each doctor are then individually published to RabbitMQ with the following routing details:

```
exchange: reval.exchange.gmcsync
queue: reval.queue.gmcsync.recommendation
routingKey: reval.gmcsync
```
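The XML-to-DTO step might look roughly like the following. The element names (`Doctor`, `GMCNumber`, `DesignatedBody`) are invented for illustration - the real shape is defined by the GMC’s WSDL - and the real service would publish each resulting DTO to RabbitMQ rather than just returning the list.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class GmcXmlSketch {

    // Minimal DTO; field names are assumptions for illustration only.
    record DoctorDto(String gmcNumber, String designatedBody) {}

    static List<DoctorDto> parse(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        List<DoctorDto> dtos = new ArrayList<>();
        NodeList nodes = doc.getElementsByTagName("Doctor");
        for (int i = 0; i < nodes.getLength(); i++) {
            Element e = (Element) nodes.item(i);
            dtos.add(new DoctorDto(
                    e.getElementsByTagName("GMCNumber").item(0).getTextContent(),
                    e.getElementsByTagName("DesignatedBody").item(0).getTextContent()));
        }
        // The real service would now publish each DTO individually to RabbitMQ.
        return dtos;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<Doctors><Doctor><GMCNumber>1234567</GMCNumber>"
                + "<DesignatedBody>ABC</DesignatedBody></Doctor></Doctors>";
        System.out.println(parse(xml).size()); // 1
    }
}
```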
The tis-revalidation-recommendation service consumes the messages from the reval.queue.gmcsync.recommendation queue and updates the doctor’s details stored in the DocumentDB database.
There are several things to note at this point:
- Any doctor returned from the GMC has the existsInGmc flag field set to true, and their designated body is updated.
- Any doctor not returned from the GMC remains in the database but disconnected (i.e. null designated body and the existsInGmc flag field set to false).
- The workflow fields that appear as the TIS Status and GMC Status columns in the recommendations doctors list are both updated here. There is logic to reset these statuses when a doctor comes under notice once more.
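The consumer-side update described above can be sketched as a small pure operation. Field and message names here are illustrative assumptions; the point is simply that a doctor mentioned in a sync message is reconnected, while doctors that never receive a message keep the disconnected state set at the start of the job.

```java
public class ReconnectSketch {

    // Illustrative doctor document; fields default to the "disconnected" state.
    static class Doctor {
        String designatedBody;   // null until a sync message arrives
        boolean existsInGmc;     // false until a sync message arrives
    }

    // Stand-in for the payload published by tis-gmc-client; structure assumed.
    record GmcDoctorMessage(String designatedBody) {}

    // A doctor returned from the GMC is reconnected: designated body restored
    // and existsInGmc set back to true.
    static void applyUpdate(Doctor doctor, GmcDoctorMessage msg) {
        doctor.designatedBody = msg.designatedBody();
        doctor.existsInGmc = true;
    }

    public static void main(String[] args) {
        Doctor d = new Doctor(); // starts disconnected, as after the bulk update
        applyUpdate(d, new GmcDoctorMessage("ABC"));
        System.out.println(d.designatedBody + " " + d.existsInGmc); // ABC true
    }
}
```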
As doctors are updated in the DocumentDB database, these changes are propagated to Elasticsearch via the CDC process.
Note: this is in itself a lengthy process; first the changes are propagated to the masterDoctorIndex, then they are published back to the recommendation service, as the recommendation service manages its own Elasticsearch index.
Note 2: The connections list is backed by the masterDoctorIndex, so there is no dedicated connections architecture for this process
...
In case of overnight GMC sync failure, instead of waiting for the job to re-run at the next midnight, we can re-run the job manually to reduce the downtime.
NOTE (18/08/2023): The above is not necessarily advisable, as the sync job takes far too long to complete. Relevant Spike ticket here:
(Jira ticket)
...