2022-02-09 Error generated by bad data returned by GMC during the hourly recommendation status check.

Date

Feb 9, 2022

Authors

@Cai Willis @Doris.Wong @Jayanta Saha @Joseph (Pepe) Kelly @Adewale Adekoya

Status

Resolved

Summary

https://hee-tis.atlassian.net/browse/TIS21-2653

Impact

A large number of error logs were generated, and ~5 minutes of downtime of the recommendations service was required. The application had been becoming slow beforehand due to the message processing load.

Non-technical Description

A large number of errors were being generated during the hourly recommendation status check (which is used to synchronise recommendation status in GMC with the revalidation app). A single doctor (GMC No: 7064627) had been returned with a deferral reason but no sub reason, which our application failed to handle gracefully.

The logs suggested that a large number of snapshots were being generated, which could have been adding many unnecessary records to the database, so quick action was taken that required a reboot of the recommendations app. Thankfully, this turned out to be a red herring.

Since then, further changes have been made to the application to improve resilience and prevent this sort of issue in the future:

  • We have prevented erroneous updates from endlessly retrying

  • We have improved logging and error handling

  • We have improved the handling of null deferral reasons and sub reasons


Trigger

By default, any message that generates an error during processing is requeued; if this is not handled properly, the message will be requeued endlessly, generating error logs.

In addition, poor handling of null values in deferral sub reasons caused this error in the first place.
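
The default can be overridden so that a message which throws an exception is rejected rather than requeued. Below is a minimal sketch, assuming the service uses Spring AMQP's @RabbitListener (consistent with the "Execution of Rabbit message listener failed" log lines); the class, method and payload shown are illustrative, not the actual recommendation service code.

    import org.springframework.amqp.AmqpRejectAndDontRequeueException;
    import org.springframework.amqp.rabbit.annotation.RabbitListener;
    import org.springframework.stereotype.Component;

    @Component
    public class RecommendationStatusCheckListener {

      @RabbitListener(queues = "reval.queue.recommendationstatuscheck.updated.recommendation")
      public void receiveMessage(String payload) {
        try {
          handleStatusUpdate(payload);
        } catch (Exception e) {
          // Rejecting without requeue breaks the endless redelivery loop.
          // Alternatively, default-requeue-rejected can be set to false on
          // the listener container factory.
          throw new AmqpRejectAndDontRequeueException("Failed to process status check update", e);
        }
      }

      private void handleStatusUpdate(String payload) {
        // Illustrative processing step; the real service maps the payload
        // to a recommendation status update.
      }
    }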

Detection

  • Our error notification system (Sentry) reported an error at 11pm the previous night.

    • Further investigation showed this error to have occurred multiple times since 4pm the previous day.

  • The awslogs-prod-tis-revalidation-recommendation logs had grown to ~1.2GB; we were seeing many "Execution of Rabbit message listener failed ... Caused by: java.lang.NullPointerException" errors.

  • Investigation in the RabbitMQ management console revealed a single, endlessly requeuing message.


Resolution

The initial quick patch was to purge this message from the queue (this required temporarily stopping the recommendation service in order to successfully catch the message to be purged).
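
A queue can be purged from the RabbitMQ management console or programmatically. As an illustration only (not necessarily how the purge was performed in this incident), Spring AMQP's RabbitAdmin exposes a purge operation; the connection details below are placeholders.

    import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
    import org.springframework.amqp.rabbit.core.RabbitAdmin;

    public class QueuePurger {

      public static void main(String[] args) {
        // Placeholder connection details.
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
        RabbitAdmin admin = new RabbitAdmin(connectionFactory);
        // Drop all messages currently on the queue, including the endlessly
        // requeuing one.
        admin.purgeQueue("reval.queue.recommendationstatuscheck.updated.recommendation", false);
        connectionFactory.destroy();
      }
    }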

The second step was an update to the recommendations app to prevent requeuing of error-generating messages; this was done quickly to prevent the next hourly job from starting the same problem all over again.

The third step was improving the logging and error messages around messages received by the recommendation service.

The final step was improving the handling of null deferral reasons and sub reasons; this was the root cause.
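
As an illustration of the kind of defensive handling added, the sketch below maps a deferral reason and sub reason that may each be null; the class and method names are hypothetical, not the actual recommendation service code.

    import java.util.Optional;

    public class DeferralReasonMapper {

      // A doctor's record can come back from GMC with a deferral reason but
      // no sub reason (the case behind this incident), so neither value is
      // dereferenced directly.
      public String describe(String deferralReason, String deferralSubReason) {
        String reason = Optional.ofNullable(deferralReason).orElse("UNKNOWN");
        String subReason = Optional.ofNullable(deferralSubReason).orElse("NONE");
        return reason + " / " + subReason;
      }
    }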


Timeline

  • Feb 9, 2022 9:03 - @Cai Willis reported the errors

  • Feb 9, 2022 9:35 - Investigation started

  • Feb 9, 2022 9:40 - Issue reported to users and Recommendation paused for 5 minutes

  • Feb 9, 2022 9:50 - Temporary fix made

  • Feb 9, 2022 9:50 - Comms sent to users that recommendation was back

  • Feb 9, 2022 10:40 - Preventative measure deployed to recommendation service (prevent requeuing)

  • Feb 9, 2022 12:05 - Likely root cause discovered

  • Feb 10, 2022 13:15 - Root cause solution deployed to production environment


Root Cause(s)

  • Some unexpected data arrived on the reval.queue.recommendationstatuscheck.updated.recommendation queue

  • Poor handling of null values in deferral reasons

  • Default behaviour of requeuing messages when an exception is thrown


Action Items

  • https://hee-tis.atlassian.net/browse/TIS21-2653 (Owner: Cai Willis)


Lessons Learned

  • Increased monitoring (e.g. Sentry Slack integrations) enabled us to solve this issue very quickly

  • Care needs to be taken when using message brokers to handle exception-generating messages appropriately

  • There are still some areas of the Reval Recommendation service that need improvement in terms of robustness and best practices