Date

Authors

Andy Nash (Unlicensed), Joseph (Pepe) Kelly, John Simmons (Deactivated), Andy Dingley, Simon Meredith (Unlicensed)

Status

In progress

Summary

On Friday evening we saw Jenkins struggling; it then fell over, subsequently causing many other weekend scheduled jobs to fail

Impact

No Stage. No Prod. No data syncing in various places

...

  • 2020-10-16 06:33 (question) ESR Data exporter triggered a build of outstanding PRs (resulting from Dependabot)

  • 2020-10-17 06:00 ESR n-d-l cron job didn’t start - manually kicked by Paul at 10:08, exited at 10:09

  • 2020-10-17 06:00 ESR ETL (question)

  • 2020-10-17 08:48 (question) (see Prometheus graph below)

  • 2020-10-17 09:27 (question) (see Prometheus graph below)

  • 2020-10-17 10:23 D/B Prod/Stage sync started but never completed

  • 2020-10-17 10:23 NDW ETL: Stage (PaaS) failed

  • 2020-10-18 02:42 ESR Sentry errors x 7 (reappearance of the same issue across all services)

  • 2020-10-18 07:37 TCS ES sync job failed to run/complete on either blue or green servers

  • 2020-10-18 10:23 NDW ETL: Stage (PaaS) failed

  • 2020-10-18 10:25 NDW ETL: Stage (current) failed

  • 2020-10-19 01:29 TCS ES Person sync job failed (None of the configured nodes were available)

  • 2020-10-19 07:46 Users started reporting problems using Search on Prod

  • 2020-10-19 07:54 (question) (see Prometheus graph below)

  • 2020-10-19 08:17 (question) (see Prometheus graph below)

  • 2020-10-19 08:59 The Search problems reported on Prod had been resolved

  • 2020-10-19 10:35 (question) (see Prometheus graph below)

  • 2020-10-19 (question) Massive Sentry hit on ESR, using up our entire monthly allocation

  • 2020-10-20 07:30 Person Placement Employing Body Trust job failed to run/complete on either blue or green servers

Prime Timeline - according to the monitoring channel

  • 2020-10-16 (Friday) 07:18 Staging RabbitMQ node 2 down

  • 2020-10-16 (Friday) 07:38 Prod ES node 3 down

  • 2020-10-16 (Friday) 07:38 Prod ES node 1 & 2 down - additional alert of too few nodes running - at this point, Prod person search would not have been working

  • 2020-10-16 (Friday) 07:58 Staging ES node 2 down

  • 2020-10-16 (Friday) 08:43 Phil W asks what this all means; Phil J summarises

  • 2020-10-16 (Friday) 12:03 Old concerns on green & blue Stage go down

  • 2020-10-16 (Friday) 16:38 Jenkins goes down

  • The same alerts continue over the weekend, and ETL failures occur because ES is down

  • 2020-10-17 (Saturday) 01:13 high messages in RabbitMQ Prod

  • 2020-10-17 (Saturday) 01:28 high messages in RabbitMQ Staging

  • 2020-10-17 (Saturday) 07:08 Staging ES node 2 down, Prod RabbitMQ node 3 down

  • 2020-10-17 (Saturday) 07:18 Staging ES node 1 & 3 down

  • 2020-10-17 (Saturday) 07:33 Staging RabbitMQ node 1 & 3 down

  • 2020-10-17 (Saturday) 07:43 Prod Mongo goes down

  • (Not going through any more alerts individually - everything is broken at this point)

...

Everything fell over

  1. Jenkins

  2. ESR containers taking up all the resources

  3. Too many (Dependabot) PRs outstanding, building, rebasing

  4. ESR did not have time to action them because of the launch of the New World code

  5. What else? (double-check VMs, logs, etc.)

Discussion, along with short and longer term actions

What can we do about Dependabot creating and building simultaneously?

Dependabot does run sequentially, but much faster than Jenkins can process things, so everything appears concurrent.

We could get Dependabot to add a GitHub label to the PR, and add something to the Jenkinsfile to read the label and skip the build (“Don’t run”).
But this stops Dependabot being useful.
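
One possible shape for this (a minimal sketch, assuming a declarative multibranch pipeline; the stage name and shell command are illustrative, not our actual Jenkinsfile): rather than a label, the gate could key off the PR’s source branch, which the GitHub Branch Source plugin exposes as env.CHANGE_BRANCH and which Dependabot always prefixes with dependabot/. If we do go the label route, the same when block could read the label instead.

```groovy
pipeline {
    agent any
    stages {
        stage('Integration tests') {
            when {
                // Skip the heavyweight stage for Dependabot PRs - their source
                // branches always start with "dependabot/". env.CHANGE_BRANCH is
                // only set on PR builds, hence the null-safe default.
                not { expression { (env.CHANGE_BRANCH ?: '').startsWith('dependabot/') } }
            }
            steps {
                sh './run-integration-tests.sh' // hypothetical wrapper script
            }
        }
    }
}
```

The trade-off noted above still applies: whatever we skip for Dependabot PRs is verification those PRs don’t get before merge.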

ESR preoccupied with launching New World, understandably!

Can the perm team keep on top of the ESR stuff when the ESR team leave?

Even when keeping on top of things, will it eventually be too much anyway?
Or is it simply a case of the team not controlling the overall number of open PRs?

The original Jenkins build was never designed to handle this much load - the underlying architecture isn’t there for the level of automation we now have. It is designed for a single node, not for load-balancing.

Is Jenkins the right tool for everything it’s being asked to do? No:

  1. Bump up the Jenkins RAM to 32GB (short term ONLY). Add a reminder to revisit this in 1-2 months?

  2. Disable integration tests on ESR projects for the PR pipeline (they’d still run on merge to master, rather than on each PR) - see the pipeline sketch after this list. These are what fire up the local stack and test containers. (Hold back on…'if we’re not planning to do any further ESR work once Leeerrroooy leave, we could just disable the integration tests in ESR')

  3. Close outstanding ESR PRs - how many is ‘critical mass’? But without being blasé about approving PRs

  4. Restrict the number of PRs Dependabot opens on each ESR project to 1 (but given they’re microservices, the total could still be a big number). Not much of a concern if we do 2. above.

  5. The ElasticSearch nightly sync shouldn’t be necessary. Verify that ElasticSearch is being updated properly during the day.

  6. Move ETLs over to ECS tasks (serverless ‘run container’ instructions to AWS - not reliant on our infrastructure).
    This would remove the dependency on Jenkins - so if it went down, the jobs could continue.
    Don’t use Jenkins for scheduled jobs / anything with a timer - use a cron server for that instead.
    Just use Jenkins as a build server (Metabase also runs on Jenkins, but doesn’t use much)

  7. Ticket up addressing our infrastructure so that the setup ESR have created does run - it’s been done right!

  8. Get ourselves a dedicated Jenkins server (what size? (question))

  9. Move to ElasticSearch-as-a-Service
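
For item 2, a minimal sketch of ‘integration tests only on merge to master’ in a declarative Jenkinsfile (the stage names and shell commands are illustrative, not the real ESR pipeline):

```groovy
pipeline {
    agent any
    stages {
        stage('Build and unit tests') {
            steps {
                sh './build.sh'                     // illustrative - still runs on every PR
            }
        }
        stage('Integration tests') {
            // Only run the stage that fires up the local stack and test containers
            // once a change has actually been merged to master.
            when { branch 'master' }
            steps {
                sh './run-integration-tests.sh'     // illustrative wrapper script
            }
        }
    }
}
```

Combined with the Dependabot gate sketched earlier, PR builds would stay cheap while master still gets the full container-based test run.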

...