2022-03-08: Hack day – Open house (Chatbot and Data wrangling)

We decided a Hack day was long overdue! Hack days are great opportunities to do a bunch of things:

  • practice collaboration outside the normal group of people you collaborate with

  • test your specialist skills, or, conversely, test your non-specialist skills

  • challenge a group of people that may not always work together to plan a time-bound task

  • examine how the team approaches that task - Agile / UCD
    (e.g. plan an hour, reconvene, check with client, plan another hour, reconvene, check with client etc /
    research, clarify context, confirm problem statement, hypothesise and experiment to a PoC)

  • maintain focus on one task for a whole day

2022-03-08_TISTeamHackDay.pptx

 

Lean coffee 'ideation' session

We used a Mural board to give everyone the opportunity to present any ideas they had, which the group could then vote on to pick the most popular subject for a day of teamwork.

Ideas put forward were attributed to their proposer.

Dot-voting

  • on the range of ideas (using a neat in-built Mural function).

  • narrowed the list of ideas down to 5 (all receiving 3 votes each).

  • followed by sticking names next to the idea each individual wanted to work on during the day.

  • for the most voted on ideas, the proposer of the idea became the de facto ‘Client’ for that idea. They were still allowed to work on their idea, but would be the first port of call for anyone who had questions about the idea, and could be the user to bounce off when it came to experimentation.

Split into teams

…with some suggestions of things to think about

  • Split the day into morning (10:45-12:15) and afternoon (13:00-15:30) sections, with AndyN checking in to see if anyone needed any help at the start of each session and from time to time throughout.

  • Both teams began by asking the ‘Client’ to provide more context to the idea so that the team were all on the same page.

 

Questions to consider answering about how the day went

 

Team Alpha: Chatbot

Intro

Team Alpha had a quick conversation with @Nazia AKHTAR, as she was the client for this task. The client reported the following problem statement:

There is an expectation that, as we transition the Form R/ARCP process into TIS Self-Service, there will be a number of queries from trainees regarding Form Rs, the ARCP process and personal data. Historically, trainees would email admin staff or visit the regional websites for information, which adds delay and is time-consuming. The aspiration is to give trainees rapid responses in one centralised location and to reduce manual query handling by admins. This will also improve engagement with trainees and build confidence in the system.

Hypotheses

The following hypotheses¹ were discussed. The client articulated that the proposed system should handle:

  • “Categories of enquiries”,

  • “Doctors’ enquiries on personal data from ESR”,

  • “Capturing questions when admins are not able to answer instantly”,

  • “Some queries on the ARCP process”, etc.

@Rob Pink mentioned that HEE has a chat system which supports Specialty recruitment for junior doctors and others, e.g. how they can apply for a job. He also mentioned e-Learning for Healthcare, which provides a word search.

 

Notes:

¹ Until users are able to confirm anything an Agile team does, the things we create experiments against remain “hypotheses” that we need to work up and test with users. The word “requirements” indicates that the user has already told us this is what they want and that we have validated it.

Experiment outline

With some concepts in mind, the team considered a few things in designing the chatbot. Some of them are stated below:

  • Capturing inaccuracy of data

  • Traffic of email

  • Less delay – saving time

  • Responses from Admin

  • Conversation tree – add that in the future

  • Instant answer

  • Reporting mechanism

  • How intelligent should the chatbot be?

  • FAQ

Given the above characteristics, the team researched online for hints while design and implementation discussions continued. @Andy Dingley came up with the idea of implementing Amazon Lex in our TSS. We realised we had only a 1.5-hour sprint/iteration to come up with the PoC.
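Several of the characteristics above (instant answers, an FAQ, and capturing questions that admins can’t answer straight away) can be sketched without any managed service at all. The following is a minimal, hypothetical TypeScript sketch of that idea – all names and FAQ content are illustrative, not taken from the PoC:

```typescript
// Minimal FAQ matcher: scores each entry by keyword overlap with the
// user's question, and captures unanswered questions for admins to
// review later. All names and content here are illustrative only.
interface FaqEntry {
  keywords: string[];
  answer: string;
}

const faq: FaqEntry[] = [
  { keywords: ["form", "r", "submit"], answer: "Submit your Form R via TIS Self-Service." },
  { keywords: ["arcp", "outcome"], answer: "ARCP outcomes are recorded by your local office." },
];

const unanswered: string[] = []; // queue for admins to review later

function answer(question: string): string {
  const words = question.toLowerCase().split(/\W+/);
  let best: FaqEntry | undefined;
  let bestScore = 0;
  for (const entry of faq) {
    // score = how many of this entry's keywords appear in the question
    const score = entry.keywords.filter((k) => words.includes(k)).length;
    if (score > bestScore) {
      bestScore = score;
      best = entry;
    }
  }
  if (!best) {
    unanswered.push(question); // no instant answer: capture for admins
    return "Sorry, we'll pass this on to an admin.";
  }
  return best.answer;
}
```

A real chatbot would replace the keyword scoring with proper intent recognition (which is exactly what Lex provides), but the fallback queue is the “capturing questions while admins can’t answer instantly” idea in miniature.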

Experiment detail

We also had a quick whiteboard session in which we agreed on the tech stack and the rough architecture for the system.

We had only a 30-minute lunch break, and divided the dev team into front-end and back-end parts. AndyD and Jay looked into Amazon Lex, while @john o and @Yafang Deng concentrated on the front end with Amplify.

Given the time considerations, we adopted a ‘move fast and break things’ approach to the implementation. This involved the two sub-teams (FE/ BE) spending about 45 mins on each side of the tech stack to see how far they could implement things and then reconvening to discuss any blockers and work on connecting the Amplify chatbot (FE) with Lex (BE).

At first we started with the Amazon Lex V2 console – the new version, which has a friendlier UI and supports interactive conversation flows. We added a new bot and some new intents. The AWS console gave us built-in test functionality to try out the configured utterances (https://docs.aws.amazon.com/lexv2/latest/dg/build-test.html).

Screenshot of the console test dialog

After testing was done via the console, we decided the next step was to Terraform the Amazon Lex config and integrate the bot in the trainee-ui frontend.

Terraforming
The Terraform resources for Amazon Lex are fairly simple, but did not map well to what we had configured in the AWS console. Attempts to import the existing resources to the Terraform state also failed, with an error that the bot could not be found. The combination of these two issues made initial progress on Terraforming slow, with a lot of assumptions made on values to use as we were unable to make any direct comparisons. Once the Terraform resources were set up, they were applied successfully but no bot was found in the AWS console. It was at this point that we realised that Terraform was creating V1 resources only, with no alternative Terraform provider available for V2. At this point we agreed to ditch Terraform and set the V2 bot up in the AWS console.

Front-End Integration
We had already used Amplify to integrate Amazon Cognito, so we reached for Amplify again here. The first challenge of the front-end integration was setting up authentication for the bot: we spent some time working out that we needed a new identity pool, configured to provide access to the Amazon Lex chatbot. With authentication solved, when we tried testing the bot from trainee-ui we always got an “Invalid bot name or alias” error. We tried various configuration tweaks but couldn’t find what was wrong; then @Andy Dingley realised this could be a version incompatibility and found this doc: https://www.npmjs.com/package/@thefat32/aws-amplify-lex-provider-v2. We tried the third-party provider but still had issues configuring the V2 bot, at which point we decided to use a V1 bot instead.

A return to Terraforming
Now that we knew Lex V1 was the way forward we could use the existing Terraform configuration to spin up a basic bot with a single question.

Back to the Front-End
With a V1 bot available we reverted the attempts to integrate a V2 bot and finally, the chatbot worked!

Screenshot of the TIS Self-Service implementation:

Adding initial data
Now we had a working solution, we added as many questions and answers (via Terraform) as possible in the time we had left; as this was now limited, we focused on a set of Form R-related questions only.

We ran a live test at the end of the Hack day, with Naz playing the client – and the tool worked seamlessly!

What went wrong with V2?
Amazon Lex V2 was only recently released (Jan 2022); as a result, much of the supporting tooling, such as the Terraform provider and the Amplify library, had not been updated to support it.
The AWS console defaults to the V2 console, but it is not immediately clear that this is the console for V2 of the service, as opposed to V2 of the UI for the unversioned Amazon Lex service. As a result we didn’t notice the difference until we reached the point of “the tooling can’t be this bad, there must be something we’re missing!”.

Team One Data Wranglers!

Intro

Team Data Wranglers looked at…

…the issue of whether TIS data could be used as a predictor of poor trainee assessment outcomes. The objective would be to flag-up 'at-risk' trainees to help to target local or national supportive interventions.

Problem exploration

Being a very broad objective, the team spent most of the morning in a huddle call devoted to exploring the problem space:

  • expanding our understanding of the assessment process and outcomes,

  • the implications for HEE of trainees taking longer to complete their training than expected,

  • existing research on risk factors for trainee progression (e.g. https://www.gmc-uk.org/-/media/documents/2016_04_28_FairPathwaysFinalReport.pdf_66939685.pdf),

  • the tools and statistical frameworks one might use to tackle the problem (e.g. Amazon Sagemaker),

  • ethical concerns around trainee profiling,

  • other non-TIS sources of data that could be diagnostic (e.g. ePortfolios), and

  • how one might interface with existing support programmes such as Doctors in Difficulty.

Morning progress

Participation in the call was good (even ‘the client’ pitched in), but it became apparent that it was unlikely we would be able to deliver any actual ‘work’ (i.e. a product or PoC) during a one-day session. As energy levels flagged, we broke for a 30-minute lunch with a view to consolidating our objectives for the day thereafter.

 

Perhaps at some point during this ‘discovery’ phase it would have been useful to call a time-out and think about one or more of the following:

  • vertical slicing, so you could work on some ‘build’ in the afternoon

  • what you could sketch out diagrammatically to show a potential approach (what elements of TIS data / e-Portfolio data could be used to help with the predictions, for example)

  • characteristics of Sagemaker that you could explore conceptually in the ‘build’ work down the line (kind of a Sagemaker Spike)

  • deeper dive into the ethical concerns and how you could mitigate these

  • build out the stakeholder map for collaborators - WBID, team that deals with Doctors in Difficulty, e-Portfolio teams (like Horus - is there any PoC we could work up with them, ahead of engaging with the wider e-Portfolios owned by the Royal Colleges?)

Afternoon progress

 

After lunch, we decided that non-TIS data might be key to providing trainee-specific risk assessments, while TIS data could provide a baseline of general risk pertaining to e.g. particular programmes/specialties. This suggested two avenues of work:

  • to model general trainee risk using TIS data, and

  • to look at incorporating ePortfolio data into TIS, to make it more readily useable for more detailed modelling in future.

 

At that point, @Steven Howard made contact with the London Workforce Planning & Business Intelligence Directorate, and obtained a very useful presentation on their own initiative in exactly this area. Reviewing and discussing that work consumed much of the remaining time, but gave rise to another avenue of work:

  • to collaborate with LWP&BID

    • to investigate why the project had perhaps stalled

    • to see which areas we could help advance

    • to apply the modelling framework to Outcome 3s (training extensions), as the existing work has focused on Outcome 4s where trainees leave training altogether.

 

Overall, it was a very useful learning experience for the group. Targeting a broad objective, of which we had limited knowledge at the start, it was perhaps to be expected that we would not be able to deliver anything more concrete than the avenues of work to explore indicated above. However, without the galvanising vision of a specific product to work towards, we were perhaps more in need of a process champion to keep us focused in the latter parts of the day than the other group.

Hack day Retro

Hack day Retros are best carried out at the end of a Hack day, while everything is still fresh in everyone’s minds.

But it had been a first introduction to Hack days for many participants, and a long day accordingly, so on @Reuben Roberts's suggestion, we hijacked the scheduled Team Sharing two days after the Hack day for an ad-hoc Hack day Retro instead.

Not content with his usual technique of asking two questions in one sentence, AndyN posed every question he could think of, to tease out as many lessons as possible from every conceivable perspective!

 

Then asked the team to post-it up any thoughts they had, using these questions as prompts.

Then ran a card-sorting exercise to group similar observations together.

Finally, opened the floor for a discussion on each of the tickets in each of the groupings.


Discussion

UCD / Agile / Service Manual

A quick summary of the context around the initial idea can turn a solution back into a Problem Statement.

Even a small amount of planning helps structure the day – one day is a surprisingly short time in which to achieve any outcome.

Not all ideas on a Hack day necessarily need to adopt the UCD / Agile / Service Manual approaches.

Observations

Identifying elements of what needs to be done is easier than working out what order to do them in.

Felt like more MDT working than our normal day-job.

Difficulties

Involving everyone wasn’t easy.

AndyN hadn’t given enough notice for everyone to clear their diaries, which caused disruption – not a stable team.

A smaller team size would probably have worked better.

Struggled to break down the scope of the ideas.

Some ideas are too big for a working software outcome. What alternatives are there to that?

Got caught up discussing - no time for doing.

Didn’t discuss enough breadth of solutions.

Some team structure would have been valuable.

Follow ups

  1. Review ideas - add to backlog / save for future Retros / discard?

  2. Take chatbot idea and run with it - present at Review, with a cross-reference to having had a Hack day.

  3. Hack days are safe spaces to conceive experiments against problems, build them out and have fun. Do we need to review any outputs before presenting relevant ones to users, or do we present the raw outputs in the context of this is a PoC created during a Hack day?

Next time

  1. Clearly decide at the start whether the problem statement your team is working on lends itself to UCD, Agile, etc.

  2. More rapid planning at the start of the day to get a clearer course of action from the start.

  3. Consider JFDI-ing the 1st solution to gain insight that informs how you modify it and/or develop other solutions to the problem statement.