Transcript of "Best practice: how to build digital services that deliver for citizens and the business case"

ROBIN KNOWLES: Hello, hi everybody, and welcome to today’s talk. Your cameras and microphones are muted, but your chat and Q&A are live, down the bottom, so do please use them. We’ll be having a Q&A at the end, when there will be an opportunity to ask our speaker some questions. But without further delay: best practice, how to build digital services that deliver for citizens and the business case. I think if you can do this, then you have some secret sauce, so I’m hoping that our speaker can show us how, because as we all know it’s really important to create digital products and services that deliver value for the citizen. Because it’s a public service, we want to use technology to deliver what our customers want. And this is the hard bit: it’s also got to deliver value against the business case, and that is what ultimately gets you the approval and the funding to do the project in the first place.

So, how do you balance those two things? How do you achieve them both? Well, our speaker who’s going to help us understand that is Laura Burnett. She is Made Tech’s Delivery Director for central and devolved government, and a people-focused product specialist with a wealth of experience in building high quality products, managing global teams and driving positive organisational change. Laura, I’m really looking forward to this talk. I know there’s lots of practical stuff in it which I’m really looking forward to finding out about; so without further ado, I shall get out of the way and the floor is yours.

LAURA BURNETT: Thanks, Robin, and thank you everybody for joining. So, as Robin kindly said, today I’m going to be talking about how we make sure that we’re building services that are valuable and deliver impact and outcomes. But what is a valuable service? Well, it’s a service that solves a problem or a need for users, achieves policy intent, delivers economic or financial benefit, saves time or improves processes, and is usable and accessible by all. Now, it may not necessarily be all of those, but it’s probably a few of them.

I’m sure you’ve seen this process before, but the Government Digital Service released the Service Standard, which includes the discovery, alpha, beta, live process for building digital services. I’ve also included an initiation stage, and as we go through this talk, we’ll be looking at activities that build on the previous phase, so that you keep asking: are we meeting the benefits and delivering what we set out to? Too often teams forget to review what was actually put in the business case, and by the end they’ve delivered something that doesn’t quite match what they thought they were setting out to build.

So what is this initiation phase that I’ve termed? Well, this is where the business case and project justification are first put in place; and actually a lot of this is happening across central government at the moment, with the current spend review period beginning as well as business planning for the new financial year. So many of the clients we’re working with are looking at this: how do they make sure that anything they’re bidding for is really solving a need? The ways that you can identify these needs vary: it could be an idea or hunch, it may be through customer feedback, it could be to deliver a policy change, or perhaps there’s data that helps you identify the need.

So, using the example of a project that we worked on with DLUHC to improve the way data collection happened for social housing sales and lets: there were several problem statements that made it into the initial business case. The hunch was that a service could be built to have a cheaper total cost of ownership than the legacy software it ran on. They had feedback from local government that too much time was being spent at keyboards by housing officers doing these data returns to central government, taking away from valuable citizen-facing time. A policy change wasn’t really a factor for this project, but there was data saying that social housing policy-making was very reactionary: there wasn’t sufficient data, and the data they had was quite inaccurate, so it was quite hard for them to review those social housing policies.

Once you’ve got the initial business case and justification and you’ve identified those needs, that’s where we can start to look at the value proposition: what is the one overarching reason why we’re beginning this piece of work? The value proposition is a simple statement that summarises why a customer would choose your product or service, and communicates the benefit that users receive by using your service.

One of the ways we’ve worked with clients to generate these value propositions is using a Mad Libs workshop, where all of the key stakeholders involved in the project get given a cue card, ideally a printed one so you can do it in person, but obviously Miro works as well, or Mural or whatever your virtual whiteboarding tool of choice is. Each person individually completes the phrase, “Our service helps ___ to ___ so ___”, and by doing this we can make sure everybody is on the same page: do we understand who we’re building the service for, what the activity is that we’re trying to help them do, and what the benefit is that we’re expecting to deliver? I find this is a really good way of having a conversation early on and making sure that everybody agrees on why we’re starting work on something.

Then from this we move to benefits mapping, which is also called impact mapping or theory of change; they’re all very similar. You start to look at: if we’re going to build a new service, as we’ve got here on the right under the strategic objective, why do we think it’s actually going to deliver the benefits that we’ve said it will? Using the example of that DLUHC service: we thought that we could generate better quality housing data and a better user experience by redesigning the service with a user-centred design approach, and we believed that we could reduce the cost of ownership by replacing the expensive legacy software as a service.

There are several types of benefits that you want to start thinking about as you’re now building your business case and quantifying the benefits realisation. Some of these may be benefits to users, such as time savings, simpler services, wider economic benefits, or accessibility and the ability to access a new service. And some of them may be benefits to government, such as time savings for civil servants, increasing their channels of service delivery, eliminating expensive third party legacy software, improving data quality, reducing failure demand, and adhering to policy and legislation.

I’ve included this soundbite from the most recent Budget from Jeremy Hunt: the “Treasury will prioritise digital projects,” and prioritise proposals that “deliver annual savings within five years, equivalent to the total cost of ownership of the investment.” So it’s really important that you start to think about how to quantify the benefits you think you’re going to deliver, and actually make sure the value is there as we move into the spend review period. There’s further guidance from the Treasury in the Green Book on how to build out business cases in much more detail than I’ve provided here, and on how they assess projects.

So, we’ve gone through this initiation process, and it’s been agreed that there’s enough of a reason to start working on this: we think we can deliver some benefits through the service, and we have enough capital to move into a discovery phase. This is where we want to test those early hypotheses: the benefits map sets out the reasons why we think we’re going to deliver benefit if we make changes, and in discovery we test those early assumptions.

So, one of the first things that we do in a discovery to ensure this is to identify and document the user needs that must be met to deliver those benefits. This is normally done through user research, often generative at this phase: we go out and talk to users, asking them to talk through their current processes, what they find hard, what they find easy, what workarounds they have, and then we document these user needs for a new service.

We also need to estimate the cost of doing nothing. It’s perfectly acceptable, even a good outcome of a discovery, to decide actually, we’re not going to move forward: we’ve looked into it, we’ve done the research and we can’t validate the reasons that we put into the business case; we don’t think we’re going to deliver those benefits based on the research we’ve done. And if that’s the case, we need to know the cost of not changing, and our discovery research should help us to baseline this. Examples of the costs of staying with the as-is could be poor user experience, the fact that users then fall back on less cost-effective channels, inefficient or time-consuming processes, and the cost of legacy systems.

So, an example problem space with some example data could look like this. You look at the monthly users of a service and get that data from the service logs. You identify the average salary of those users through user research or desk-based research, and make an assumption on overheads, things like holiday and so on, that you might also need to include in the cost of having a member of staff. You use service logs to identify the logs per user per month, do usability studies to assess how long it takes to submit a log, make an assumption on how many working hours there are, and then use all of this to identify the total cost of ownership of the service at the moment in financial terms.
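
As a rough illustration of the arithmetic just described, here is a minimal sketch; every figure in it is invented for the example, and in practice the values would come from service logs, usability studies, salary data and finance records.

```python
# Illustrative "cost of doing nothing" calculation. All numbers are made up;
# real values would come from service logs, usability studies and salary data.

monthly_users = 1_200            # from service logs
avg_salary = 32_000              # average annual salary of a service user (GBP)
overhead_multiplier = 1.3        # assumption: holiday, pension, equipment etc.
logs_per_user_per_month = 40     # from service logs
minutes_per_log = 9              # from usability studies
working_hours_per_year = 1_650   # assumption

hourly_cost = (avg_salary * overhead_multiplier) / working_hours_per_year
hours_per_month = monthly_users * logs_per_user_per_month * minutes_per_log / 60
annual_staff_cost = hours_per_month * 12 * hourly_cost
annual_licence_cost = 150_000    # assumption: legacy software-as-a-service fees

print(f"Annual cost of the as-is service: £{annual_staff_cost + annual_licence_cost:,.0f}")
```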


And then there are several different ways that we can measure outcomes and benefits. On data collection methods, I’ve already mentioned some of these: user research, case studies, questionnaires and surveys, analytics data, historical service desk data (often a real wealth of information), usability benchmarking, and historical financial data for services that already exist. Some of the data points could be the average completion rate of the service; perception of time taken, because interestingly if a service is well designed, even if it doesn’t actually take any less time, the lower cognitive load means people often perceive it to have taken less time; the number of complaints about a service; drop-off rate and error rate; the number of service desk calls, for example when someone has tried an online journey, given up and rung someone; and then the current cost of the service.
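
As a small worked example, a few of those data points can be computed directly from raw counts; the event counts below are invented, and in practice they would come from your analytics and service desk tools.

```python
# Turning raw counts into the measures mentioned above. Figures are invented.

started = 5_000           # sessions that began the online journey (analytics)
completed = 3_600         # sessions that reached the confirmation page
errored = 450             # sessions that hit a validation or server error
service_desk_calls = 220  # from historical service desk data

completion_rate = completed / started
drop_off_rate = 1 - completion_rate
error_rate = errored / started

print(f"Completion rate: {completion_rate:.1%}")
print(f"Drop-off rate:   {drop_off_rate:.1%}")
print(f"Error rate:      {error_rate:.1%}")
print(f"Service desk calls per 1,000 starts: {service_desk_calls / started * 1_000:.0f}")
```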

When defining measures, there are a few key tips that we’ve come across. One is to consider the entire end-to-end journey; don’t just focus on the small bit of the service that you’re providing. Going back to that DLUHC example with the submission of logs by housing officers: we weren’t just looking at how long it takes them to submit those logs, but at how they collect that data in the first place, how they store it, whether they’re having to change their processes and the questions they ask social housing tenants because of the questions we’re asking, and how that data is then used internally within DLUHC. So, looking much broader than the service itself.

And similarly, review offline user journeys: how many people are using paper-based systems, how many people are printing something off and going into the Post Office or a Citizens Advice office to access a service? And what’s the unhappy path? As a developer or a product manager, you might know how you think people are going to use a service and how it’s been designed to be used, but what are people actually doing? Can you use, say, Google Analytics to see the journey that people are actually taking? You may find that there are a few loops people get stuck in before they finally work out where they’re supposed to be going.

Avoid vanity metrics: don’t just measure things that you know are going to be good; instead look at a broader set of measures. Make sure you’re being user-need led: I talked at the beginning of discovery about identifying what those user needs and pain points are, so use those when designing your measures. Segment your data to explore different user groups and personas: you may have citizens, you may have internal users such as policy or data teams, so look at what your different segmentations are. And then finally, be aware of biases: as a team working on something, particularly if this is a service that you’ve already run, it can be really hard to say actually, this service isn’t good enough.

Using all of this information from discovery, we then start to define the hypotheses and tests that we’ll take into alpha. The approach we use for this, again often with a cue card or prompt, starts with the hypothesis; a nice way of writing that is: we believe that if we do X, then this will happen. To test that we will ___; and measure ___; and we are right if ___. So this is saying: from the research, we believe there’s going to be a certain benefit to doing a thing, and we’re now going to test for it. We might test it by doing usability studies or user research, for example; the measures may include feedback from users, the time it takes to complete the service, or the quality of data; and for “we are right if”, ideally we have a quantifiable measure that shows whether we’re actually meeting these needs, though it may also be qualitative.
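
One hypothetical way of keeping those cue cards in a structured, trackable form is sketched below; the field names and example content are ours rather than a tool described in the talk, and the example loosely echoes the DLUHC-style service from earlier.

```python
# A hypothetical structure for the hypothesis cue cards, so they can be
# tracked through alpha. Field names and content are illustrative only.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    we_believe_that_if: str
    to_test_we_will: str
    and_measure: str
    we_are_right_if: str

example = Hypothesis(
    we_believe_that_if="we redesign the data return form around user needs",
    to_test_we_will="run usability studies on a clickable prototype",
    and_measure="time to complete a submission, user feedback and data quality",
    we_are_right_if="median completion time falls without data quality degrading",
)
print(example)
```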

Off the back of this, in discovery and/or alpha (often we start with quite a lean version in discovery and layer on more in alpha), we can start to put a performance framework together. This takes all of those layers from the discovery and from the initial business case, working back to that vision or purpose: thinking about the individual goals we want to achieve that we detailed in our business case and in that benefits mapping, how they align to the user needs we’ve identified, what hypotheses we’re taking forward from those user needs, how we’re going to measure them and what the data sources are; and then you can start to build in actions and blockers as well.

Off the back of the hypotheses we can then start to uncover assumptions. It is very expensive to build the wrong thing: in time, in frustration, in energy and in money. So we always want to reduce the risk of not realising the benefits of the service or change initiative. One of the ways that we do this is through assumption mapping, where we get everyone in a room, or again on a Miro board, and start to plot out all of the things that we think are true. We often use three colours for the different lenses. Desirability: do people actually want this, and what assumptions have we made that tell us they do? Viability: can we actually afford it, and are there any assumptions in the way it’s going to be supported or built that we need to test? And finally feasibility: can we build it, and are there any underlying technology issues that we haven’t yet tested?

Then we can map them on a two-by-two matrix, from risky to least risky and from known to unknown; and what we’re doing in alpha should predominantly focus on de-risking the unknown, risky items, where we made a real assumption in discovery that we still need to test before we invest more money in beta.
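
A sketch of how that map might look as data rather than sticky notes follows, so the alpha test list falls out of it automatically; the assumptions, lenses and ratings are purely illustrative.

```python
# Assumption mapping as data: each assumption gets a lens (desirability /
# viability / feasibility), a risk rating and a knowledge rating. Alpha
# focuses on the risky unknowns. All entries are illustrative.

assumptions = [
    ("Housing officers will switch to the new service", "desirability", "risky", "unknown"),
    ("Hosting costs stay within the current budget", "viability", "risky", "known"),
    ("Legacy data can be migrated cleanly", "feasibility", "risky", "unknown"),
    ("Users have reasonably modern browsers", "feasibility", "least risky", "known"),
]

to_test_in_alpha = [a for a in assumptions if a[2] == "risky" and a[3] == "unknown"]
for statement, lens, _, _ in to_test_in_alpha:
    print(f"Test in alpha ({lens}): {statement}")
```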

Just going to pause for two seconds- Apologies.

ROBIN KNOWLES: I should have mentioned at the start that Laura has not been terribly well, so we agreed that there’d be a pause in her fabulous presentation, which I hope you’re all enjoying; incredibly comprehensive.

Welcome back.

LAURA BURNETT: Thanks Robin; sorry, I needed to sneeze and nobody needed to see that. So, we’ve done the discovery, and we’ve decided what those hypotheses are and the tests that we’re going to do in alpha. During alpha, we’re always doing just enough to validate those assumptions and de-risk starting to build the service. An alpha phase is not about plotting every single assumption and risk, testing the entire functionality, doing a really waterfall design process and trying to answer every single question about what the service is going to look like. It’s really just focusing on reducing the risk and answering any key questions that could stop the beta from being successful.

And during alpha, and then also as we move into beta, we start to move into this ideate, prototype and test cycle. In ideate, the team comes together and asks: how might we solve the user needs that we identified in discovery and have continued to identify through user research, prove our hypotheses, and deliver some of these expected benefits? Some of the tools and techniques we use for this are Crazy 8s, where everybody has to come up with eight ideas, as mad and crazy as they like, then come back together and decide which to take forward; card sorting, working with users and asking them to group things and sort cards in the way that makes sense to them; and brainstorming, coming together as a team to think about all of the different solutions that we could take forward.


Then prototyping: what’s the simplest way that we can test the solution and test these assumptions? It might be a paper prototype, a walkthrough or a role play; it could be a clickable prototype, and sometimes we’ll use things like Figma for this. And obviously the prototyping kit from GDS is a favourite tool of ours as well, just because it provides such a great way of testing things in a true-to-life fashion.

Then test: how do we quantitatively and qualitatively prove our hypotheses and validate our assumptions? Ideally we want the rich qualitative data that you get from user research, talking to people and getting them to actually play with the prototype and have a go at using it, mixed with some quantitative data as well: how long did it take them to complete it, was it any quicker than before, and so on.

Through this cycle of prototype, test and adjust, we keep reviewing and updating the business case and iterating the approach to decide: can we actually meet these benefits and objectives? Are we happy that, if we move into beta, we’re still working towards the initial business case? It might be that we decide we need to update the business case, or the benefits case, based on what we’ve learnt, but that there’s still good enough reason to move forward. Having that conversation is really important, and we’ve termed it here the start, stop or continue conversation.

Then once that’s happened, maybe in tandem, we can start to ask ourselves: does it make financial sense to proceed? We take some of those estimated benefits and build out a more cohesive and substantive economic model, using the economic benefits and costs to help determine which approach to take forward to beta and live. It might be that in alpha you’ve actually got three or four different solutions, or a buy-or-build decision, and having an assessment of the different economic models for those can really help with those decisions.
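
As a hedged illustration of what such an economic model might boil down to, here is a toy comparison of two candidate approaches against the five-year test quoted from the Budget earlier; all figures are invented.

```python
# Toy economic comparison of candidate approaches out of alpha, using the
# Treasury soundbite as the yardstick: do annual savings within five years
# cover the total cost of ownership? All figures are invented.

options = {
    "build in-house": {"tco_5yr": 2_000_000, "annual_saving": 550_000},
    "buy and configure": {"tco_5yr": 1_400_000, "annual_saving": 300_000},
}

for name, option in options.items():
    five_year_saving = option["annual_saving"] * 5
    net_benefit = five_year_saving - option["tco_5yr"]
    verdict = "meets" if five_year_saving >= option["tco_5yr"] else "fails"
    print(f"{name}: net £{net_benefit:,.0f} over 5 years ({verdict} the five-year test)")
```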

So, we’ve gone through this process: we’ve created our initial business case, we’ve tested it in discovery and agreed that it makes sense to move forward based on the research that we’ve done. In alpha, we’ve validated those assumptions and proved some of our hypotheses, and we’ve agreed that there’s enough financial and economic reason to move forward.

So now we start to develop and build the service iteratively, and, as I’ve put here, this process isn’t waterfall. We’re not trying in discovery to understand every single possible user need and write a full backlog of user stories. In alpha we’re not trying to design the entire service. We always need to think about the least we can do to de-risk the service, to make sure that we’re not investing too much money into something that we might decide not to take forward.

One of the ways that we can continually focus the team on outcomes is to use an outcome-based roadmap. An example of this is the social housing submit-a-repair service that we built for some local authorities, where the vision was making it simpler for social housing tenants to submit repairs whilst enabling local authorities to make better prioritisation decisions. Off the back of the overarching vision, we also had a 2022 goal, which was reducing the time it takes to submit a repair.

Off the back of that, we agreed a few key themes that we were working towards, for example improving the form submissions, improving data quality, and reducing missed appointments. And they were actually aligned to the benefits mapping that we did right back in discovery, where we’d said we think we can meet this vision by doing all of these underpinning things.

Then we start to break this down into the outcomes that we want to look at each quarter. A good example of this: in Q1 2022 we wanted to focus on time saving towards the theme of improving form submissions, and the quantitative goal we were working towards was to reduce the entry time to 11 minutes without impacting data quality. And once we’d set that theme or goal for the quarter, we then needed to say, well, how do we think we’re actually going to do it? It’s all very well setting a target and an objective, but what are the different levers that we have for generating ideas, and how do we assess which ideas we’re going to take forward?

One of the things that I have used with teams really successfully before is this bet cost matrix. As we’ve generated some of those ideas (we’ve talked before about different ways of ideating), how do you assess which you’re going to take forward? Well, what value do we expect something to have? If we make a change to a form, is it going to deliver value roughly equivalent to buying an ice-cream? Is it about the cost of a meal out? Does it feel like it’s about a holiday, or maybe you think you’re actually going to get about a car’s worth of benefit? Or perhaps it’s such an important feature that we think it’s going to generate a house’s worth of value. Now, obviously these are kind of silly metrics, but it just gives you something to quantify and use to compare different ideas against each other.

Similarly, for the estimated cost: how much would you bet on it? Would you bet your house that this is going to generate the benefits that you’ve said? Would you bet your car on it? Would you bet the cost of a holiday? We’re probably not actually going to ask you to put your house on the line, but again it gives that flavour for starting to think about these ideas. Then, for the things that are low cost and high benefit: just build it, ship it, test it, measure it. For things where we’re not a hundred per cent sure, where we’re not quite willing to bet the whole house on it or the benefits aren’t definitely that good, that’s where we want to use directional research to break the idea down into smaller parts and make sure they deliver value. Finally, for ideas that we’re not quite sure on, or where maybe we don’t know how we’re going to meet the goal that we set for this quarter, that’s where we start to do more foundational research to really understand those user needs.
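
A playful sketch of how that triage could work in practice follows; the ice-cream-to-house scale values, the thresholds and the ideas themselves are entirely illustrative.

```python
# Bet cost matrix triage: expected value and estimated cost are scored on the
# ice-cream-to-house scale, then each idea is routed to build, directional
# research or foundational research. Scale values and thresholds are invented.

SCALE = {"ice-cream": 1, "meal out": 2, "holiday": 3, "car": 4, "house": 5}

ideas = [
    ("Pre-fill address from postcode lookup", "car", "ice-cream"),
    ("Rebuild the whole submission flow", "house", "car"),
    ("Add an offline paper route", "meal out", "holiday"),
]

for name, value, cost in ideas:
    v, c = SCALE[value], SCALE[cost]
    if v >= 3 and c <= 2:
        action = "just build it, ship it, measure it"
    elif v >= 3:
        action = "directional research: break it into smaller parts"
    else:
        action = "foundational research: revisit the user needs"
    print(f"{name}: value={value}, cost={cost} -> {action}")
```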

And we can take this idea of outcome-based roadmaps into sprint planning and focus our sprints on outcomes. A great way of doing this is setting a sprint goal based on user needs and business goals. Another is to move away from story pointing and instead move to story splitting: instead of having a huge 20-point story that ends up rolling over multiple sprints, what’s the smallest thing that we can do that’s going to deliver some value to users, that we can get into their hands and start testing with them? And how are we going to do the work? We want the whole team working together to meet the goals of the service, continually focusing on the value proposition that we’ve agreed and the benefits that we’re trying to deliver, and unifying the whole team around this. This enables successful sprint planning, where everybody has a shared understanding of the outcome and sprint goal and works together to achieve it.

And once we’ve started to actually build something, and we’re probably into a private beta or we’ve got something live, we can then start to use data to assess the improvements that we’ve made. This is particularly true of those low risk items where we’ve gone ahead and built something with minimal rounds of prototyping and testing, and instead gone for build, ship and measure. There you might want to use things like A/B testing or multivariate testing, perhaps use heat map analysis, and identify technical performance and error logging as well. This also allows you to identify problems that users may be facing. We’ll often use funnels to track how users journey through the service, so we can identify the entry points and exit points, make sure they line up with what we’re expecting users to do, and identify any drop-out points or pain points that we might want to focus our research on.
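
A minimal sketch of that funnel idea is below, assuming step counts exported from whichever analytics tool you use; the step names, counts and the 15% flagging threshold are invented for illustration.

```python
# Funnel analysis: compare each step's count with the next to find drop-off
# points worth researching. Step names, counts and threshold are invented.

funnel = [
    ("Start page", 10_000),
    ("Describe repair", 8_200),
    ("Pick appointment", 6_900),
    ("Confirm and submit", 6_400),
]

for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    drop_off = 1 - next_count / count
    flag = "  <-- investigate" if drop_off > 0.15 else ""
    print(f"{step} -> {next_step}: {drop_off:.0%} drop-off{flag}")
```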

So, a quick summary. In initiation we’re asking why the project or programme has been started; some of the inputs to this are the business case, policy change and customer feedback, and during this phase we need to define the value proposition, for example using Mad Libs, and map the expected benefits. As we move into discovery, the key question we want to ask ourselves is: what’s the cost of doing nothing, financial or otherwise? We may use inputs such as the business case, policy, user research, the benefits map, the service vision, desk-based research and service data. Some of the activities in this phase include understanding who the users are, identifying and estimating the cost of their problems, creating a business model canvas, estimating the cost to deliver the service, conducting net cost-benefit analysis, and then, if we agree to move into alpha, defining the hypotheses and tests and creating a performance framework that we can use to map assumptions.

In alpha we’re asking ourselves: are the assumptions valid, and can we deliver on the service benefits? Some of the inputs to this stage include the user personas, business model canvas, hypotheses, performance framework, assumption map and cost-benefit analysis. And some of the activities in this phase include testing different ideas to scrutinise risky assumptions, increasing the accuracy of our forecasted benefits, iterating the benefits map accordingly, estimating the cost of beta and live, creating an economic model, and then advocating for the right solution as we decide whether to proceed into beta.

And in beta and live, we’re asking ourselves: are we on track to deliver these benefits? So again we use the benefits map, the economic case, the performance framework and the outputs from alpha to take a benefits-first approach to delivery, for example using outcome-based roadmaps, the bet cost matrix for prioritisation and outcome-focused sprint planning; and then we use the performance framework, KPIs and measurement approaches such as analytics tools and potentially a beta dashboard, so that we can use real data to test that the service is still doing what we’re expecting it to.

Thank you. I think it’s back over to Robin.

ROBIN KNOWLES: Brilliant, thank you Laura, do you want to turn your camera and microphone off and have a quick sneeze – do. That’s been so comprehensive; it’s just a real tour de force run-through of the whole process. So now it’s time for our questions: do put them in the chat and the Q&A, all questions are welcome. I suspect they’re going to be on some of the big concepts. So, the first question we’ve got is really good, comprehensive, love it: what books, resources or further reading would you recommend to somebody who wants to follow this process? Where should they go next from today’s presentation?

LAURA BURNETT: Good question. Yeah, there are so many. I talked about outcome-focused roadmaps and delivering things that are working towards benefits, and one of my favourite books that I’ve read around that is called Product Roadmaps Relaunched. I’m just quickly Googling the author in the background, it’s by-

ROBIN KNOWLES: Yeah, do pop it in the chat if you have time. Pop it in the chat if you have time.

LAURA BURNETT: But, yeah, it’s a really good book all about the way that you can shift from time-based roadmaps and actually work in an agile way but still work towards something. And then another one that’s pretty good is Escaping the Build Trap; again that’s about product management to deliver value, so I’ll pop that one in the chat as well.

ROBIN KNOWLES: Brilliant, okay, well done, thank you. Everybody go to the chat; that’s where you can find those two recommendations. So, you’ve just touched on it there, and the next question’s a really good one as well, which is: how long – I don’t think we touched on timescales during the presentation – how long should this take? Are there stages that are just going to take a lot longer than others? And I guess without some sort of deadline, can these things drift? Can processes take too long?

LAURA BURNETT: Yeah, I guess it’s slightly a how-long-is-a-piece-of-string question, but I’d probably just refer back to the service manual by and large: discovery is sort of six to ten weeks, alpha eight to twelve, maybe sixteen weeks. It depends on the complexity of the service and the problem areas. For a discovery into a brand new piece of legislation, where something doesn’t already exist, it’s going to be hard to recruit users; perhaps the legislation hasn’t even been launched yet, so you’ve got to be really cautious about how you’re talking about it, and you may need a longer discovery period. Versus replacing some legacy software, where the user groups are relatively well defined, you’ve got good engagement with those users and you’ve already got a wealth of information to hand. So I think there needs to be some consideration, almost like project design, as you’re moving into these phases and considering what your constraints are.

ROBIN KNOWLES: Okay, I really like this next question. Things change, and you put up a picture of the Chancellor with a quote from this week: plans change, people’s priorities change, budgets change. If that happens during your process, do you have to go back to the start, do you just go back a stage, or do you go back as far as you need to in order to test the right things? What do you do?

LAURA BURNETT: Well, things do change, and that’s why we would always advocate that you work in an agile way, that you embrace change and that you keep iterating on the work that you’re delivering. I talked about doing just enough to de-risk a service and move into beta and live, and the main reason for that is that you want to start delivering benefits as quickly as possible. Now, you may decide to pivot and change what you’re focusing on, but if you’ve spent six months doing discovery and alpha and then the business decides to reprioritise you somewhere else, that’s potentially, not wasted time, but time where you haven’t really been able to deliver on the benefits you were expecting to. Whereas if you can start building something three months in, even if you don’t fully build everything because you’ve had to pivot on to something else, if you work in an agile way you’ve still built some of the items that you were working towards.

ROBIN KNOWLES: Okay, next question. The process you’ve presented today is enormously comprehensive, or it feels enormously comprehensive, so this is a good question: is there an optimum scale of project that you would apply this methodology to, and if it feels like overkill, can you apply a “light” version, in sort of inverted commas, or- Yes, go on?

LAURA BURNETT: Oh, sorry, I was- Yeah, I guess I’m not sure I’d necessarily see it as that heavy. I’m trying to think of what you might want to miss out, but, yeah, I’m not sure.

ROBIN KNOWLES: Lots of people probably aren’t following this process, do you think? What’s your take on it? Obviously the people you’re working with are following it, but anecdotally do you get the sense that lots of people don’t do this, maybe because they feel it’s overkill, or too much time or too much resource?

LAURA BURNETT: I mean, you may not use all of these specific tools, but in each of the phases the overarching themes are still really important. Before you start a project you need to know why you’re starting it and what you’re expecting to get out of it. In discovery we need to make sure that we actually think we’re going to meet those benefits; in alpha we’re testing and proving that before moving into beta. So the specific tools might not all be used, but I think the overarching themes are still really important for any team.

ROBIN KNOWLES: I suppose, and this isn’t my question, this is my supplementary to that question: have you had an experience where you’ve said, no, what you’re wanting to do doesn’t need this level of process? Have you come across something where you’ve said to them, look, I think you can pick and choose the bits of this that are going to be helpful to what you’re doing, but it’s just-

LAURA BURNETT: Yeah, I mean, I’d say that we’re always picking and choosing. I’ve perhaps laid this out as a dogmatic process, but that’s not my intention at all; these are examples of tools that you may want to use. We worked on a discovery for DCMS recently where we did a four-week discovery: it was a smaller piece of work where the users were well known, but we still did user research, we still mapped out roughly how much their current service was costing. So maybe not all of the specific workshops were run, but I think the underlying aims were still delivered.

ROBIN KNOWLES: Brilliant, okay. We’ve probably got time for one question, maybe more depending on the length of the answer. A really good question, this one: where do teams tend to struggle most in this process? What’s the most challenging part?

LAURA BURNETT: Historically I would have said discoveries, in that teams sometimes struggle with what to focus on in a discovery. Let me rephrase: I think often in the teams doing discoveries, you’ve got DDaT professionals who are used to building a thing and delivering a thing, and a discovery is a bit more intangible, so sometimes it can feel a bit uncomfortable. However, I think people are getting better at it and more used to it, and this process from the GDS service manual has been around for longer, so now it’s perhaps actually alphas that I’d say people are struggling with a bit more: how do we work out what to test, and how do we quantify whether we’ve validated those assumptions and are meeting those hypotheses or not? So, yeah, I’d say it’s perhaps shifting slightly.

ROBIN KNOWLES: Yes, because at the start we said this is about bringing together delivering a fantastic user experience to the end user and also meeting the business case. So presumably at the alpha stage there’s enough there that you can really get into the numbers, the costs. Do new people come into each phase? You know, the accountants might say, come back to us when you’ve got something, got some numbers, and we’ll have a look at it and compare it to the business plan; or do you say, no, you guys and girls need to be part of this process all the way through?

LAURA BURNETT: Yeah, good question. I haven’t often seen economists being that closely involved. I think in reality it often seems that something gets agreed in a business case or a proposal, and there’s not always much assessment at the end as to whether it actually met what it set out to or not. And as a taxpayer, I don’t think that’s necessarily the best way of doing it. So I am almost trying to encourage teams to be more introspective and make sure that they are delivering good value for money for us as taxpayers.

ROBIN KNOWLES: Brilliant. Laura, we are out of time, that has been fantastic, really enjoyed it. Thank you. I know you’re not feeling terribly well, so that’s even double thanks.

LAURA BURNETT: You’re welcome. [laughs]

ROBIN KNOWLES: No, no, for presenting so well that, frankly, we wouldn’t have known if you hadn’t told us. So presumably you’re completely up for people getting in touch?

LAURA BURNETT: Yes, of course. I’m on holiday next week but I’m sure, erm, I’ll respond to you the week after if I don’t-

ROBIN KNOWLES: Yeah, fantastic, so do get in touch with Laura and there’s oodles of experience and knowledge there that you’ve just had demonstrated. So, yes, I will officially declare this sort of session closed, and Laura thank you so much for your time.

LAURA BURNETT: Thank you Robin.

ROBIN KNOWLES: Thank you.
