This episode is a “semi follow-up” to the last episode, where we talked about Process Mining – today’s topic is Task Mining (aka Robotic Process Discovery).

Task Mining brings clarity into what your users do at the procedure level. It can give insights into what might cause a “spike” in your analysis, or fill the holes you might have in a discovered/mined process (for example, when there are manual steps, or steps in a system that does not create a log file).

In this episode we spoke about the following topics:

  • What is task mining (vs. process mining)? What is the information that one gets from task mining?
  • How to handle procedure-level models
  • The “Big Brother” fear that comes with task mining, and how this is mitigated
  • Benefits of task mining
  • Steps to a useful application of task mining (how to get to a good analysis)
  • Hybridized setup of process and task mining – 5 steps to follow
  • Velocity and throughput analysis (frequencies and times) as a way to identify improvement and (RPA) automation opportunities
  • What-if analysis in a process/task mining tool vs. (proper) simulation in an architecture tool
  • Technical integration of task and process mining tools; benefits of using BPMN in mining scenarios
  • Experiences from a client project

Additional information

  • Analysis to identify different improvement opportunity types
Time and Frequency Analysis

Credits

Music by Jeremy Voltz, www.jeremyvoltzmusic.com

  • CP1 (Welcome)
  • Lofi Lobby Loop (Interlude 1)
  • Airplane Seatbelt (Interlude 2)
  • Be Loved In Return (Interlude 3)
  • South Wing (Outro)

Transcript

(The transcript is auto-generated and was slightly edited for clarity)

Roland: Hey J-M, how are you doing today?

J-M: Hey Roland, I’m doing good. You know, I’m finding my way. Isn’t that what we’re all looking to do these days?

Roland: Especially after the last episode with Julian Krumeich, our product manager for Process Mining at Software AG.

J-M: What a treasure of a human being. He gave us insights and actions we can take every day to make our lives more process-driven, more evidence-based. I love it.

Roland: So this is a semi-follow-up to the last show. We’re going to take a little twist after learning about the history, current state, and future of process mining. Today we want to focus a little more on the little brother of process mining, called “task mining”.

J-M: Little brother? A couple of our colleagues would really want it to be seen as an “equal sibling”.

Roland: It depends on your position. So we’re going to take a look at task mining. In a nutshell, we’re going to talk about three topics in this episode. The first is the journey to actual insights: what are you looking for, when do you use process mining, when do you use task mining? What is it? Why are they different? Why do they belong together? The second part that we’re going to talk about is your first steps into task mining. What value does that information have, and how does it get integrated? That brings us to the last point, stitching process and task mining together. What types of insights and capabilities do you get that will drive change? Without further ado, let’s talk about the difference between process and task mining, starting with definitions.

J-M: Let me jump right into it! We want to compare process and task mining, and while there is some similarity in the kinds of information they can capture, the way in which they do it is quite different. Process mining is what Julian talked about – capturing automated steps in a transactional system in a process flow. That allows you to extract the logical flow of activities. That’s going to be coming from transactions. That’s going to be displaying attributes of those transactions in the order in which they occurred to figure out how your automation is working. What are your automated systems? What flows are they going through? And as I mentioned before, it sees transactions. While that is a strength in the ability to turn logs into processes, it also ONLY really sees transactions. That’s going to cause a bit of a gap between the actual execution of processes and what you have documented. Because, as you might imagine, transactions don’t represent ALL of your process.

Roland: And to be very clear, we’re not only talking about a transactional system like an SAP or an Oracle – like an ERP. We’re calling transactional systems “every system that records an activity that a user does”. And that’s not dependent on how often that happens. So you have an electronic record of every activity.

J-M: Yeah. Those electronic records come from systems – business rules engines, decision systems, those sorts of things. They produce those logs that are being used to reconstruct the sequence of activities. Then the next piece of the puzzle is on the side of task mining. That fills in the gaps. So instead of taking transactional logs, which are produced by systems, we’re taking user actions and capturing them through intelligent object detection on the screen. That’s right, this is screen scraping and key logging that allows us to see what a user did in reality. Not just in hitting a button to cause a transaction to fire and an event to be logged. Rather, when they clicked on that field in Excel or when they opened that email and copy-pasted that text string from one thing to another, and those activities are really important because they’re part of your business process, but they’re not really captured at a transactional level. They’re outside of those event logging systems.

Roland: So I understand that with process mining you get something like a visualization of a process in a chart with all the details. What are the outcomes of a task mining exercise, once you’ve run the bots on the desktop? What do you get?

J-M: That’s a great question! You get something like process mining, in the sense that it gives you a flow of process steps in the order they happened, but it looks a lot different. Instead of a transaction with its named activity coming from a system, it tends to be a combination of a user action and where that action was performed. So we were in “window X” or in Excel, and a user clicked. Or we were in Word, and a user “pasted” – Ctrl-V. We see a combination of Location-Action. From there, we can intuit the process steps. It’s not quite one-to-one. We’ll talk a little later about why we need business expertise to attribute what we’re seeing from the task mining capture system to an actual business step. In the meantime, besides capturing the activities on the desktop, we can aggregate that data across users. That’s really important. In process mining, we often find activities associated with user accounts, but that’s a secondary attribute. In task mining, it’s the opposite. We are capturing tasks BY user, so each user is the source of a single set of processes – because that’s where we’re capturing it: the robot is on your desktop. So when you’re identifying variances and long-running tasks, it’s always segmented by user. That’s where we start. Then we aggregate it up to an overall flow across all users. As you might imagine, that’s very powerful for giving you insight into what’s happening in your business. What are people actually doing? But because it can only see user actions – primarily user actions in non-transactional systems – you’re missing a lot of the automation you’ve spent so much money and so much time trying to enable. So you’re seeing the manual side of what’s happening in an automated process. And that’s really useful. So those are the two sides of the coin: you’re getting transaction logs from automated systems, or you’re getting screen-scraped logs from user actions on computers. So why do I want to do task mining? That’s a good question! And Roland, we ask our customers that same question. Because they say “oh, we want RPA/RPD, we want to take a look at technology”. You see people just throw acronyms at the wall.

Roland: Oh yeah, that’s normal. People just hear something, understand half of it, repeat it, and then hopefully make up the difference during the conversation and get a bigger picture of the whole topic. But when you look at the benefits of task mining: it’s much less likely that the processes you’re capturing have been documented, because they’re not assignable to a system. If you do regular process modelling, which has been done for years and years in ERP implementations, you pull out reference content from the systems. And those steps are the transactions the system does. They typically do not include manual steps. They typically do not include all the workarounds that users invent while doing things. So that is obviously a benefit of task mining, because there’s a huge effort required to document those manual steps in your process model if the system doesn’t provide you with that reference content.

J-M: Yeah, I agree! It’s funny – the number of models you would have had to create during process modelling to capture all this very low-level / task-level activity is very high. So part of the problem is that you’re just not going to want to capture that normally. You’re not going to want to spend the money and time to get these low-level models manually. So you want to do it automatically, and having this technology is very useful!

Roland: So there’s one thing that I typically recommend to clients. They ask “can I get a procedure-level model”, and I don’t recommend that they do this. Quite the opposite – what I recommend (if they don’t use task mining) is for them to take screenshots of their systems and put callouts on those pictures saying “click here”. Because there’s also the maintenance effort that goes with it. If the system changes, business users typically don’t know that, and this would be a faster way to support the change: just change the callouts.

J-M: Yeah, I worked for a pretty substantial municipality a few years ago, and they were doing a massive system landscape upgrade project, and all their task-level models were captured in uPerform, which is a click-monitoring system that produces training manuals off of system activities. And that’s what they did for their process models. Is that good? It’s fine. It was all contained within this single system that was out of date the minute they published it, but it was a good start for the conversation about low-level processes. But the point is that task mining offers more than just a one-time upload of what people are doing into your system as a model. It offers continuous monitoring of how people are executing tasks at the lowest level, and continuous update of models so you can keep them fresh. We talked about that a few episodes ago – keeping your environment evergreen, full of really useful information people can take, rather than out-of-date content from when someone did a project five years ago to capture this one process. You also might find people using the letters SOP (standard operating procedure) – pieces of documentation that capture these sorts of things: tasks in checklists and this low-level work. That’s where you’d normally find these things concealed – in flat file documents that don’t end up being useful at an enterprise repository level. And that’s something I want to talk about when we look at information: that information tends to find itself walking out the door, which is really problematic. We talked about this before as well, but particularly on a task level, think about people who have improved the way they do their job. They’ve Kaizen’d their life. That’s great – but when they leave the company, that’s gone. You’ve paid for all that innovation through their salary during the time they were thinking about making things better. Geez, I mean, most people just try to keep the lights on, but these folks have actually done the work to make their lives easier. When you replace them with somebody new because they retire or move to a different company, don’t you want to build on that knowledge? You’re buying knowledge capital as part of their employment contract, right? So you want to take that knowledge and use it to make your business better – and without retaining that knowledge through things like task mining, you’re just letting it go.

Roland: That is true, but I think the ultimate question is: who cares?

J-M: Yeah, absolutely, and there are a few different people that I often talk to about this. Besides the folks who love the idea of acronym-based buying, there are obviously the business unit folks who really love task mining. Business unit managers in particular are very focused on the performance of their team, and being able to discover how things are actually happening is really important. Maybe they don’t have access to process mining data, but they can install robots on people’s computers and see how people are performing. And not that this is necessarily just looking over your shoulder and micromanaging – it’s getting a sense of the baseline. Am I able to, with the people I have, comply with the SLAs that have been put on my business unit? I might be able to improve that by making small changes, giving advice, or doing learning and development. And that leads to the idea of higher transparency, right? You want to give that visibility to the lines of business.

Roland: Yeah, but that is scary, right? Now I have “big brother” on my machine, and you’re watching every step I take, every website I visit, and you know my bank account number and password? Isn’t that something we should NOT like?

J-M: Absolutely, it is a scary idea to have Big Brother look over your shoulder. However – and this is something that’s really important to remember – there are a lot of protections put in place whenever you look at task mining. The most important one, of course, is masking. There are a ton of masking rules applied within a robotic process discovery system, so you’re not going to capture the keystrokes for a Social Security number. You’re not going to capture the keystrokes for a password or login – those just aren’t even tracked by the robot that’s on your desktop. The second piece of the puzzle is that you are only recording activities that are happening in business-relevant systems. You go and check the score of the game in your Google Chrome – hey, did my local sports team win? We don’t care. Nobody cares. The truth is: that was never part of your process, so it’s not part of the capture algorithm. And that’s why intelligent object detection and intelligent RPD (robotic process discovery) tools are really important. And so that solves the issue of: are people watching me when I do things that are not process-oriented? No, they’re not. That’s not even captured. And are they capturing my private information? Once again, no, they’re not, because that information is not logged by the system at all – those keystrokes don’t come up.

Roland: I can see this being a more contentious thing in Europe, where you have a lot more data protection and data privacy, as well as much stronger unions than we have here in North America. But that’s a good thing, I think, because honestly I don’t want to be surveilled by Big Brother.

J-M: But the idea is: what are you getting in return? So I want to go back to the benefits, particularly for business units when they’re looking at implementing things like RPA. Getting the ROI from RPA can take some time, and if you have robotic process discovery, you’re able to get it a lot faster, because you’re able to see exactly where you can get that value, quantify it, and ultimately make decisions on it. And that’s really good for those folks who run the business units. It’s also good for folks who are leading centers of excellence or cross-functional teams and shared services, like an internal consulting group. Internal or even external consultants are looking to drive ROI for the work that they’re doing, and to do cost and time savings analysis on the work that you do – time and motion studies. Well, you know they love this stuff. This is like a free time and motion study for everyone who’s on their computer. They get it in a cloud system, they can look at it, and they can use data and evidence to back their consulting recommendations. And particularly for centers of excellence, they can ensure a lower cost of business and show the transformation that their center of excellence is pushing out as part of its best practices. They’ll love that, and your stakeholders will love that too, because they can see the value; they can derive the value from that.

Roland: Alright, that brings us to the end of this segment, but before we go and leave you alone for a minute, I have a couple of questions for you. What are the top 3 processes that you or folks in your organization are doing today that require manual “busywork”, and would profit from things like process and task mining? What would you ideally want to automate or eliminate to make things better? We’ll be back in half a minute and continue with our conversation.

Musical Interlude: “Lofi Lobby Loop”, Jeremy Voltz

Roland: Welcome back! I hope during the break you identified the areas you want to improve with process and task mining. But in this segment, we want to talk about the “how”. How do we make all these things happen? So maybe, J-M, give us a sense of how task mining fits into the bigger picture of a transformation project.

J-M: Yeah, absolutely. So I see this as: there’s source, there’s analysis, there’s development, there’s deployment, and there’s tracking. The first is source. We’re going to try to hybridize data from both process and task mining – and we’ll talk a little more about how those fit together – but ultimately they’re going to provide insights about how things are operating, both in automated and manual fashions. The next piece of the puzzle is analysis: what does that mean for developing new processes? We’re going to design those processes based on the combination of task and process mining, and use those insights to drive out what we should be doing. That leads into the development of automation – that could be things like RPA, or further transformation, and we’ll talk about that a little later – where you develop the actual code that goes into automating the things we’ve designed in our process models. And then as we deploy that code to production, whether it be bots on the screen or new automations in your larger systems, we’re going to operate that work with people, do the learning and development required to change our practice, and then capture benefits and track back by continuously mining those processes and tasks. So we’re never going to go away from the data. You can never really leave the data on the shelf; you want to keep it ready for that next phase of capturing, designing, deploying, and once again monitoring. So that’s how we do it with process and task mining: we hybridize those two solutions, particularly when we look at the lower-level decomposition of those higher-level steps. But Roland, I want to get started with this. This sounds like a lot of work. Where do I go? What do I need to make this all come to life?

Roland: Yeah, it’s a couple of things, but I want to go back to what you just said. I think going forward it will be more like operating a car. When you’re driving and want to know how fast you’re going, you could look out your window and take a guess. Or the other way to do this is to just have a look at your speedometer, and that will tell you exactly how fast you’re going. This is where I see process and task mining going in the future. We’re not there yet, because this is all brand new and there’s lots of hype around it, but at some point in time it will just be good business practice to see what we’re actually doing. As opposed to what we’ve done in the past around process analysis – bringing people into a room and relying on people’s anecdotes and experiences. Data-driven approaches are obviously better.

J-M: The car analogy is really interesting because that’s something we talk about when we talk about the creep of features. Remember when power windows used to be a feature of a car and now they’re table stakes. I feel like we used to have this idea: oh, wouldn’t it be a really cool feature if we could monitor the data of how things are happening? Well soon that’s going to be the standard: evidence based decision making for all. So Roland, tell me how that actually works and how we can get there.

Roland: What you need for this (and nobody will help you solve this problem) is a good understanding of where you want to go. So, to stick with the analogy, you need to know where to drive to. And while you do that, you’re measuring your speed. In this case, you need to develop a good idea of what a good process looks like. So you should specify (and typically you do this in analysis projects) the objective of the analysis. What do I want to get out of it? If you stay at a very high level, like “I want to improve my operational performance”, I think you’re missing out. You need to go a level or two deeper. And that brings me to my next point. You need to develop one or more hypotheses. Once you’ve brought in data, you look at the data and say “something stinks here – something doesn’t look right”. So you come up with your hypothesis for the reasons, then you try to find proof in the data for your hypothesis. That also means, on a lower level, you need to look for analysis criteria: cost, throughput time, frequency, all these things. Because that brings you down to the level of the data you need to collect. And obviously, you should have clear traceability from your objective to your hypothesis to your data.

J-M: You want to see what you’re going to offer back to the business, right? Are you going to look at costs? Are you going to look at reducing time? That’s going to help justify the projects that you’re going to do, right?

Roland: Correct. And then when we look at the actual data that you’re collecting, there’s a step where you look at the generic data model. For instance, I have activities, and they have a start date, an end date, and a name. Whereas in a Procure-to-Pay scenario, I have a purchase order / purchase requisition, and I need to know what the amount is on the order and what you’re actually ordering. This is where you take your conceptual model (your data model with requirements) and transform it into your logical view, where you say: OK, I have a Purchase Requisition, a Goods Receipt, a Purchase Order, and an Invoice. On three of those I need an “amount”. For all of them I need to have a “user”, so I can see who did it. Or if it’s a touchless PO, I can see that because it was fully automated. So that’s where you look. And as we learned from Julian in the last episode, there is a minimum set of data needed, which is the Process ID, the Activity Name, and Timestamps (like the start/end time of an activity) so you can calculate duration and throughput.

J-M: Yeah, and I think there’s an important distinction between the duration of an activity and the interfacing between two activities. That’s why I say, particularly when you’re looking at multi-level decompositions, having both start and end time will allow you to encapsulate what actually happened in that one box. Then you don’t have to guess: OK, how long did this step actually take versus how long did it take between steps? Because that’s the difference between in-step analysis and intermediate analysis, which we’ll talk about in a couple of seconds.
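
To make that concrete, here is a minimal sketch of the event-log idea (the column names and the pandas-based approach are our own illustration, not any particular tool’s format): with start and end timestamps per activity, both the in-step duration and the intermediate wait between steps fall out of the same table.

```python
# Minimal sketch of the minimum event-log data set discussed above.
# Column names (case_id, activity, start, end) are illustrative assumptions.
import pandas as pd

log = pd.DataFrame(
    [
        ("PO-1001", "Create Purchase Requisition", "2021-03-01 09:00", "2021-03-01 09:05"),
        ("PO-1001", "Create Purchase Order",       "2021-03-01 11:30", "2021-03-01 11:40"),
        ("PO-1001", "Goods Receipt",               "2021-03-03 08:15", "2021-03-03 08:20"),
    ],
    columns=["case_id", "activity", "start", "end"],
)
log[["start", "end"]] = log[["start", "end"]].apply(pd.to_datetime)
log = log.sort_values(["case_id", "start"])

# In-step analysis: how long did the activity itself take?
log["duration"] = log["end"] - log["start"]

# Intermediate analysis: the invisible gap between one step's end
# and the next step's start within the same case.
log["wait_before_next"] = log.groupby("case_id")["start"].shift(-1) - log["end"]

print(log[["case_id", "activity", "duration", "wait_before_next"]])
```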

Roland: The next step is to take the logical data model and look at the physical tables in your systems. Then you know which systems you need to build a connection to / export from. And obviously, the data is not in the format that your process mining tool expects, so you’ll need to do some sort of transformation. That will let you see “this step is a PO”, and it will be referenced in every step afterwards, so you actually see that this is one instance of it. So you take your logical model, map it to a physical model, and then that goes to your engineer, who does all the magic of pulling the data and transforming it into a format your process mining tool can read.
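
As a sketch of what that extract-and-transform step might look like (all table and field names here are invented for illustration; real system tables differ), each source table is mapped onto the generic case/activity/timestamp/user model and stacked into one event log:

```python
# Hypothetical sketch: flatten two "physical" system tables into the
# event-log format a process mining tool typically expects.
import pandas as pd

purchase_orders = pd.DataFrame({
    "po_number":  ["4500001", "4500002"],
    "created_at": ["2021-03-01 11:30", "2021-03-02 10:00"],
    "created_by": ["USER_A", "USER_B"],
    "amount":     [1200.00, 380.50],
})

invoices = pd.DataFrame({
    "po_number": ["4500001"],
    "posted_at": ["2021-03-05 14:00"],
    "posted_by": ["USER_C"],
    "amount":    [1200.00],
})

def to_events(df, case_col, ts_col, user_col, activity_name):
    """Map one source table onto the generic event-log model."""
    out = df.rename(columns={case_col: "case_id", ts_col: "timestamp", user_col: "user"})
    out["activity"] = activity_name
    return out[["case_id", "activity", "timestamp", "user", "amount"]]

event_log = pd.concat([
    to_events(purchase_orders, "po_number", "created_at", "created_by", "Create Purchase Order"),
    to_events(invoices,        "po_number", "posted_at",  "posted_by",  "Post Invoice"),
]).sort_values(["case_id", "timestamp"])

event_log.to_csv("event_log.csv", index=False)  # hand this to the mining tool
```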

J-M: Now let’s talk about the task mining side. So now we’ve got our process mining, which is a great place to start. Now we need to actually install task mining. Roland, what does installing task mining mean?

Roland: There might be differences based on the task mining tool you select, but what you’re doing is recording on a desktop level. So there will be a little bot installed on your machine. It will be configured to prevent the big brother syndrome we spoke about. Then, it will kick in without a user triggering anything. It’s not like a screen recorder where you have to press the record button. It does it in the background. Then once you’re done with your task, and the instance is complete, it collects the data and sends it to a central studio where the magic happens. The studio will create the analysis and all these things, including the blurring and security.

J-M: To get that to work, you need to have people actually doing their jobs. So you need to have executors who are doing their job under that watchful eye of the robotic process discovery system, right?

Roland: Talk to me about the data you collect then, based on task mining.

J-M: What you’re getting is the steps that happened, the order in which they happened, and the variations of those particular steps. But you’re getting a lot of data that is only attributed to windows and actions, as I talked about before, and you need to contextualize that data. So the next thing you’ll need in your process mining and task mining initiative is knowledgeable process resources – people who know what is actually happening in those processes. Oftentimes those stakeholders will be some of the group that were the executors during your recording. Because they’re going to go and say: OK, so when I did this, or when somebody does this, it translates to this particular logical step in what will become a process. You copy-paste from field X on screen Y to field Z on screen A – that’s what it is. OK, so we are doing an insert of customer data. Great. That will help key what you’re actually seeing on the screen to something you can analyze when it comes to the decomposed process.
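
A toy illustration of that keying exercise (window titles, actions, and step names are all invented): the knowledgeable resources effectively maintain a lookup from Location-Action pairs to logical process steps, and anything unmapped gets flagged for expert review.

```python
# Toy sketch of "keying": mapping raw Location-Action captures from the
# task mining robot to logical business steps. All names are invented.
raw_capture = [
    ("Customers.xlsx - Excel", "copy"),
    ("VA01 Create Sales Order - SAP", "paste"),
    ("VA01 Create Sales Order - SAP", "click Save"),
]

# This mapping is exactly the business knowledge the executors contribute.
keying = {
    ("Customers.xlsx - Excel", "copy"):              "Look up customer data",
    ("VA01 Create Sales Order - SAP", "paste"):      "Insert customer data",
    ("VA01 Create Sales Order - SAP", "click Save"): "Submit sales order",
}

business_steps = [keying.get((window, action), "Unmapped - needs expert review")
                  for window, action in raw_capture]
print(business_steps)
```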

Roland: I know that one of the RPA partners we’re working with is introducing a grouping feature in the next release. It will take those screen scrapes – the 5 clicks you did in application A – and group them together into one step, which might be a familiar representation of a business step. That might be the link to process mining, which has a bunch of those business steps.

J-M: That’s more about the keying we talked about as well. Which is: you want to understand where that task fits into the logical process – the conceptual process. Even though you’re diagramming at a higher level – we’ll talk about that when we look at the decomposition of task and process mining. But this is a great place to start. And so once that’s done, how do you realize process and task mining value? The first thing you need to do is build a targeted list of those suspicious process steps we talked about before. Like, what am I trying to do? What am I looking further into? Because those are the things that are most ripe for value. Then I want to gather data for those process steps at the execution level and work together with the executors to contextualize that data, right? Please explain why you took that path.

Roland: And I think there are two main use cases that we see. One is the gaps that might be a result of your process mining. You might have manual steps that do not have a record in any system. So you can capture that in task mining. The second thing is where you look at process mining and find outliers. Steps that run for a long time, and you want to know more details. Is it the process, the system, or the user that makes this hard? So, J-M, what would you gather in terms of data on those steps?

J-M: Yeah, I would look at a few different things. First, I want to understand velocity and throughput. Those are the two things I start my conversation with, because velocity and throughput matter a lot. Velocity is how often this user performs this task – how often does it come up? Throughput is how long each task in a process takes for that user. So those are the first two things I ask, and that’s just quantifying things. A third thing I want to understand is: when this user is presented with an existing task – when this request comes to their queue – what do they do consistently, and where do they vary? And in the cases where they vary and go down different paths, what properties of the task are different that caused them to go down that different variant path? That may help me understand motivation, because motivation is really important, particularly if you’re looking to gather insights from users. For example, they say: oh, when I have a priority request that comes into my queue, I personally take a different path, because I know the systems we’re using won’t address this priority request in a timely fashion, so I’m going to do it manually. That’s great to know – why don’t we look at automating that so we don’t have to do it by hand? All of these things are really good to contextualize what’s going on. And then what do I use them for? Well, once I’ve got the user actions – velocity, throughput, flow, and variation – I can simulate what might happen if I changed properties of the process that I can control, and I can change certain conditions pretty easily as a mental exercise. A perfect example: if I have a certain number of people doing a certain number of tasks, and those tasks are taking a certain amount of time – if I added a person to that queue, another resource in that role, would I be able to handle more requests? So I can vary those resource allocations. Or I can do target-based variants: I want to complete X number of orders per day, and I have this number of people. How fast do I have to do each of these low-level processes in order to achieve that goal? And what type of learning and development, what type of automation, what type of standardization can I do (because I’m finding process variants that actually fit into the time requirements), so that I can consistently achieve this as a business unit owner?
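
As a back-of-the-envelope sketch of that velocity/throughput thinking (all numbers are invented, and real simulation engines do far more than this), you can already answer simple resourcing what-ifs from the captured task data:

```python
# Sketch: per-user velocity (how often) and throughput (how long) from
# task-level records, plus a naive resource what-if. Numbers are invented.
import pandas as pd

tasks = pd.DataFrame({
    "user":    ["alice", "alice", "bob", "bob", "bob"],
    "task":    ["Fix order"] * 5,
    "minutes": [12, 9, 15, 14, 16],
})

stats = tasks.groupby("user")["minutes"].agg(velocity="count", throughput="mean")
print(stats)  # velocity = executions in the capture period, throughput = avg minutes

# Naive what-if: if the team must handle `demand_per_day` tasks/day at the
# observed average throughput, how many people does that imply?
demand_per_day = 200
avg_minutes = tasks["minutes"].mean()
workday_minutes = 7.5 * 60
people_needed = demand_per_day * avg_minutes / workday_minutes
print(f"~{people_needed:.1f} people needed for {demand_per_day} tasks/day")
```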

Roland: That really brings it together with the design platform in your architecture toolset. Depending on which vendor you talk to, they will give you a different story. What I see in some vendors’ approaches is allowing changes right in their process mining tool to simulate a design. That’s what I call limited “what-if analysis”. Alternatively, you can take your discovered model with attributes and push it through a full simulation engine, which some vendors might or might not have. Because there you have a greater variety of variables you can put in – think about schedules, available resources, etc., which are obviously not part of what you capture from a transactional system.

J-M: Yeah, I think it’s the context of a larger process that you’re looking to harmonize, and you miss that when you’re looking at just task mining and just what-if analysis in a single task mining system. You want to bring this to the process model. That’s ultimately where you’re going to do your analysis. That’s the connective tissue that brings it all together, and that’s pretty important. And when you’ve got it all in a process model, you can build a better business case for transformation, and that business case can include the cost of learning and development, the cost of resource allocation, the cost of implementation of new systems – whatever it might be. But now you have data that backs up that business case for transformation. And then you implement that business case, starting small. We always say: start with a small, limited use case that can be isolated, shows the value of the work you’re doing, and proves the concept of what you’ve been working on. And we want to check back: are we actually achieving our goals?

Roland: Yeah, that makes complete sense. Now, our dear listeners, think about the mandates in front of you. Are folks throwing around requests for BPM and RPA implementations? Do they have a clear use case or data? How can you affect the decision making on how to proceed? We’re going to leave you alone for another half minute and J-M will hopefully have some good answers for you.

Musical Interlude: “Airplane Seatbelt”, Jeremy Voltz

Roland: Welcome back. I hope you had the time to think about what implementation requests you see and how to construct a business case for your ideas of task mining, process mining and how to improve processes. But the interesting question that now comes to mind is: how do I connect those two things? J-M?

J-M: That’s a great question, and one we’ve been alluding to this whole episode. So let’s get into specifics! Process and task mining in general – and that’s not always the case, but in general – represent two different levels of decomposition of a process. In a lot of the work that I do, I generally think of process mining as representing the steps on a Level 3 process model: the activities, in sequence, that are required in order to achieve a business goal. I tend to think of task mining steps as the steps on a Level 4 model, which is the task level: what activities are being done in order to achieve the steps in the process required to achieve the business goal? So I look at the process model at Level 3 and the task model at Level 4. Now we need to stitch them together, and to do that I have a five-point plan that’s ready to rock, to make that happen for you in your organization.

Number one is we want to group those user actions and user steps against the desired business outcome. So what distinct things did our users do during the capture period that achieved an outcome? So you want to get those user actions grouped up because that’s going to ultimately become your process step in the Level 3 model. 

Number two is we want to align that step or action grouping against an established process step, which would correspond to the process-mining-captured steps. Or it could be part of what we call an intermediate analysis. I love to use that word, because there’s so much (as you referred to before, Roland) that is not captured in transactional systems. We don’t want to lose it – otherwise it’s out the door. Things between automated steps happen all the time, and they’re not captured. They are in task mining. So scenario one is: you’ve got a mined process step that gives you throughput – the start and end of a process execution – but we know that there are low-level steps required underneath. That’s fantastic: we can just attribute that lower-level task mining process to that higher-level existing captured step. Wonderful. Like, you’ve opened up a screen in SAP and you’ve got a whole bunch of fields to fill out – say VA01 (Create Sales Order) with all those sales order fields. The data that goes into them has to come from somewhere: from an Excel spreadsheet on your desktop, or from Outlook, because you copy in the customer name or customer number. Or you’ve got a database you’re pulling from, accessing it through a proprietary little tool that you built, and you go: OK, cool, I’ve got to pull that information out and copy-paste it into SAP. Fantastic – we know the start, we know the end, we know what you were doing, and now we have the tasks underneath that you had to do to achieve that step. Scenario two is a process step that does not appear in your model, and that’s when we talk about intermediate analysis. We generally think of intermediate analysis as the invisible step between the end time of one process step and the start time of the next process step. You might think of it as the wait time between the two – there are lots of different phrases for it. But if we know that there’s something happening in between these two captured steps, we can insert a dummy step that becomes the representation of those lower-level process steps happening in other systems. And so the intermediate analysis slots into the end-to-end and becomes part of our picture. Does that make sense, Roland?
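
A minimal sketch of that intermediate-analysis idea (step names, timestamps, and the gap threshold are all invented): wherever the gap between one captured step’s end and the next step’s start exceeds a threshold, insert a dummy step that the low-level task mining detail can later be attached to.

```python
# Sketch of "intermediate analysis": insert a dummy step wherever there is
# a gap between two captured steps, as a slot for task-mining detail.
from datetime import datetime, timedelta

steps = [
    ("Create Sales Order", datetime(2021, 3, 1, 9, 0),  datetime(2021, 3, 1, 9, 10)),
    ("Approve Order",      datetime(2021, 3, 1, 13, 0), datetime(2021, 3, 1, 13, 5)),
]

def with_intermediates(steps, threshold=timedelta(minutes=5)):
    out = []
    for (name, start, end), (nxt_name, nxt_start, _) in zip(steps, steps[1:]):
        out.append((name, start, end))
        if nxt_start - end > threshold:  # invisible work happened here
            out.append((f"Intermediate: {name} -> {nxt_name}", end, nxt_start))
    out.append(steps[-1])
    return out

for name, start, end in with_intermediates(steps):
    print(f"{name:45s} {end - start}")
```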

Roland: It does. The interesting question is how it will be done. That will depend on the tools you choose – how much automation of what you just explained is there, or is that a manual step? I think it’s a little early. As a listener, I would expect a lot of manual activity right now to make that connection between process and task mining. But there will be a lot more automation available once the vendors start working together and things just happen “automagically”.

J-M: Be wary right now if a vendor tells you that it’s already completely automated, ’cause it isn’t. Because: A – they can’t possibly know the business context behind what you’re doing. And B – those tools are just starting to get their integrations up and running. It’s early, as you said. So number three is we want to model the task mining steps as subprocesses. We’re going to bring them all into one process model, and then attach that subprocess model through a connection to your higher-level process steps. So once again, the box on the Level 3 process becomes the whole model at Level 4.

Roland: Which brings me to a point. What I’ve seen is that most vendors use BPMN as a file format, because it’s not only a notation (as previously mentioned) but also an interchange format. So if you have still not jumped on the BPMN train, you might want to rethink your decision and use it as your lower-level notation, so that you can take advantage of that file format. Your task mining system shoots over all the information you need for the BPMN model, and ta-da, your data will show up as a BPMN model!
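
To give a flavor of why the interchange format matters, here is a bare-bones sketch of emitting a BPMN 2.0 file from a list of mined steps (element IDs and step names are invented, and a real export would also include the BPMNDI diagram section for layout):

```python
# Sketch: emit a minimal BPMN 2.0 process from a list of mined steps.
# This carries only the semantic flow; real exports add diagram layout.
steps = ["Look up customer data", "Insert customer data", "Submit sales order"]

nodes = ['<startEvent id="start"/>']
flows = []
prev = "start"
for i, name in enumerate(steps):
    nodes.append(f'<task id="t{i}" name="{name}"/>')
    flows.append(f'<sequenceFlow id="f{i}" sourceRef="{prev}" targetRef="t{i}"/>')
    prev = f"t{i}"
nodes.append('<endEvent id="end"/>')
flows.append(f'<sequenceFlow id="fEnd" sourceRef="{prev}" targetRef="end"/>')

bpmn = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL" '
    'targetNamespace="http://example.com/mined">\n'
    '  <process id="mined_process" isExecutable="false">\n'
    + "".join(f"    {n}\n" for n in nodes + flows)
    + "  </process>\n</definitions>\n"
)

with open("mined_process.bpmn", "w") as fh:
    fh.write(bpmn)  # importable by BPMN-compliant modeling tools
```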

J-M: Oh yeah, we talked about this at the beginning of the episode, but boy is it really annoying to have to model all of these low-level processes. That can take you forever and drive you insane. So let’s maybe do it a little bit more automatically. Step number four, and this is really important, is consolidation and roll-up of the insights available at the lower level to the higher level. We want to connect those suspicious processes that we’ve been talking about to the lower-level activities that are happening, and see why we’re actually getting these long-running process steps where there are issues. Let’s do some root cause mining built on the insights from both our transactional systems and our task mining systems underneath. And once again, you don’t necessarily have to have process mining. You can actually start with task mining and then roll it up to a logical or conceptual process model. But then you’ll have to do that modeling yourself, and that takes business expertise.

Roland: I think an interesting thing here is to talk about the attributes you capture. One client I worked with had the challenge that they didn’t record everything in their SAP system. There were some steps, like manual entries, where they wanted to know: “how long does that actually take?” They had a big spreadsheet where they did some pre-data entry, then they uploaded that spreadsheet into their SAP system. When we did the project with them, we saw that the actual upload was a matter of seconds or minutes, but the outstanding question was: “why does it take 7 hours to go to the next step?” So that was an interesting challenge – to uncover whether people were busy with other tasks, or whether they’d just missed the boat – and task mining tools will give you the attributes to calculate that.

J-M: I would say that’s a little “suspicious”! Isn’t that what we’re talking about? Anyway, that brings me to my last point, and that’s the other piece of the puzzle: rolling insights down. Because you’re getting suspicious information and you’ve aggregated the statistics up. Now we want to take the lessons learned from our process analysis and push them back down. Say: here are the targets we want to meet with those suspicious process steps to allow us to achieve our business goals. So now we use that to drive automation opportunities at the RPA level – at the lower, task level. Once again, the five things: we’re grouping our user steps, we’re aligning them with our process steps, we’re modeling things at both the lower and the higher level, we’re consolidating insights upwards, and we’re rolling insights back downwards.

Roland: And that closes the loop on our second segment: how do I get started? The first thing I said was “have an idea of what good looks like”. What does a good process look like? When you do that over and over again, you get a feel for your processes. You get a feel for what you should look for. What’s typical for my organization? What is typical bad behavior for my users? So now, how do I close that loop? How do I grow my maturity in analyzing processes? But, J-M, one thing I know you love is taking action based on the insights you have. I know you love plot charts – tell us about the graphic you’ve made.

J-M: There’s a graphic that we’re going to be putting up for you on whatsyourbaseline.com (little pitch for the website if you haven’t gone there yet). This is going to give you an idea of what action you should probably take based on the information you’re getting – which actions are likely to provide the right kind of value to you. I like to plot the process steps that we find on two different axes, and if you’re following along with the visuals, you’ll love it. On the Y axis I ask: how long is the activity taking me? So, what’s my cycle time for that activity? On the X axis I ask: what is the frequency of this activity? How often am I called on to do this activity? And if I plot my process steps on this chart, I get four quadrants. For the items on the bottom left – low cycle time, low frequency – there’s not usually a lot of value there. I’ve got to be honest with you: those are the ones that people tend to put forward as pet projects. Something that I care about, something that feels important to me, but there’s no data to back it up. Why are you doing this? It’s just something you like. On the top right – high cycle time, high frequency – those processes are the most ripe for transformation. Those are the perfect candidates for large system landscape upgrade projects, for furthering automation, and for building a better end-to-end flow. This is the kind of thing where you’re bringing in business consultants and system integrators to make a big difference for the organization, because the size of the opportunity is huge. The ROI is huge, so you can afford to pay a little more to make it come to life – using systems to automate, making huge changes to the way in which you do things, pivots. Those will be a big benefit to you and your organization. But Roland, there are the ones that are a little sketchier. Tell me what you think about the bottom right – high frequency but low cycle time – and the top left – high cycle time but low frequency. What do I do about those?

Roland: I think those are the candidates for Robotic Process Automation. If you have low cycle time and high frequency, those are no-brainers for automation. When I look at the upper-left quadrant – high cycle time but low frequency – that is an interesting one. It could be perfect for RPA, but it could also reveal a people problem. Maybe your users don’t know what to do, or your vendors are notoriously slow. Or maybe it’s just the way you do your process. So that’s where you need more research – the data doesn’t give you the definitive answer, but instead points you to an area where you want to apply other process analysis methods.
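
If you want to reproduce that 2×2 at home, a small sketch along these lines (the steps, numbers, and median-based thresholds are purely illustrative) classifies each step into the four quadrants just discussed:

```python
# Sketch: classify process steps into the cycle-time x frequency quadrants
# discussed above. Thresholds (medians) and all data are illustrative.
from statistics import median

steps = {                # name: (avg cycle time in minutes, executions/month)
    "Check order status":       (2,   900),
    "Approve special pricing":  (240, 12),
    "Month-end reconciliation": (480, 300),
    "Update contact details":   (3,   8),
}

cycle_med = median(c for c, _ in steps.values())
freq_med  = median(f for _, f in steps.values())

for name, (cycle, freq) in steps.items():
    if cycle >= cycle_med and freq >= freq_med:
        verdict = "transformation candidate (big ROI)"
    elif cycle < cycle_med and freq >= freq_med:
        verdict = "RPA no-brainer"
    elif cycle >= cycle_med and freq < freq_med:
        verdict = "investigate: people/vendor/process issue?"
    else:
        verdict = "low value - beware pet projects"
    print(f"{name:26s} -> {verdict}")
```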

J-M: Remember that you’re trying to make your process run better. You don’t necessarily need to make your process model better. You might need to make your people better and your technology better. There are lots of ways you can improve, particularly when you take a look at those high-cycle-time, low-frequency tasks. Why is it taking so darn long? If you can answer that question – and once again, this becomes “suspicious” – it becomes a great point of analysis, a nexus for transformation of your outcomes rather than necessarily of your process. This is a great way of getting value. Those high-cycle-time, low-frequency processes are the ones that tend to stick out in customers’ minds. Remember that you need more than 10 good interactions to erase one bad interaction. If you leave a customer hanging for a long time while you’re trying to sort out their stuff, they’re not coming back unless you do a lot of work. Let’s prevent you from having to pay a high price for a bad customer interaction.

Roland: You sold me on the 2×2 matrix, J-M.

J-M: So let’s talk about the last piece of the puzzle. I want to go into a real example from one of my customers: a big manufacturer I worked with that was bringing process and task mining together, particularly on internal orders. People wanted to order services from the organization and figure out the materials that were necessary – and for a manufacturer, as you might imagine, that’s really important. They wanted to see the lead times, the throughput times, and what steps were required to make an order come to life. Now, when we walked in there, the first thing they told us was that stuff was taking too long. People were left waiting for their projects to go forward because they didn’t have the things they needed. People were complaining and screaming because they couldn’t get the resources. So we wanted to dive deeper. What was leading orders to rework outcomes? Where was all the time going? And to do that, we needed to see two things. The first was what the automation was actually doing. They had a big ERP system that was running a lot of tasks, and they said: well, we’re pretty much standardized in how we do things, and we’re quite automated. So why is all of this a problem? The second thing we wanted to figure out was what user actions were being done underneath each of these steps to make the automation work. And there are two things we found out – the first came from process mining, and the second came from task mining. The first is that when the process mining results came out, we saw a ton of those supposedly automated tasks where A) there were crazy variations and B) they were leading to unexpected outcomes, so tons of rework had to happen. The same transaction would happen three, four, or five times in a row. We wondered: why in the world is this happening? What’s going on? And that gave us our list of suspicious processes – the ones where something was definitely going on. What we found was a huge bottleneck of manual work that was required for their automation to work. Surprise, surprise: they weren’t actually really automated. Everything was a “touched” order. Percentage-wise, the share of orders that were actually touchless was astoundingly low, because they had some wonderful hero of a woman who was literally going in and fixing problems in every order manually to make them work. And I met her. She was a gem of a human being, but boy, what a Herculean effort – for such a huge organization, they were having to fix all of these processes by hand. It was a combination of manual-level work that had to be done to make those automated steps work, and also routing: those tasks were put in her queue over and over and over again as quote-unquote automated processes failed.

So how do we fix this? How do we propose solutions? We first looked at three things – this bottleneck was coming in three ways. Number one is users. Users were causing problems: they were failing to enter information, or selecting wrong categories, or the fields included special characters that weren’t properly being handled. So they were submitting bad orders that required correction, and once again, that went to a human being to fix. Second, we found vendors that were having problems. You talked about that, Roland – we wanted to look at whether there were issues caused by external sources. Well, boy, did we find that. When we looked at the orders that were taking a long time or required rework, it was because vendors were submitting bad information, or the vendor wasn’t responding in a timely fashion, leading to longer throughput times because orders got shuffled to secondary queues where they just weren’t addressed for a while. So now we can go back to those vendors and say: hey, listen, we have an SLA with you, don’t we? Maybe we should look at putting penalties in there. And the third is compliance to steps. We had a ton of orders that didn’t have purchase requisitions. How crazy is that, Roland? But I’m sure you’ve seen the same thing, where people just go do maverick buying – and boy, when that comes up, you know that process is going to get delayed, because someone is going to have to address it: “I don’t see a PR for this. What do I do about it?” So let’s put our process expert hats on. What changes can we make based on the information we found?

Roland: I would look at automation with regard to order checks – get criteria behind an agreed-upon set of checks for orders, and look at the evidence for that automation: does it work or not? The next thing I would look at is touchless vs. touched orders, and group them into buckets. You will be better able to address spikes in demand – think of Monday afternoon orders killing productivity. That’s the practice part of it.

J-M: Yeah, everyone gets in on Monday morning and realizes: oh my goodness, I don’t have any of the things I need. And by Monday afternoon this one person’s queue or this small number of people’s queues are completely overflowing. We can probably do better staff balancing. Oh yeah. 

Roland: The last thing I would look at is what I would call transformation. Look at whether you can redesign your process to learn from the user actions you see. Align your daily practice to the high-performing variants you’ve identified, while harmonizing best practices.

J-M: Yeah, I get that. So we can take those insights forward and even push them out to our partners and vendors to really make a big difference. And Roland, you’ve been asking these questions to our lovely audience – now I’m going to ask them. Hey folks, we’ve talked about process and task mining, and this is just dipping our toes into the water. But let’s theorycraft: if you were to have your druthers, as I like to say, what hybrid manual and automated tasks are you ready to dive into? What do you think is important to you? What are you worried about finding? And what are you hoping to find in these tasks? We will leave you for just a moment, and then we’ll come back with our conclusions and thoughts for the episode and lead you to the next one.

Musical Interlude: “Be Loved In Return”, Jeremy Voltz

J-M: Alright folks, welcome back, and hopefully you have a clear path forward to getting your first taste of the value from process and task mining put together. Thank you so much for listening. Roland, did you enjoy this one? I had a good time!

Roland: I did, and it was very entertaining! As a little recap of what we spoke about: we talked about your journey to actionable insights – how do process and task mining correlate? Then we talked about first steps: what data do you have to capture, and what value does that information have? And last but not least, we spoke about stitching process and task mining together – what are the capabilities and insights that you get that will drive transformation in your organization? So as always, thanks for listening. Please reach out to us at hello@whatsyourbaseline.com or by clicking on the link in the show notes that brings you to the website, where you can leave us a voicemail.

J-M: Yeah, I think we’ve already gotten a couple of nice little pieces of feedback from viewers, and we’re looking forward to getting more from folks listening today!

Roland: Yes, and don’t forget to leave us a rating and a review in your podcatcher of choice. And as always, you can find the show notes of this episode at whatsyourbaseline.com/episode9.

J-M: Well folks, as always, I’ve been J-M Erlendson

Roland: And I’m Roland Woldt

J-M: And we’ll see you in the next one.