At the recent Shibumi Virtual Summit, Chad Aronson and Saeed Contractor presented UBER’s journey with Intelligent Automation, including RPA, OCR, Chatbots, and more. What follows below is an on-demand recording of the presentation as well as a transcript of the video that highlights the partnership between Uber and Shibumi as they seek to scale the impact RPA has on their organization. You can read more here about how Shibumi can help your organization to scale your results and accelerate ROI.
Chad: Our mission ― we are a centralized team that started in finance, but our goal is to build an Uber-wide capability to bring value and efficiencies to our business partners through mostly RPA and moving up the intelligent automation curve with OCR, machine learning, and NLP. We did dabble a bit with chatbots earlier last year and plan to pick that back up later this year. Saeed will walk us through our core capabilities and why our team is so critical to the success of Uber.
Saeed: What is robotic process automation? RPA is a tool that lets us emulate human actions in the context of work being done on a desktop. Agents are often working with multiple applications like spreadsheets, browsers, ticketing systems, and email. You often get data being transferred from one application to another. Often, these actions follow strict business rules; for example, if an invoice amount is very high, send an email or an alert. They're also usually very tedious, mundane, and repetitive operations.
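The kind of strict business rule described here can be sketched in a few lines. The threshold, field names, and action labels below are purely illustrative, not Uber's actual logic:

```python
# Hypothetical sketch of the kind of strict business rule RPA executes:
# if an invoice amount exceeds a threshold, raise an alert instead of
# auto-processing. Threshold and field names are illustrative only.

ALERT_THRESHOLD = 10_000  # assumed cutoff for a "very high" invoice

def check_invoice(invoice: dict) -> str:
    """Return the action a bot would take for one invoice record."""
    if invoice["amount"] > ALERT_THRESHOLD:
        return "send_alert_email"
    return "auto_process"

print(check_invoice({"id": "INV-001", "amount": 25_000}))  # send_alert_email
print(check_invoice({"id": "INV-002", "amount": 1_200}))   # auto_process
```

A bot following rules like this does nothing a human couldn't, which is exactly the point: it executes the same tedious decision thousands of times without drift.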
RPA can definitely help automate such tasks. I'll go counterclockwise around the slide. Moving on, we added a new dimension to our automations in the form of artificial intelligence and machine learning. Basically, that lets us make more judgmental kinds of decisions on top of the strict business rules that RPA allows us to execute. What we usually need is to train a machine, or a neural network, ahead of time with samples. Then, on an ongoing basis, we do continuous improvement, where we keep training the machine to get better and better.
Basically, for RPA, the examples of machine learning and artificial intelligence come in the form of NLP (natural language processing, or natural language understanding), which lets us, for instance, auto-respond to emails or even use chatbots, which I'll speak about in a minute, and named entity recognition. Then I'll jump to OCR, optical character recognition, in that context. Optical character recognition has been around for a really long time. The idea is typically to extract all the text that's in a document.
Usually, the output is one big continuous string that represents the text that was on the image. However, if you want to extract values corresponding to keywords, you need another layer of machine learning or artificial intelligence called named entity recognition. That's how we've recently gone into OCR. We've been able to extract data from images that are being scanned and sent to us. That's a very useful enhancement for us.
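As a rough illustration of that second layer, here is a minimal keyword-based extraction over an OCR output string. Real named entity recognition uses a trained model; the patterns, field names, and sample text below are assumptions for the sketch:

```python
import re

# OCR yields one continuous string; a recognition layer pulls out values
# for known keywords. Production systems use trained NER models, so this
# regex version is only a stand-in to show the shape of the problem.

ocr_text = "Invoice No: 4417 Date: 2020-03-02 Total Due: $1,250.00 Vendor: Acme"

patterns = {
    "invoice_number": r"Invoice No:\s*(\d+)",
    "date": r"Date:\s*(\d{4}-\d{2}-\d{2})",
    "total": r"Total Due:\s*\$([\d,]+\.\d{2})",
}

def extract_entities(text: str) -> dict:
    """Map each keyword to the value found after it in the OCR string."""
    found = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, text)
        if match:
            found[name] = match.group(1)
    return found

print(extract_entities(ocr_text))
# {'invoice_number': '4417', 'date': '2020-03-02', 'total': '1,250.00'}
```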
Then, of course, chatbots. We've dabbled in them. We're definitely looking for some really interesting use cases where we could implement chatbots, which are really an extension of RPA. If you think about it, a chatbot is where a human is speaking to a bot instead of an agent, and the bot is engaging in the conversation. From the bot's perspective, you can split it into two separate areas: one is the NLP, the natural language processing or understanding, and the other is the RPA. Once the intent is understood, the RPA can go ahead and take the action, whether it's checking a balance, updating an account, or doing something else. These are the enhancements that we want to add to our capabilities.
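That two-part split (NLP for understanding the intent, RPA for executing the action) can be sketched as below. The keyword matcher stands in for a real NLU model, and the intents and action strings are hypothetical:

```python
# Sketch of the chatbot split described above: an NLP layer classifies the
# user's intent, and an RPA layer dispatches the matching action.
# The keyword table is a naive stand-in for a trained NLU model.

INTENT_KEYWORDS = {
    "check_balance": ["balance", "how much"],
    "update_account": ["update", "change my"],
}

def classify_intent(utterance: str) -> str:
    """Naive intent classifier: match the utterance against keywords."""
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return intent
    return "unknown"

def dispatch(intent: str) -> str:
    """The 'RPA' half: execute the action for the understood intent."""
    actions = {
        "check_balance": "bot fetches balance from the ledger system",
        "update_account": "bot updates the record in the account system",
    }
    return actions.get(intent, "escalate to a human agent")

print(dispatch(classify_intent("What is my balance?")))
# bot fetches balance from the ledger system
```

Keeping the two halves separate means the same RPA actions can be reused whether the trigger is a chat message, an email, or a scheduled job.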
Moving on to the next slide… Beyond the increase in quality, some of the drivers for intelligent automation include 24/7 availability: bots don't need sleep or need to take breaks and eat, and stuff like that, right? It's easy to scale up and down. For instance, we had a Super Bowl ad and added the full number of bots to process the volume that was generated instantaneously; or we could scale down during slow periods. There's improved compliance and auditability because we have logs, traces, reports, and dashboards. Again, no human error, and increased productivity. That also increases staff satisfaction because you're taking away some of the mundane jobs and letting the staff focus on higher-level processes. Then I'll hand it back to you, Chad.
Chad: Thank you. This slide represents our journey to date. First, I'm going to set the stage and mention that Saeed and I report up through our CFO, which puts us in Finance. There are arguments that we should sit in IT, but here at Uber, that's not the case, which makes our journey a little bit unique. If you look at the bottom, we started our journey by building a POC, leveraging Deloitte as our delivery partner. It was the PRC check (Purchase Requisition Check) automation, which, at the highest level, approves or denies a requisition based on a number of validations that the bot performs.
When I joined in September of 2018, they were on the third version of this automation. During that time, we continued to add more and more checks into this automation; now, we have 14 validations in production. In parallel, we delivered 11 more in 2018, which brought us to a total of 12, where we were solely focused within finance. Around the same time we were deploying these bots, we formed the COE. I was actually the first hire, which was interesting. Building our team from the ground up was definitely a fun experience. Our COE was composed of four FTEs and two vendor partners at the time: Deloitte for intake and support, and Accenture for development and testing. I'll provide more information on the teams and how we're organized later in this discussion.
If you look towards the right, in the middle, we went live with Teamdot. Uber is a large and disconnected organization, and one of our major milestones was to market ourselves to the broader organization leveraging this tool. Again, we started in finance and wanted to broaden out. That was a useful way for us to do a roadshow, helping any organization within Uber understand what our team is and how we can better help them.
If we look up from there, you'll see that we deployed our support team. When we started this in 2018, we had 24×7 support; but due to our recent restructuring around COVID and the business, the business-critical automations were switched to 18×5. Now, the good thing is that we have a flexible model where we can shift up or down and change our coverage based on business demands.
You'll also see, to the right of that, "Refreshing Pipeline." This represents the fact that we are always updating our pipeline as Uber constantly changes and grows. The automations we look at doing six months from now may be very different. This is an important factor in how we prioritize and in our process in general. We try to ensure that the processes we automate won't be changing for at least a year or more.
We learned a lot over the past couple of years about how to manage, track, and deploy our bots. As you can see, we have 34 bots in production right now. We have 10 more that we aim to have live in the next month or two. Hopefully sooner, but there are some challenges around access issues that we're facing. Again, we should have about 44 automations in the next month or two.
The final point I'll make on this page: if you look at the top right, we are targeting 100-plus in 2022. We've changed our thinking a little bit. In the past, when we first started this team, we were targeting a thousand automations. But as we looked at what makes sense from a KPI perspective, we care more about the value that we're bringing to the business than the number of bots in production. I'd rather see a hundred bots that save us 12-15 million than two thousand bots that save us a couple of million. That's the change in direction we've had over the past year.
Now, we'll talk about Uber and Shibumi. For the first year and a half, we were using various tools to manage and track our opportunities. We tried Smartsheet, several variants of Smartsheets, and elaborate Google Sheets with heavily formula-driven logic. We used multiple versions of those too, and I found them messy and hard to maintain as we grew. Ultimately, we're an automations team, and we felt it was best to find a tool that would help us manage our pipeline. This is where Shibumi and Uber started their journey together.
When we looked at what we wanted to do with our pipeline, we needed three main pillars, which Shibumi does really well. The first is a powerful executive dashboard where we can showcase our program to upper management at a snapshot, with insights and key metrics such as transaction volumes, estimated savings, actual savings, headcount data (which a lot of the business needs in order to approve our projects), and much, much more.
The real power behind this is the ability to filter this information by lines of business. As I said earlier, we started in finance, but over the past year, we've expanded out to six different business units. The second pillar is our actual pipeline. Before Shibumi, we used various Google Docs and Google Forms to kick off the intake process. Again, this was messy and hard to scale. Shibumi came to our rescue with a robust tracking mechanism that easily helps us manage our pipeline: no more searching for files, no more managing versions of master trackers, and easily digestible information at hand.
With Shibumi, we are now able to quickly pivot and, with the click of a button, have our pipeline sliced and diced by various statuses, such as In Progress, On Hold, Assessment, and even Cancelled. We've had around 11 cancellations of our automations. As I mentioned earlier, we are aiming for the hundred-plus, but as we all know, there may be a need to retire an automation if it's no longer needed. It could be a stopgap.
The last and most important pillar is tracking metrics. Shibumi has the ability to integrate with Kibana and other source systems, which helps us showcase our daily transactions and the value against them. Prior to Shibumi, this was a manual nightmare where we had to produce that information by hand. It's literally a click of a button with Shibumi. Again, we can slice and dice this information and make real-time, data-driven decisions.
The next few slides break down these pillars in a bit more detail. The single view of our executive dashboard is probably the most powerful page within Shibumi. It breaks down our pipeline by showing the number of automations in each status. As you can see, we have 11 in progress, 23 live, and so on. If you look at the bottom half, we're able to see our savings, which break down our value. We look at our savings in three buckets: hours saved, FTE-equivalent savings, and hard dollars saved for the business.
Looking at the next slide, you can see that we can tailor our executive summary, which is another powerful avenue that Shibumi offers. We can show the pipeline by status. Here, we're only showing the number of automations in progress with the associated estimated annual savings and associated costs. We can also show the automations by all statuses with all their associated annual savings. This has really helped us make real-time decisions based on the current automations in our pipeline, with the ease of making a data-driven decision.
The second pillar is managing our entire pipeline. This is where we were really struggling. We're now able to systematically capture the key fields needed to properly prioritize our automations. It may be a bit difficult to see from the snapshot, but for every opportunity, we capture the business unit, so that we can roll up the costs and savings, and the process owner's name, which is important to keep track of as we grow our pipeline. There's a lot to manage and a lot of different business users. We have a high-level description of the process, its status, and a bunch of key metrics fields that make up the value and complexity for each of the automations.
Therein lies the power within Shibumi. It's critical for us as an intake team. We can easily select our next set of automations by looking at the cost to build and then looking at it in terms of the value saved. As I said earlier, with Shibumi, it's as simple as clicking a button. One of the most robust features within Shibumi is the visualizations it enables for us. As you can see, we're looking at our current automations in the top half, broken down by complexity on the X-axis and savings on the Y-axis. Clearly, we want to see higher value with lower complexity, but this is the snapshot of where automations lie on our spectrum. This is useful in many ways, such as planning and being able to switch gears when we need to, based on high-value use cases coming into our pipeline. Again, as I mentioned earlier in this presentation, we're constantly evolving and refreshing our pipeline.
The last item around the pipeline is the ability for us to slice and dice by cost, by lines of business, or by function. Right now, we're self-funded, but I do see us moving to a line-of-business-funded model, where this information will be very helpful for those types of conversations. Saeed will walk us through the third pillar, which is tracking metrics.
Saeed: Thanks, Chad. For tracking metrics, from the very high-level strategic tracking that Chad spoke about in the previous slide, we can drill down into more details; for instance, volumes per week. The top and bottom charts on this slide display the same data in two different visualizations. The first helps you visualize the trends on a week-by-week basis for each process, whether the volume is going up or down from the previous week. The second lets you visualize the relative size of the volumes for each process and the overall picture, really.
Note that this data comes directly from an integration with our RPA platform, UiPath. It is pretty much near real-time because we send this data very frequently. Moving on to the next slide… We also get a whole host of very useful reports out of Shibumi. For instance, the first half of this slide shows a report that we created for a particular process owner for whom we're developing automations for three separate processes.
Each row shows us the status of the process, where we are in the life cycle, and other very useful information for our automation portfolio. The second chart is more of a Gantt chart, indicating activities and timelines in the life cycle of that process. There's a bunch of very useful tools, metrics, charts, and reports that we can get out of Shibumi to help at every level of our company. Go ahead, Chad.
Chad: Sure. Thanks, Saeed. This slide represents some of the key accomplishments that we've had this year alone. Last year, as I mentioned, was about building the foundation. This year, we've been focused on value to the business and enriching our pipeline, which is where Shibumi has helped us out so much.
All of the numbers you see on this slide are driven from Shibumi. Some of the key highlights I want to call out: at this point, we've saved 28,000 man-hours, and we've seen that number grow by 4,000 every month to date. As we continue to deploy more and more automations, we expect that number to grow exponentially. With our current production bots, we're looking at an annualized savings of 3 million this year alone. To give you a reference, last year's annualized savings was around $400,000, so you can see the scaling we've done over the past year is pretty impressive.
Our projections are constantly changing as we refresh our pipeline and deliver on our prioritized automations. At this point, we see 10 to 12 million, but that could change up or down depending on what we select as our automations. Another big achievement that Saeed was talking about earlier was our two OCR automations, our BOL and Compliance automations, which we'll use as core capabilities going forward.
The last thing I’ll call out, and why I’m here today, is the addition of Shibumi. Again, that was a huge milestone for our team early this year. I just want to say, again, thank you to Sean and the team. Saeed will be walking us through a few more details on the next slide.
Saeed: Thanks, Chad. I'll quickly cover some of the significant highlights of the past quarter for our COE. We deployed another six automations and 13 enhancements. As Chad spoke about, we built out OCR data extraction as a core skill for the COE. We also deployed Shibumi, which is why we're here; it's been a really helpful tool for prioritizing and tracking the life cycle of the entire pipeline, starting from the intake process all the way to the ROI. It's just been a very helpful tool for us as we drive our processes for the company.
Technical debt… I'll go into that a little later; but basically, we have to put a lot of thought into making sure that our bots are only making things more compliant and secure, and not the other way around. This includes not only handling the sensitive data that the bot comes across, but also securing credentials and controlling access. I will speak to that a little later.
We also had to move away from a 24×7 support model due to the financial situation post-COVID; we implemented a pager system for emergencies after hours. Then we went through a significant upgrade of our RPA platform, UiPath. Again, we'll talk a little more about the technical debt, but these have been our accomplishments for the last quarter. Back to you, Chad.
Chad: All right, great. We're going to talk a little bit about our team. If you remember, I mentioned earlier that when we started the journey, we were a team of four; but unfortunately, due to COVID and business-critical issues, we're now a team of two. As you can see, we have three pillars. We have our intake team, mainly business process analysts, who work with the business on idea generation and process assessments. They're the drivers of Shibumi; they're in there on a daily basis working with our clients on these ideas.
After a process has been selected, they work closely with the business throughout the requirements and design phase. When they are nearing completion, they hand it off to our second pillar, which Saeed manages: our development team. Once development and testing are complete, it's handed over to the third, our operations team, who supports it after we go live.
If you look at the bottom, we're mostly using Accenture for our partners, noted in white. We have a small tiger team from UiPath, marked in orange. They're almost done with their engagement, but they're delivering five automations for us. They've been helping with code standards and just helping us out during these hard times, so I'll give them a call-out as well: thank you for that.
Moving on to our governance, this slide is busy, but what it really represents is a third-party review of our governance, procedures, and controls for our program. As with any new program or center of excellence at Uber, the audit and NFR teams need to do a complete audit of our maturity, our risks, and the areas where we need to reduce risk and improve our processes. They looked at our organizational structure (the slide you saw earlier), the roles and responsibilities for managing all of the automations, and our entire process.
They looked at how we approach identifying and assessing opportunities, our development procedures, and SOX compliance. As we do touch financial systems and our bots do use PII, they needed to assess those very closely. They also assessed the value we bring to Uber and how we maintain that value over time, which is why the slide I showed you with the numbers is critical.
If we go to the next slide, we can see that all the hard work has paid off. If you look, in the majority of the areas, we are fully functional and optimized. We have some action items to get our COE fully optimized, and we are taking steps to get there.
Moving on… Our project life cycle. We've gone through many iterations of this model over the past couple of years. This is your typical application development life cycle, but what we're trying to represent here is the typical automation timeline by phase. We have small, medium, and large automations. As you can see, we're looking at roughly seven weeks for a small and 14 weeks for a large.
We've had a couple of extra-large ones; BOL is a good example of that, and it takes a bit more time. This is helpful to show our business partners our delivery model and expected timelines based on their automation pipeline. As you can see, RPA is a little bit different from a standard technology project, so it has been helpful for us to overlay this with the business's expectations.
Finally, we'll go into our key lessons learned. The first two are what we've found most critical over the past year or so. The most important success factor is having top-down leadership engagement. We've been doing a lot of outreach, roadshows, and summits; but what is critical to long-term success is top leaders driving automation to their teams. I'll give you an example from another company I know very well.
Before headcount is released, teams need to prove that they looked at their processes through an RPA lens and that the work can't leverage RPA; only then will they get headcount. This is a key factor when kicking off and starting these centers of excellence. It's pivotal that you have a robust steering committee of executive sponsors so everyone is working toward common goals. Many leaders have their own agendas; creating a cross-organizational steering committee is a way to achieve complete alignment.
The second lesson is leveraging process mining to understand where there is a need for automation. We did do a pilot in our PDP space earlier this year, but the project was put on hold due to COVID. The value here is also that we can understand process efficiencies and variants. We found it challenging at times to deliver bots where the tasks weren't performed the same way every time. With process mining, we can easily discover this and deliver an end-to-end automation. This will help normalize processes; with Uber being so global, it's an effective way to streamline them. Saeed will walk us through the final two.
Saeed: Yes. Continuing on to technical debt, we continue to have extensive discussions about our bot IDs. This is a relatively new concept for most companies, because bot IDs sit somewhere between a service account and a human login ID. The more the bots need the kinds of access that humans need, the more our bot IDs have to behave like human logins. There are always discussions we've had to have in terms of what we're doing, why we're doing it, what tools we have, and also how we handle the passwords and entitlements.
We use a secrets vault and have started moving all of the passwords over there; that's where we continue to improve on our technical debt. We also harden these passwords and the access credentials that are provided by the RPA tool. In fact, we make sure that the bot accesses the credentials directly from the vault. We're also implementing a 90-day password rotation policy for our bot IDs.
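The runtime pattern described here, fetching the credential from the vault at execution time and enforcing 90-day rotation, might look like the sketch below. `fetch_secret` and the in-memory vault are hypothetical stand-ins, not UiPath's or Uber's actual API:

```python
from datetime import datetime, timedelta

# Sketch: the bot pulls its credential from a secrets vault at execution
# time (never hard-coded in the workflow), and a rotation check flags
# passwords older than the 90-day policy. All names here are illustrative.

ROTATION_PERIOD = timedelta(days=90)

def fetch_secret(vault: dict, bot_id: str) -> dict:
    """Hypothetical vault lookup; a real client would authenticate first."""
    return vault[bot_id]

def needs_rotation(secret: dict, now: datetime) -> bool:
    """True if the password is older than the 90-day rotation policy."""
    return now - secret["rotated_at"] > ROTATION_PERIOD

vault = {"bot-finance-01": {"password": "***", "rotated_at": datetime(2020, 1, 1)}}
secret = fetch_secret(vault, "bot-finance-01")
print(needs_rotation(secret, datetime(2020, 5, 1)))  # True: older than 90 days
```

The design point is that the workflow never sees a stored password at rest; it only holds the credential in memory for the duration of the login step.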
As far as data goes, each company has its own classification of what it considers sensitive, confidential, or PII data, and its own policies on how to handle such data. For us, data in transit usually requires layers of encryption, and data at rest, even in the cloud, usually requires retention periods and similar controls. We're implementing all such provisions before taking on the corresponding automations that handle such data.
Moving on to point four, we have extensive best practice documentation, and we keep updating it. For instance, our machine learning continuous improvement capability needed extensive best practice thought on how we would make sure that we not only train our models initially, but also monitor and train them on an ongoing basis as agents validate whether the machine learning output is right or wrong.
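The feedback loop described here, agents validating predictions and the model retraining on the accumulated samples, can be sketched as follows. The batch size and the `retrain` placeholder are assumptions for illustration, not the team's actual pipeline:

```python
# Sketch of a continuous-improvement loop: agents mark each model
# prediction right or wrong, and once enough validated samples accumulate,
# a retrain is triggered on them. The retrain step is a placeholder.

RETRAIN_BATCH_SIZE = 3  # illustrative threshold

feedback_buffer = []

def retrain(samples) -> None:
    """Placeholder: a real pipeline would fine-tune the model here."""
    print(f"retraining on {len(samples)} validated samples")

def record_feedback(sample: dict, prediction: str, agent_says_correct: bool) -> bool:
    """Store one agent validation; return True when a retrain was triggered."""
    # If the agent rejected the prediction, keep the corrected label instead.
    label = prediction if agent_says_correct else sample["true_label"]
    feedback_buffer.append((sample["text"], label))
    if len(feedback_buffer) >= RETRAIN_BATCH_SIZE:
        retrain(feedback_buffer)
        feedback_buffer.clear()
        return True
    return False

record_feedback({"text": "inv A", "true_label": "invoice"}, "invoice", True)
record_feedback({"text": "rcpt B", "true_label": "receipt"}, "invoice", False)
record_feedback({"text": "inv C", "true_label": "invoice"}, "invoice", True)  # triggers retrain
```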
We also have an extensive CI/CD process, which is very Uber-specific based on the domains, access, and Windows-versus-Linux machines that we have available for source control. That's pretty much it. Back to you, Chad, for the wrap-up.
Chad: Yes. I think that's all we have for this session. I just want to say thank you again to Shibumi and to Sean for letting us share our journey and our connection with Shibumi.