Description
DevOps makes your Salesforce releases faster and more stable. With Gearset, you can now benchmark your release pipeline against Google’s DORA metrics, using your preferred dashboard.
Join DevOps Advocate Jack McCurdy and Software Engineer Julian Wreford for a conversation where you’ll:
- Learn about DORA metrics
- Learn what challenges our Reporting API solves
- See a live demo using a Tableau dashboard
- Understand the value that Gearset brings to your business
Learn more:
Transcript
Cool. Alrighty then.
We will get started. Thank you again, and thank you very much for joining us on this webinar this evening about Gearset's reporting API, and a little bit of a dive into how you can measure your Salesforce performance. I am Jack McCurdy.
I am one of the Salesforce DevOps advocates here at Gearset. My job is spreading the good word of DevOps throughout the community, sharing best practices and tips, and hopefully sharing insights that will help you when it comes to accelerating and improving your Salesforce delivery life cycles, no matter the shape or size of the organization that you work for. And today, I'm joined by one of my colleagues, Julian Wreford, who is one of the software engineers here. He has worked very closely on the Gearset product for the last year or so, maybe even a bit longer than that, and is here to show us a little bit more about what the reporting API can do for you.
So, over the course of the next twenty-five minutes or half an hour, we're gonna go over a little bit of a brief history and dig into what the reporting API solves, why we built it, and how we created it. We're gonna cover off the DORA metrics that the reporting API was built for, and then we're gonna jump into actually showing you how you can measure your DevOps performance using Gearset, with a product demo and some time for Q&A there as well. So that's the shape of the next twenty-five minutes or so. If you do have any questions as we're running through, you've got the chat box and the Q&A box there as well. Feel free to drop your messages and any questions that you might have in there.
So let's go into why we built the reporting API and what challenges it's solving. From a history perspective, every software delivery team, and every Salesforce team, should be looking to understand how their performance can be improved. That's always been a challenge on the Salesforce platform, especially when it comes to DevOps and how you can accurately measure how well a team is performing, other than through, you know, the potential late nights, the multiple Slack messages, or word-of-mouth about how things are going, from a purely qualitative perspective.
It's often a challenge to put quantitative metrics and measurements on all of these things to help report on their effectiveness. And that's especially important when it comes down to proving the ROI of your DevOps process, and proving to senior stakeholders what the value of a good DevOps process is and why they might want to invest in it, whether that be through training, tooling, all of those kinds of things.
In DevOps it can be really hard to quantitatively say: this is where we are at, this is where we are looking to go, and this is how we can look to change that over time and detect those areas for improvement.
And what we also find is that these teams, especially if Salesforce sits inside a wider technology group, need metrics that are industry standard so they can draw parallels with other areas of software delivery in the business. So that's what we set out to solve when we asked: how can we give our users what they want and what they need, so that they can improve their internal processes?
And, Julian, what did we do when it came to, actually building the thing?
Yeah. So, we love listening to you folks. We love hearing the product feedback that we get, and we love solving the problems that you're having.
And as part of that, we went on a Gearset hackathon. So nine of us went off to a nice bit of the English countryside, a nice little manor house. And the aim of the week was to sketch out an idea of what reporting should look like for Salesforce teams. We focused on the DORA metrics as a great way to measure across industries, and we spent some time digging into the problems and what we could do specifically for Salesforce teams and teams that use Gearset. So after that, we had a spike, had something that was working, which meant that we could start getting this information out and about.
And then after we'd built that out and understood the value it had for you, we ironed out some of the kinks and turned it into a fully fledged part of our product, meaning you can access it today under the Enterprise licenses.
Thanks. So, the DORA metrics. Let's cover these off so that we have a bit of a reminder about what they are and what it is that we're looking to measure. We have deployment frequency, lead time for changes, mean time to recover, and change failure rate as those four metrics.
And we're using these ones again because they're industry standard. They are excellent measurements for tracking your team's progress, both in its current state and over periods of time. And, again, coming back to understanding the business value that DevOps can provide, all of these metrics are critical to doing that. And what these metrics will also allow you to do is analyze and reflect on your process as well.
So when we jump into the demo a little bit later, you'll see how what you can get through the reporting API is gonna be really helpful when it comes to identifying areas for process improvement and where you can really start to make a difference in streamlining those things. So if we start there with deployment frequency: we're measuring how often your team releases to production. The aim of most teams is to release in smaller packages if they can, and that's gonna provide tighter feedback loops and allow user feedback to get back to the dev team quicker. Frequent releases are often an indicator of a high-performing DevOps team as well.
Being able to ship in small slices has a whole bunch of benefits, notwithstanding the fact that large packages being deployed to Salesforce can often be quite clumsy and require quite a lot of manual effort pre- and post-deployment anyway. But it's really about being able to deliver as many packages as we can in a continual fashion, for those tight feedback loops, and to make sure that what we're developing is really what people are wanting. And to measure that with the reporting API, you set up start and end dates to retrieve a list of your deployments to your orgs, target and source, with an ID.
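The deployment-frequency calculation described here can be sketched in a few lines. This is an illustrative sketch, not Gearset's implementation: the record fields (`status`, `completedAt`) are assumptions about what a deployments endpoint might return, not the exact schema.

```python
from collections import Counter
from datetime import datetime

def deployment_frequency(deployments):
    """Count successful deployments per day.

    The record shape here is illustrative, not the exact
    reporting API schema.
    """
    per_day = Counter()
    for d in deployments:
        if d["status"] == "Succeeded":
            per_day[datetime.fromisoformat(d["completedAt"]).date()] += 1
    return per_day

# Hypothetical records for a couple of days of releases.
deployments = [
    {"status": "Succeeded", "completedAt": "2022-08-23T09:15:00"},
    {"status": "Succeeded", "completedAt": "2022-08-23T16:40:00"},
    {"status": "Failed",    "completedAt": "2022-08-24T11:05:00"},
]
print(deployment_frequency(deployments))
```

Plotted per day, a count like this is exactly the "releases to production over time" view that a dashboard would show.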
Then we have lead time for changes. We're measuring the amount of time it takes a change to get into production, which is a measurement of the velocity of our delivery. And it kinda goes without saying that mature teams are more efficient in this area. So in Gearset, specifically, what we're talking about here is the time from the moment a piece of work is completed and a first PR is opened against a branch, to when that feature is actually shipped to production.
In some places, lead time will mean something different, and that will actually be from when the development started to when it gets into production. But in Gearset terms, we're talking specifically about when that first PR is opened as part of your pipeline. So how quickly those changes can be propagated through your environments is really important. And by analyzing those, we'll have the opportunity to look at specific areas of the process and analyze whether there are particular areas where things get stuck, in UAT, staging, QA, wherever it might be.
Those lead time figures are really useful indicators of that velocity and of identifying where bottlenecks might be. So, again, in Gearset, and we're gonna see this a little bit later, you set up a start and end date to retrieve the list of those PRs merged into those environments.
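As a sketch of the lead-time idea (first PR opened through to deployed), assuming illustrative field names `createdAt` and `deploymentCompletedAt` on each PR record:

```python
from datetime import datetime
from statistics import mean

def lead_times_hours(pull_requests):
    """Hours from each PR being opened to its deployment completing.

    Field names are assumptions about the payload shape, for
    illustration only.
    """
    def hours(pr):
        opened = datetime.fromisoformat(pr["createdAt"])
        deployed = datetime.fromisoformat(pr["deploymentCompletedAt"])
        return (deployed - opened).total_seconds() / 3600
    return [hours(pr) for pr in pull_requests]

prs = [
    {"createdAt": "2022-08-22T09:00:00",
     "deploymentCompletedAt": "2022-08-22T11:30:00"},
    {"createdAt": "2022-08-23T10:00:00",
     "deploymentCompletedAt": "2022-08-23T14:00:00"},
]
print(mean(lead_times_hours(prs)))  # average lead time in hours
```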
Your mean time to recover is measuring the average amount of time it takes for a team to get back on their feet after a failure. So after a failure is detected, via a rollback operation, for example, or something identified from production monitoring, that mean time to recover is really critical to getting everybody back to working in as efficient a manner as possible. Recovering should be the highest-priority thing for any team, so the quicker that you can actually do that and get back to more meaningful work, the better.
So it's another really important metric for identifying how resilient you are as a team when things go wrong. Because, ultimately, things do go wrong. That's a fact of life and a fact of software delivery. But the faster you can recover from it, the better.
Again, we're setting up a start and end date, and we'll retrieve how long each production issue takes to be resolved in Gearset there as well.
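A minimal sketch of the mean-time-to-recover calculation, assuming each production issue record carries hypothetical `detectedAt` and `resolvedAt` timestamps:

```python
from datetime import datetime

def mean_time_to_recover_hours(issues):
    """Average hours between an issue being detected and resolved.

    Field names are illustrative assumptions, not the exact schema.
    """
    durations = [
        (datetime.fromisoformat(i["resolvedAt"]) -
         datetime.fromisoformat(i["detectedAt"])).total_seconds() / 3600
        for i in issues
    ]
    return sum(durations) / len(durations)

issues = [
    {"detectedAt": "2022-08-23T10:00:00", "resolvedAt": "2022-08-23T10:45:00"},
    {"detectedAt": "2022-08-25T14:00:00", "resolvedAt": "2022-08-25T16:15:00"},
]
print(mean_time_to_recover_hours(issues))  # mean hours to recover
```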
And the final one here is change failure rate: the proportion of releases that result in rollbacks or any type of production failure. This is an interesting one as well, because it's really the mark of the quality of the code or the declarative development that you are undertaking.
And the time your team spends testing and debugging while building new features will contribute to reducing that change failure rate.
As you might imagine, this could have an impact on lead time, but if you look at these two things, lead time and change failure rate, closely together, you should see that if your change failure rate is particularly low, your lead time might be the thing that needs to be improved, and you can focus your energy and efforts on doing that. So when you combine all of these metrics, they become really powerful tools for analyzing what you can do as part of your DevOps process to start streamlining things.
So, again, in Gearset you set up a start and end date to retrieve a list of your production issues, to help identify how often those things are happening.
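Combining the deployments list with the production-issues list, change failure rate could be computed like this. The join on a shared deployment ID is an assumption about how you'd link the two endpoints, not a documented contract:

```python
def change_failure_rate(deployments, issues):
    """Fraction of production deployments that caused an issue.

    Deployments and issues are linked by a shared deployment ID --
    an assumption about how the two lists would be joined.
    """
    failing_ids = {i["deploymentId"] for i in issues}
    failures = sum(1 for d in deployments if d["id"] in failing_ids)
    return failures / len(deployments)

deployments = [{"id": "d1"}, {"id": "d2"}, {"id": "d3"}, {"id": "d4"}]
issues = [{"deploymentId": "d3"}]
print(change_failure_rate(deployments, issues))  # 0.25, i.e. 25%
```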
So those are the metrics, and I think it's about time for us to hand over to Julian to actually show us how this all works and bring some color to it.
Great. Thanks very much, Jack. So as I said earlier, you need a Gearset Enterprise license to get going with the reporting API. And it's really powered by the Pipelines functionality as well: the lead time for changes, the change failure rate, and the mean time to recover are based on the information we can get from having a pipeline, really understanding what your process looks like and understanding where your production orgs lie.
So Pipelines really makes this great, but there's still functionality you can be getting out of the reporting API without using Pipelines.
And then the other question: which dashboards are supported? Where can I be surfacing this data? Well, the answer is almost anywhere.
The idea is that with an external API, you can be querying it from any of the really common data management, data warehousing, and data visualization tools.
So you can start at a kinda low level and use things like Google Sheets and Microsoft Excel to do a fairly manual process of looking at the data. But where you wanna start automating things, you can start using Tableau, Google Data Studio, Microsoft Power BI, all those sorts of tools which are really designed for a continual flow of data, with dashboards that will update automatically every day or every week, making sure that you keep a really regular view of where your team's progress is. The great thing about the API is that you can be using all of those.
So, digging into the demo. As you'll have seen around the app, you can come to your account page, and there is an access token management page.
I'm gonna generate a new token, and give it a snazzy little name.
And I'm gonna set it to expire in three months. That's a security measure, so we make sure that we aren't leaving tokens out in the wild forever, and that if they ever get breached, then we can do something about it. I'm gonna scope it to just the reporting API, and I'm gonna create a new token. So with that, I've created a new token. I'm gonna copy that over to save in a really secure place, because I won't be able to see that token ever again.
And I'm gonna pop into our interactive documentation.
So one of the great things about the API is that it conforms to the OpenAPI standard. So if you're someone who uses Postman or some other tool like that for testing and querying APIs, you can download this OpenAPI specification, and that will pretty much load straight into Postman or one of the other tools that you might be using. And you can go off and query and test out all the endpoints we're talking about today.
In addition to that, though, you can test them out straight from this documentation. So this is actually interactive. I'm gonna authorize myself by keying back in my token, and I'm gonna show you what the lead time for changes looks like. So I'm gonna try it out, and I've got my pipeline ID from earlier.
And I've just set up a bit of a pipeline. So me and one of my colleagues have been working for the last week on quite a few features, and we've just been working through a pipeline, getting everything through to production, and you'll see how that all works. So then you have a start and end date, as Jack was saying. And because it was last week, that's the twenty-first to the twenty-sixth.
Alright.
And I'm just gonna — no, I still don't want that one — I'm gonna press execute. And what this is doing is sending a request off to the Gearset API, and we're kind of putting a bunch of data back together, linking up the pull requests to your deployments and making sure that it all sits nicely and you can see it in a nice format.
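For anyone scripting this rather than clicking through the interactive docs, the request might be built roughly as follows. The base URL, endpoint path, query parameter names, and `token` authorization scheme are all assumptions here; check the reporting API documentation for the real values.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical base URL and endpoint path -- consult the reporting
# API documentation for the real ones.
BASE_URL = "https://api.gearset.com/public/reporting"

def build_lead_time_request(token, pipeline_id, start, end):
    """Build an authenticated GET request for lead-time data."""
    query = urllib.parse.urlencode({"StartDate": start, "EndDate": end})
    url = f"{BASE_URL}/pipelines/{pipeline_id}/lead-time?{query}"
    return urllib.request.Request(
        url, headers={"Authorization": f"token {token}"}
    )

req = build_lead_time_request("MY-TOKEN", "my-pipeline-id",
                              "2022-08-21", "2022-08-26")
# data = json.load(urllib.request.urlopen(req))  # uncomment to send
print(req.full_url)
```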
So this is the JSON. I'm gonna copy it over and put it into a JSON viewer so that we can explore it a bit, and I can explain a bit of what you see. So in this JSON blob, you have an entry called pull requests, and within that is an array of objects.
So if I pick up this first pull request, we get a lot of information. Some of this is directly related to lead time. Some of it's gonna be useful for some of the wider analysis we're gonna speak about in a sec. So the environment name.
So this is my hotfix environment. I've put a few pull requests through our hotfix, because I made some errors in production. You'll again see that in a little bit. So you have the createdAt, mergedAt, and deploymentCompletedAt.
So the idea here is that createdAt is when that work has been finished, when someone has done the dev and they've made the PR. MergedAt is when it's gone through the approval process and you've clicked merge in the Pipelines functionality. And deploymentCompletedAt is when it actually gets deployed to your Salesforce org. With automated DevOps, we expect the gap between mergedAt and deploymentCompletedAt to be really short, because the distance is just an automation click, and the deployment process goes off automatically.
We've got the deployment ID, which is gonna link up to the other endpoints in the reporting API. And that'll mean that you can see it all connected together with, like, your change failure rate and your deployment frequency.
You've got title, PR number, description, all that sort of useful stuff. And here, we've got, the feature name, which is the feature that we track through the pipeline.
This is also attached to the author name and the author username. And finally, we have the Jira ticket reference. The great thing about this is that it means we can see the process from start to finish.
As Jack was talking about, there are other definitions of lead time. And, actually, sometimes you're gonna be interested in the amount of time between a ticket first being created, a developer picking it up, and then that pull request actually going out. So this allows you to have the Gearset view of it, but also add that extra information by querying other APIs and combining that information together, meaning that you can get a full view of your development process and understand how your DevOps process is working. So after that, this is kind of what a manual sort of request would look like.
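To make the JSON walkthrough concrete, here is a trimmed, hypothetical payload in the shape described above. The field names are approximations of the schema for illustration, not the exact one:

```python
import json

# A trimmed example payload, modeled loosely on the walkthrough --
# field names are approximations, not the exact reporting API schema.
payload = json.loads("""
{
  "pullRequests": [
    {
      "environmentName": "Hotfix",
      "createdAt": "2022-08-23T09:00:00",
      "mergedAt": "2022-08-23T09:20:00",
      "deploymentCompletedAt": "2022-08-23T09:25:00",
      "title": "Fix validation rule",
      "jiraTicketReference": "DEV-123"
    }
  ]
}
""")

# Walk the array of PR objects and pull out a few fields.
for pr in payload["pullRequests"]:
    print(pr["environmentName"], pr["title"], pr["jiraTicketReference"])
```

The Jira reference is what lets you join this data to a ticketing API and measure the wider ticket-to-production lead time.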
So we can go in, and from here, we can put it into Google Sheets or just a normal spreadsheet. And with your spreadsheet software, you can do some fairly manual analysis. The awesome thing about the API is that it can then be automated. So I've gone into Tableau, and I've made a couple of little example dashboards of what this could then come out as and what we can be showing.
So you see this is my DORA stats, as we spoke about. I've got a team target of two deployments a day. And you can see that at the start of the week we weren't quite hitting that, but by the middle to the end of the week, we were really getting into our stride. As I said earlier, this is just a really potted view of what you can be doing with the reporting API. This just shows the last week's progress; you can do it over the last month or the last year, and really see trends as your team develops, as you implement better DevOps processes, and as you learn more about the DevOps culture.
Over here, we've got the lead time. So, again, I have a team target of two hours. And you can see that some days my lead time was a bit off, but some days we were really nailing it, and we were getting PRs through in hours, which is really great.
Similarly, we've got the change failure rate, and we had a target of ten percent. And, again, there were some days where things weren't quite going as well. You'll see that we were trying to push for a few deployments, and you kinda see how, when you start pushing through loads and loads of deployments, every now and then you will get these odd failures. And that's why time to restore becomes important.
So it's really important that if these failures do start cropping up, we can start restoring quickly. Our target time to restore was an hour. And as you can see, we were missing this on one occasion, but on the other day we absolutely nailed it and sorted that out pretty quickly.
So I'm gonna dive into the lead time deep dive. After you've seen that high-level graph, you might be wondering: why is my lead time changing so much? What's going on? And the great thing about the API is that you can take all the information we give you and put it onto a lead time graph.
But then you can really dig down into what's going on. So I'm looking at lead time per environment here. And I've got three environments. So I've got a hotfix environment.
I've got a main, and I've got a UAT. So you can see that for my hotfix environment, the lead time is super fast. Whenever I'm putting a pull request through my hotfix environment, it's taking next to no time between the work being done, the tests being run, and then deploying it. Wham bam.
Lovely stuff. In our UAT, it's taking a little bit longer because the testing is taking a bit longer. It's a bit less urgent, and things are just a little bit slower, because these aren't seen as quite the emergency that the hotfixes are. What you really see, though, is that the reason my lead time is looking so high is that my master branch going into my production environment is taking a lot longer.
We've got a lot of manual tests running, and the two hours or so it's taking for that code to get into master is a lot of manual testing work. And when I was looking at this, I'd look at this graph and say: what can we work on in our testing? What can we automate?
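The per-environment breakdown in this part of the demo could be reproduced from the raw API data along these lines, again with illustrative field names standing in for the real schema:

```python
from collections import defaultdict
from datetime import datetime

def average_lead_time_by_environment(pull_requests):
    """Group PRs by environment and average the hours from PR
    creation to deployment. Field names are illustrative."""
    buckets = defaultdict(list)
    for pr in pull_requests:
        hours = (datetime.fromisoformat(pr["deploymentCompletedAt"]) -
                 datetime.fromisoformat(pr["createdAt"])
                 ).total_seconds() / 3600
        buckets[pr["environmentName"]].append(hours)
    return {env: sum(h) / len(h) for env, h in buckets.items()}

prs = [
    {"environmentName": "Hotfix", "createdAt": "2022-08-23T09:00:00",
     "deploymentCompletedAt": "2022-08-23T09:30:00"},
    {"environmentName": "Main", "createdAt": "2022-08-24T09:00:00",
     "deploymentCompletedAt": "2022-08-24T11:00:00"},
    {"environmentName": "Main", "createdAt": "2022-08-25T10:00:00",
     "deploymentCompletedAt": "2022-08-25T13:00:00"},
]
print(average_lead_time_by_environment(prs))
```

A grouping like this makes the bottleneck obvious at a glance: fast hotfixes, slow main-to-production.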
On the right, we've got PRs created, just understanding where the work's coming from. So I've been working with my colleague, Sam Williams, and we've been putting work through this pipeline together. And you can take that stat and do lots of different stuff with it: look at who's making pull requests, when they're going out, how often they're getting to master, and stuff like that.
And then right down here is a really specific view, which is how long every single feature took. So you'll see that my real outlier was this "add validation rule" feature. It took eighteen hours to get through the pipeline.
And that could have been to do with testing. It could have been to do with manual processes. It could be that we found some errors and had to go back and fix some stuff up. But this just allows us to have a bit of a deeper dive and look forward at how we can improve. So one of the great things about DORA is that you can look at your retrospective performance, but you can also dive a bit deeper and understand how to improve your performance and how these stats can get better. And that will ultimately increase your ROI, as Jack was saying. It's all about what the business needs from this and what would be most valuable for the business.
So that's been a lot about DORA, but you can also get some other stuff out of the reporting API which isn't necessarily related to DORA specifically. So over here, I've got the total deployment usage. This is talking about how my users are using Gearset: who's using their subscriptions, how many people are sending out deployments. So you can see here that you've kinda got a daily running basis.
And there were forty-three successful deployments on the twenty-third of August, but twelve failed ones, and that feels quite high. These are to all our environments, so this isn't just to developer orgs or into Git branches.
This is everything. And we can even start looking at the deployment user breakdown. So who's really making use of this? Or who's not using their subscription as much?
So Alex was churning out almost two hundred deployments this week. That includes his CI deployments, that includes his manual work, and he's had a busy week. Whereas some people might have been on holiday. Or you can just get an understanding of what people are up to and where your team are really working.
And finally, to finish off down here, we've got the deployment status breakdown. So these are the pairs of all orgs in Gearset. So you'll see here you've got Julian dev to my various Git branches.
So this is all my working pipelines. So I have my source dev org, and I move changes into my Git branch. And you'll see that I've been fairly successful. Lots of my deployments have been working correctly.
However, there's been a number that haven't been working. Some people have pairs that just seem to be constantly failing, and this is a big red flag. This is something that you wanna be really aware of and say: there's a real problem that someone's trying to work on here. Maybe they need a bit of extra support as a dev if they're getting really stuck in the development process, or maybe one of the admins is new. It's really great to be able to keep track of how they're getting on with deployments. Are they understanding everything, or do they need a little bit of an extra pointer, or to work with some of the more senior devs?
So that is a really quick example of what a week-to-week Tableau dashboard could look like. This was all enabled with the reporting API, and it would all update automatically. So if I came back and wanted to change which week I wanted, or understand a different bit of data, I could do so.
And we can still look at that really raw data. So this is an example of what we think is important, but your company might have their own processes. They might have their own things that they want to see.
So that was kind of the main chunk of the demo.
So if you have any questions, please shoot away. I see we've got something in the q and a.
Is there Yeah.
There is. Hong is asking how to find your environment ID.
That's a great question, Hong. We walk through this a bit more closely in the documentation. However, if you go to your pipeline — so I've just loaded up Julian's reporting API demo pipeline — if I click on an environment, you'll see that the environment ID comes up just here.
And, similarly, you can find the pipeline ID just here.
So that's how you can find it, and that's explained a bit more thoroughly in our blog post and documentation. I just didn't wanna take too much time on this.
Are there any other questions?
Well, I'll take silence for the moment as there's nothing immediately pressing. We'd love to answer your questions, if you wanna come through Intercom or our in-app chat — oh, can I ingest data into Salesforce apart from Tableau? So, I'm not an expert on this.
As in, I'm not an expert on the different data platforms. There are so many for us to look at and support that we can't possibly support all of them. You'll probably have your third-party one, and you'll probably have an understanding of how to link between the one you have and where you'd like to show it. So I know that Power BI is a really common one that's not Salesforce based, but these tools often have loads of plugins to end up all over the place.
So I don't think Tableau is the only way to get into Salesforce, but it probably depends on what you're already using.
Thanks, Julian.
Cool.
We're just coming up to the half hour here. One last thing that we would like to leave you with: we've spoken a little bit today and dived into the product a little bit more. So, the QR code on the left-hand side — if you want to find out more about the reporting API itself, get links to the other documentation, and start to use it yourselves, that QR code on the left is gonna take you to a blog post about the reporting API. And as Julian was mentioning, Gearset's reporting API is best used with Pipelines.
So if you are unfamiliar with Pipelines and what that looks like, you can find out more with the QR code on the right. Two awesome, fairly new things that we've released recently and that are going down an absolute treat, so be sure to check them both out. If you have questions for us, feel free to drop us emails, or, as Julian says, the in-app chat is always the best way to get ahold of anybody at Gearset.
So feel free to come by there. But thank you very much for spending half an hour of your day with us. Really appreciate it, and hope to catch you again on another webinar or in person sometime soon.
Thanks very much, guys.
Cheers, everybody.