Description
Discover how Provar’s latest developer experience streamlines the entire Salesforce DevOps pipeline from code to deployment. In this session at DevOps Dreamin’, Michael Dailey, Director of Technical Product Marketing at Provar, showcases:
- How automation can supplement each phase of the software development lifecycle, helping companies shift left
- How these developments can be applied to real-world scenarios to facilitate continuous integration and deployment processes
- A live demonstration of these integrations in action, providing a clear view of their impact on daily DevOps operations
Transcript
Thank you all for coming. I know I'm in the way of the four forty-five happy hour, so I appreciate your time for those of you that are here.
All right. Today, I'm gonna be talking about how you can take your code to deployment and how you can streamline your entire DevOps pipeline with Provar. So before I get started, a brief introduction. My name is Michael Dailey. For those of you that don't know me, for those of you that don't know Provar, this is who we are as a company. So we've been around about ten years now. We are a test automation company based out of the UK, but we have offices globally.
We are the global leader in Salesforce testing, and it's been that way for ten years now. That's where we found kind of our target area: Salesforce-centric testing.
We were founded by experts in the Salesforce ecosystem. So someone saw the need for testing Salesforce applications and kind of the gap therein, and that's where we've found our big market really.
But enough about that. Let's move straight into the agenda. So from here, I'm gonna be talking about some DORA metrics, and we'll be talking about kinda how this pertains to quality and testing as well. So some interesting things there from a testing perspective.
And then we'll talk about bugs, everyone's favorite thing in software and software quality as a whole. And that'll kinda lead into the different roles and responsibilities in the DevOps pipeline and who's responsible for quality.
Is it just the testers? Is it the QAs? Is it the admins? So we'll kind of talk through that some. And then I'll just talk about developer experience. So this will be from a Salesforce and from a Provar perspective, what we mean by developer experience and ways that we can improve that going forward.
And just as a disclaimer, I don't mean this DORA.
So the first one is deployment frequency. How many of you are familiar with DORA? Heard this term before in the DevOps space? So it's an acronym.
Forgive me for the usage of acronyms. They're everywhere today. But it's basically just different metrics that people use to measure maturity on the DevOps kind of spectrum. Right?
So to see how mature you are as a DevOps organization.
One of the big metrics that they measure that by is deployment frequency. In other words, how often are you deploying to production? So this is one of the fun ones. So, quick straw poll here for science for those of you in the room. How many of you are in this first category, deploying less than six times a year?
Okay. That's good. No big bang releasers out there. That's good to see. How about every sprint, like two to four weeks? Anyone in that release window?
Okay. Couple of you.
Anyone deploying one or more times a week?
Seems like that's about an equal distribution there. What about multiple times a day? Anyone using Gearset?
Awesome. Good to see. Yeah. So as you kind of go down here, this is just one metric.
Right? No metric should be taken out of context of the whole. But one way to evaluate how mature you are as a DevOps organization is how often you're deploying to production. The faster you can deploy things, the faster you can make changes.
Makes sense.
Another key metric in defining the performance for maturity of your DevOps organization is the lead time for changes.
In other words, how long does it take for one line of code or one config change to make it to production?
And when you're thinking about this, how long it takes, it's a process. Right? It's kind of a journey. You know, you're taking one line of code or one config change from your developer sandbox and moving it to production.
It's usually not all in one fell swoop. Right? There's different processes that are in place to make that happen. So as you're kind of thinking about what the answer to that is, maybe it's just one method or just one field change, something like that. But there's also gonna be many different environments that you're going through. So where does this journey really start? Does it start at the planning phase?
Does it start in development? Are you just going to start building things without going through those processes?
But also where does it end? You have kind of this question mark here. For those of you that are familiar with the DevOps infinity loop, right, it's a cycle. It doesn't just get to this monitor and support phase and then we just keep the lights on. Usually, there's a feedback loop, and then we start the process over again. So this is a very important metric to measure how responsive your organization is to requirements from the business, to make these changes and to push them out in a meaningful time frame.
But where does it end? Right?
So we get in this situation sometimes where we get maybe a little tired and disenfranchised by all these changes that keep coming in.
So that being said, another key metric that I want to point out from DORA, this is kind of the third one here, is the change failure rate, which is the one that I'm the most interested in being from a testing company.
This is probably the one I'll spend the most time kind of focusing on. This is really a calculation of the number of incidents that you have divided by the number of deployments. So it's pretty easy to calculate. You're really trying to find the total number of deployments to production that are resulting in failures, right? So this gives you a good metric to kind of think about how often are we deploying code that's giving us a bug or something that's going to cause something to fail in production that's pretty catastrophic.
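To make that arithmetic concrete, here is a minimal sketch of the change failure rate calculation in Python; the numbers in the example are made up purely for illustration.

```python
# Minimal sketch of the change failure rate metric described above.
# The example figures are illustrative, not real data.

def change_failure_rate(failed_deployments: int, total_deployments: int) -> float:
    """Share of production deployments that resulted in an incident."""
    if total_deployments == 0:
        return 0.0
    return failed_deployments / total_deployments

# Example: 3 incident-causing deployments out of 40 production deployments.
print(f"{change_failure_rate(3, 40):.1%}")  # -> 7.5%
```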
So I want you to think about NASCAR for a minute. Now stay with me.
For those of you that are not NASCAR fans, I'm not either. Is anyone a NASCAR fan in the audience before I continue?
Okay. Alright. We at least got one. That's good. So, there is a bit of a misconception with NASCAR. I know there's all different types of racing like Formula One and Indy.
But with NASCAR, there is this misconception that we just turn left, right, which seems really easy in practice.
But there's a lot that's going on here. These cars are going in excess of a hundred and eighty miles an hour, surrounded by dozens of other cars that are also going over one hundred and eighty miles an hour.
Every track is different. The elevation, the offset changes. The driver is manipulating the throttle, the clutch, the brakes. There's all the mechanics doing the behind-the-scenes prep work to make sure that everything goes as planned and planning for that race.
Right? And not to mention the sheer physics involved in being in one of these cars. If you've ever seen the movie Gran Turismo, highly recommend it. But it's a good demonstration of what this looks like in practice.
Right? So what's my point here? The point is the goal of driving in NASCAR is to go fast, is to win, is to be the fastest.
Right? When you're thinking about DevOps, the goal is to deploy quickly, is to release changes quickly. Right? That's the whole goal is we're trying to reduce the time that it takes to do the job.
But there's also this other goal, right, in NASCAR and that is not to crash.
Same thing with DevOps. We're trying not to break things. So to kind of sum all this up, the goal is to move fast, but don't break things.
So how do we do that? It's a balancing act.
That's where we get into talking about bugs and software quality.
So we all have bugs. Right? Some of us may call them features, but we all know what they really are under the covers.
But what is the actual cost of a bug? Well, before we get into it, it's important to discuss the reasons, the other reasons why we may test something. It's not just to eliminate bugs in our application. It's to ensure the quality as well. So to make sure the user experience is good, to make sure the performance is good. You know, there's different types of testing that we're involved with here.
But perhaps the hidden reason here is kind of the cost associated with testing. So this graphic here will kind of showcase that. And this is just a relative approximation based on different metrics and research and analytics and things that people have done. But you can look at this and see that, let's say you have dev to QA to UAT to staging and then maybe to prod, so five different environments here. As you go up through these environments, there's an exponential increase in the cost of not only finding but also resolving these bugs.
So the main reason that we test is not to just eliminate bugs or to improve software quality, but it's to save money. Makes sense because the cost of finding these bugs increases as we go through the pipeline.
So how do we get better at that? Well, you may have heard, you know, shifting left or we need to shift testing left.
Test early. Test often. These are all best practices that we often hear in the QA industry because the earlier we find things in the process, the less money it costs our business. That makes sense.
But there's a very realistic challenge at play with that. It works out in concept, but often the way it actually plays out is very different, because if you want to shift testing left, what does that involve? That involves more time, more resources you don't have. That could artificially increase the time it takes to release changes to your environments. And that's kind of having the opposite of the effect that we want.
Seems a bit counterintuitive.
So we don't want to slow down our release schedule, but we want to test more often. So what are some ways that we can go about doing that?
The first thing you'll need to do is kind of evaluate who's responsible for what in the DevOps process.
So this question that I ask here is very important to think about. Is quality everyone's responsibility, or is it just the responsibility of the testers and the QAs?
So this slide kind of shows the different roles that are involved and the different responsibilities that they may have. So we're breaking down the software development life cycle and the entire release process.
These are just some things that I put together to kind of define it. So we look at the developer persona here.
Developers, they're primarily focused on what to build and how to build it. Now I know there's different things that they're also doing, but that's kind of the primary focus. Right? They're focused on the overall quality of their code.
That's why they write unit tests. That's why we have pull requests. That's why we have static code analysis tools. That's why code reviews exist.
All these things are to improve the overall quality of our code.
Testers and QAs, what are they responsible for? They're responsible for the what of what they're testing and the why of why they may be testing it. So what is the actual end goal? Why are we trying to reach eighty percent coverage? Are we trying to reach ninety percent pass rate? What is the actual strategy around how we're going to test these things?
So the primary concern is the overall quality of the application.
Now, DevOps engineers or site reliability engineers or release managers, there's all different types of titles that you could use in this case.
They're focused on when they need to deploy something, what they need to deploy, and how they need to deploy it, which all kind of ties into the quality of the release and also the stability of the release. Right? So all these things are very important.
And if we're looking at this from the admins or the business analysts, kind of the business side, they're also performing similar roles, you know, admins and BAs at different times. But they're really focused on why they may be doing something. So translating business requirements into actual changes through agile or some other methodology.
They're focused on how it should be done. So should they use a flow? Should they add another multi select pick list? I pray to God not.
Or should they add another field or something like that?
And then we have stakeholders. Stakeholders, they may not be as involved in the day to day, but they're kind of taking a step back and focusing more on the why. Is it going to improve our bottom line? Is it going to make us more successful? Is it going to improve the satisfaction of our customers? So they're focused on the quality of the user experience.
So when we're looking at all these, there's a lot of moving parts. There's a lot of different personas that are involved here. But there's one commonality between all of them, and hopefully I've made that obvious by bolding the letter and saying it, and that's quality.
So going back to the question, is quality everyone's responsibility?
Yes.
Good. You're all paying attention.
So how do we put this all together? So this is a diagram that illustrates different personas that may be involved and how this actually plays out into what we call the quality life cycle at Provar.
So you're all familiar with the software development life cycle, which is kind of the stage of your development. Right? You know, all the different stages. You know, the planning, the building, the testing, deploying, monitoring, and then that feedback loop. But if we're thinking about that from a quality perspective, it may look a little different. Now a disclaimer here, I'm not saying that this is necessarily the only way to do things or even the best practice necessarily.
This is just what one implementation may look like. So at the beginning, we have our user stories being created, right, by the business analyst or the product owner or something like that. We have them taking those requirements, and that's where the work begins to happen in parallel.
The parallel work is where we find the most productivity gains because rather than waiting on the business to do step one, step two, step three, things are happening at the same time. So at the same time the developer or the admin is implementing the user stories, we're also creating manual test cases. Right? So this is agile methodology at work.
From there, in conjunction, we're also able to build automated tests as part of that. Because once we have a manual test script and once we have an implementation to work on, we can start building automation for it and start testing it ahead of time in that same developer sandbox where it was written. So we're moving the testing earlier on in the process.
Then from there you may get to a point where you want to actually schedule these. So once you have a test plan in place, you have the release manager communicating with the testers, the QAs. When do we need to run these test cases? Which test cases do we need to run? Which orgs do we need to test? All these things. Then they release.
The tests pass, the tests don't pass, and then that leads into the feedback of, okay, we have a bug if the test fails, and the process starts over again. Right? So there is something missing from here. Or not missing, but something that could possibly be improved with this diagram.
And one part of that is the role that the developer plays in all of this.
There really does feel like there could be an opportunity that they could be doing a little bit more in this process.
A lot of times developers I see in different organizations are more like order takers where they're just taking requirements and then translating them to the code or building something. But they're not often as involved in the planning or involved in the deploying or the testing, but there are ways that we could make them more involved to utilize their skill sets. So that's what we mean really by developer experience.
Now, does anyone agree with this image? Anybody?
Just me?
A couple of you? Okay.
So this is understandable, I think, in most organizations. Our end users are definitely our focus target. Right? They're the ones that we're trying to make everything work for.
We want to please them. That's who buys our products. Right? Our end users.
Now I put some disclaimers here at the bottom in fine print.
User experience is typically number one, and it should be. But also admins or BAs, don't get mad at me if I say that developers are not as loved as you are. It's just my opinion. It's not fact. It's subjective. Right?
Moving on.
Okay. So user experience. We've all heard that terminology before. All it really means is how a user interacts with your product or service, how intuitive it is, how easy it is to use, right? What's their perception of how productive they can be? Pretty simple definition.
Developer experience, kind of pulled this from some different sources. And this is really about what I believe is giving developers more freedom to build great software more efficiently.
So you see the word more there twice because they're really trying to get better efficiency out of the development team through tooling and platform enablement and things like that.
So that being said, with that definition, they had this really big, cool percentage slide in the template. So I thought I'd just leave it in there.
That's the meaning of life for those of you that have seen Hitchhiker's Guide to the Galaxy.
Or we'll move on. Salesforce developer experience. So the first experience that I'll talk about is Salesforce's. And I'm not here to do their job for them, but I'll just outline a couple of different points here.
Obviously, for those of you that have been in the space for a long time, you know that this has improved dramatically over the past five, six years. Right? We've had the Salesforce CLI. They've combined it now.
We have the full developer guide. Code Builder is now GA, so you can basically use, you know, Visual Studio Code inside of Salesforce, which is great. DevOps Center, which is enabling more teams to get involved in more source-of-truth styles of working, which is great. From TDX, you know, they have Einstein Studio that came out with Copilot, Prompt Builder, Model Builder, all those things.
Einstein for developers.
And as far as what's next, some things that I'm excited about in particular are the enhanced metadata API support. So extending the metadata APIs to other things that they say are on platform but are not necessarily as on platform as they could be, such as marketing cloud and experience cloud and things like that.
But anyway, all that is to say there's definitely a catered audience here, right? They're trying to improve the overall life or the lifestyle of developers who are in the Salesforce ecosystem.
So piggybacking off of that, we have made some recent changes in our applications and our tooling to better suit the developer persona.
Now part of that is our own CLI that we have that utilizes Salesforce DX. We have our own plug in for Salesforce DX, which is called Provar DX.
And really the goal of this is to enable developers to run automated tests from the CLI.
So this could be used by anyone. Doesn't need to be a developer, but that's kind of the target here.
And ultimately, the goal is to be able to scale testing beyond just the scope of testing it in QA or UAT, but also in the developer sandboxes or even scratch orgs in this case. And these tests can be triggered locally, or they can be triggered as part of a pipeline through a Gearset webhook or something like that. Right? And it's really about validating our changes with more than just unit tests. Right? Because we all love writing and executing unit tests.
Everyone nods their heads there. So we can do better though. The capability to validate your changes with more than just unit tests is really about doing more functional testing, using automation solutions that are more tailored for that purpose.
And the last point here is that it is the same interface as Salesforce DX. As I mentioned, it's just a plug-in, so you just install it through the Salesforce DX CLI. You can utilize it in a GitHub Actions pipeline or anything that you're utilizing Salesforce DX with.
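As a rough illustration of what running automated tests from the CLI inside a script or CI step could look like, here is a hedged Python sketch. The command name and flags are placeholders rather than the actual Provar DX syntax, which isn't spelled out in the talk; the point is simply that a CLI-triggered run returns an exit code the pipeline can act on.

```python
# Hypothetical sketch of wrapping a CLI-based test run in a script or CI step.
# The command and flags below are placeholders, NOT the real Provar DX syntax;
# check the Provar DX documentation for the actual commands.
import subprocess
import sys

def run_cli_tests(command: list[str]) -> int:
    """Run a CLI test command and surface its exit code to the pipeline."""
    result = subprocess.run(command, capture_output=True, text=True)
    print(result.stdout)
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
    return result.returncode

# Placeholder invocation; a non-zero exit code fails the pipeline step.
sys.exit(run_cli_tests(["sf", "provar", "run", "--plan", "smoke-tests"]))
```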
Another aspect of this is the APIs that we now support through the Provar APIs. So really what this allows us to do is trigger our test cases from anywhere. So now we can run test cases from things like Gearset or from things like GitHub Actions without having to run them directly on that platform; we can just trigger this API to run. So what this gives us the ability to do is abstract away having to install different tooling and different things like that directly where the pipeline is being run. Now we can just trigger it. We can run it anywhere. So it makes it much easier to integrate in that way. We also have the capability to create custom execution templates, basically, which I refer to here as custom schedules.
What it allows you to do is create an execution template that can run against different orgs; you can target different browsers, operating systems, all things like that, through different JSON formats.
And of course we have different authentication methods to fit whatever makes sense for your requirements, such as OAuth 2.0 and API key authentication.
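To make the API idea more tangible, here is a hypothetical sketch of triggering a remote test run over HTTP with an execution-template-style payload and API-key authentication. The URL, field names, and header are assumptions for illustration, not the documented Provar API contract.

```python
# Hypothetical sketch of triggering a remote test run over HTTP.
# The endpoint, payload fields, and auth header are illustrative assumptions,
# not the documented Provar API contract.
import requests

API_BASE = "https://provar-api.example.invalid"   # placeholder host
API_KEY = "REPLACE_WITH_YOUR_KEY"                  # API-key auth assumed

# Made-up "execution template": which org, browser, and OS to target.
execution_template = {
    "targetOrg": "uat-sandbox",
    "browser": "chrome",
    "operatingSystem": "windows",
    "testPlan": "regression",
}

response = requests.post(
    f"{API_BASE}/test-runs",
    json=execution_template,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()
print("Triggered run:", response.json())
```

The same kind of call could be made from a Gearset webhook handler or a GitHub Actions step, which is the "trigger from anywhere" point above.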
Okay.
What's next? So to close out today, because I know, again, I'm in the way of happy hour at four forty-five, I'll keep it brief. But really, all this is to say that we're trying to do more to kind of bridge the gap between QAs and developers.
A lot of times when you're dealing with automation, you're dealing with QAs, you're dealing with testers, we're dealing with a lot of admins. But developers do work in conjunction with those teams, so we want to make it easier for them to work together. So not just through the CLI and APIs, but also enhancing our current products to be better suited for those collaborations. So code coverage and quality reporting is gonna be built into DX in the future.
And we're also going to have capabilities to be able to seed data, which is actually a really highly requested feature from a lot of our users because they want to make sure that they have a proper data set before they run certain test cases. We were talking to someone earlier where they, you know, they spend maybe one to two weeks just staging sandboxes post-refresh to make sure they have the appropriate data. This is not something you want to be doing all the time on a repeated basis. A lot of time sink.
So being able to do that through the CLI is very, very useful.
And of course, you wouldn't make it through any presentation today without at least one mention of AI. So the capability to generate unit tests through AI and DX is also something that we're going to be introducing in the near future.
And then there's some additional enhancements to our API and database support. So you can do full end-to-end testing with Provar as it stands, but we're going to be enhancing this in the future to be a little bit more friendly to developers and to our users as well.
So with that being said, I will end today. Any questions for anyone?
Go ahead.
There we go. So when using this, especially, like, in scratch orgs, you mentioned data seeding for setting up some existing data. What about for the users themselves? If we wanna test different profiles or roles or personas, are there ways to automate some of that, or is that something that you kinda do pre-testing?
Such as creating users specifically? Yeah. Yeah. So you can technically do that through the current implementation, just using SOQL and things like that that we already have on platform.
Yeah. Okay. Cool. Yeah.
Yeah, go ahead.
Yeah, so you mentioned that on your unit testing, how is that different than what we were seeing with ChatGPT and how that is actually playing out? Because I could go to ChatGPT and say, write a unit test for this particular code. How are you guys gonna differentiate from that?
Yeah. It's a good question. So one of the ways is just making it more accessible from the command line, and kind of having another added layer to that would be some different capabilities that have to do with Provar tests as well. So unit testing is kind of the first phase of that, and then the plan is to make that more of a, kind of a regression and functional testing capability as well.
Yeah. How do you guys work with Experience sites and testing in that particular subset of Salesforce?
Good question. Yeah. So typically, the way Provar works with on-platform things like Sales Cloud, Service Cloud, and things that are on salesforce dot com is we utilize the metadata APIs to recognize where elements are on the page and different things like that. When you get into Experience Cloud and Marketing Cloud, where the metadata APIs are not as available, we have to utilize our own custom framework that we've built for that.
So it's a little different. It's still just as robust in the ways that those locators are built to not have to be maintained and updated over time. But it's just a kind of a different way of handling it. So yeah.
Because we had to do the same thing when Lightning Web Components were brought onto the scene. And so we've kind of just shifted that focus to be more around Experience and things like that.
Any other questions?
Thank you all so much. And, this is my LinkedIn if you would like to connect with me.
I sometimes post things that are interesting every now and then. But, more importantly, University of Provar is kind of our own version of Trailhead. So you can go onto provar.me, register, and learn more about QA testing and DevOps. And yeah, that'll be all. Thank you so much.