Description
High-performing Salesforce teams don’t just deploy – they manage the entire DevOps lifecycle proactively.
In this session, Andy Barrick (Senior Technical Architect) and Adam Boon (Account Manager) walk through how to benchmark your DevOps maturity, pinpoint hidden bottlenecks, and identify practical changes that improve delivery speed and reliability.
The webinar includes real examples and live benchmarking insights showing how other teams use the DevOps lifecycle to reduce bugs, deliver faster, and scale with confidence.
Speakers:
- Andy Barrick, Senior Technical Architect
- Adam Boon, Account Manager
Transcript
So I'm gonna dive into a little bit of housekeeping off the top just to kick things off. As you probably would have all seen already, this session's gonna run for an hour today.
We're gonna share the recording with everyone afterwards, so that'll include all of the relevant links to book any follow-up sessions you want to join. As part of the call as well, we're gonna be running some live polls to gather some of your thoughts on some of the challenges teams face today across the DevOps life cycle. Just keep an eye out for those popping up during the session. We're gonna bring those up at the start of each of the life cycle stages so you know what to prep for.
As you've seen already and a lot of you are engaging in, the chat is open to everybody.
If you wanna drop any comments in there or talk amongst yourselves while me and Andy are talking today.
And on that note, I shall pass over to Andy to do a quick introduction to himself.
Thank you very much, Adam. Hi, everybody. Great to have you all along. As you can see, I'm Andy Barrick. I'm a DevOps architect at Gearset. I've been with Gearset about three years, well, over three years, and I've been working on the Salesforce platform for over ten.
In the roles there, like engineer or technical consultant or architect, I've seen a lot of different development and deployment processes and ideas, things that worked well and things that didn't. What I bring to my role, and what I hope to bring to you here in this session, are some best practices for how to go about making a better process, let's say. We'll dig into what that means over the course of the next hour.
Amazing. Thanks, Andy.
And I'm Adam. I'm one of the team of account managers here at Gearset. We act as a long-term point of contact for all existing Gearset teams, so everyone using Gearset today has an account manager. And I've got a few familiar faces in here. Thanks, Lindsay. I'll see if I can do anything from my side to try and sort that out as we go.
So I've worked with a number of teams using Gearset. We partner with the teams that are using it today to mature their DevOps process and evaluate the wider Gearset platform to help solve challenges across the wider DevOps life cycle.
So, really, the agenda for today's webinar is primarily the DevOps lifecycle in action. That's gonna be the bulk of what we look at today. We're running through each stage of the DevOps lifecycle, and as I mentioned, we're gonna be doing that through some live polls as well to get your feedback here.
We're then gonna dive into ROI and strategy review. So what does all this actually mean for your team, and how could a strategy review help?
We'll then dive in and leave some time for q and a with Andy towards the end. That'll leave you plenty of time to ask all the burning questions, and then finally cover how to actually go ahead and book a one to one DevOps life cycle consultation with your dedicated account manager.
So to start with a bit of background about Gearset, maybe if you are not using Gearset at all today: Gearset was started over a decade ago now, primarily focusing on comparison and deployment. As many teams know, change sets can make comparisons quite hard, and the metadata API can make deployment hard as well.
So after tackling that challenge head on and helping customers embed that solution into their workflow, we've gone beyond deployments to shape and optimize the DevOps life cycle process, right from automated deployment pipelines to our recently released agentic org intelligence product. So in that decade that we've been around now, Gearset has grown exponentially, both in terms of the product itself and, of course, our user base. We now work with over three thousand customers, partnering with them to explore different approaches and ideas, and that's given us a pretty unique perspective amongst the Salesforce DevOps platform vendors.
And allied to this, our own experience in delivering changes to our own Salesforce applications has resulted in finding fairly common behaviors across the really successful teams.
We've brought all of those behaviors together into our version of the DevOps life cycle, which we're gonna introduce to you during this webinar. You may have seen some concepts like this before as well.
These are the principles, the cornerstone of DevOps done right.
Your implementation may actually contain different items or additional pieces, but this is the core on which it should be built.
It's a mechanism not only for guiding DevOps process, but also for understanding how and where it can be improved.
It might seem like a pretty bold claim, but we're gonna look at some of the examples today and give some idea of best practice here of other Gearset customers that have been adopting this.
Over to Andy.
Thank you very much, Adam. Absolutely. So let's have a look at an example. A great example here of a full life cycle approach in action is Granicus, who are a global leader in cloud-based civic engagement technologies.
They've got over two thousand employees, and with a growing Salesforce footprint, they found that their development process used to be completely manual. Deployments took hours, often late at night, and recovering from mistakes could take them weeks.
And when they partnered with Gearset, that changed really fast, because by building out a complete DevOps life cycle, from version control and automated deployments through to testing, backup, and release management, they moved from those late night change sets to a repeatable, scalable process that saves them hours every week. And that's fundamentally the power of a full life cycle approach: less chaos, more confidence, and a platform that scales with the business.
So we've heard there about the sorts of results that customers have seen. But having seen that end state, the next question is usually gonna be something like, well, how how do we get there? And to do that, we've got to firstly understand what that life cycle that we just saw is and what it exactly represents.
So rather than thinking of a collection of tools, we should think of each of these stages as a set of behaviors, with a contract to be fulfilled in order for an item of work to progress to the next stage. Now, we often frame DevOps as the confluence of people, process, and product, and that's exactly how we need to look at these stages. It's not just about relying on product functionality, even though there's often plenty to help. What matters most is the actions that teams take within their processes. The tools should support these behaviors but not dictate them. But it certainly helps when a product suite supports every stage of the life cycle.
Now, the first step on this journey is to think about your current ways of working, to explore and understand what actions you take in each of these stages.
And when doing this, it's critical to also bear in mind that outer loop that contains behaviors which have got to be considered at all stages of the life cycle.
And the behaviors exhibited in each stage might be different. Right? Look at testing there. The testing that you do in validate is certainly gonna be different to the testing that you do in operate, for example. But you still want to ensure that you're regularly testing your actions to make sure that they're both valid and valuable.
So let's dive into each of these areas to understand a little more about them, and let's start therefore with plan to put some of this theory into practice.
So, let's start with a question. Adam?
And hopefully, if we've done everything right, there should have been a poll question that's just popped up on screen. Although I might need someone to confirm in chat that that's actually visible. I can see. Amazing.
I did see some answers coming. It's amazing.
So as we kick off some of these sections, I'd love to start gathering your thoughts around any challenges you're facing right now.
So as you can see there, we've put a number of different answers in there. This may not cover every challenge that you're facing or every interpretation of it, but hopefully, it should give a pretty solid base.
These are a lot of the common things that we hear from teams that are struggling with the planning process.
So: not having a solidified planning process in place today. It's a lot of alliteration in there. Lack of understanding of business requirements that causes delays, understanding of existing org structure and impact analysis, or poor or out-of-date documentation.
Excellent. Cool. Well, I'd say we've got sixty five percent of respondents now, so I'll carry on with the section.
We'll jump back at the end and see how the poll has gone. So, we're at plan here then. We might be starting with a brand new requirement, or we might have feedback on existing features. Of course, we're speaking about a life cycle that's a continuous loop, but we've got to start somewhere.
And it's at this stage that any sort of individual item of work is most likely to begin.
So here, we're looking to specify the requirements of the item in such a way that the team or teams responsible for creating and delivering the required change have all the information that they need in order to do that successfully, however we want to define success.
Now in the plan stage, high performing teams make decisions based on real data, not guesswork. And in those teams, planning becomes a collaborative process. By identifying technical debt and security gaps upfront, teams set themselves up for faster delivery and fewer surprises later in the life cycle.
So let's consider some behaviors that are evident in teams who excel in planning, and you can see those as the bullet points at the bottom there. They fully define the functionality: they know exactly the job that the user needs done by the thing that they're building. They ensure quality: the expected use cases that are defined have no bugs in them if the item is built as specified.
They consider time. This is an interesting one. If you're working in a cadence of set time periods, let's say weekly sprints or something, there's no point in creating a feature that's gonna take two and a half weeks to deliver. So that collaboration with the development team to say, oh, it's too big, really helps there.
Their ability to predict impact, this is a huge one: they have no regressions of related functionality. If you think about the life cycle, you can build something, but sometimes it's much closer to release where this sort of regression stuff occurs. If you could see that up front, that would be a huge benefit.
Excellent. I think we've settled on seventy three percent of responses. I think we can end the poll, and that will show you the results. And the most popular answer we've got here is that we don't have a solidified planning process, which, not having that, is evidently going to cause problems in the plan stage.
And we talk about shift left. If we think of this, if we're gonna open up that loop into a straight line effectively, then we have plan as sort of the first stage. And it sort of stands to reason that plan is the stage at which a lot of critical success, or deviation from success, can be built in early on. So it is vital to have a really well established and strong planning process for sure.
Yeah. Absolutely. Agreed. Cool. I'm hoping then if I just do this, should be able to kick off a question.
Yes. For the next section here. So, thinking about build and AI, I think this is a hot topic that's on a lot of minds at the moment.
Certainly, we're seeing a lot more teams adopting AI as part of their workflow today.
So I'd love to hear your thoughts on this. Are you leveraging AI tools specifically to write code today or use it in development?
Couple of answers in there. Again, this is not gonna cover absolutely everything, but it should give you a bit of food for thought on what we're seeing with teams so far.
So, yeah.
Back to you, Andy.
Cool. Thank you, Adam. Excellent. So whilst you're answering that, we'll have a dig into the build phase.
So, having been through plan, it's at this stage that we're gonna really see not only the reliance of any one stage on those before it, but the importance of the strength of that contract that I mentioned, which defines whether an item can progress into this stage. It stands to reason here that if the feature hasn't been specified correctly, then we risk building in divergence from the job to be done, and ultimately errors and bugs at this point. But let's for now take this build phase in isolation. We've got the feature planned. We trust that it's correct. What is strictly within the scope of the build to ensure success here?
And, again, we've highlighted some things at the bottom there. Org alignment: does it have the correct data and metadata to ensure reliable development? Do you have a defined test set of data? If everyone's development org is different, is there a chance that you're gonna get a different output depending on who picks it up? That's not the sort of consistency that you ideally want. Tests.
Is it clear what type of tests need to be created? Are we only creating unit tests for this? And if so, what type? Apex unit tests, JavaScript unit tests? Do we need any UI driven tests, for example?
Static code analysis.
Does any code that might be written need to comply with defined architectural and stylistic standards? Are you building in bulkification issues at this point that could be spotted? Touching on that as well, code review. Who's gonna be responsible for that?
Does it depend on the type of metadata in the PR? If it contains Apex, it goes to these people; if it's flows, to different people depending on the types of changes made. Is it gonna be manual or automated?
All these sorts of decisions impact the success, speed, and quality of the build phase.
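To make the static code analysis point concrete, here's a minimal sketch of the kind of bulkification issue a tool like PMD is designed to flag during build. The object is standard Salesforce, but the discount fields and the service class are hypothetical examples rather than anything from Granicus or Gearset.

```apex
// Illustrative sketch only: the kind of bulkification issue static analysis
// (for example PMD's operation-in-loop style rules) is designed to catch in build.
// Discount__c and Discount_Rate__c are hypothetical custom fields.
public with sharing class OrderDiscountService {

    // Anti-pattern: one SOQL query and one DML statement per record.
    // With 200 orders from a bulk trigger this burns 200 queries and
    // 200 DML statements and will hit governor limits.
    public static void applyDiscountsRowByRow(List<Order> orders) {
        for (Order o : orders) {
            Account acct = [SELECT Id, Discount_Rate__c FROM Account WHERE Id = :o.AccountId];
            o.Discount__c = acct.Discount_Rate__c;
            update o; // DML inside a loop
        }
    }

    // Bulkified version: one query and one DML statement regardless of list size.
    public static void applyDiscountsBulk(List<Order> orders) {
        Set<Id> accountIds = new Set<Id>();
        for (Order o : orders) {
            accountIds.add(o.AccountId);
        }
        Map<Id, Account> accountsById = new Map<Id, Account>(
            [SELECT Id, Discount_Rate__c FROM Account WHERE Id IN :accountIds]
        );
        for (Order o : orders) {
            o.Discount__c = accountsById.get(o.AccountId).Discount_Rate__c;
        }
        update orders;
    }
}
```

Catching the row-by-row version in code review or automated analysis during build is far cheaper than discovering governor limit errors once a bulk data load hits production.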
So jumping back across to the poll, "no, but we're exploring options" is the most popular answer.
Only seven percent don't have any AI use at all, which obviously reinforces very much how far adoption has come. We've got just about as many people who are using it as are exploring it, which is great to see. But I think the natural follow-up to it is those guidelines. Right?
The review, the static code analysis. Can AI take in the full scope so as to produce strong, proficient output? And do you have sufficient guardrails in place to see where it's going to arrive, should it start hallucinating or duplicating things because it couldn't take the whole scope into account? Having a strong process there is still absolutely critical.
Yeah.
Awesome. And to expand on that as well, if anyone wants to share the kind of tools that you're looking at in terms of using AI for development or coding, feel free to pop them in the chat. It's a really interesting topic at the moment. So, awesome.
Well, let's launch into validate then. And, again, you'll kinda get the theme here. We'll start off with another question for you folks.
And, again, this is somewhat AI related, but also looking at automation in this kind of area as well.
So are you using any automation or AI to ensure the quality of changes as they move through your development pipeline?
No. We're fully manually testing and reviewing everything.
Some automated UI testing tools like Provar, ACCELQ, Eggplant; any automated static code analysis tools like PMD or SonarQube; or other kind of more generic AI tools: GitHub Copilot, ChatGPT, Gemini, Agentforce Vibes, that kind of thing.
But I did see today actually that vibe coding's been named word of the year in the UK this year.
That is true. Yes.
That's an interesting tidbit.
It is. It is indeed. Excellent. Okay. Well, whilst everybody completes that, let's let's have a think about validation.
Now this is one place where, if you've seen DevOps loops before, you might find this slightly different, because often testing gets put in here, whereas we've obviously got it in the outer ring as we spoke about. So what do we mean then by validate? We're not just talking testing. This essentially relates to all activities post build and pre release, which ensure that you've built the right thing in the right way.
Yeah. We don't want you to think about a waterfall-style delivery necessarily, where testing only comes after the build. We've got testing as a separate item, as we mentioned.
Now, validation definitely includes testing, but it's more likely to be repeatable, wider-based testing like regression. But it'll also cover items such as user documentation, scalability, pre and post deployment steps, everything that you need to get something successfully into your production environment.
There is naturally a large amount of subjectivity here given the various different potential scopes of features and validation processes internally that you might have.
So this isn't just about validating the build. This is really about validating everything that's occurred so far, the plan as well as the build. The next stage, as we can see, is production deployment. So we don't wanna find out at that point that we've built the wrong thing or that it won't deploy. So let's think about some of the things here then that will confirm that the correct thing has been built.
Testing. There's a whole suite of testing here. Right? We did mention it's part of it; it's a big part of it, but what type of testing? We spoke about unit testing or UI-based testing potentially on the feature, but here we'll test against acceptance criteria at some point. Regression testing, though: has this change affected anything else?
And then there are wider items of testing that help validate that you're building the right thing, but also in the right way. If you have UI-based changes, there are concepts like hallway testing, where if you've created something brand new that's used within the UI, it literally refers to stopping somebody in the hallway and giving them a task to do with it. Can they do this with no other instructions? Have you built it clearly and intuitively?
Now, if the change was created to make something more efficient, either in terms of the platform, say limit usage in a process, or maybe in the UI, fewer clicks, have you got ways of measuring this? Can you prove that the aimed-for efficiency has been achieved? Have you got guidelines for success?
And then deployment. Have we fully captured everything that needs to be present for the item to be deployed successfully?
So those are some of the things which are critical in this sort of phase, but the poll results are in.
And here, interestingly, fully manual testing and review is the winner, which sort of makes sense when you put it back-to-back with the last one. When we touched on AI there, we spoke about the continued importance of a human in the loop, and maybe this is the area where we still feel that the human in the loop is adding lots of value.
We can automate UI testing, absolutely, and the UI-driven frameworks are there. They do a good job, but they obviously need to be configured, again, by humans. It doesn't look like we're seeing a lot of AI usage in there right now, which is, I think, a very clear differentiation between build and validate there.
It'd be really interesting, if we were to run this again next year, what the trend difference would be there. Certainly, as I mentioned, AI is definitely on the rise in our kind of area at the minute.
Awesome. So I'm gonna stop sharing that one and jump on to the next, which, of course, you can see from the life cycle diagram there is release.
Probably one of the most important questions for teams here and the thing that we really see teams targeting is successful production deployments.
So what we do with compare and deploy and the automation pipeline can help massively here, but there are obviously other areas that impact successful production deployments.
So interesting to hear. How often are you releasing to production successfully the first time today?
Every time, majority, rarely, or never, in fact.
Excellent. Okay. Well, we'll see what we'll see what happens there.
So, really, we teed this up in the last phase. We're talking about deployment into production, but it's not just clicking a button and letting it go. There's still a lot to think about: construction and validation of the deployment package, maybe collation and validation of those pre and post deployment steps, potentially the presence or not of a staging or preproduction environment to do a dry run in. There's often administration like ServiceNow tickets or other administrative sign-off processes connected to the act of a production deployment that all need to be managed and put together.
Now, it's increasingly the case that high performing teams are looking to separate these ideas of production deployment and release.
We'll touch on that in a second. But where there isn't that separation, this means effectively that the release of some functionality, and by that we mean making it available to users, is owned effectively by the deployment team. Once it's deployed, it's there ready to use. Now, in some cases, the business still wants to retain ownership of release, for whatever reason. If they can't command that deployment cadence and say you've got to deploy this when we're ready for it to be released, then there's gotta be another way of separating deployment and release. This is typically where things like feature flags, or associated concepts like custom permissions, come to the fore. So what about some important behaviors then in the release phase that are gonna help ensure success?
Visibility: the easy creation or identification of the deployment package and its status, quality, and viability, which leads naturally onto confidence. Right? That you have clear evidence of the status of all the quality gates that need to be passed or all the administrative tasks that need to happen in order to release into production.
Security.
Right? You've got all these flags and gates and whatnot.
You want these to be enforced so that you don't accidentally deploy to production. It sounds odd, but if the action is reduced to a click of a button, then that's quite an easy thing to do. Now, is that click enforcing all the pre-validation checks that you've set up?
And clarity, and certainly consistency. We want clear deployment process documentation. Ultimately, one of the core behaviors around successful deployment is that whatever you do, whatever it takes to get that thing into production, you shouldn't be doing it for the first time when you're deploying into production. It's too late. It shouldn't be a surprise at that point.
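As a rough illustration of the feature flag idea mentioned above, here's a minimal Apex sketch that separates deployment from release using a custom permission. FeatureManagement.checkPermission is standard Apex; the permission API name and the surrounding controller are hypothetical examples.

```apex
// Minimal sketch of separating deployment from release with a custom
// permission acting as a feature flag. The permission name and this class
// are hypothetical.
public with sharing class InvoiceController {

    // Custom permission created in Setup and granted via a permission set.
    private static final String NEW_INVOICE_FLOW = 'New_Invoice_Flow';

    public static void generateInvoice(Id orderId) {
        // checkPermission returns true only for users who hold the custom
        // permission, so the new code path can be deployed dark and released
        // later simply by assigning the permission set.
        if (FeatureManagement.checkPermission(NEW_INVOICE_FLOW)) {
            generateInvoiceV2(orderId);
        } else {
            generateInvoiceLegacy(orderId);
        }
    }

    private static void generateInvoiceV2(Id orderId) {
        // new behaviour, released independently of the deployment
    }

    private static void generateInvoiceLegacy(Id orderId) {
        // existing behaviour, kept until the flag is rolled out everywhere
    }
}
```

The deployment team can then ship on their own cadence, while the business owns release by deciding when the permission set gets assigned.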
Okay. So, the poll: we've got our largest majority on any question, I think, here, which is, ironically, majority.
I wonder why I came up with that word. Well, that's great. I mean, it is great. This is a really good question to dig into when you go back from this and start thinking about these processes: in the majority of cases, that is a successful production deployment.
Now, features go through sandboxes and various situations, testing and whatnot, such that ideally we get to "every time". There may always be issues, but "majority" is great. We've certainly spoken to large teams where that's not been the case. But identifying how we could have found this sooner is very much the spirit of this.
So it would be fascinating to hear in the chat the sorts of things that, if you do find them, recur and mean that you don't get successful deployments every time.
Awesome. And I've just popped a quick message in the chat as well to say, if anyone does have any questions at all as we move through the different stages here, there is a q and a panel where you can put in some questions in advance that we can address when we get to the q and a segment towards the end.
So do feel free to drop any questions in there, and we can pick those up after.
Lovely.
Awesome. So as we move into the right hand side of the loop here, this is where we tend to find most teams are a little less familiar with the life cycle.
The operate stage kinda covers the behind the scenes running of the process to ensure everything's running smoothly and can quickly be restored if anything goes wrong.
So launching into our next question here, actually, on that topic, do you have a tested disaster recovery plan in place today?
Couple of straightforward answers in there. Yes or no?
One is, what is a disaster recovery plan? And we'll absolutely touch on that for those who don't know.
And finally, yes. And we've had to use it in anger to actually restore an environment or restore some deleted items.
Cool. Excellent. Right. Well, whilst everyone answers that one, let's have a think about this operate stage. As Adam said, it's often the case that teams can be less familiar with what it means to be in these stages. Post release, we wanna ensure that the application is usable and that it's used as intended. On that latter part, that it's used as intended, there might be training that needs to be carried out, ideally in an org that's not production, or at least communication around newly released functionality, especially where this can potentially change the way that users interact with the application.
Onboarding and offboarding of users, especially those which may impact any integrations, for example, has got to be done carefully and, again, with full visibility. If you have users changing roles permanently or temporarily, that can result in different permissions being needed, and can these then be applied without giving far more access than strictly necessary?
So these questions might not come back to specific DevOps platform functionality, but the ability to achieve them successfully can very definitely be influenced by decisions taken earlier on in the life cycle.
So, some examples of what we have here. We touched on backup and disaster recovery planning. Right? Backup and restoration: how resilient is your application to unexpected events?
The presence of, and testing of, a disaster recovery plan, your recovery point objective, your recovery time objective. How much control do you have? What's your flexibility on that? We'll touch a little bit more on that when we look at the poll, I think.
Processes for handling updates of the Salesforce platform. This is often one that catches people out. Right? Managed packages, Salesforce platform release updates, changes to the metadata that you might have in your version control system. What's your process around being prepared for these sorts of changes, so it doesn't suddenly slow developers down because there are huge diffs in their branches when the org has suddenly changed?
Archiving as well. Data is gonna continue to grow generally if it's not addressed.
Do you plan to continue to expand your org storage and pay more to Salesforce for that, regardless of how relevant the data is? Do you have a plan for how you address these sorts of things?
So that's just a flavor of the sort of actions and thoughts that go into the operate stage. Let's return to the poll and see how we've got on here. Let's have a look. So it's an overwhelming no. Now, I would strongly encourage people there; well, the question does say tested, so there may be some who said we have one, but we haven't tested it. But there may be some who said, yeah, we haven't got one at all.
There can be a tendency, I think, to assume Salesforce have their own backups. Salesforce goes down, Salesforce backs up, don't they? And to a degree, that's the case, but there have been cases where you don't necessarily wanna be reliant on Salesforce's own cadences. There've been a couple of incidents in the last four or five years; if anyone was around in twenty nineteen when the Pardot upgrade wiped out permissions or gave everybody view all data, Salesforce didn't have a backup of what permissions were set on all the users beforehand.
That was a horrific time for anybody who didn't have any sort of backup or recovery plan for it. If you've got a version control system in play, you might use Gearset and delta CI, for example. And there are delta operations outside Gearset if you're using the CLI alone; there's a plugin for VS Code that works out the deltas for you as well.
So if you're on these delta-based deployments, it can be quite eye opening to actually try and deploy your whole branch to an org, or just validate it. Would all my metadata, my branch, actually deploy to an org? If you increment API versions and don't pull the whole metadata down again, you can end up with your repository in quite an inconsistent state that you might find, at a critical time, doesn't actually deploy to an org. And again, as we said with production deployments, finding out that you have a problem with it at the point that you really need it is not the best time to do that.
Okay.
So let's launch into the final stage of the DevOps life cycle, which is observe. And I'm gonna kick off the last poll question here.
Observe is the kind of monitoring and oversight into your org's performance and errors, that kind of thing.
So the question I've popped in there, how do you find out about production errors today?
Are you relying on users to let you know when they encounter an issue? Are you using the Salesforce flow and Apex error emails? Are they going to one inbox, or are they going to individuals?
Do you have dedicated logging or monitoring tools in place today, like Nebula Logger, feeding into Datadog or Splunk?
Or were you an early adopter of Gearset's observability platform?
Interesting. We shall see. We shall see. Right. So, observe, as Adam set this up.
There are two aspects to this. There's firstly understanding when the application isn't functioning as expected, whether that's through errors or the org approaching limits, which might in turn lead to errors. But secondly, gathering feedback from the released functionality to help guide the next iterations of development. We've seen exactly how observe feeds back into plan.
So you release a feature, it's out there, it's being used. Maybe, if you're in a waterfall delivery especially, it was all specced up beforehand. We knew exactly how it was gonna work. Perfect.
That's all done. But we don't often live in a world where applications can be delivered successfully via waterfall. We'll release it, we'll iterate on it, and we need to get data out of its usage to understand how we need to increment it successfully.
So when we think about which behaviors need to be present in order to achieve these elements, a few items come to mind. We spoke about platform limit monitoring. Your production org breaching some of these can mean a standstill in some business processes for twenty four hours. You wanna be ahead of where you are on these things.
Collaboration and communication, going back to the outer loop there. You want to have processes which exist to gather and assess this feedback. There's no point waiting till it's broken to know this.
Requirements, nonfunctional requirements. Do your features and your epics and your processes have nonfunctional requirements which users are expecting to be met? Do you have requirements to process a certain number of accounts in a certain amount of time, for example?
Data and performance monitoring.
Does the way that you build ensure that the data is being gathered to monitor performance, if specified? And we're touching there on logging usage as the transaction flows through the Salesforce platform.
And then, critically, visibility. Can you access this information about failures or other unexpected behaviors, or performance data for that matter?
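On the platform limit monitoring point from a moment ago, here's a rough sketch of a proactive check that could run as a scheduled job. OrgLimits is a standard Apex class; the eighty percent threshold and the alerting approach are illustrative assumptions, not a prescribed pattern from the webinar.

```apex
// Rough sketch of proactive platform limit monitoring, runnable as a
// scheduled job. The 80% threshold and the alert mechanism are illustrative.
public with sharing class OrgLimitMonitor implements Schedulable {

    public void execute(SchedulableContext ctx) {
        List<String> warnings = new List<String>();

        for (System.OrgLimit l : OrgLimits.getAll()) {
            if (l.getLimit() == 0) {
                continue; // some limits report zero capacity; skip them
            }
            Decimal usedPct = ((Decimal) l.getValue() / l.getLimit()) * 100;
            if (usedPct >= 80) {
                warnings.add(l.getName() + ' at ' + usedPct.setScale(1) + '%');
            }
        }

        if (!warnings.isEmpty()) {
            // Replace with your own alerting: a custom object record, a
            // platform event, an email, or a push to an external monitor.
            System.debug(LoggingLevel.WARN, 'Org limits nearing capacity: ' + warnings);
        }
    }
}
```

Something like System.schedule('Org limit check', '0 0 * * * ?', new OrgLimitMonitor()); would run it hourly, giving warning well before a limit causes that twenty-four-hour standstill.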
Okay. Let's have a look then at the poll.
Okay. So we've got... oh, one vote for Gearset. Fantastic. Thank you so much. One for Nebula Logger feeding into Datadog or Splunk, which is interesting.
And then, well, the joint winners: users letting you know, or Salesforce, I guess, letting you know, which is definitely the first stage of this. I spoke about two stages at the beginning, and I think the division in the answers is sort of the same as well. The top half, I think, is reactive. It's happened.
We need to understand how it happened, replicate it, can we get a fix out. Whereas the bottom half is heading towards proactivity: we need to know, are we in danger of this thing happening? Now, obviously, Gearset's observability can ingest the emails, so it's reactive in that sense.
But as much as possible, a shift towards proactivity is gonna obviously avoid the issues happening in the first place, and that's sort of one of the core aims of what the life cycle's doing here: trying to shift this left, ideally before release, and get things solved before they get into production in the first place.
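To illustrate the gap that poll surfaced between "users let us know" and dedicated logging, here's a hedged sketch of centralised error capture in Apex. The Error_Log__c object and the service class are hypothetical; in practice many teams use a packaged framework such as Nebula Logger rather than rolling their own.

```apex
// Illustrative sketch: capture production errors somewhere the team can see
// them, instead of relying on users to report issues. Error_Log__c and
// PaymentService are hypothetical names.
public with sharing class PaymentService {

    public static void capturePayment(Id orderId) {
        try {
            processPayment(orderId); // business logic that might throw
        } catch (Exception e) {
            // Record enough context to reproduce the failure. Note that if
            // you rethrow, this insert is rolled back with the transaction,
            // which is why logging frameworks often publish platform events.
            insert new Error_Log__c(
                Class_Name__c  = 'PaymentService',
                Record_Id__c   = orderId,
                Message__c     = e.getMessage(),
                Stack_Trace__c = e.getStackTraceString()
            );
            // Surface a controlled error to the caller rather than rethrowing.
        }
    }

    private static void processPayment(Id orderId) {
        // hypothetical business logic
    }
}
```

From there, a report, a flow, or an external monitor can alert on new log records, moving discovery out of users' inboxes and towards proactivity.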
Cool. Awesome.
So, thank you very much for your participation in those questions. I think, hopefully, we've all learned a good deal about how things are out in the world there. We've explored the life cycle, and we heard at the beginning some of the benefits seen by customers who adopted this way of thinking. Recall how Granicus were able to move from those hours and hours of change set deployments towards a version-control-backed pipeline where they're free to deploy items as and when they need.
Consider, you know, the time and effort savings that will have arisen from being able to do that.
However, now that we've explored those stages and what they mean, the obvious outstanding questions really relate to the practical steps that can be taken to transform that output into similar results. What does that process look like, and how do you start on that path?
So firstly, you wanna think about the qualities that we defined for each of those stages earlier. How close are you to exhibiting them?
If you don't already have such a breakdown, start with ours. Start with the breakdown of the stages. Start with this one and add in anything that you think is missing as you encounter it.
For each stage, though, if you're not compliant with the behaviors that we saw, think about what might need to change in order for you to be able to do so, because that's a really strong starting point. If you do that across all stages, you'll potentially have a good long list of changes.
Some potentially depend on others, so work out a viable order in which these changes to your processes and products and potentially people could happen. And by people change, it's not just changing the actual people; it's how they interact with the process and the product.
Then try to estimate how big a change is needed. Is it product? Is it people? Is it process? Is it a combination of these? Would it be a really radical shift in how things are currently being done?
But we should continue to bear in mind the overriding principle of shifting left. There's lots of evidence about how the cost of remediating anything increases the later in the delivery process an issue is found. Now, issues are obviously gonna happen, but you should use these as opportunities. When you determine the stage in which an issue was found, as we mentioned before, ask your teams how that issue could have been found in the previous stage, or perhaps even earlier as well.
Was there a missing process of some sort that meant the issue escaped earlier detection?
Even if these are large and complex issues, capture them. A common example we see a lot is when the application architecture prevents the creation of a reasonable Apex test.
The data setup that's needed is too much, and you start breaching governor limits in Apex tests. This can lead to the test simply existing to provide code coverage, and then the functionality testing has gotta be UI driven, either manually or via automation, or done in the org post deployment.
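As a sketch of keeping that data setup lean, rather than accepting coverage-only tests, here's a hedged example reusing the hypothetical OrderDiscountService from the earlier sketch: @TestSetup builds the minimum shared data once per class, and Test.startTest()/Test.stopTest() give the code under test its own set of governor limits.

```apex
// Illustrative only: minimal test data plus startTest/stopTest so governor
// limits consumed during setup don't count against the code under test.
// The custom fields are hypothetical.
@IsTest
private class OrderDiscountServiceTest {

    @TestSetup
    static void createData() {
        // Create only what the assertions need, not a copy of production data.
        insert new Account(Name = 'Test Account', Discount_Rate__c = 10);
    }

    @IsTest
    static void appliesDiscountToAllOrders() {
        Account acct = [SELECT Id FROM Account LIMIT 1];
        List<Order> orders = new List<Order>();
        for (Integer i = 0; i < 200; i++) {
            orders.add(new Order(
                AccountId = acct.Id,
                EffectiveDate = Date.today(),
                Status = 'Draft'
            ));
        }
        insert orders;

        Test.startTest(); // fresh limits: setup DML isn't charged to the service
        OrderDiscountService.applyDiscountsBulk(orders);
        Test.stopTest();

        // Assert behaviour, not just coverage.
        for (Order o : [SELECT Discount__c FROM Order WHERE AccountId = :acct.Id]) {
            System.assertEquals(10, o.Discount__c, 'Discount should be copied from the account');
        }
    }
}
```

If even a setup this minimal breaches limits, that's exactly the kind of architectural signal worth capturing as described above.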
And over time, you're gonna build up a collection of these sorts of reasons, and some themes will start to emerge both in terms of what the issues are and their symptoms and in which stages of the life cycle they're occurring. And those that occur most commonly would feel like obvious ones to tackle.
Now, of course, the ideal would be that these are all changes, right, which are small in scope and have huge impact so that you could get a load of value really quickly. Now the reality is likely to be a little bit different.
As you study these ideas for how to shift left, you're likely to find that more stakeholders might need to be involved. So if you think about the plan stage, who might be responsible for at least some of the actions there, you might have business users, product team, executive input. And if you start to add more rigor into the processes in that stage, then anyone who contributes to it would, in theory, need to be notified or consulted.
So this might start to sound like a lot of effort, but don't forget that the reason you've identified this as a process improvement is because you have examples of the problems which arose because you don't do whatever the proposal is.
Now if you can also quantify the costs, whether that's gonna be time, money, or whatever other metric, then you've got a tangible amount as the proposed benefit from this change to take to the stakeholders.
If you can equally estimate the cost of the process change and show that to be less than the benefit of making it, the cost that you don't then incur, then it becomes easier to bring stakeholders on board with the proposals.
So to come back to the process change rollout strategy, those which impact plan and build may have significant impact on the overall process as we mentioned at the start.
This is where the, you know, the more guardrails you've got earlier in the process, the less risk there is of the delivery heading off in the wrong direction entirely.
So that's far from saying though that there's less value in improving any of the other stages, but there should be a continual aim to shift left, find things earlier, allowing those later processes to focus more on their aims than being interrupted by bugs.
So it's all well and good to talk about the positive impacts available through this approach. But how can you be sure that you're seeing them?
Now the fundamental mechanism here is through defining and capturing consistent metrics across your delivery pipeline.
Before you start any of the above, you should take a baseline of them.
Perhaps, though, you aren't capturing any metrics, in which case defining them has to be the first step.
But which? Which metrics to capture? What's most important is what matters to you.
Deployment frequency and change failure rate, as we can see here, are two of the four DORA metrics, and these two tell an important story.
How quickly you can deliver changes and how stable are they?
Now, there's little value in improving your delivery velocity if your quality decreases as a result. Right? Quality should, at the very least, stay the same.
Now, these two metrics in particular closely match the goals of the DevOps life cycle: spot issues earlier, shifting left, as we spoke about, so that fewer get to production, but also overall process efficiency gains.
Delivering a higher quality application at a greater speed, who wouldn't wanna do that?
It might not be a single step, though, from where you are to that state. If you need to take multiple steps, then certainly focus on the quality element, the change failure rate, first.
There's an old quote around software development that goes: first, do it, then do it right, then do it better.
And that's pretty much how to tackle this. Right? I suspect you're all already delivering an application, so that's the "first, do it" bit.
Doing it right then is about the quality improvement.
So that might slow you down a little, to put these guardrails in place to ensure greater quality earlier on. But it's important not to worry too much about that at that stage if at all possible. Once you've got processes in place which guarantee quality, then you can speed up.
The reverse approach risks delivering more bugs at greater speed, which I suspect isn't a situation anybody here wants. So, of course, there might well be other factors that you want to assess.
Right? The other DORA metrics, mean time to restore and lead time for changes, we find customers also tracking those often, but also a number of teams we work with have a variety of internal measurements that they want to capture as well, or different definitions of those DORA metrics. Now, ultimately, this is gonna be driven by a mixture of what's important to your organization as well as the standards across the industry, to allow a degree of cross comparison.
And the final point to note here is that even if you don't intend to immediately go away and start introspecting your processes to adopt the DevOps life cycle, then baseline the metrics you want to improve.
Only if you know where you're starting from can you then measure progress as you make changes to these processes.
And be mindful of making too many changes at once. If you make five changes and see an improvement in metrics, you can't really be sure that all five are positive. It might be that one of them has been detrimental, but it's outweighed by the other four.
So, to start wrapping things up, we've taken quite a bit of a journey here. We started with the high level theory and gone right through into the specifics of measuring the impact of process changes.
Along the way, we've encountered a lot of ideas and theories.
And we hope it's given you inspiration to, at the very least, start thinking about how you can iterate on how you work, and to do so within a framework that's engineered to direct you towards greater success in your DevOps processes.
Adam will speak, shortly about how Gearset is here to partner with you in that journey.
But first, we'd like to pause here to take any questions that you might have or have been posting during the session. So I'll have a quick look at that.
I was gonna say maybe we want to look at the q and a from the panel first, and then if anyone does have any ad hoc questions they wanna ask, feel free to raise a hand if you wanna come off mute and ask a question that way as well.
Oh, yeah. The q and a. Yes. Okay. Yeah. There's a couple in here. So, Alexander: we have operational companies, and they have their own Salesforce admins and developers, but they're deploying only from dev sandboxes to the staging full copy sandbox.
It's our corporate responsibility to approach production. So operational companies don't wanna pay for Gearset licenses since they can deploy from their dev sandboxes for free without paying for Gearset licenses.
Okay.
I'm not quite digging the question itself out of that, but process harmonization and standardization is always gonna be useful, and that's something on your side here. It's very difficult, as we said there about making process changes and measuring the impact and benefit of them, if there are multiple different processes taking place within the same delivery pipeline.
And equally, if you're measuring things, can you be certain that the same things that you're trying to capture are being applied to all the contributors? If some people are bypassing a process that's capturing some data, you're not necessarily getting the full picture.
The whole question of whether we wanna force developers into a UI-driven way of working when they're completely familiar with the CLI is one we hear a lot.
There's plenty within Gearset that is beneficial in ensuring shifting left; there are all the problem analyzers and the comparison screens that you don't get in the CLI.
Although it is a trade-off that you need to think about: the comfort and speed of delivery of developers if they're familiar with the CLI, versus the number of things that get past because you're not using things like problem analyzers.
So, yeah, I think process harmonization is always, or should always, be the goal at the very least. Okay. We have an anonymous question next.
Currently, we don't have enough licenses to build a full CICD pipeline. We're using Gearset for deployments in the process mostly manual. Now we'd like to start using ADO for repositories. Are there any best practices we should follow?
We want to ensure that our repository serves as the single source of truth. Oh, now this is a good question.
So, single source of truth: that's a loaded term. In non-Salesforce development, it's easy. Right? If your application is just compiled code, it's very simple. That's why version control systems came about and became very successful.
With Salesforce, for a lot of these things there's an easy answer of saying, oh, Salesforce is different. And I don't like that, because in a lot of cases it isn't. For this, though, it is, to a degree. There are things which the metadata API doesn't support, or actions that can go one way but not another.
You can add a record type but not remove one. There are certain types of change business users don't want to have tickets created for and wait for; creation of reports and dashboards we often see as things where the org is still the source of truth. And it would be very difficult, as a cultural change, to say, oh, no.
Rather than reports being created directly, don't allow your users to create reports, get them to submit a ticket to the development team? Neither do the development team really wanna be spending their time creating reports. So the mantra we want to use is that each piece of metadata should have a single source of truth, which might be the version control system, or it might be the org in some cases, but it isn't changed in both. If it's owned by the version control system, all changes to it start from there.
And anything that changes directly in the org should not be accepted. It's no coincidence that you can't edit Apex in a production org.
And that's Salesforce's sort of concession to this.
They could, in theory, stop people editing flows, now that they can do business logic, and validation rules and all this sort of stuff, but they don't.
But that same theory should apply, and you should ideally assume that that is the case. The reason is they want Apex to be tested and validated before it gets put into a production org, because it can break business processes.
So anything that you put in the version control system should ideally be that which you change regularly through development, that which you want to test before it goes into the production org, and that which is related. There are some nuances around the metadata; it gets a little bit complex with things like custom fields on standard objects and that sort of thing. But literally anything that you own and are responsible for delivering through deployments, rather than creating or editing directly in the production org, ideally should be in your version control system.
Now, there are some architectural side challenges if you have, like, fifteen thousand roles or something like that, and huge profiles, because of the way the metadata API works. But we can always have separate conversations about those if you've got some edge cases that might cause performance issues, let's say.
But yeah, the rule of thumb would be: anything that you've customized or changed that you would expect to deliver via a deployment, because you want to test it before the change is made in production, should be in your version control system.
Cool.
Lovely. Well, thank you very much for those questions. Those are both the ones that we had in there, so I think we're bang on time to jump back across to Adam and round things off.
Yes. Perfect. I shall wrap things up from here. And I will say as well, for a couple of the questions that came in around licensing requirements and things like that: this is absolutely what myself and the rest of my team are here to discuss with you. So do feel free to reach out if you have any kind of questions or concerns around that or how the packaging works. Happy to discuss with you and find a solution that works.
But to come back around to the DevOps life cycle: we've seen what a mature DevOps life cycle process looks like today, and you've seen the impact that this kind of life cycle thinking can have for teams.
And we'd love for the next step to be to do this for yours.
So we offer strategic reviews to give you a tailored one to one session with our team to map your current processes against this DevOps life cycle, look at high ROI improvements that we can make, and build a clear road map for scaling that delivery with confidence. So same framework that you've seen Andy talk through today applied directly to your team's setup, and you'll leave that then with practical prioritized actions and not just the theory.
If you're ready to kind of undertake that and see where you stand, how to move changes faster with less risk, the account management team can help arrange that strategy review with you today.
As I mentioned at the top, every team using Gearset right now has a dedicated account manager assigned to you, and we can guide you through that process.
After the webinar today, alongside the recording of this session, we'll be sending out a link to book in that tailored session with your account manager, and hopefully, we see as many of you there as possible.
Thanks everyone so much for for joining us to run through the DevOps life cycle today.
And if you have any questions at all or again wanna book in for that session, do just reach out via the link you'll get in the email follow-up.
And with that, that is the end of the webinar today. Thank you so much, Andy, for all of your insight and thoughts around this, and thank you everyone so much for joining.
Likewise. Thanks, everybody. Have a great day.