The do’s and don’ts of CI/CD: practical steps to get started

Description

Get guidance on implementing CI/CD in this practical webinar. Richard Owen, Senior Product Manager at Gearset, explains how to get started.

Find out how to:

  • Adopt Salesforce CI/CD successfully
  • Scale your CI/CD process
  • Adopt a shift-left approach to release management
  • Iterate on your workflows

Transcript

Richard, I shall hand over to you. Thanks very much, Amy. And hi everybody. Thanks very much for coming on to the session today.

Good morning, good afternoon, good evening, wherever you are in the world. So yeah, as we said, this session is titled the do's and don'ts of CI/CD: practical steps to get started.

First, a quick introduction. I'm Richard. I've been a senior product manager at Gearset for just over two years now. Prior to this, I worked for ten years in financial services, building marketing and credit risk software for enterprise teams. And my focus at Gearset is functionality which helps teams work better together.

And as part of this, I've been the PM on our pipelines functionality for the past eighteen months or so. So today, I want to share with you some of the things I've learned about how to help teams be successful as they adopt CI/CD.

So here's what I'm gonna be talking about today. We're gonna dive straight into some of the do's and don'ts we recommend to help teams succeed in getting up and running with CI/CD in the first place. We'll also look at some of the best practices we recommend for teams to become and stay successful, along with some examples of tooling and functionality that we've developed to help you do just that today. And then at the end, we'll have a quick look at the next steps, or what teams should do when they get to a good state with their process.

So with that, let's get started.

So let's start with the really big question: how can teams get themselves set up for success when implementing CI/CD?

So the first and most important thing at the start is to assess your starting point. You need to know who you want to bring into the process, what their roles are, and where they're happiest working. Because some of your team members are gonna be happiest living in VS Code, version control, and the CLI, whereas others need a user interface, a GUI, to understand and push changes, or require that top-down view of the whole process so they know what they need to do as a release manager.

Secondly, don't move too far, too fast. If you don't have anything in version control, and you're using change sets or just starting with org-to-org deployments, moving straight to a full-fledged release pipeline in one shot will be really, really hard to do successfully. So define the steps you want to take over the long term.

Now those two initial steps will give you a really good grounding in the process, but they can only take you so far. To become a really elite performing team, you need to be shifting left where possible, ensuring that your team's first instincts are to work using these best practices.

So at this stage, you want to encourage transparency, good communication, and shared ownership of core processes. Shifting processes left allows errors to be caught really early in the process, and it encourages teams to keep all of their environments in alignment.

And then once you've got to that stage, once you've adopted those steps, it's absolutely vital not to stop.

The elite teams are always iterating on that process and they're measuring their improvements so they can understand how they can keep improving over time.

And understanding the lead time for changes, the success rate for deployments, and the return on investment is absolutely critical for the long-term success of your teams.

So as the product manager on Gearset's pipelines, I've worked with coming up on two hundred teams now, at different stages of that journey, looking to get their release process up and running. So as we go through these steps, I'm gonna cover some of the tips and best practice recommendations, and share some of the functionality that we've built to help those teams succeed.

So let's go straight into that first step, the "where are you starting from" step. To make the best decisions as to how to get started, you need to know where your starting point is, and we usually frame this through the DevOps maturity spectrum. Here's kind of a cartoony view of it: the teams we work with range from novice, basically using change sets exclusively, through to expert, where you're relying on version control. As teams move along the spectrum from left to right, they move from change sets, to org-to-org deployments, to the initial adoption of version control, and then finally to relying on it as a source of truth. Their process enables faster deployments, shorter lead times, and lower failure rates: the real core of the DORA metrics.

And what's more if things do go wrong, they can recover far more easily from this.

So to understand where you sit on that spectrum and how to get started, there are a couple of questions that we need to answer.

Firstly, who's part of the team working on your Salesforce orgs or instances? How many admins or devs do you have in the team? Do you have, or need, a separate release manager, and are there other roles involved in the process? Because the shape of your team will make a huge difference to your first steps getting up and running.

Secondly, how experienced is your team with DevOps practices? Are they already well versed in using version control, or are they more familiar with using change sets to do org-to-org deployments?

Now these two questions are inextricably linked, really; the answer to the second is very much tied to the first. And you may have a very wide range of experience and technical knowledge across the team.

So there's a spectrum of technical backgrounds, and if you're looking at this, you'll be able to picture exactly where you sit on it. At one end you've got folks who really live by the "clicks not code" mantra, and at the other end you've got more traditional pro developers.

And this leads to a split of experience across a team, where different people have diverse approaches to creating changes.

Because users on your team may have widely different levels of familiarity with source control and DevOps processes.

If that's the case, different groups across the team end up working in silos with poor communication. And the larger your team gets, the harder this gets as well, because with small teams it's easier to maintain a consistent level of knowledge and technical capability, keeping everybody on that same page. But as teams get larger, the number of folks at both ends of the scale will also increase.

And as a result of that, teams will often lack a common, clear, and externalized release process.

So this is the first big don't: don't assume a process is one-size-fits-all, because making people work outside of their comfort zone, in tooling that isn't intuitive for them, is really unlikely to be successful. Remember that spectrum we were just looking at: users on each end of it aren't going to intuitively reach for the same tooling to perform the tasks they need to do as part of a release.

What we really want to do here is figure out where your starting point is, who needs to be involved in the process, and how they want to work, because developers may be more comfortable using version control and CLIs, and admins and configurators using a UI.

And here is where we really see the pipelines UI that we've developed coming in, because admins are usually less familiar with git workflows and terminology.

They'll naturally reach for that GUI, which effectively translates git functionality into the steps they need to take to progress an item, while developers may prefer to deal directly with PRs in git, or check in directly from VS Code. And pipelines becomes almost like common ground for those kinds of cases. It helps all users on the team have a common view of what's happening at each stage of the process, all the way from those first commits in the dev sandboxes through into production.

Now for the second step: start small, scale up. One of the most stark stats from the State of Salesforce DevOps report that we released earlier this year is that just nine percent of elite teams adopted DevOps with a big bang approach. And it's really important to realize that that big bang approach, in other words launching your DevOps in a single step, comes with a really huge risk of failure. DevOps involves so many variables that it's better to use an incremental, iterative approach to implement it, which enables the organization to focus on continual improvement and ensures that all groups are collaborating every step of the way.

Now, as you're moving through the DevOps maturity scale, we're talking about going from change sets, to org-to-org deployments, to starting to adopt version control, to relying on it as a source of truth. Now, the main thing that implies is that you want to put everything into version control and rely on that for all your Salesforce metadata. Right?

Well, not exactly.

Because the Salesforce metadata structure means that some metadata types are simply not suitable for keeping under version control, and others, like dashboards and reports, are very environment specific, so we don't want to be pushing test dashboards straight through into production.

So choose the metadata that you want to version really carefully, and test out changes through continuous integration jobs. What we typically see is that teams finding success with CI/CD use version control for roughly twenty-five to fifty metadata types, which covers the vast majority of changes that they want to push through. That spans Apex and some of the core metadata types like objects, profiles, and layouts: things that cover the vast majority of what you'll be working on on a day-to-day basis.

We also recommend, for the best success, pushing changes through a delta CI, or commit-by-commit, deployment process. That way, you're confident the only changes which are getting deployed to your org are the ones that you've committed directly to your branch, and your deployments remain as small as possible.
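To make that concrete, here's a minimal sketch of the idea behind a delta deployment, assuming your Salesforce metadata lives in a git repository. The helper function and the "last-release" tag are hypothetical illustrations of the technique, not Gearset's actual implementation:

```python
# Sketch of a delta (commit-by-commit) deployment: only files changed
# between two commits are included in the package.
import subprocess

def changed_metadata(base: str, head: str, repo: str = ".") -> list[str]:
    """List metadata files changed between two commits, using git."""
    out = subprocess.run(
        ["git", "-C", repo, "diff", "--name-only", f"{base}..{head}"],
        check=True, capture_output=True, text=True,
    )
    # Keep only Salesforce metadata sources, e.g. anything under force-app/
    return [p for p in out.stdout.splitlines() if p.startswith("force-app/")]

if __name__ == "__main__":
    # Deploy just the delta, e.g. everything merged since the last release tag.
    for path in changed_metadata("last-release", "HEAD"):
        print("would deploy:", path)
```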

And that brings us to some of the recommendations as to how teams can scale up successfully here. So we'd say it's a good idea to start building up your pipeline from your core environments. Start out with just those core environments you need in the pipeline, because adding new environments, branches, and sandboxes is really easy to do later on. Lots of the teams that we work with start from that core path of integration to UAT to production, and then they build out hotfix branches and additional dev sandboxes into that process as they go through the implementation.

Make sure that you get that core path embedded first, understand what's getting pushed through, and make sure that you're comfortable with those changes as you move through the process.
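As a conceptual sketch of that "start from the core path" advice: you can think of the pipeline as an ordered promotion path that's cheap to extend later. The environment names here just mirror the example in the talk; nothing in this snippet is Gearset's API:

```python
# Model the pipeline as an ordered promotion path.
PIPELINE = ["integration", "uat", "production"]

def next_environment(current: str) -> str | None:
    """Return the environment a feature promotes to next, or None at the end."""
    i = PIPELINE.index(current)
    return PIPELINE[i + 1] if i + 1 < len(PIPELINE) else None

# Scaling up later is just inserting stages, e.g. a staging org before production:
PIPELINE.insert(2, "staging")
print(next_environment("uat"))  # -> "staging"
```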

And starting small also applies to the features and changes that you're pushing through the process as well.

In that case, we want to keep features as small as possible to minimize the chance of hitting merge conflicts as you go through your release process.

Now for a lot of metadata types, including layouts and record types (and here we've got an example of a layout being deployed through Gearset), we support deploying only the parts of the object which have changed, just as we've supported breaking down profiles into their components for several years now. And this makes it a lot easier to avoid conflicts and deploy just the parts of the object which are needed to complete the story you're working on, without having to bring other users' changes along with you.

And Apex code is another case where it's really good practice to reduce the size of the classes being used, because the smaller the classes, the less likely it is that multiple users are going to be hitting the same files.

As I said earlier, there's only so far you'll get with those first two steps.

To become a really elite performer, you need to ensure that the team is working to those best practices naturally.

And this is better achieved by adopting some of the core tenets of DevOps, which we've proven over many years.

Because DevOps isn't a new thing. It's been around in general software for over a decade now, because it works: teams using these processes tend to work better together, and they move changes faster.

One reason teams adopt these processes is to improve communication and break down silos, because if you've got development teams, ops teams, and release managers who don't have good lines of communication between them, it will inevitably introduce delays into the process.

And when this is improved, teams release faster, and they release more reliably.

And this works best when you've got lightweight in-team approvals, you're moving as much of the process as early as possible, shifting left to raise issues early so they don't come as a surprise when you're trying to deploy into production. And on top of that, you've got much better knowledge across the team as to what's going on across the board.

So really we're looking to move from somewhere where you've got separate silos and independent work by each role in the team, into a situation where you've got a much more inclusive workflow: all team members are involved in the process, everybody knows where stories have progressed to and how their work interacts with others' in the team, and you just get that better and closer communication across the team as well.

And this is summed up in a quote which I've referenced a few times in presentations over the last year.

If it hurts, do it more often; bring the pain forward. Because if you're able to test early, get your code into early environments and under source control, and understand conflict issues as early as possible, then the chance of having items fail later in the development cycle, where it's going to be a higher stress situation anyway, is dramatically reduced.

So there are a load of aspects to how Salesforce teams can achieve this: from improving visibility of changes across the team, dealing with merge conflicts effectively and getting visibility of them early, and keeping environments better in sync, through to integration with application lifecycle management tools like Jira or Azure DevOps, and defining a release schedule that works for your team. These are some of the primary areas where the pipelines functionality that we've developed comes in, to help teams adopt these best practices.

So over the next few slides we'll see some of the tooling and workflows that we've enabled for these teams, as well as some of the pitfalls that I've seen teams run into over the past couple of years, and some things that ideally you want to be avoiding when you're carrying out an implementation.

So first, visibility across the team. It's really important for everybody in the team to have a common view of the state of the world, and I'll bring up the pipelines view again to show how that helps here. Your release manager needs a high-level visualization of all changes in flight, including merge status, conflict resolution, and validation, while every member of the team needs to see where their stories are and how they interact with work being done by their teammates, so that you can respond to any challenges as early as possible.

But even once you've got that in place, there are some technological challenges that you might hit when you're working with Salesforce, or quirks of the platform, let's call them. Salesforce is a great user-driven platform, but traditionally it wasn't built for collaboration, and it can be quite difficult to see who's working on which items and features. It's common to step on other people's toes and override other people's changes; layouts and profiles are key examples of where that happens.

And that means that you can often get merge conflicts arising.

And finally, we've got complex dependencies and metadata API idiosyncrasies which have to be overcome as well. So this all means that we need to consider what happens when stories interact with each other, and when conflicts arise, very carefully when we're adopting a CI/CD process.

So a little bit more about that merge conflict part. We want to surface merge conflicts early in the process, to help unblock your teams and avoid delays when they push changes later in the process. Because merge conflicts are not an inherently bad thing.

They stop you from overwriting somebody else's work without asking them, and they make sure that you're working constructively together. Now, it's critically important to flag up genuine merge conflicts so that they can be resolved early in the process. That way users can talk with each other early, and they can resolve them before they become a much bigger problem later on. So we make sure that when you do hit merge conflicts, we've got a UI, and this shows a very simple example of it, exposing the context to allow you to resolve them easily.

And of course, a bunch of the conflicts that you normally get in git are also down to git not having a great understanding of XML-type metadata.

And for that case, we've developed our semantic, or tree-based, merge algorithm, which has knowledge of the Salesforce metadata structure. It allows us to unravel those things which aren't really conflicts, but which git flags as conflicts, very easily. So whereas you may see hundreds of conflicts in git, often that will be reduced to only the genuine conflicts which exist when you look in Gearset.
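As a toy illustration of why a tree-based merge sees fewer conflicts than a line-based one (a sketch of the general idea only, not Gearset's actual algorithm): two people add different labels at the same place in a Salesforce-style XML file, which a textual merge flags as a conflict, but an element-wise three-way merge resolves unambiguously:

```python
# Two people add different custom labels at the same spot in the file:
# a line-based merge conflicts, but merging by element is unambiguous.
import xml.etree.ElementTree as ET

BASE = "<CustomLabels><labels><fullName>Welcome</fullName></labels></CustomLabels>"
OURS = ("<CustomLabels><labels><fullName>Welcome</fullName></labels>"
        "<labels><fullName>Goodbye</fullName></labels></CustomLabels>")
THEIRS = ("<CustomLabels><labels><fullName>Welcome</fullName></labels>"
          "<labels><fullName>Error</fullName></labels></CustomLabels>")

def label_names(xml: str) -> set[str]:
    root = ET.fromstring(xml)
    return {label.findtext("fullName") for label in root.findall("labels")}

base, ours, theirs = map(label_names, (BASE, OURS, THEIRS))
# Element-wise three-way merge: keep base plus each side's additions.
merged = base | (ours - base) | (theirs - base)
print(sorted(merged))  # ['Error', 'Goodbye', 'Welcome'] -- no conflict
```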

Next, environment alignment across the pipeline. A key cause of problems in the release process is environment drift. If there are loads of changes in master, or in your production environment, which aren't in your lower environments like UAT or integration, new features will often hit validation errors when you try to merge them in. So we recommend merging back from master to lower environments frequently, to make sure the whole process remains clean and your environments are well aligned. Don't let your environments get too far out of sync with each other; some of the cases where I've seen teams get into trouble are where they let those environments drift further and further out of alignment, because this causes a lot more friction as you go through subsequent releases.
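Here's a minimal sketch of that back-promotion, assuming a conventional branch-per-environment layout. The branch names are hypothetical, and this is just plain git driven from Python, not any Gearset feature:

```python
# Merge the production branch back down into lower environment branches
# to limit drift. Run from the root of your metadata repo.
import subprocess

LOWER_BRANCHES = ["uat", "integration"]  # ordered from highest to lowest

def back_merge(production_branch: str = "master") -> None:
    for branch in LOWER_BRANCHES:
        subprocess.run(["git", "checkout", branch], check=True)
        # A conflict here is an early drift warning worth investigating.
        subprocess.run(["git", "merge", "--no-edit", production_branch], check=True)
        subprocess.run(["git", "push"], check=True)

if __name__ == "__main__":
    back_merge()
```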

And this also applies the other way around. Right? Because if you've got lots of changes which have been abandoned over a long period of time, your lower environments, integration and UAT, can become overgrown with lots and lots of changes which will never make it into production. And every one of those changes is a new difference.

It's a new little landmine. It's a new source of drift between environments, which has the potential to cause further conflicts down the road. And what's worse is that with those conflicts, it becomes more and more likely that you need to resolve them in a different way on different environments, and that leads to further drift, with environments becoming more and more out of sync.

So our recommendation here is to identify those features, revert them out, and get back to a state where you're well aligned with production.

When it comes to application lifecycle management tools, like Jira or Azure DevOps, we recommend having really good integration with those systems, to allow information about changes to be surfaced to the wider business, to people who wouldn't necessarily be working directly inside Gearset or directly on changes.

And there are two different areas that we've focused on with pipelines to help teams here.

One is putting information about the promotions that have happened onto the tickets themselves, with links to the PR and the item that's just been deployed, as you'd see there. As well as that, we allow the Jira status to be updated automatically as features make it into key environments, so that you don't have to manually update those tickets. We've found that this really helps to unblock teams, because if you know that a feature is ready for testing, you can pick it up without needing a manual prompt from your teammate. And it becomes all the more crucial as your team gets larger and communication gets harder.
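For a flavor of what that kind of automation looks like, here's a hedged sketch using Jira's public issue-transition endpoint. The URL, credentials, and transition id are placeholders, and this is not Gearset's integration code:

```python
# Move a Jira ticket to a new status when its feature reaches an
# environment, via Jira's REST transitions endpoint.
import requests

def transition_issue(issue_key: str, transition_id: str) -> None:
    resp = requests.post(
        f"https://example.atlassian.net/rest/api/2/issue/{issue_key}/transitions",
        json={"transition": {"id": transition_id}},
        auth=("bot@example.com", "api-token"),  # placeholder credentials
        timeout=30,
    )
    resp.raise_for_status()

# e.g. when a feature lands in UAT, flip its ticket to "Ready for testing"
transition_issue("PROJ-123", "31")
```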

And finally, when it comes to releasing into production, know what you want to achieve. So different teams have different requirements for this step.

We've talked about people doing feature-by-feature deployments and trying to release as quickly as possible.

But doing that is not always viable for large teams, and this can be for many reasons.

Maybe you just want to know with certainty when changes will hit the production environment, maybe it's a requirement of your business processes, or it's simply the team's preference to release out of hours or on a regular cadence.

So our recommendation here is to choose a process that works best for your team. You can promote feature by feature if you want, or you can build up a release over time, validate it continuously against your production environment, and then, when you're ready to go, deploy it in a single block. And even in this model you can still push individual features or hotfixes through, so they don't have to wait for anything else when you need to push out a really urgent change.
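As a conceptual sketch of those two styles, continuous feature-by-feature promotion versus a batched, scheduled release with hotfixes allowed to jump the queue (the ticket names are made up):

```python
# Batch validated features into one scheduled production deployment,
# while urgent hotfixes can still be deployed on their own.
from dataclasses import dataclass, field

@dataclass
class Release:
    features: list[str] = field(default_factory=list)

    def add(self, feature: str) -> None:
        self.features.append(feature)  # validate continuously as items land

    def deploy(self) -> list[str]:
        batch, self.features = self.features, []
        return batch  # goes to production as a single block

release = Release()
release.add("PROJ-101")
release.add("PROJ-102")
print("hotfix deployed alone:", ["PROJ-999"])  # urgent fix skips the batch
print("scheduled release:", release.deploy())  # ['PROJ-101', 'PROJ-102']
```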

So how do these changes in process end up impacting teams?

Well, when team members have a clear understanding of the process and changes are approved for implementation, it just drives high performance across the board. And you see that again in the State of Salesforce DevOps report: increased productivity, more frequent releases, improved release quality, reduced lead times. It just helps teams work better together and get changes out faster.

And in terms of ROI, it's quite amazing: ninety-eight percent of teams report significant ROI from adopting CI/CD processes.

So once you've got this in place, you've got a release cycle sorted, you've got features moving into production quickly, a really high success rate for deployments, and great communication.

What's next?

And teams of any size, but particularly large teams, can't rest on their laurels. They've got to keep reassessing and iterating on their process to ensure efficiency.

This is borne out in one of the recent Gartner reports, from Bill Holt, saying that successful DevOps operations have to embrace continuous learning: they're never satisfied, they always strive to get better.

There are many different ways you can do this. Taking tasks which have a manual stage and continuing to automate them will remove operational risk from the process.

Making sure that you're testing what you need to test at each stage, that you're backing up important orgs regularly, and that you're monitoring changes to make sure people aren't circumventing the process is also really important. Catching any unexpected changes early really matters here.

And finally, keep measuring as you go. Because how can you tell the impact of a change unless you're measuring it? The benchmark for this is Google's DORA metrics, the ones we mentioned earlier around release frequency, lead time, and success and recovery rates, and our reporting API goes a long way to helping you understand that data and being able to interpret it and output it.

So here's a quick example of this. We can output results to dashboards to track all those key DORA metrics, and expose them to management to clearly measure and demonstrate return on investment over time. And we can also push those results to dashboards held within your own Salesforce instance, which makes them really easily accessible across your whole business.
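To show the shape of that kind of reporting, here's a hedged sketch that computes DORA-style numbers from exported deployment records. The record fields are hypothetical stand-ins, not the actual schema of Gearset's reporting API:

```python
# Turn deployment records into lead time and success rate figures.
from datetime import datetime, timedelta

deployments = [
    {"committed": datetime(2023, 5, 1, 9), "deployed": datetime(2023, 5, 2, 9), "success": True},
    {"committed": datetime(2023, 5, 3, 9), "deployed": datetime(2023, 5, 3, 15), "success": False},
    {"committed": datetime(2023, 5, 4, 9), "deployed": datetime(2023, 5, 4, 11), "success": True},
]

# Lead time for changes: commit-to-production time for successful deployments.
lead_times = [d["deployed"] - d["committed"] for d in deployments if d["success"]]
avg_lead = sum(lead_times, timedelta()) / len(lead_times)
# Deployment success rate across all attempts.
success_rate = sum(d["success"] for d in deployments) / len(deployments)

print(f"average lead time: {avg_lead}, deployment success rate: {success_rate:.0%}")
```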

The last message I would leave you with, and I know it's a bit of a cliche, but it's so, so true for this process:

it's a marathon, not a sprint. You need to know what you want to achieve and move there step by step, and you'll see the improvements in process, and the reduction in the time it takes to push changes through, as you go.

And keeping the team happy with those changes is absolutely critical. Having the team working to those best practices will make the whole process a lot more streamlined.

And with that, I'll wrap up the session today. So thank you very much for listening, and please do get in touch if you have any questions, or drop some questions in the chat. And you can always go to Gearset and get started with it as well, and try it all out for yourself.