Description
Worried about managing increased demand on your Salesforce team? Discover how successful teams are scaling their Salesforce development. Join Jeremy Foster (Manager of Salesforce Development, Pilot Company) as he walks through his experience of strategic planning, implementing robust tools and technologies, and harnessing continuous improvement and learning to grow your Salesforce operations.
Transcript
Everyone's ready to get started, Jeremy, so I shall hand straight over to you.
Thanks, Amy.
Just double checking y'all can hear my audio well.
Yeah. Perfect. Cool. Thanks.
Alright.
We'll go ahead and get started.
Thanks, everyone, for joining. Today's webinar session is on scaling your Salesforce development: how we went from manual deployments to a CI pipeline.
The objectives I hope to accomplish with today's webinar: we'll have a quick intro, I'll give you a little background on myself, and then we'll shift into reasons to scale your development, how we got where we are, the people involved, and how Gearset enabled us. Then we'll wrap up with some time for questions at the end.
So, hopping right into it: my name is Jeremy Foster. I'm the manager of Salesforce development here at Pilot Company.
I've been with Pilot nearly five years, with my anniversary next month, and I've been involved in Salesforce for between eight and nine years, nearly to the nine-year mark, with a variety of different experiences working within Salesforce.
Started my career as a little bit of everything; we jokingly called it the Swiss Army knife. We did administration, development, QA, scrum, BA, release management, all of it. Then once I transitioned to working here at Pilot, we had very defined roles, and that's where I came up through more of a development path: started as a developer, went through different roles to principal engineer, and now I manage the development team here at Pilot.
Just as a slight warning, I could talk about DevOps and processes like today's webinar for hours. I've had the monumental task of condensing this down into a reasonable presentation, but I put a QR code on the screen that goes to my LinkedIn.
Please feel free to reach out to me there if you have questions after this webinar.
I'm more than happy to answer questions about the processes we've gone through and our experience with Gearset. I do enjoy talking about it a lot, so feel free to do that.
So as for the meat of the presentation, why scale your development process?
For us, it all started with a simple question.
A few years ago, our director at the time was talking to me, familiarizing himself with Salesforce, Gearset, and the capabilities of the tool: talking about where we were with our process, how we were working on deliverables, and how we were managing projects, since we had transitioned leadership.
With that, he had insight into more work that was coming toward our team, insight that we didn't have. While talking to me, he brought up the question of using Gearset and simply stated, "This process isn't scalable. Can Gearset do something for us?" He was referring to the manual deployments we were using, and asking whether there was an option to do something more scalable to accommodate the workload and the teams we'd be working with. At the time, I knew Gearset had CI and pipelines, but as a team, we had never experimented with those.
We were using manual deployments and doing what was comfortable and working for us. It allowed us to operate within our existing workload, but company demand was growing, and we needed a way to scale to support both the teams we'd be working with and the new projects coming our way.
So how did we get there?
I like to use this example to talk about the discovery we went through to get from manual deployments to CI pipelines.
The picture on there is from World of Warcraft.
I often thought of different video games I've played throughout the years with this example, but there's a term called the fog of war.
What you can see in the example is the in-game map; it shows areas that you've been to, places that you've explored and experienced.
Obviously, you can see all of those and have a full understanding of them. But there are areas on the map you haven't explored, and there are also objectives in those areas: things to go learn, things to go experience, uncovering more of the map as you get there. I think this is a very apt example of our process to get to CI pipelines.
We knew what we were doing. We knew that there were other features out there. We just weren't sure about those. So you can know something is there, but you're not sure what it can do for you until you experience it and uncover that piece on the map.
As a more concrete example of this, this is where we started: with manual deployments in Gearset.
So around 2019, we did an evaluation of various toolsets and determined that we really needed to move away from change sets in Salesforce.
Anyone familiar with those, or still using them, can understand that they're a bit of a headache. It can be a lengthy process, and there's a lot of churn involved in getting your deployments through org-to-org change sets. If you miss something, or something fails validation, you've got to make a change or include something new. It's a long process to develop that in your dev box, upload the change set, wait for it to get there, rerun the process, and hope that you didn't miss anything or that your testing was all validated correctly.
So we moved into Gearset and started using manual deployments, and I'll show some examples of what these look like shortly in the environment.
With manual deployments, we were using Gearset's Compare and deploy.
The entire team, admins and devs alike, got familiar with this process. It was much faster. It allowed us to do comparisons org to org, to save our deployments as drafts, to build packages and save those, and to build and validate but wait to deploy until later.
It's a very robust toolset, but it can take some time to get everything built.
So we were, at that time, working with just our internal team, and we had a single implementation partner helping us with another project.
We used what we called at that time deployment trackers, which were simply Excel sheets with all the metadata for a release stored in them. That allowed us to build a package, run validation, and deploy it as needed.
It was time-consuming, and the larger the package got, the longer the process took. The more work we were involved with and the more our projects grew, the longer that time became, just by the nature of the work.
It sometimes took an entire day just to put together a project and validate it, simply due to the number of components involved in getting everything prepared and checked into the environment.
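To make the tracker idea concrete, here's a minimal Python sketch of what a deployment tracker boils down to: a list of metadata components turned into a Salesforce package.xml manifest. The component names and the API version are illustrative assumptions, not Pilot's actual tracker contents; their real trackers were Excel sheets, and Gearset built the packages for them.

```python
# Sketch: build a package.xml manifest from a release-tracker-style list.
# The tracker rows and API version below are made-up examples.
from collections import defaultdict
from xml.sax.saxutils import escape

API_VERSION = "58.0"  # assumption; use your org's API version

tracker = [
    ("ApexClass", "InvoiceService"),
    ("ApexClass", "InvoiceServiceTest"),
    ("CustomField", "Invoice__c.Status__c"),
]

def build_package_xml(rows):
    """Group (type, member) rows and emit a package.xml manifest."""
    grouped = defaultdict(list)
    for metadata_type, member in rows:
        grouped[metadata_type].append(member)
    parts = ['<?xml version="1.0" encoding="UTF-8"?>',
             '<Package xmlns="http://soap.sforce.com/2006/04/metadata">']
    for metadata_type in sorted(grouped):
        parts.append("    <types>")
        for member in sorted(grouped[metadata_type]):
            parts.append(f"        <members>{escape(member)}</members>")
        parts.append(f"        <name>{escape(metadata_type)}</name>")
        parts.append("    </types>")
    parts.append(f"    <version>{API_VERSION}</version>")
    parts.append("</Package>")
    return "\n".join(parts)

print(build_package_xml(tracker))
```

The pain point Jeremy describes follows directly: every component in the release has to make it into this list by hand, so the bigger the release, the longer the manual assembly takes.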
So after that question from our director about what we could do, I knew that there were CI jobs in the system and moved into experimenting with those.
Unfortunately, I thought that an individual CI job was the CI process.
I knew there was a pipelines tab in Gearset, but I didn't really explore it very much. I thought that once you got the CI jobs up and running, the process would just iron itself out.
However, we definitely weren't where we needed to be. It still required manual intervention; looking back at it now, I realize I was essentially the pipeline for the process.
I created individual CI jobs that ran validation for us, prepared deployments for us, and tested everything we needed, but it still required me to go out there and initiate the validation and the deployment: all the steps that should have been automated.
It was a disjointed process, and it was fairly complicated to understand all the pieces of it. It was solo work by me, which isn't good; it creates a silo in your team. It also creates a bottleneck, in that your team and anyone you're working with has to rely on the individual with the knowledge to do the work for them.
With all of this, we were also trying to work in the repository that we had.
As you'll notice in the manual deployment section, our repository was still just a backup.
We had a Metadata API repository that had been created prior to my joining the company, and we just treated it as a backup of our code. We had backup and archiving solutions for our org, but the repo was just a convenient, fast way for the developers to check in their code and do any reversioning as needed. We weren't really treating it as a source, or as anything beyond a quick check-in or commit, maybe a tool for the devs to do PRs, but even that wasn't fully utilized outside the dev side. Our admins weren't making use of the repository at all.
With the CI jobs it was the same process. We were using the repository more, because we were using PRs with it and more people were getting involved, but it was still treated mainly as a backup.
Ultimately, we didn't know what we didn't know. My understanding was that we were moving in the right direction and that this would get us there, but it still wasn't quite what we wanted.
So, Gearset hosts a DevOps Dreamin' conference each year, and in 2023, my director at the time and I attended DevOps Dreamin' in Chicago. That was the moment for me. That's where everything cleared, the final patch of the fog of war went away, and I understood what we were doing. The biggest moment for me was when we got to do a hands-on session. All of the sessions we attended were great as far as hearing how others were doing things, taking notes, getting information, and getting to shift our mindset based on the feedback from those. But when we actually got to do a hands-on session building pipelines, it really connected the dots on how we could use traditional development practices. So coming back from DevOps Dreamin', I dove straight in and started actually building a pipeline instead of the individual CI jobs we had been running.
We developed that and got it up and running, and team adoption and team training were the very next step. We knew it couldn't be a siloed process; it couldn't just be me understanding it.
For the full process to work, and for us to run at scale, we had to get the admins involved and the developers involved. We had to know it well enough that we could practice it ourselves, but also be able to teach the other implementation teams we'd be working with, so that this could become a common practice for Salesforce at the company, not just for our team.
From that, we did start rolling out support for our implementation partners.
We built in some best practices and some guidelines. We got the other teams and other projects on board, and then we got to invest both our team's effort and the implementation partner projects into the pipeline as well. That's really where we got to see that once the process was running, and people were familiar with it and using the pipeline, the scale didn't matter.
Our team internally consists of roughly twelve members across various roles, and we had three different implementation partners we were working with, ranging from teams of three members up to sixty, nearly seventy, devs.
So with those, it was definitely a learning experience; you have to take your time and make sure everyone has a chance to ask questions and get comfortable with how it's working. But once everyone was familiar with the process, we were able to see that it worked at scale for teams of all of those sizes.
We were able to get them implemented and get them deploying their changes between environments without our intervention. We removed the bottleneck, and we were able to work at scale with various teams on various projects.
Then, with that, we were also able to shift our repository to become our source of truth. I know that's a huge shift in thinking when you're talking about Salesforce. Predominantly, you go org to org, and your orgs are your source, so everything in your production instance will most likely be your source instead of, say, a main branch in your repository.
But with that, we were able to actually shift to the repository and get everyone involved, admins and developers, in using the repo and the pipeline, creating PRs, doing reviews, and getting to use branch protections and merge commits. Everything you get as a benefit of using a repository is enabled once you adopt a full pipeline.
So, just as a quick run-through of what some of this looks like: the manual deployments I was speaking of use the Compare and deploy screen you can see here.
This allows you to select metadata filters on the left.
We often had predefined filters, or built them ad hoc, depending on how the individual wanted to work. As a team, we had predefined filters for the very common chunks of metadata that we would move.
Within these, you can select metadata instead of running the comparison wide open, because, you know, CustomObject could be a massive chunk of metadata from your org. You can apply filters to the individual metadata types to help your comparison run faster.
And then, for anyone not familiar with Gearset, at the top you can see the different pill icons for all items: new, changed, or deleted. This lets you filter your metadata based on the difference type.
You can use the filter up here to search for metadata API names, and you get a very user-friendly preview at the bottom showing the differences between your source org or repository branch and your target org or repository branch.
We used this process for several years, but it's the one that ultimately became a bit of a burden, given the length of time required to deploy and the bottleneck of the manual intervention it absolutely needs.
The next step we moved to, as I mentioned, was individual CI jobs. There was a pipelines tab that I wasn't using correctly; instead I was building out the individual jobs listed here.
You have validation-only jobs and deployment-only jobs, and you can run them ad hoc with the little play icon over here. That's essentially how we were processing it: we were using our orgs, comparing them to repo branches, then comparing those branches to the next org in line, doing all of our processes that way to try to validate and then deploy. This is an example of the history showing whether validation or deployment succeeded in these manual jobs. As I've mentioned, this is very disjointed and still requires manual intervention. It does automate some processing for you, but it's definitely you as an individual operating the pipeline instead of letting the system operate it for you.
Where we are today, however, is this example of a full CI pipeline. You have your dev environments on the left hand side and your production environment on the right hand side.
You have your connected stages as they flow through to production. In this example, you can have multiple disparate dev boxes flow into an integration box, typically a partial copy sandbox.
They flow into the next stage, typically your full copy or preproduction, and then they go into prod. You also have the concept of hotfix branching, where you do production bug fixes in a hotfix environment and flow them through to production.
There are a litany of benefits to CI pipelines, one being the automation of this flow. As you commit your feature in your dev box and open your PR to the next stage in the pipeline, Gearset will build the promotion branch for you. It can run automation for you to do validation, static code analysis, and unit testing, and give you feedback on all of those steps.
And then there are the checks you have in place; as an example, we require peer reviews as part of a deployment, so that's a status check that has to succeed before the work can go to the next environment. Once everything clears those checks, it will deploy to the org in that stage. You can see the connection here: the integration branch goes to the integration Salesforce org. So once everything is successfully merged through the branch after passing checks and validations, it will deploy to the org and set up the next stage in the process for you.
Again, that's the full automation that we were missing out on of having it migrate things to the next step as you work through the pipeline.
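The automation described above reduces to a simple gate: a change advances one stage only when every required check has passed. Here's a minimal Python sketch of that promotion logic; the stage names and check names are illustrative assumptions, not Gearset's actual internals.

```python
# Sketch of the promotion gate a CI pipeline automates: a change moves
# to the next stage only when every required status check passes.
# Stage and check names are made-up examples.
STAGES = ["integration", "preprod", "production"]
REQUIRED_CHECKS = ["validation", "static_analysis", "unit_tests", "peer_review"]

def promote(change, check_results):
    """Advance a change one stage if all required checks passed.

    Returns the change's (possibly updated) stage and the list of
    checks that blocked promotion (empty when it advanced).
    """
    failed = [c for c in REQUIRED_CHECKS if not check_results.get(c, False)]
    if failed:
        return change["stage"], failed  # stay put, report what blocked it
    current = STAGES.index(change["stage"])
    if current + 1 < len(STAGES):
        change["stage"] = STAGES[current + 1]
    return change["stage"], []

change = {"id": "feature-123", "stage": "integration"}
stage, blocked = promote(change, {c: True for c in REQUIRED_CHECKS})
print(stage)  # "preprod"
```

The point of the sketch is that once the gate itself is encoded, nobody has to be "the pipeline": the system checks the results and moves the work, which is exactly the manual step Jeremy describes removing.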
With that, you can even handle something like a hotfix; as you can see in this bubble here, there's back propagation.
When something goes through one stage of your pipeline but didn't come through the other environments, once it hits prod, Gearset will go ahead and prep the other orgs, as long as you've selected back propagation, and allow you to move those changes in.
That was a very quick, very high-level overview of the pipeline process, but this is where we've moved to for scalability with our teams and projects.
One thing I did wanna bring up when talking about this, with both the pipeline process and our evaluation of it (and this especially impacted us on our repository), is the sunk cost fallacy. This is the phenomenon whereby a person is reluctant to abandon a strategy or course of action because they have invested heavily in it, even when it is clear that abandonment would be more beneficial. I'd mentioned that we were using our repository as a backup and not really as our source of truth.
With that, ours was a Metadata API repository.
For those of you familiar with the structure, there are Metadata API repositories for Salesforce, and there are Salesforce DX repositories.
The biggest difference is just the structure of the repository.
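As a rough illustration of that structural difference: in a Metadata API repo, source conventionally lives under `src/` next to a package.xml, while a Salesforce DX repo keeps it under `force-app/main/default/` with an `sfdx-project.json` at the root. The tiny helper below only remaps paths between those default layouts, as a sketch; a real conversion (e.g. the Salesforce CLI's mdapi-to-source conversion) also decomposes large types, such as splitting objects into per-field files, which this deliberately ignores.

```python
# Sketch of the layout difference between the two repository formats.
# Default directory names only; real conversions do much more than
# path remapping (type decomposition, metadata transforms, etc.).
def to_source_format(mdapi_path: str,
                     package_dir: str = "force-app/main/default") -> str:
    """Map a Metadata API file path to its DX source-format location."""
    prefix = "src/"
    if not mdapi_path.startswith(prefix):
        raise ValueError(f"expected a path under {prefix!r}: {mdapi_path}")
    return f"{package_dir}/{mdapi_path[len(prefix):]}"

print(to_source_format("src/classes/InvoiceService.cls"))
# force-app/main/default/classes/InvoiceService.cls
```

Same files, different shape, which is why Jeremy's team ultimately chose to start a fresh repository rather than keep massaging the old structure.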
But with that, we ran into a problem where we were constantly experiencing environment drift and having to deal with a lot of heavy syncing between our branches. It was causing another bottleneck and a headache: we were having to evaluate this all the time, and to sync both the branches and the environments as often as we were releasing.
So with that, myself and the engineer on our team sat down for a day in a conference room, ironed a lot of this out, and talked through all the different possibilities. We were really trying to save it.
The portion I highlighted at the top says it: just because it's how you've always done something doesn't mean it has to stay.
The nostalgia around this repository (as I'd mentioned, it was the repository I started with at the company) was really blocking my vision of what we needed to do to progress past it to a more stable repository and a more stable process.
Ultimately, through discussing this and talking through potential fixes, I presented the idea: well, what if we just started fresh? We could shift away from Metadata API, keep the old repository as an archive of our changes, pick a time between projects to start a fresh repository, pull everything from production, treat it fully as our source of truth, and build the pipeline around that.
That ultimately proved to be the best choice for us. It has cleared a lot of headaches and allowed much more efficient work within our pipeline, and making a clear decision on a key part of the process saved us time and provided stability to the work we were doing. I just wanted to pass that along as some advice, because I know it will be hard to decide when you're making a shift as large as moving from change sets or manual deployments to a process as transformative as a CI pipeline.
Gearset provides this slide on the DevOps journey and the levels you're at. I wanted to bring it up after walking through our whole process to show you that it's easy to progress, and there's no shame in starting as a beginner, learning to walk and then learning to run. It outlines the different characteristics of each level, and I'll point out where I think we are today.
Currently, based on the characteristics, I feel that our team is at the expert level. I know luminary is the next step, and ideally we'll get there. I think there are also stages beyond that, because you should always be on this journey of constant learning and constant improvement.
But the reason I feel that we're at this expert level is based on our characteristics that admins and developers are working in a CI workflow.
We do use our GitHub repository as our source of truth, we do have continuous integration of our work, and we have the ability to consolidate releases to production. So all of these separate PRs, once they hit the production stage, can be consolidated and released.
We are missing some automated unit testing, but we're working to improve that. As I've mentioned, you should always be striving to evaluate what you're doing and work toward improvement.
And then we do get the benefits: changes are integrated into a version control system, and merge conflict checks ensure that work isn't lost. We have automated deployment of work to all of our lower environments, so there's no bottleneck for the teams working in Salesforce.
And we get controlled, reliable releases into our production instance.
So all of that was the journey itself, but you can't get anywhere without the people.
So as I mentioned earlier, our team is about twelve individuals.
The core makeup of that, though, is we have five Salesforce admins on the admin team, and then the dev team consists of two devs, two QAs, and an engineer.
The key thing I wanted to point out here is that as you work through this process, everyone should be willing to get comfortable with it.
They don't have to like it at first, and feedback and discussion are valuable, but everyone should be open to at least exploring it and seeing if it will be beneficial.
Challenges, that's the fun part of the work.
Often at a previous job, I would hear, "Change is good. You go first."
That's more of a "do we really want to change?" attitude. But in this case, we did, so we shifted our process and our practice. It's pretty much never easy.
And we've found that the best way to do that is leading from the front. We learned the process, we taught others, and then we brought them along with us. That went from me learning the process and teaching my team, to my team learning the process and teaching the partners we work with.
So it's an ever growing process.
Along the way, we created guardrails.
Call them best practices, standards, guidelines, whatever name you prefer. They were created alongside the process, and they're living documents that can change as everything evolves.
We didn't reinvent the wheel on a lot of it, though. We reviewed posted best practices, read what others have done, attended webinars and sessions like this, and incorporated what we found to work best for others that would work for us and built upon it.
Some things should be etched in stone: core standards that define the ways of working, related more to limits and established best practices. But as we all know with Salesforce, things can change over time, so it's okay to break the stone and start with a new one. It's just the core tenets of the process that should be more in stone than the living documents of how the process evolves.
And then as you're working, either with your team or with implementation partners, definitely share the resources that you build. Even now, we're evolving our process and changing documentation and updating that, and then we'll be sharing that out with team members to bring them along with us.
How did Gearset enable all of this? Ultimately, Gearset was the tool that gave us the option to do any of it.
It gave us the foundation to go from our manual Compare and deploy to CI pipelines.
We were able to connect GitHub repositories to the pipeline and use them to deploy to our orgs instead of going org to org. We started with Compare and deploy and evaluated our metadata.
We used that to move into CI jobs to understand how they would handle deployment. And then, once we understood more of how pipelines worked, we shifted the existing knowledge into that structure, moved our repository to become our source of truth over the orgs, and built all of that into the process we use today.
And again, to reiterate: we're always improving, and so is Gearset. They do a lot of releases, sometimes every day, other times multiple times a week, and that allows us to keep evaluating and improving as the toolset improves.
Oh, I know that was a whole lot; I was trying to finish here in time to leave room for some questions, and I'm more than happy to stick around as we open this up. But I'll turn it over to Amy and Will to see if there are any questions I can help answer.
Perfect. Thanks so much, Jeremy.
So we have had a couple of questions. We are gonna run slightly over, but I think it's definitely worth asking a couple of questions if that's alright.
So I'm gonna start with maybe a really hard question, I think, but it's a really good one. So this one's from Laura. And Laura asks, knowing what you know now about CI pipelines, what, if anything, would you have done differently when scaling?
Yeah, that's a good question. Hindsight is definitely 20/20.
I have thought about this a lot over the years just as we've moved our process.
To me, the best advice would be to start early.
If I could go back and do the process again, I would start doing things earlier, especially knowing what I know now. And I know I mentioned this whole concept of the fog of war, and you don't know what you don't know. But start even the experimentation earlier, and involve people earlier: the earlier you can do things, the quicker you can get into a process, get feedback on how things are working, get everyone comfortable with things moving that way, and evaluate whether it's going to work for you or whether there are going to be roadblocks.
I also realize that "start early" falls in line with the agile idea of failing fast. The faster you can test things and iterate, the faster you can get a more reliable product out. So I think that's probably the best advice on that one.
Awesome. That was great. Sorry to put you on the spot like that. It's a it's a very good question.
Okay. I'm gonna do one more question, and this one here is from Ben. And Ben asks, how has this transition affected the roles within your team? Specifically, how has it impacted the release manager role?
Yeah.
With that, I'll also really quickly transition and just leave it on this slide.
Kind of related to this is the impact on the team: the release manager role for us is currently a rotational role amongst the team. I wanted to put this resource slide up because both Trailhead and DevOps Launchpad, which teach you DevOps practices within Gearset, have been invaluable for that. Our admins and our devs got comfortable with this process, and initially it was just me.
I was the release manager on top of my other responsibilities. As the admins and the devs got more comfortable with the process, we started rotating it out. So internally, we'll have a discussion like, "Hey, this release is coming up."
Who wants to handle it? And, typically, someone is either more comfortable with that project because they've been involved with it or delivered for that project before, or someone might want to learn it. And so they'll step up to do the release for it, and that allows us to at least keep that silo broken down to where everyone gets experience with the different projects that are releasing. We do have plans to try to make a release manager role just so that there is more of a consistent person helping manage all of this.
We'll still share the knowledge of those projects. But right now, we manage it by internally rotating that option.
Awesome. I think that's gonna be all we have time for today.
I know Jeremy spoke a bit about the resources on this page, but I'm just gonna also add there's a QR code on there. If anyone wants to test out any of the features Jeremy has chatted about today, Gearset does a free thirty-day trial. So if you scan that QR code, you can go ahead and sign up completely for free and test out any of the features Jeremy's been discussing today.
So I think that is all we're gonna have time for today.
I just wanna say thank you all for joining us, and once again, a huge thank you to Jeremy for presenting today's session and also to Will for answering all of your questions in the chat and the q and a. So we hope you found today's webinar helpful and hope to see you again soon. Thanks so much, everyone.