Description
Watch this session at DevOps Dreamin’ where David Cano, President & CTO at CloudQnect, covers the three levels of success in a Salesforce DX environment: the three pillars, automation, and unlocked packages. Together, these levels create a complete DevOps solution and enable Salesforce DX development teams to function at the highest levels. Join this session to learn best practices and hear stories from the trenches.
Learn more:
- Salesforce DX explained: tools, features and its role in DevOps
- Adopting Salesforce DX
- Powerful DevOps with Gearset and Salesforce DX
- Find out more about our next DevOps Dreamin’ conference
Transcript
Alright. Good afternoon, everybody.
Today, I wanna talk about the three levels of DevOps success.
I'm not gonna bore you with my credentials.
Probably the best thing to know about me is I am probably way more comfortable talking to a conference room full of executives and military generals than I am right now on this stage.
So I offer my condolences and my gratitude that you have all voluntarily elected to suffer with me.
I am with CloudQnect.
We started in two thousand seventeen because of a love of DevOps, a love of the Salesforce platform, and a complete disappointment with Salesforce DevOps when we first got into it.
Seriously though, Salesforce DevOps, when I first got into it in two thousand eleven, was not very good at all.
So we quickly realized that the way for us to help customers maximize their investment in the platform, so that they loved it as much as we did, was really just to fix their DevOps.
And so that became the core of our vision.
So the three levels of Salesforce DevOps success as we see them are the three pillars, automation, and unlocked packages.
So to be clear, this is not us talking about the DevOps maturity model. I'm pretty sure most people in this room know what the maturity model is. They know how to get through it.
This is actually for us. This is how we organize the way we think around DevOps implementations, because that's pretty much what we do. We do a lot of them. And we basically had to organize how we approach the customer and guide them through their DevOps journey.
So let's talk about the first one, the three pillars.
Over the last five, six years, we've actually had to refine the way that we think about this.
I'm sure everybody in this room has seen the marketing: process, people, implementation, or people, process, implementation.
You have to have all three.
That is an absolute truth.
But we, over the years, have come to realize that these words actually have deeper meanings, and there is a specific order in which they have to be accomplished.
Process really means discipline.
A simple repeatable discipline.
People means their muscle memory. Their muscle memory for following that discipline.
And then the last one is implementation, which really means tool or automation.
Discipline is probably the first and foremost thing. It has to be a simple repeatable discipline. Because if it isn't, you're gonna have source conflicts, deployment collisions, which we affectionately call the great clobbering.
I'm pretty sure anybody that's used change sets knows all about this.
It occurs.
Or the mysteriously vanishing code fixes or reappearing defects.
Simple repeatable discipline has to be established.
If the discipline looks like something disjointed, like the web of a spider on crack, it's not gonna work.
If it looks as grandiose and as complicated as Saint Peter's Cathedral in the Vatican, it's also not gonna work.
The discipline has to be simple and repeatable for two reasons. It's gotta be scalable. And most important to the DevOps implementation, it has to be adoptable.
People, muscle memory.
If you don't establish the muscle memory, the discipline is not gonna do you any good.
You're still gonna have the same issues.
But if you can establish the discipline and the muscle memory around the discipline, you can start to realize some of the benefits of good DevOps.
Development efficiency, shorter deployment times, more frequent deployments, and fewer conflicts and issues.
One example we have, our very first engagement, a large nonprofit out of Tampa, Florida.
When we showed up, their DevOps was a hot mess, to say the least.
They were deploying maybe once a quarter, four to five new features.
The deployments were taking anywhere from three to five days.
And it literally had to involve almost everybody on a twenty-four seven basis for those three to five days.
By the time we got the deployment out the door, we would quickly have to follow it up with sixteen more critical-fix deployments because the deployment didn't go right.
So very first thing we did was we backed them up. We established a simple repeatable discipline for all the developers to follow, and then we built and established their muscle memory around that discipline.
Even with doing that, even before we implemented automation or a tool, we increased their deployment frequency to, like, two to three times within a given quarter.
We shortened the deployment time. Now it only took two of us to do it in half a day to a day.
So even just establishing a discipline and a muscle memory around it, we were able to reduce a lot of the issues we were having.
The critical fix follow ups, we virtually eliminated them.
Automation.
The funny part with this customer is we then wanted to automate them. We wanted to set them up on a tool.
They resisted, because anybody that's ever set up a tool knows that at some point you've gotta pause development to do it. And you'd be surprised how many customers are very resistant to pausing development.
And we only asked for a week to do it. And the tech manager decided to try to keep going the way he was. He missed a critical deadline. And so the CIO stepped in, gave us three weeks.
We paused development, reset all the environments, hooked everything up, while at the same time, we sat and trained their people to basically now exercise their muscle memory with the tools instead of just doing it manually.
So what was the result of that? The result was we were now pushing out four to five new features every four weeks, and then every two weeks.
We didn't need two people to manage the deployments anymore. We only needed one, and it literally took maybe two or three hours. We reduced the deployment from three hours to two hours and then eventually to forty minutes.
By the time I rolled off, they were doing two to three deployments. Discipline and muscle memory have to come first.
Then the tool can automate that.
And once the tool can automate that, sky's the limit at that point.
That's when you can really start progressing the customer through the DevOps maturity model.
So let's talk about automation.
Why do we devote an entire level in DevOps implementations to the tool?
Well, if you want the customer to progress through all of the DevOps maturity model levels, you're not gonna do it without a tool. The tool is critical to that. But that's actually not what we wanna talk about today.
What we wanna talk about today is how you manage expectations with the customer around the tool.
So this is very important.
Just like you see with the three-legged stool, without discipline, without muscle memory, the tool ain't gonna do nothing. But we have heard many conversations throughout the last few years where either the customer or even sometimes the implementer on the other end would talk about, well, all you need is the right tool, and it'll solve all of your DevOps issues.
There is literally no truth to that statement.
None.
And if you say it, if you manage the expectations that way, you are setting everybody that's involved in that equation up for failure.
The customer, yourself, and the tool.
There is no tool on the market that can solve all your DevOps issues by itself.
That's not what the tool's intended purpose is. The tool's intended purpose is to automate. The tool's intended purpose is to start progressing you through the levels of the DevOps maturity model.
That's the tool's purpose. And in order to really reap the benefits of the tool, that's what you gotta focus on.
Discipline, muscle memory, and then the tool.
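As a concrete illustration of that layering, here is a minimal sketch of what automation on top of an established discipline can look like, assuming the current Salesforce sf CLI running inside a CI job; the org alias, auth file path, and source directory are hypothetical placeholders, not a prescribed setup.

```sh
#!/usr/bin/env bash
# Minimal CI sketch: validate every pull request against a shared
# integration sandbox before anything merges. Assumes the sf CLI;
# the alias "integration" and the auth file path are hypothetical.
set -euo pipefail

# Authenticate using a stored auth URL injected by the CI system.
sf org login sfdx-url --sfdx-url-file ./auth/integration.txt --alias integration

# Check-only deploy: compiles the metadata and runs local tests,
# but changes nothing in the org.
sf project deploy validate \
  --source-dir force-app \
  --target-org integration \
  --test-level RunLocalTests \
  --wait 30
```

Notice that a job like this only automates a discipline the team already follows; it doesn't create one.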
Second part, the right choice. Right?
Two things that we are very mindful of whenever we do DevOps implementations are release discipline maturity and technical gap. What does that mean?
You have to honestly assess the customer.
What is your release management maturity today? And what is the technical capability of your folks?
There's no point in us trying to fast-forward them to the best solution in the world if their folks aren't gonna be able to support it.
There just isn't.
You're basically, again, setting yourself up and the customer for failure.
This is my favorite part.
This is the future.
Unlocked packages.
I know some folks in the room are gonna be talking more about that after this particular conversation.
But the way I wanna talk about it is very interesting, because one of the things that happens when you start automating deployments in Salesforce is that you start to see a whole other challenge.
Most setups require the deployment of the entire code base. And anybody that's worked on Salesforce long enough knows that, depending on the organization, that code base can take three hours for a single deployment to a single environment. That's just the challenge of classic DevOps on Salesforce. Now, some tools have gotten good at doing ad hoc deployments so that the deployment doesn't last that long, but not every tool has. Most setups still deploy the entire code base.
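As a rough sketch of that difference, assuming the current Salesforce sf CLI, with a hypothetical org alias and class name:

```sh
# Full deploy: pushes the entire code base, which is what can take
# hours on a large org.
sf project deploy start --source-dir force-app --target-org qa --wait 120

# Ad hoc deploy: only the components that actually changed.
# "CaseService" is a hypothetical class name for illustration.
sf project deploy start --metadata ApexClass:CaseService --target-org qa --wait 30
```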
So if you think about the quick numbers, and this goes back to muscle memory.
When you start talking about muscle memory and adoption, there's this thing that always happens. Some people want to circumvent the process, and you have to start asking questions. Well, why do they wanna circumvent the process?
Even seasoned developers coming from object oriented backgrounds sometimes want to violate the process.
If you look at the quick numbers, if it takes forty minutes to two hours to do a deployment, getting a change from my dev environment to QA can take up to three hours.
Right?
Anybody that's ever worked in development knows that the sprint cycle is two weeks. First week, dev. Second week, testing.
When you start getting into those quick turnaround testing scenarios with your QA, sometimes the fix is a really simple change.
As a senior dev, I know it's gonna take me three hours to get that change into QA for testing and to get my user story moved to ready for release.
So what happens?
Two directives that the developer is under come into direct conflict with each other.
There is the directive to follow the process, but there's an even harder directive to maintain the velocity of user stories through the pipeline.
So what does the experienced developer do?
They make the change directly in the environment. They copy it into source control, and then they PR it and hope that it gets incorporated into the deployment.
And when they do that, not only have they violated the process, but they've potentially opened it up for a mistake to get through.
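For anyone who hasn't watched this happen, the shortcut being described looks roughly like the sketch below; the class name and branch are hypothetical, and this is the anti-pattern, not a recommendation.

```sh
# The fix is made directly in the QA org, then pulled back into
# source control after the fact. "CaseTriggerHandler" is a
# hypothetical class name.
sf project retrieve start --metadata ApexClass:CaseTriggerHandler --target-org qa

# The org change is committed and PR'd, hoping it lands in the release.
git checkout -b hotfix/case-routing
git add force-app
git commit -m "Copy hotfix made directly in QA back into source"
git push --set-upstream origin hotfix/case-routing
```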
So why am I telling you all this?
It's very simple.
Package-based development, which has been used in C, C++, Java, and .NET for decades, has been the answer all along.
What is package based development? It's quite simple.
It is basically taking that monolithic code base and breaking it up into smaller, more atomic, easier-to-maintain packages.
The other aspect of package-based development is you only validate it once.
Rather than validating it with every deploy to every environment, which is the normal Salesforce model, you literally validate it once and turn it into a package.
And that package can be installed into any release pipeline environment, including prod, and it only takes a few minutes each time.
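In CLI terms, the lifecycle he's describing looks roughly like this, assuming the current Salesforce sf CLI with an authorized Dev Hub; the package name, path, and org alias are hypothetical placeholders.

```sh
# Define the unlocked package once against the Dev Hub.
sf package create --name "Service App" --package-type Unlocked --path service

# Build a package version. This is where validation and test
# execution happen, exactly once, not per environment.
sf package version create --package "Service App" --code-coverage \
  --installation-key-bypass --wait 30

# Install the already-validated version into any pipeline org or prod.
# Installs typically take minutes because nothing is re-validated.
sf package install --package "Service App@1.0.0-1" --target-org qa --wait 15
```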
That is the future.
And it's an exciting future because the tooling is catching up, and it's awesome. We are so excited to see that Gearset and others are really getting on board with DX as well as unlocked packages. And that's what we really, really wanna see, because at the end of the day, the trade-off speaks for itself.
Smaller, more atomic code bases for teams to support. Validation occurs once.
Deployment of smaller packages takes minutes.
But the most important aspect of this is it makes following the release discipline incredibly simple, incredibly fast, incredibly easy.
And that, at the end of the day, is what we really, really want. Because the moment we accomplish that, the world's our oyster.
And one of our largest clients today actually took three years and migrated all of their classic dev teams over to DX unlocked packages. And I still get to work with them every day, and it's a totally different world. For those of us who come from Java and .NET backgrounds, we love it. Everybody loves it. And the main thing the leadership at CarMax says is: we don't know why we didn't do it sooner.
So to say that they will never go back is an understatement.
DX unlocked packages are where we need to be.
I thank you for suffering with me.
If anybody has any questions, I would be very happy to answer them for you.
No questions?
Oh, oh, there we go.
On building muscle memory: do you think it should be more top down, or would you prefer your customers try to start from the developers up?
That is actually a very good question because we have those scenarios every time. Some organizations are very hierarchical, so top down tends to work a little bit better.
Some organizations are so large, like one federal organization that we work with, they're massive. They have ninety active applications in their org and basically twenty to thirty active projects going at any given moment.
So we tried to do a top down there, and it didn't quite work, because the SIs didn't want it. They didn't care for it. But what's funny is now the SIs are starting to push for DX unlocked packages.
So it's starting as a grassroots movement.
I, through my experience, have learned that it's kinda better to have both, really.
You want to have a grassroots movement that wants to move to DX unlocked packages, and then you have to have the support of the right leaders at the right points in the organization.
And if you have both, you are most likely to succeed.
If you have only one or the other, your chances are about fifty fifty.
Yep.
Do you have any preferred patterns or design approaches that you take when you're dealing with a large org where there may be multiple applications being built in parallel, all referring back to the same metadata?
You're talking in regard to, like, classic development, where they just kinda segmented into packages, but there's still one big monolithic code base?
Not so much even the code bases, you know, the objects themselves. So I'm thinking, let's pretend you had five different projects going on, all referring back to the Case object, all making modifications to the data model.
You know, you have to find a way to kind of separate their concerns and keep them from stepping on one another.
I'm just curious if, in your experience, you've had patterns that seem to work better for that.
There are frameworks in the ecosystem today: Apex Commons, ApexMocks, Force-DI, AT4DX. I actually am good friends with John Daniel. We work on them together.
So the pattern really is kinda similar to what we did in Java and dot net. You're always gonna have, like, a core common package.
And then what you're gonna have are what we call vertical packages, which represent each application.
And with those frameworks, you can actually put the custom fields that are specific to that package in that package, and it works fantastically.
That is typically the approach we take. It does require a refactoring of their code, but generally, it does work.
So yeah, even if two packages have custom fields on the same Case object, they can be in separate packages.
Now you also have to have a promotional process in place because if they both start referring to one field between the two of them, then you obviously will have to promote that field into the common package at some point.
But, yeah, that's our preferred approach.
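For readers following along, a hypothetical sfdx-project.json for that core-plus-vertical layout might look like the sketch below; the package names, directory paths, and version numbers are placeholders rather than a definitive structure.

```sh
# Hypothetical core-plus-vertical layout: shared metadata lives in a
# common package, and each application's vertical package depends on it.
cat > sfdx-project.json <<'EOF'
{
  "packageDirectories": [
    {
      "path": "core",
      "package": "Core Common",
      "versionNumber": "1.0.0.NEXT",
      "default": true
    },
    {
      "path": "service",
      "package": "Service App",
      "versionNumber": "1.0.0.NEXT",
      "dependencies": [
        { "package": "Core Common", "versionNumber": "1.0.0.LATEST" }
      ]
    }
  ],
  "namespace": "",
  "sourceApiVersion": "60.0"
}
EOF
```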
Thank you.
Anyone else?
Alright. Thanks, everybody.