Description
Karen Fidelak, Senior Director of Product Management at Salesforce, and Kevin Boyle, CEO of Gearset, challenge you to take a step back from individual processes and assess your DevOps lifecycle as a whole. Discover how addressing the entire DevOps lifecycle will improve security, quality and speed of delivery all at once.
Speakers
Karen Fidelak: Senior Director of Product Management, Salesforce
Kevin Boyle: CEO and Co-founder, Gearset
Transcript
Hello, everybody, and hello to Karen. Especially, thank you to Karen for being here and doing this with us. Yeah. It's an absolute pleasure.
I've known Karen for a whole bunch of years, since she led development at Salesforce. And, yeah, there's nobody better in the world to talk about this topic with. As Senior Director of Product Management at Salesforce, she spearheads the development of DevOps Center, among a whole bunch of other things, and she's right at the heart of driving DevOps adoption for Salesforce.
And she's been really shaping how development teams approach DevOps on the Salesforce platform. So there's obviously nobody better to have this discussion with.
Before I get chatting with Karen in a couple of minutes, I just wanna take a step back and talk about why we do DevOps at all.
As all of you know better than most, when you're building something, you go through those sorta classic stages: planning out requirements, actually building out the solution with clicks and code, validating that it works, and finally shipping those changes to your users and verifying that they delivered the business value you'd set out to deliver. And I guess a key observation that we've had is that the later you find any issue, the more costly it's gonna be to fix. The more wasted effort, the more work you have to rework.
Whereas if you catch problems early and course correct, you can be much, much more efficient and ship higher quality software. So that's kinda the process behind DevOps. And with DevOps, a really common way to visualize this is the infinity loop.
And we also add this operate and observe section, just so you've got the whole life cycle of your software.
And the point is, all of this forms one coherent process, where each stage is designed to make later stages much more effective: you reduce the cost of delivery, and you improve quality by catching issues early. And really, this is also very tightly correlated with agile development and releasing smaller changes more frequently. So a lot of these things overlap in their philosophy.
And for Salesforce teams, and I guess where we started at Gearset as well, the focus for a while has been to solve that really high impact challenge of deployments: making those successful so you reduce the number of bugs making it through to production, while also delivering faster. So it's an essential problem for sure, but we've gotten a little over focused, over indexed on that, almost. And DevOps is so much more than that. It's about shifting left to embed quality, security, and efficiency into every stage of the development life cycle, from that initial idea all the way to production.
Many of you have probably seen this kind of diagram before. It's a very common way of visualizing this process.
But for some of you, it may be new, and maybe it looks a little bit intimidating. You know, what are all these stages? There's a lot of stuff to figure out.
But, really, I'd reassure you that to some degree, you do all of this already, or something very like this diagram. You plan your changes. You build them. You validate them, and you get those changes into your production org. You maintain your environments. You get feedback from the business, and all of it goes into planning the next iteration.
So these stages are there. They're pretty much nonnegotiable.
And, really, the question is, are we being thoughtful and deliberate about how we execute each of those stages?
Are we thinking about how to drive quality and catch issues earlier in the cycle, when they're less disruptive to fix? And when you're deliberate about the connection between these stages and how they build on each other, you realize they form one virtuous cycle, where every process is optimized and the sum is greater than the parts. That's what the folks who came up with the DevOps philosophy were trying to represent in this diagram. And that's what great DevOps is. It isn't just deployment. It's the right tools, process, and culture coming together to derisk each stage of your software life cycle and make the subsequent stages easier.
This is what we're seeing play out with some of the highest performing teams building on Salesforce, and our view is it's totally achievable. Different people are gonna come at this from a different approach, maybe a different order of tackling it, but everybody can be thoughtful about each of these stages and how they can drive improvement in them. So, yeah, I'm gonna start talking about some of these stages with Karen and get her perspective on them.
Yeah. I guess I'd just start with: is this a similar way that you think about things at Salesforce? How do you think in terms of that full life cycle and getting Salesforce teams to adopt quality at every stage?
Yeah. Absolutely.
Yeah. Everything you just said completely resonates.
This is absolutely how we think about it as well. I think, like you said, it's really all about derisking and cutting out the pain and cost that can come when problems are found later in the process. Sometimes we talk about left and right: the further to the right we identify problems, the more costly they are, the more painful they are. And the other thing I usually like to say, and you hit on this, is that this whole thing can look intimidating and can look like a lot.
And, you know, maybe all I care about is just deploying my changes from one environment to another. So why do I need all this overhead? Is this whole process just overhead? And I always like to reiterate that this is not process for process's sake.
This is not about 'this is the right thing to do, so we should all do this.' It's really about minimizing costs, creating more productive and efficient teams, and cutting out those pain points that you've all experienced. I'm sure everybody has experienced finding a problem in production that had a big ripple effect because it was found in production.
And so that's what we're trying to address here. A little bit of investment earlier in the process can really pay dividends down the line, and that's how we like to think about the whole process.
Yeah. We're similar. And some of it's technical, and some of it's business impact stuff as well. I guess, how do you work with teams to encourage the end users and the business stakeholders that are requesting changes to appreciate the challenges that Salesforce teams face and that entire life cycle they go through? How do you motivate, and give permission for, those teams to invest in their process, invest in getting better at their craft?
Yeah. I think this hits on a bit of the culture of it, and bringing this into the organization at a cultural level.
And I think it goes back to understanding, and making people understand, what the value and the purpose is, and why these things are important. And, usually, everybody's been bitten at some level by some kind of problem. And so that's what we try to focus on: remember when that happened? You've probably been in this kind of situation before yourself, right? That's what we're ultimately trying to improve.
And so, yeah, it's a mindset. You have to get the organization bought in.
And, you know, the other thing: a lot of times people ask, how do I get started with this kind of thing? And it doesn't have to be an all or nothing approach, either.
We can bite off bits and pieces of this and start putting in bits of process and tooling to support it.
It doesn't have to be everything, but I think it does require a mindset and sort of a culture of adoption.
And then you've got the processes and the tools to support that.
Yeah. Totally agree. We often speak to customers where they kinda wanna do everything at once. They have this idea that there's a correct DevOps process that they must adopt, and anything short of that is incorrect. I suppose we're always just trying to work with folks to understand them, understand their team, understand their process, and then look for opportunities to improve.
So, yeah, for us, it's around trying to communicate that.
Everyone's gone through this cycle anyway, whether or not you think about it. And when you start to be deliberate and intentional, you can often spot opportunities for improvement that then compound and accumulate. And over time, you just build that muscle of getting good.
Yep.
Yeah. Do you see any teams, or is there commonality in the things that folks maybe skip, or don't even skip consciously, but are less aware that they can improve upon? Are there any of those stages where you see this? I guess, like, for us, deployment is one where we see lots of teams feel the pain and tackle that early. Are there any where they don't?
I mean, I think testing's a good example of one where there's varying degrees of testing that you see across the overall life cycle. And, you know, the mantra there is just: test early, test often.
Yeah.
And I think there's different kinds of testing that can be implemented at different phases of the life cycle as well, whether it's unit testing or functional testing or performance testing, and putting those into the right stages, basically as early as you can, is gonna help you down the line. And I think testing can be hard, because you have to create tests. You have to actually build a framework of tests that you're gonna run. And so it's often something that people feel is kind of a pain or an added step: well, I did some manual testing, it looks fine, let's just go. But if you don't build that automation in, you won't catch it when things break later.
You're relying on repeated manual testing, which is just gonna be time consuming and error prone. So that level of investment, again, I think is worth it in the long run, to build that in.
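To make that concrete, here's a minimal sketch of the kind of automated Apex test that replaces a repeated manual check. Both classes are hypothetical stand-ins for whatever logic your team builds (and would live in separate class files):

```apex
// Hypothetical logic under test: 10% off orders of 1000 or more.
public class DiscountCalculator {
    public static Decimal applyDiscount(Decimal orderTotal) {
        return orderTotal >= 1000 ? orderTotal * 0.9 : orderTotal;
    }
}

// The manual check "a 1000 order shows 900 at checkout" becomes
// an automated test that runs on every change.
@isTest
private class DiscountCalculatorTest {
    @isTest
    static void appliesTenPercentDiscountToLargeOrders() {
        Test.startTest();
        Decimal discounted = DiscountCalculator.applyDiscount(1000);
        Test.stopTest();

        System.assertEquals(900.0, discounted,
            'Orders of 1000 or more should get a 10% discount');
    }
}
```

Once a check like this exists, it runs in every validation and deployment, which is exactly the repeatability that manual clicking can't give you.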
Totally agree. I came to the Salesforce ecosystem probably later than most of the folks on this call. It was really around the time we started Gearset, or the year before that. We'd done a bunch of Salesforce stuff and fell in love with the platform, but kind of struggled with some of the DevOps stuff. We thought we had an opinion on some things. And I thought one of the really interesting architectural decisions that Salesforce made early was that enforced unit testing, just as a way to make folks think about it and think about driving quality early. Do you often talk to teams about the classic test pyramid and how to think about levels of testing, and when to do the different types of testing: when to do manual testing, when to do UI testing, when to do unit testing, and how to think about it?
Yeah. Like I said, there's a lot of different kinds of testing, and you can look at the purpose of each kind. Are we just trying to test the basic functionality as we're doing development, make sure that we're meeting requirements?
Yes, we wanna do that, but that's also in an isolated environment. And so that doesn't necessarily check the behavior when it's combined with the rest of the team's work in an integrated environment. And so that's where integration testing comes into play.
UI testing, for anybody building UIs, is also gonna be super important from a functional and regression testing standpoint.
Things like performance testing and scale testing may happen later in the life cycle, where we've got environments that are more production like, and we can actually verify performance and scale.
That said, there's things that we can do further to the left to also address performance and potential scale issues. So, like, we've been talking about testing, but scanning and static code analysis come into play as well.
And that can identify things that will create performance problems down the line.
And so, again, catching those early can just save tons of time.
So, yeah, did I answer that question?
Absolutely. Indeed, you had much more nuanced and developed rungs than the, like, lines in that pyramid that I had. I often don't think about scale and performance testing, but, of course, for Salesforce customers and the level of data they're operating with, that's a critical part of understanding if it's gonna work in production.
And I guess for us, we've started trying to talk to customers about validating, almost more than testing.
Mhmm. And, really, it's around code scanning, which Gearset has, but there's other options out there. There's ways to use software to help you write better software.
And I guess code scanning almost undersells the impact, because when folks think about code scanning, they often think, hey, is this Apex syntactically correct, all that sort of stuff. But, actually, you can get really good software that'll help you understand the architecture of what you're building.
Does it conform to Salesforce's Well-Architected practices? Obviously, that's been very well thought through; if you follow those guidelines, it's all gonna lead to goodness. And so we can have really, really smart software, during our build phase, help us validate that we're on the right track.
So, yeah. And, obviously, Salesforce has Salesforce Code Analyzer as well. So we encourage all users to try out those things and embrace them.
Yeah. I totally agree on that. The scanners perform different types of scans, and that is one thing that our Code Analyzer does do. It incorporates a bunch of different scanners and a bunch of rule sets, and it does allow you to catch those things, again, as part of that shift to the left.
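As one illustration (my example, not one from the session): a classic pattern that static analysis rules, like the PMD rules bundled in Salesforce Code Analyzer, will flag is a SOQL query inside a loop. It works fine in a demo org but creates governor limit and performance problems at production data volumes:

```apex
List<Account> accounts = [SELECT Id FROM Account LIMIT 200];

// Anti-pattern a scanner flags: one query per record, inside the loop.
for (Account acct : accounts) {
    List<Contact> contacts = [
        SELECT Id FROM Contact WHERE AccountId = :acct.Id
    ];
    // ... process this account's contacts ...
}

// The shift-left fix: one bulk query before iterating.
Map<Id, Account> acctsById = new Map<Id, Account>(accounts);
List<Contact> allContacts = [
    SELECT Id, AccountId FROM Contact
    WHERE AccountId IN :acctsById.keySet()
];
```

Catching that in the build phase is a one-line refactor; catching it in production is an incident.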
Yeah. Especially with Salesforce, it's even the word code, you know. Folks think Apex, but actually, hey, you gotta be doing this for your flows. You gotta be doing this for profiles.
The solution you're building is made up of all of these different things. I guess that leads into some of the testing as well. You know?
Salesforce obviously enforced unit testing on Apex, but, hey, what about flows? What about all the other things? So is testing a thing for devs? Or is validating quality the responsibility of everyone building on the platform: admins, everybody, whatever your job title is?
Yeah. I believe it's the responsibility of whoever's building.
Everybody's a developer. You know, I think some of these terms are overloaded, and by our own fault.
I like to think of developers as low code developers, pro code developers, and everything in between. These are the people who are building the applications, and they may be using low code declarative tooling, or they may be writing code.
But they're creating the application, and they should be testing it as well.
And testing it doesn't just mean manually clicking buttons on a screen if you're the person building the flow.
It means creating tests that can be run in an automated system, repeatedly, throughout the process. So what does that mean for those low code developers? How do we get these tests written? This has always been a thing that I felt is a gap in the overall solution space. We make a big deal about how we are a platform for low code developers. We also make a big deal about how testing is so important in the whole flow.
We haven't always had a great story of, okay, how do I create tests in a low code way? Or how do I create tests if I am not a coder? And this is where, you know, we're getting better.
We are also leaning on partners in the ecosystem that provide more low code test generation capabilities.
I think this is also an area where AI is really gonna become interesting and helpful, to help create and generate tests, and not just the tests, but test data, seed data, to test whatever it is you're building, however you're building it. So I think this is a huge opportunity space for the ecosystem in general: to really beef up that low code testing experience across all the tools.
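One way this can work today (my sketch, not a specific Salesforce feature): a record-triggered flow can be exercised from a plain Apex test by performing the DML that triggers it and asserting its effect, so even declaratively built logic gets an automated regression check. The flow, field values, and expected outcome here are hypothetical:

```apex
// Hypothetical sketch: assume a record-triggered flow on Case that
// sets Priority to 'High' whenever the Subject mentions an outage.
@isTest
private class OutageCaseFlowTest {
    @isTest
    static void flowEscalatesOutageCases() {
        Case c = new Case(Subject = 'Outage', Origin = 'Web');

        Test.startTest();
        insert c;   // the DML fires the record-triggered flow
        Test.stopTest();

        // Seed data in, assert the flow's effect out.
        c = [SELECT Priority FROM Case WHERE Id = :c.Id];
        System.assertEquals('High', c.Priority,
            'The flow should escalate outage cases to High priority');
    }
}
```

The flow itself stays low code; the test just treats it as a black box that should produce a known effect for known inputs.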
Yeah. I totally agree. And we've seen a shift since we started, over the last five, ten years, in the amount of logic that's now encoded in low code solutions.
At Gearset, internally, obviously, we use Salesforce, and as we said, we love Flow, and we love the entire platform. But, like, over time, some of those flows have grown to be quite complex. Maybe we took a shortcut here or there, and we possibly shouldn't have. And you sort of look at it and realize, hey.
This is encoding quite a lot of business behavior that we'd love to get some tests around. And not just for protecting ourselves, though it's obviously there for that. But, like, I'm a builder by background.
I'm a software engineer by background, and I often can't remember what I did a year ago, what I meant to do, what I thought I was doing a year ago. And so tests give me that additional spec to remind me, like, what were those use cases I was trying to encode.
And as you say, like, everyone's a builder.
I guess the consequence of everyone being a builder is everyone's gonna release bugs to production. I don't know about you, but I work hard, I try and shift left as much as possible, I follow all the right guidelines, and I still occasionally get things into production. And at Gearset, when we build Gearset, we deploy Gearset. A big part of what our engineering teams think about is observability of the software that we're building, and how we're gonna understand how it performs in production and, ideally, find out about things before users do. And we think of all that under that sort of umbrella of observability.
Can you, just for folks that are maybe new to this term, because it's not super prevalent in the Salesforce ecosystem yet: what is observability? How do you think about it? And why does it matter to Salesforce teams?
Great question. A million dollar question these days.
What do we mean by observability? It's interesting. We've been talking a lot at Salesforce recently about observability.
And what's been really interesting is what we're finding. Salesforce is a big company. We have a lot of products across our portfolio and our platform, and everybody's thinking about observability.
And it can kind of mean different things to all these people, and that's totally fair. That's totally right, really. What it means to me is getting access to data and insights for whatever is most relevant to you in your job and your function, and then being able to act upon that data and those insights and make decisions based on them.
So, for example, some of the types of things that fall under observability for us: app usage, like application usage, so we understand how our users are using the things that we're building.
Performance, you know, how are they performing?
Scale can go with that.
Overall system health: some people, on the IT operations side, are gonna be more interested in monitoring health.
Operations, from an internal business side: making sure that our teams are highly efficient and operating well.
And all of this is driven by different types of data in the system and is gonna be shared out to the audience, whoever that may be, in different ways. And I also like to think that there's sort of a push and a pull model when it comes to observability.
There's, like, a monitoring side of this, where we wanna be behind the scenes, have systems that are monitoring and then are able to alert, able to push notifications out to users. At the same time, we wanna have dashboards available for whatever stakeholders wanna just go see data when they need to, filter it, and get insights.
And so it's different kinds of data for different kinds of people and different roles within the system. And so everyone's gonna kinda have their own area that they're gonna be most interested in.
Yeah. I think that's a very astute answer. And I guess the way we see it actually play out in practice for ourselves is: those alerts prompt an investigation into something, and then the sort of deeper, richer data, that you're real glad you had whenever you need it, allows us to go off and investigate and understand. And at Gearset, we try to practice what we preach. So we actually ship Gearset two or three times a day to all of our customers, in all the data centers across the world. And we've done that every day for about ten years now.
And when we first started doing that, it was really easy, because we had no users. It was, like, an if a tree falls in the woods type situation. Now we have lots and lots and lots of users. And for us, a really big thing is error rates. We wanna be able to spot if a deployment has caused any issues before any users have, and then automatically generate that rollback package, or make sure that you can't be impacted before it lands. So for us, flow monitoring and Apex monitoring are really, really key.
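As a rough illustration of the push side of that model (my sketch, not Gearset's or Salesforce's actual implementation): a scheduled Apex job could compare the recent error count against a threshold and push an alert when a fresh deployment appears to have spiked the rate. The Error_Log__c object, the threshold, and the alert address are all hypothetical:

```apex
// Hypothetical sketch of push-style monitoring: a scheduled job that
// alerts when the last hour's error count crosses a threshold.
global class ErrorRateMonitor implements Schedulable {

    private static final Integer THRESHOLD = 50; // errors per hour (assumed)

    global void execute(SchedulableContext ctx) {
        // Error_Log__c is a stand-in for wherever your error handlers
        // write failures; swap in whatever log store you actually use.
        Datetime oneHourAgo = Datetime.now().addHours(-1);
        Integer recentErrors = [
            SELECT COUNT() FROM Error_Log__c
            WHERE CreatedDate >= :oneHourAgo
        ];

        if (recentErrors > THRESHOLD) {
            Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
            mail.setToAddresses(new String[] { 'ops@example.com' });
            mail.setSubject('Error rate spike: ' + recentErrors + ' errors in the last hour');
            mail.setPlainTextBody(
                'Check the most recent deployment and consider rolling it back.');
            Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });
        }
    }
}
```

You'd schedule it hourly with something like System.schedule('Error rate monitor', '0 0 * * * ?', new ErrorRateMonitor()); the pull side of the model is then dashboards built over the same log data.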
And I guess, just to finish: obviously, Salesforce is, like, the most trusted platform. It provides a whole bunch of secure foundation to build upon. But even on a hundred percent secure platform, right, I can make mistakes as a builder. I can introduce security vulnerabilities.
I can cause data leakage with it. And, again, due to the nature of the data that's in Salesforce, it's, like, the most sensitive data that we hold as businesses.
How do you think about security within that DevOps life cycle?
Yeah. So, I mean, we're talking about all of this because we're trying to reduce cost and minimize risk, and security problems are probably the most costly, are gonna be the most impactful.
These are the kinds of issues that will, like, halt business, that will pull in the entire business. We have it happen internally, you know: everybody pivots because of some security vulnerability that's been identified.
And so these are the most important, really, to be able to try to reduce risk on.
I mentioned earlier the scanners and the static code analysis as a good way to identify things early.
We also have tools, the Security Center suite of tools, that can help identify where we've got areas of risk.
We've got, you know, monitoring.
Observability fits into this as well, because there's a tight relationship there. When you're starting to, like, see maybe anomalies within the system, or activities that are suspicious, having the right monitoring in place to identify that quickly is gonna allow you to then also resolve it quickly. So I think there's a tight relationship there.
Yeah. It's amazing, once systems start operating at scale, just how many patterns exist within them. And so spotting, like, anomalous data access, or, like, hey,
that user's never done that before, or, you know, that's a big mass export that's not normally expected. Okay, what's going on there? Like, how quickly those things can scream out at you if you've got the right systems in place to protect you.
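As a toy illustration of that kind of pattern spotting (my sketch, with a hypothetical shape for the data): flag a user whose export count today is far above their own trailing baseline:

```apex
// Hypothetical sketch: flag users whose export volume today is an
// outlier versus their own recent history.
public class ExportAnomalyChecker {

    // Returns true when today's count is more than three times the
    // user's trailing daily average (a deliberately simple baseline).
    public static Boolean isAnomalous(List<Integer> dailyHistory, Integer today) {
        if (dailyHistory.isEmpty()) {
            return false; // no baseline to compare against yet
        }
        Decimal total = 0;
        for (Integer dayCount : dailyHistory) {
            total += dayCount;
        }
        Decimal mean = total / dailyHistory.size();

        // "That user never exported this much before" as a testable rule.
        return today > mean * 3;
    }
}
```

Real systems would use richer signals (time of day, record types, peer baselines), but the principle is the same: encode the pattern, then alert on deviations from it.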
I guess the obvious question with this one is: how do you balance all those competing interests? Balance the speed, how long you take to test, how long you take to think about security and review those things, versus getting stuff into production. Do you have any advice for teams as they try to not grind to a halt, but also not just ship the vulnerability?
It's a great question. I mean, kinda going back to the earlier point, it's not necessarily the kind of thing where you need to stop everything you're doing and build this big ginormous system.
Like we said, identify the areas that are, like, the biggest bottleneck for you right now, or where you've seen you're causing the most problems, and focus there.
I think it's also important, kinda going back to observability on the operational side, to have a way to measure and monitor the operational side of this. So, like, how are we doing as far as getting changes deployed, at what frequency and at what error rates? And there's a whole set of metrics that we're probably familiar with, the DORA metrics, which are a way to, like, not just measure, but see trends over time, so you can see if we're seeing improvements.
Like, we can see if, okay, we implemented this kind of change to our process: are we seeing improvements in the overall quality of what we're putting out there, in the number of failures that we're finding in production? And hopefully, ideally, that's what we're gonna see.
That's the result that we're looking for, kinda taking it a little bit back to the beginning here, which is, like, why are we doing this?
And then we wanna see those results.
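For anyone who wants to put numbers on this (my sketch; the two DORA metrics shown are standard, but the deployment record shape is hypothetical): deployment frequency and change failure rate fall straight out of a simple log of deployments and whether each one caused a production failure:

```apex
// Hypothetical sketch: computing two DORA metrics from a deployment log.
public class DoraMetrics {

    public class Deployment {
        public Date deployedOn;
        public Boolean causedFailure; // did this change fail in production?
        public Deployment(Date deployedOn, Boolean causedFailure) {
            this.deployedOn = deployedOn;
            this.causedFailure = causedFailure;
        }
    }

    // Deployment frequency: deployments per week over the window.
    public static Decimal deploymentsPerWeek(List<Deployment> deployLog, Integer windowDays) {
        Decimal count = deployLog.size();
        return count / windowDays * 7;
    }

    // Change failure rate: fraction of deployments that failed in production.
    public static Decimal changeFailureRate(List<Deployment> deployLog) {
        if (deployLog.isEmpty()) {
            return 0;
        }
        Decimal failures = 0;
        for (Deployment d : deployLog) {
            if (d.causedFailure) {
                failures += 1;
            }
        }
        return failures / deployLog.size();
    }
}
```

Track those per sprint or per quarter, and the trend line answers Karen's question directly: did the process change actually improve what we ship?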
That's awesome. Yeah, we're big fans of the DORA metrics as well. My colleagues at Gearset do the State of Salesforce DevOps survey, and I've been tracking those over a number of years.
And I guess, like, this is a really, really exciting time for DevOps for me. When we started Gearset, you actually couldn't use the word DevOps in the Salesforce ecosystem. It hadn't really reached that critical understanding yet. We just talked about deployments.
And then for a bunch of years, we've been talking about DevOps, but, actually, there's still been an over indexing, I think, on deployments and its criticality. And I think as the ecosystem evolves, you start to see all these other things come in. So, yeah, I could talk about this stuff all day with you, getting your perspective. So thank you very much for taking the time to do this, and I really, really appreciate you sharing your knowledge with the ecosystem.