Description
Take a look at Gearset’s upgraded Salesforce metadata comparisons. In this video, Sami Hawkins, Business Systems QA Manager at West Shore Home, and Justin Zhang, Software Engineer at Gearset, showcase our upgraded metadata comparisons, which make deployments faster and more intuitive. Sami and Justin walk you through Gearset’s new capabilities, including faster comparisons, on-demand metadata retrieval, and custom date range filters, which help you and your team make fast and reliable deployments to your Salesforce orgs.
Learn more:
Transcript
Thank you so much for joining this breakout session.
It brings me joy to know that you haven't joined everyone else's breakout sessions, and you're probably here to see me, and Sami, of course.
So this breakout session is about metadata comparisons and the upgrades that we've recently released around them.
My name is Justin, for the third time, and I am an engineer here, which probably explains a lot. I've been working on this set of features along with my colleagues for the past year and a half or so. And Sami, one of our DevOps leaders, is also here, from West Shore Home, and she's going to take us through a little bit of a case study of how she and her team have worked with these upgraded comparisons.
So I'd like to kick off the session by laying a little bit of groundwork in terms of what we're going to cover.
Why upgrade comparisons? This is probably a really big question on people's minds.
We'll go over that. We'll go over what's changed, which breaks down into two parts: the comparison engine and the user experience.
And then Sami will take us through her presentation, and then we'll have a quick demo where I'll try to blast through a couple of key points. We'll have a quick discussion on what's next, and if you have any questions, we'll hopefully have some answers for them. So if you do have any questions throughout the course of this breakout, just chuck them in the chat. If we can answer them, we'll get to them. If we can't, we'll do our best to get back to you after the summit.
Cool.
So, the big question: why upgrade comparisons? If it ain't broke, don't fix it. Why are we doing this?
To be clear, the legacy comparison engine is not broken.
But we've noticed that as the Salesforce ecosystem has grown, the community has grown alongside it.
As Jack was saying earlier, new roles and responsibilities have been created, and the ways of working and how people approach Salesforce have changed. We've kept pace with that evolution through updates to the legacy comparison, with things like being able to refresh or do partial comparisons, and adding types sort of ad hoc. But when we reached out to customers, we realized that we needed a more radical change to help people achieve what they're really looking for nowadays.
So we went out and we spoke to a large range of users, such as admins, devs, consultants, architects, analysts, and many, many more.
These people came from team sizes that ranged from one-person bands to massive development and QA departments. And they were at varying stages of embracing that Salesforce DevOps mindset: some were just using manual org-to-org comparisons, some had dipped their toes into source control, and some were heavy users of continuous integration, like pipelines in our platform.
So we spoke to those people, and we landed on some key observations.
Our key observations were that Salesforce teams are now making their changes in faster and smaller slices. The velocity is just much higher.
Orgs have gotten larger and more complex, which has made retrieval, and finding and putting together the right parts, much more difficult and time-consuming.
The third point is making sure you can pick what you want, not what you don't need.
And as things grow and get more complicated, the dependency on user knowledge becomes much greater.
So what do we need to change? We obviously want to improve the speed of comparisons. That's a given, because faster is better.
We also want a workflow that's more optimized for how teams work now, as opposed to a few years ago when we last made big sweeping changes to our comparison tool. And we wanted to help users get started faster, with less of a cliff to get over when they first join.
We want them to deploy their changes quicker, because that's the job they want to do. And we wanted to keep it simple, so it's not a case of having to know forty keyboard shortcuts to do one thing really fast.
So that's what needs to change, but what have we done? We can wax lyrical about all the things that need to change, but what have we actually done? We can broadly categorize it into two groups: comparison engine upgrades and user experience upgrades.
Starting with the comparison engine upgrades: one of the most painful, slow parts of a comparison is getting metadata from Salesforce. And by and large, a lot of that stuff doesn't change. When you compare, say, custom objects or profiles, you're really just looking for one custom object or one profile, depending on how you operate your deployment strategy.
But you end up getting back a lot of stuff that hasn't changed, and that's a lot of noise and a lot of time spent waiting for things to come back. So we now go off to Salesforce less, and we try to be much more intelligent about what we request from Salesforce and what we can bring back to you immediately, saying either that there are no differences or that we know there are specific differences.
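To illustrate the idea, here's a minimal sketch of that kind of "skip what hasn't changed" check, assuming a simple name-to-timestamp listing and a local JSON cache file; this is purely illustrative, not Gearset's actual implementation:

```python
import json
from pathlib import Path

CACHE_PATH = Path("comparison_cache.json")  # hypothetical local snapshot store

def items_needing_retrieval(live_listing: dict, cache_path: Path = CACHE_PATH) -> dict:
    """Given a lightweight listing from Salesforce (component name ->
    last-modified timestamp), return only the items that changed since
    the last cached snapshot; the rest can be answered from the cache."""
    cached = json.loads(cache_path.read_text()) if cache_path.exists() else {}
    changed = {
        name: stamp for name, stamp in live_listing.items()
        if cached.get(name) != stamp
    }
    cache_path.write_text(json.dumps(live_listing))  # refresh the snapshot
    return changed
```

The point is that a comparison only pays the full retrieval cost for the handful of items that actually moved.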
Another change is what's on the screen now, which is an example of our on-demand requests. I'm sure everyone's experienced this before: you've compared a bunch of stuff, and you can see that one item that you want to deploy, but it's not loading, it's taking a while, and other things are loading instead. You just want to see that one thing. So you can now click on that item, and if it's not already compared, we'll prioritize it. We'll put it at the front of the queue and go and grab it for you, so you can start comparing the metadata and work on what you actually want to work on rather than sitting there twiddling your thumbs.
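A minimal sketch of that click-to-prioritize behavior, with a hypothetical RetrievalQueue class standing in for the real retrieval machinery:

```python
from collections import deque

class RetrievalQueue:
    """A background retrieval backlog where a user's click jumps an
    item to the front of the queue."""

    def __init__(self, items):
        self.pending = deque(items)

    def prioritize(self, item):
        # The user clicked an item that hasn't been compared yet:
        # pull it out of the backlog and fetch it next.
        if item in self.pending:
            self.pending.remove(item)
            self.pending.appendleft(item)

    def next_item(self):
        return self.pending.popleft() if self.pending else None

queue = RetrievalQueue(["Profile: Admin", "ApexClass: Invoicer", "Flow: New_Contact"])
queue.prioritize("Flow: New_Contact")            # the click
assert queue.next_item() == "Flow: New_Contact"  # retrieved first
```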
We also have our own custom version of source tracking. If you have a Developer or Developer Pro sandbox, or you're using scratch orgs, the source tracking filter will be enabled for you. It goes to Salesforce to find what the latest changes are in the last thirty days, and it creates a surgical, precise filter upon which we'll start pulling data from Salesforce. So it's another tool to help people look at what they need to look at, rather than having to sift through things manually.
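For context, source-tracked orgs expose their recent changes through the Tooling API's SourceMember object, so a filter like this can be driven by a query along these lines; the REST endpoint and fields shown are standard Salesforce, but the approach is a sketch, not Gearset's code:

```python
import requests

def recent_source_changes(instance_url: str, access_token: str, days: int = 30):
    """Ask a source-tracked org (Developer/Developer Pro sandbox or
    scratch org) which components changed recently."""
    soql = (
        "SELECT MemberType, MemberName FROM SourceMember "
        f"WHERE LastModifiedDate = LAST_N_DAYS:{days}"
    )
    resp = requests.get(
        f"{instance_url}/services/data/v60.0/tooling/query/",
        headers={"Authorization": f"Bearer {access_token}"},
        params={"q": soql},
    )
    resp.raise_for_status()
    return [(r["MemberType"], r["MemberName"]) for r in resp.json()["records"]]
```

The resulting (type, name) pairs are exactly the shape you need to build a precise retrieval filter.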
In terms of user experience upgrades, we've got a lot.
So hopefully people have hopped into the upgraded comparison workflow, and they will have noticed the big sidebar on the left. We've moved some of that stuff around quite drastically.
Selected items is now in its own separate section at the top. We've got a comparison section where you can modify or choose what you want to look at, and this is my custom filter. Your standard, preexisting saved filters are all still there for you to use; no need to set them up again.
But you can also add types on demand as and when you like, or you can go and edit them.
And we'll dig into this a little bit in the demo, hopefully, if we've got time. We've also upgraded our named items section, which some people will be aware of or will be using. It's another approach we have for getting surgical, pinpoint precision on what you want, and it relies on you knowing what you want to compare.
And we now show high-level information, if we can get it, on when things were last changed and whether an item only exists in the source or only in the target. Again, the intent here is to help you see the things you want to see faster, to do a lot of that thinking for you and present it up front.
And again, we will jump into the demo in a little bit, and we can go over these changes.
Okay. So I will pass it over to Sami.
Alright. Thanks, Justin. Let me share my screen.
K. Hi, everyone. I'm Sami Hawkins, and I'm the business systems QA manager here at West Shore Home. West Shore Home works in the home remodeling industry, and we're striving to become America's most admired home remodeling brand. So with a goal like that, we're very technology forward: we're constantly experimenting and improving our Salesforce environment to better support our customers and employees.
We've been using Gearset to facilitate a continuous release cycle with a large team of primarily declarative admins, so we're mostly using clicks, not code. We've been doing that for about two years now, and we haven't yet made the leap into pipelines.
So we're using compare and deploy very heavily to move work directly between Salesforce environments.
Today, I'll be talking about the impact on our team of some recent comparison improvements, including Compare 2.0.
So I'll introduce our workflow at a high level, highlight where Gearset is used, and then dive into the impact of the new features.
So like I said, we're a large team of declarative admins working on a continuous release cycle. That means we're promoting work to production every single day. It's very fast paced. We're using Gearset and Jira together to do that in an organized and controlled way.
Our process flow starts with end users submitting requests, which get scoped out and prioritized by our business analysts, sometimes with support from a technical lead, depending on the complexity of the request.
That work then gets assigned to one of our Salesforce admins in Jira to be fully solutioned out and developed in a developer sandbox.
When they're ready, the admin promotes their work to the integration environment for their own further testing, and, if desired, they may also bring in stakeholders for user acceptance testing.
When they're confident the work is ready to go live, the admin assembles it all into a Gearset package, validates against the staging environment, and submits their Jira card for QA review.
QA performs our initial reviews within Gearset, and if those pass, we deploy the package to the staging environment for functional testing.
Then at the end of the day, we put together a combined production release for all packages that passed QA that day.
So here's another view of that workflow with a focus on how the work moves through our various Salesforce environments.
Typically, an admin has their own developer sandbox where they're building out their solution in an isolated environment.
When they're ready to do deeper testing and see how it functions against some more realistic data and other live automation in the system, they'll use Gearset to deploy it to our integration sandbox.
This is also where they would typically grant access to stakeholders, if user acceptance testing is desired.
Once they've completed all their testing and verified their solution is functioning as designed, they're ready to submit to QA.
We tell our admin team that submitting to QA means you're ready to send it live, so QA is going to be assessing the quality of the solution through the eyes of the stakeholder.
At this point, the admin validates against the staging sandbox and QA tightly controls what is actually deployed to that sandbox for functional testing.
If the work were to fail functional testing, we'd completely roll it back from staging to keep the environment as closely aligned with production as possible.
Each package that gets approved stays in staging and gets promoted to production when we build the combined release package.
So in theory, by the end of this process, we've verified that each package works on its own merits, it works with what's currently in production, and it works with everything that's going to be in production tomorrow.
So here's a little bit of a deeper dive into each layer of review that QA performs and how Gearset facilitates it.
It starts with that submission to QA in Jira and validation against our staging environment.
We perform a documentation review to make sure standards are met in Jira, but we're also looking at the Gearset friendly name and the deployment notes.
From there, we do a security review.
We use the comparison screen, XML panels, and summary screen in Gearset to ensure the admin is sending the work they intended to send.
Then we review a couple of our structural standards, such as no hard-coded IDs in the automation, and making sure people are following our naming conventions.
Once they've passed all three of these types of reviews, we deploy the package to the staging environment for functional testing.
So with so many people running Gearset comparisons against the same two sandboxes throughout the day, processing time has quickly become one of our biggest bottlenecks.
With most of our comparisons being run against full sandboxes as well, we're not able to benefit from other time-saving measures like turning on source tracking.
So Gearset's Compare 2.0 has made a noticeable difference for us.
You might recognize this screen as the legacy comparison screen. My team is, unfortunately, very familiar with this screen.
Previously, they would very likely have been stuck waiting on a screen like this, retrieving all the metadata from both orgs for upwards of thirty minutes before they could even begin making selections, even with filters applied.
Compare 2.0 is designed to save time here by prioritizing the changed items, which has really been a benefit for our admins.
Now we can select a filter before or after clicking Compare Now, without worrying about getting stuck for ages.
This not only means that components load faster and on demand, but it also grants us flexibility we didn't have before.
We used filters to limit the amount of data pulled into the comparison screen so that we could try to cut back on those loading times. But that meant that if we forgot a metadata type in the legacy version, we had to go back, adjust our filters, and wait all over again.
Now if I'm adding a new custom field and I realize I forgot, say, a record type, I can add it right here in the same comparison screen.
Since QA is combining approved work into a single deployment at the end of the day, filters are still very relevant for us. So we want every individual package filtered as specifically as possible to cut back on combined deployment loading times.
We're still able to filter all the way down to named items and specific managed packages in Compare 2.0. That's important to us, because losing it would have been a blocker to adoption for our team.
Compare 2.0 defaults to loading items in the source org with a last modified date more recent than the target org's. This is nice because it's trying to prioritize the things you're probably looking for. You probably don't want to send an older version up, except in rare situations.
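A minimal sketch of that default prioritization, assuming a name-to-last-modified-date mapping for each org:

```python
def split_by_priority(source_items: dict, target_items: dict):
    """Prioritize components whose source copy is newer than (or missing
    from) the target; deprioritize the rest, which stay loadable on demand."""
    prioritized, deprioritized = [], []
    for name, source_modified in source_items.items():
        target_modified = target_items.get(name)
        if target_modified is None or source_modified > target_modified:
            prioritized.append(name)    # probably what you're promoting
        else:
            deprioritized.append(name)  # target is newer; rarely what you want
    return prioritized, deprioritized
```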
You can wait for the deprioritized requested items to load all at once, or you can click an individual component to load it faster. Compare 2.0 will then prioritize that item and show you whether it has changed or not. So as you can see on this screen, I've selected a flow and it's retrieving the data for me.
That's also what I did with this Apex class. We can see that it was originally deprioritized because it was more recently modified in the target org. The source org contains an older version, so we could assume it's less likely that you're trying to promote it to another environment.
However, if I had a specific interest in this class, I was able to click it to prioritize it ahead of all the other requested items, and I only had to wait a few seconds to see it.
Compare 2.0 has benefited my admin team by reducing their wait times and allowing that flexibility on the filter screens. From the QA perspective, things haven't changed as dramatically for us, but the features we rely on have continued to work.
One of the things we were worried about during the pilot was that we weren't seeing deployment notes and Jira card links copy in from all the packages we had selected for our daily combined deployments, as they did previously in the legacy comparisons.
We actually reported that through the Gearset chat and saw it corrected during the pilot, so it was resolved before this feature was even GA.
That definitely removed another obstacle that would have been a blocker for us to adopt it.
Now that we've adopted Compare 2.0, it sometimes feels like our combined deployments are processing a bit faster, but it's hit or miss. We're generally still taking a couple of hours to put together the next day's release on the QA side; the benefit has mainly been for our admins building their individual packages. That could be because we have some admins who haven't left the comfort zone of legacy compare, so maybe we're getting mixed package types in our daily builds.
But at least that tells us that combining a legacy package with a Compare 2.0 package also works smoothly, because we haven't had any negative impact on our loading times or missing items in our combined release builds.
And that has pretty much been our journey with Compare 2.0. We're definitely seeing noticeable benefits, and it's saving my team a lot of time on their individual package builds.
There are also a couple of other recently introduced comparison updates that I just want to touch on briefly, since we're making use of them.
Part of our review process involves checking automation to make sure only relevant updates are being added, whether it's a bad actor sneaking something in or, more likely, two admins getting mixed up and writing over each other's work.
We used to scroll all the way through the XML and hope that anything strange in a flow would stand out to us.
While we did that, we'd also be looking to make sure the automation didn't introduce any hard-coded Salesforce IDs.
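As a rough illustration, a home-grown check along these lines could flag candidate IDs; the pattern follows Salesforce's documented 15-or-18-character ID format, but it's a heuristic with false positives, not a Gearset feature:

```python
import re

# Salesforce record IDs are 15 case-sensitive alphanumeric characters,
# optionally extended to 18 with a checksum suffix. Any other 15-character
# alphanumeric word also matches, so treat hits as review candidates.
SF_ID_PATTERN = re.compile(r"\b[a-zA-Z0-9]{15}(?:[A-Z0-5]{3})?\b")

def find_hardcoded_ids(flow_xml: str) -> list[str]:
    """Flag strings in flow XML that look like hard-coded Salesforce IDs."""
    return [
        match.group(0)
        for match in SF_ID_PATTERN.finditer(flow_xml)
        if any(ch.isdigit() for ch in match.group(0))  # crude filter for plain words
    ]
```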
Flow navigator has made it much easier to jump straight to the changed sections.
As you can see, the updated nodes are highlighted in a side panel, and we can click those to jump straight to the details.
For instances where we'd prefer to view the XML in its entirety, we're also still able to do that.
And then the last thing that we're pretty excited about, which came out very recently, is the ability to select which flow version we'd like to send right from the comparison screen.
As I mentioned earlier, one of the pain points on our team is that sometimes we still get mixed up and end up working on top of each other in our development environments.
Our daily continuous release cycle means that our work is pretty fast paced, and there are some functional areas that generate a lot of ideas and improvements and want them rolled out very fast.
So sadly, in the old way, Gearset used to only pull the latest version of a flow into a comparison, which meant that an admin who had finished their work and was trying to build a package to submit to QA might accidentally pull in the work of the next person in line, if that person had jumped the gun and started a draft before it was really their turn.
They would then have to abandon the build, go talk to the admin to sort it out, resave their work to the top of the list, and rerun the whole comparison.
So this new feature is pretty exciting for us because although it's still our team's practice to wait your turn in line and for the active version to be saved as the most recent version, these mistakes do happen, and now the person who is all finished and ready to submit to QA doesn't have to suffer for it.
They can simply notice that they haven't pulled in the active version and use the drop down to grab the right one.
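For context, flow versions and their statuses are queryable through the Tooling API's Flow object, so the lookup behind a version picker might resemble this sketch (an assumed approach, not Gearset's implementation):

```python
import requests

def list_flow_versions(instance_url: str, access_token: str, flow_name: str):
    """List every version of a flow with its status ('Active', 'Draft',
    'Obsolete', ...), so a latest-but-not-active draft is easy to spot."""
    soql = (
        "SELECT VersionNumber, Status FROM Flow "
        f"WHERE Definition.DeveloperName = '{flow_name}' "
        "ORDER BY VersionNumber DESC"
    )
    resp = requests.get(
        f"{instance_url}/services/data/v60.0/tooling/query/",
        headers={"Authorization": f"Bearer {access_token}"},
        params={"q": soql},
    )
    resp.raise_for_status()
    return [(r["VersionNumber"], r["Status"]) for r in resp.json()["records"]]
```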
And so on that pleasant note, I'll hand things back over to Justin for his live demonstration.
Wonderful. Thank you so much, Sami. That's super insightful. And to be honest, as an engineer, those are a lot of use cases that I'd never imagined.
So having you share this, and speaking to all of our other customers out there, really opens my eyes in terms of knowing whether what we're trying to build is going to be suitable for everyone.
We've got about seven minutes left, so we're going to quickly rattle through a live demo. What could possibly go wrong?
So hopefully, everyone can see my screen.
So what we're going to do is pick two orgs that I prepared earlier. Hopefully, everyone's seen this already: the on-demand comparison is our new kind of pseudo filter that lets you go into the upgraded comparison workflow with absolutely nothing selected.
This was the default up until maybe a few weeks ago.
And while we set out with the best of intentions to make this a really clean interface, we had a lot of feedback from customers saying they really, really rely on filters, sometimes in situations we would not have thought of. Like, they use the same managed package versions or configurations every time, or the same API version every time, and they wanted to configure that before they got into the comparison. So now, aside from doing an on-demand comparison, you can also start with any of your preconfigured filters. Let's pretend I'm in the middle of a sprint, and I know that I've got a couple of types that I want to compare. So let's hit that, and we'll jump straight into Compare 2.0, the upgraded workflow.
As you can see here, I've got Apex class and custom object, and those loaded super quick. And this again comes back down to the whole idea that we don't want to go to Salesforce for things we don't need to, because Salesforce takes time. I compared these two orgs just before the summit started, so we've got all the latest information here as well.
But what if I created this at the start of my sprint, and maybe a week in we started working on some flows, a specific flow?
So I can come and search in the sidebar, and I could just click this. Actually, let's jump a step back. Let's pretend that I needed profiles and I know that I need all of them. Previously, that would be a refresh types, select types, update all, or rerun comparison. But now it's just one click, and I've got all my profiles, and they're all loaded. They happen to all be no difference, but they're all loaded.
But now, what if I only wanted to look at a specific flow? We can also click on this little filter button, and it takes us into our new big filter, which we had a little screenshot of before and Sami also showed us. And I can come in and say, okay, I've got changes to New Contact 2 and this new test flow. So I'm just going to update the comparison, and these will start comparing.
So you can see here this lovely feature that I wish I could take credit for, but some very great engineers on my team built this part: the flow visualizer. Honestly, I much prefer this to looking at XML, and I am the computer nerd, so that's saying something.
You can still do all the things you know and love, like refreshing items with the right click menu.
But you can also do type-level refreshes, or you can just go whole hog and refresh everything all at once. Your selected items will appear in the section here. And again, they're broken out into these quick-review, high-level sections by type, but you can also still look at them all together.
We've got these sections up here for quick filtering of the types, which I'm sure everyone's already aware of.
And most of everything else is pretty much the same.
The things that we're really proud of are the performance increases when you're doing a comparison.
And the UI changes we've made here are really just to leverage those back-end improvements in comparison speed, and to streamline the workflow for the kinds of use cases we've started to see crop up.
Now, if I quickly, cheekily do this, no one will be the wiser. Oh, I hit the wrong button there.
F11, not F12.
So, what's next?
We've got a few more upgrades in the pipeline. Not, like, Gearset Pipelines; our pipeline, the development pipeline.
We've got team-shared comparison baselines coming. Every time you run a comparison, we take notes and snapshots of the metadata, and that's how we decide when we need to go to Salesforce or not.
Right now, these are at a user level, but we're looking to aggregate more at a team level. So in situations like Sami's team, where you're all working on the integration sandbox and you need to move to QA, we'll hopefully start to see benefits in terms of reduced comparison times there, because if coworker A runs a comparison, you can benefit from it as well.
We currently have a smaller subset of types that works with this "do we need to go to Salesforce or not" logic.
And we do that just so we can be really confident that we're showing you the latest changes and not hiding anything from you. We've had reports from people saying that this or that isn't working, and we thrive on that. We need that feedback, because we're not out here to send you in the wrong direction; we want to fix that as soon as possible.
We're actively working on improved time range management. At the moment, the time ranges are really just a front-end filter down to show you what's in those ranges; we're experimenting with only retrieving the specific items in those time ranges. And then, of course, we've got more UI optimizations and back-end optimizations.
So I have waffled on, and I see people are leaving, so I don't think there are any questions. But if you do have any, as always, reach out through Intercom, our in-app chat. Our lovely customer support people will attend to any of your questions, and if there are any gnarly ones, they'll probably end up with me, and you'll have to deal with me again. We've got twenty-six seconds left. Thank you, everyone, so much for joining.
Claudia will be hosting the final panel session with our DevOps leaders. Thank you so much to Sami for joining us today and giving us a lovely presentation, and have a lovely week.