Description
Join Deepak Veera, Principal Consulting Manager, and Grant Mangum, Technical Consulting Manager, from Cloud Giants as they talk through CPQ deployments and how you can build a strategic framework for deploying this tricky config.
Topics discussed:
- Things to consider as you develop your DevOps processes
- The importance of testing
- Top tips to keep your CPQ deployments smooth
Learn more:
- Get in touch with Cloud Giants
- Deepak Veera’s LinkedIn
- Grant Mangum’s LinkedIn
- What is Salesforce CPQ?
- How to implement CPQ
- A how-to guide for Salesforce CPQ deployments
- Gearset’s Salesforce CPQ deployment solution
- Start a free 30-day trial
Transcript
Thank you for jumping on. We wanna start by thanking everyone for joining us today.
Our topic of discussion over this next hour is gonna be focused on Salesforce's CPQ product, but more specifically mastering CPQ deployments, expert tips, and some best practices.
Okay. Perfect. Today, from the Gearset side, we have me, Andrew Calli. I'm a sales engineer here with Gearset, and I'm also joined by my colleague Alastair, another sales engineer. Our role in this presentation as it gets rolling is really gonna be to monitor that chat. So come say hi to us, hit us with any of your hard-hitting questions, and we'll be there interacting with everyone.
Before I pass things over to Deepak to get into the fun stuff here, I really wanted to take an opportunity to introduce our partners at Cloud Giants, who are gonna be leading us through our discussion today. A couple notes on Cloud Giants: they've been a Salesforce consulting partner since 2014.
They've completed over two hundred engagements during that time frame, they have a ninety-plus Net Promoter Score, and their specialty in Salesforce really lies within the CPQ and Revenue Cloud products.
But that's enough from me here on the Gearset side. At this time, I'd like to give the floor to Deepak. Deepak, take it away. Yeah, you bet. Thanks, everyone, for joining. My name is Deepak Veera. I am a principal consulting manager here at Cloud Giants.
I focus on CPQ and have been heavily focused there ever since CPQ, or SteelBrick, became integrated with Salesforce, for the last five-plus years.
In that time I've done what seems like a collective month's worth of CPQ deployments in total. So this topic is near and dear to me, and I'm excited for what we have in store for you.
And I am Grant Mangum. I am a technical consulting manager at Cloud Giants. I've been working with Salesforce and CPQ for six years.
I've worked on many CPQ projects in a variety of capacities, both building CPQ solutions and working through CPQ deployments.
Like Deepak, I am super excited to be talking CPQ with everyone today.
Alright. I'll get us started.
We have a pretty wide audience group here. Some of you are, of course, DevOps managers; there's probably CPQ admins and Salesforce developers; or maybe you don't fall into any of those buckets and you're just curious to hear more about what CPQ deployments look like. Maybe you're considering buying Salesforce CPQ and you wanna make sure your DevOps process is well aligned for that. Wherever you are, we're glad you joined. As Andrew mentioned, we welcome some audience participation, so we'll be curious to get your feedback and some thoughts as we go through the presentation.
But the main objective, what you should expect to get out of this, is I think really twofold. One is a strategic framework. DevOps processes are not gonna be the same across all companies; they can be informed by company size, and in this case also by your CPQ complexity.
So we wanna give you some tips from our side of things you can consider as you develop your DevOps process for your own org internally.
But just as well, we also want to give you, from our experience, some practical tips that have helped us. So I hope you walk away with some pragmatic things you can put into use in your work, but also some considerations on a more strategic level.
Okay.
So why are we talking about CPQ deployments?
We throw out a couple of words, there's DevOps, there's deployments, but what makes CPQ deployments so challenging?
I have heard this joke that CPQ, contrary to popular belief, doesn't stand for configure, price, quote. It actually stands for causes people to quit, because it's very, very challenging. Right?
I think what's interesting about CPQ is what it does well. It does well in saying, we don't need a developer to put in all this logic. And the way CPQ does that is by having record-level data that acts as metadata.
Right? So I think what makes it functionally easier for an admin to configure and develop is probably what is costing us on the deployment side. As stated in the first bullet, with CPQ configurations, and this I think is the main point here, we have record-level data, think price rules or product rules, that acts as logic and functions as such. So we need to make sure those are accounted for when we move between environments. But Grant, maybe you can speak to and expand on those here.
Yes, so building off of Deepak's point, another challenge with CPQ is the object- and record-level dependencies that exist between the data in the custom objects that are a part of the CPQ package.
The ability to create these awesome bundles, product and pricing automation, and templates sort of necessitates this complex data architecture. So there are all these dependencies between these objects, and having these dependencies makes deployments more difficult. You may need external IDs as part of your CPQ configurations, and maybe even want to consider a deployment tool to make sure that those dependencies are all preserved properly as you're moving through environments, because when these dependencies get messed up in your CPQ data, you'll oftentimes run into errors that can be difficult to interpret and take time to fix.
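The external-ID approach mentioned here can be sketched in a few lines. This is a minimal illustration, not a real migration script: the records are plain dicts, and the field name `Migration_Id__c` is a hypothetical custom external-ID field (only `SBQQ__Rule__c` is a real CPQ lookup). The idea is that record IDs differ between orgs, so child lookups are rewritten by matching parents on a shared external ID.

```python
# Sketch: remapping parent lookups via external IDs when moving CPQ
# records between orgs. Record shapes and the Migration_Id__c field
# are illustrative assumptions.

def remap_lookups(child_records, parent_records, lookup_field, ext_id_field):
    """Replace source-org parent IDs with target-org IDs by matching
    on a shared external ID field."""
    # source parent Id -> external ID
    src_ext = {p["Id"]: p[ext_id_field] for p in parent_records["source"]}
    # external ID -> target parent Id
    tgt_id = {p[ext_id_field]: p["Id"] for p in parent_records["target"]}
    remapped = []
    for rec in child_records:
        rec = dict(rec)  # don't mutate the caller's data
        ext = src_ext[rec[lookup_field]]
        rec[lookup_field] = tgt_id[ext]  # now a valid ID in the target org
        remapped.append(rec)
    return remapped

# Example: a price condition pointing at its parent price rule.
parents = {
    "source": [{"Id": "a0A1", "Migration_Id__c": "RULE-001"}],
    "target": [{"Id": "a0B9", "Migration_Id__c": "RULE-001"}],
}
conditions = [{"Id": "a0C1", "SBQQ__Rule__c": "a0A1"}]

print(remap_lookups(conditions, parents, "SBQQ__Rule__c", "Migration_Id__c"))
# [{'Id': 'a0C1', 'SBQQ__Rule__c': 'a0B9'}]
```

A deployment tool does this bookkeeping for you; the sketch just shows why external IDs make manual migrations tractable.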
The next challenge that we'll talk about right now is that CPQ solutions often depend on both data and metadata.
So oftentimes, with a CPQ solution, you're maybe deploying some custom fields alongside a price rule, which exists in data. We can't add data to things like change sets when we need to move our configurations from one environment to the next. So that's a consideration to keep in mind: how are you going to move your data as well as any pieces of metadata that are part of that same solution?
So I'm gonna pause right here. Those are a few challenges that we often see at Cloud Giants, but there are many more. So I would love it if you could post in the chat any other deployment challenges relating to CPQ that you often come across.
Just looking at the chat here to see if anyone's got any other thoughts on some of those challenges.
Thank you, TK. We have one about an exception to rules.
Allison, thank you for the comment here, knowing the dependent objects for each object being deployed. Definitely a big one there.
A few others. I think that was a great callout. So I'm thinking, yeah, like when you're doing a manual data deployment for price rules, you can't deploy a price rule with a custom condition; it has to be all or, you know, any. That's one that's sort of a pain. And of course dependencies on objects too, like Allison mentioned: especially when you add custom fields to your CPQ objects, you have to make sure to include them in your query scripts.
We had one more here from Nick talking about backup considerations, definitely a great consideration here and specifically restoring active orders. Another challenge we hear.
Alright. Thanks, everyone, for putting in your thoughts. Again, we invite you to sort of actively chime in. We'll be monitoring chat, and we'll have some opportunities for questions at the tail end too. So let's dive into the meat of today's webinar.
As Grant and I were brainstorming, I think there's really four key conceptual areas and considerations that we've sort of crafted here for you.
Just the game plan for today: Grant is gonna kick us off talking about the standardization of documentation.
Grant will then cover the need for testing. There will still be a need for it, we shouldn't shy away from it, and Grant will kinda dive into some detail there. Then, building on those two, we'll wrap up today by talking about how those inform our deployment strategy, and lastly, what a DevOps tool for CPQ does for us.
Great, so the first tip that I will cover is the need to standardize documentation.
It really is true: the right level of documentation will make your life a whole lot easier when it comes to CPQ deployments. There are a few types of documents that Deepak and I swear by and really like to use at Cloud Giants, so I will talk about each of these here. The first one is a change log.
You may already use this type of document in your deployment process regardless of whether you're using CPQ or not; it may exist under some other name. But really, what I mean when I'm talking about a change log is a list of items that have been created or changed that will need to be included in your deployment. This is particularly helpful with CPQ since, like we mentioned, you are combining the need to move both data and metadata.
It's hard to have a consolidated place for both of those things. If you're using change sets, you can only store your metadata there; if you have spreadsheets with data, or however you're managing your data in order to migrate it, that may exist somewhere else. So consolidating it into a single change log can be very helpful. It helps organize the team and just keeps everyone on the same page about what's moving as part of this deployment.
We find that a change log can be super helpful when you're capturing things like, one, the name of the component that needs to move, whether that be the name of a price rule, the name of a flow, custom fields, the names of page layouts, whatever it is, just capturing line by line the name of each thing that will need to be deployed, as well as other data points like: is it metadata versus data? If it is data, what type of data is it? Is it a price rule?
Is it a feature or a product option, a quote template? You get the idea. Same with metadata: what's the metadata type? Custom field versus flow, whatever. Just capturing things like that makes it clear for everyone involved to know what each item is that needs to be deployed.
If you're considering implementing a change log, I would recommend that you don't over-engineer it. You may feel the need to add a bunch of different data points, but I would caution you to keep this document accessible, keep it a tool that is helpful for the team and not a burden. Because if it is a burden, people will stop using it to its full potential, and that of course is a risk when it comes to your deployment. So balancing the right level of detail in your change log is key, if you do decide to implement one.
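To make the change-log idea concrete, here is one minimal way to structure it as data rather than free text. The columns and component names are our own illustration, not a prescribed format; the point is that keeping it structured lets you pull out, say, everything that needs a data load versus a metadata deployment.

```python
# Sketch: a minimal CPQ change log as structured rows. Column choices
# and example entries are illustrative -- keep only what your team uses.

change_log = [
    {"name": "Volume Discount Rule", "kind": "data",     "type": "Price Rule"},
    {"name": "Region__c",            "kind": "metadata", "type": "Custom Field"},
    {"name": "Quote Approval Flow",  "kind": "metadata", "type": "Flow"},
    {"name": "Enterprise Bundle",    "kind": "data",     "type": "Product Option"},
]

def items_of(kind):
    """Names of all components of one kind, e.g. everything needing a data load."""
    return [row["name"] for row in change_log if row["kind"] == kind]

print(items_of("data"))      # ['Volume Discount Rule', 'Enterprise Bundle']
print(items_of("metadata"))  # ['Region__c', 'Quote Approval Flow']
```

In practice this lives in a spreadsheet; the same split between data and metadata rows is what keeps the deployment organized.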
The next type of document I'm gonna talk about is pre- and post-deployment checklists.
So, CPQ is a managed package.
You could view it almost as an app of sorts that you install in a Salesforce org.
When you install this managed package, there are all these managed package components that get installed into your org: all of the objects that we've been talking about, plus other things like the quote line editor, and so on.
We are able to customize parts of this managed package, but because CPQ is a managed package, that means we cannot add any managed package components to our change sets in order to deploy them. Oftentimes we're adding to picklists on objects like price rule or product rule,
or price action or price condition.
We're adding to those picklists, but we're not able to deploy them through change sets. So you have to consider how you're gonna make sure those changes happen in the next environment, and a pre- or post-deployment checklist is an answer to that. Really, what this is is just a checklist of the manual steps you'll have to take as part of your deployment to make sure that everything is recreated in the next environment so that it works correctly.
And really, the benefit of using this is that it's a consolidated place that the person who is managing the deployment can go to and act on each step, and it just helps ensure a smooth deployment overall, for everyone.
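A checklist like this can also be kept as data, so the deployment lead can see at a glance what's still outstanding in each phase. The step wordings below are illustrative examples of the kinds of manual steps discussed above, not a canonical list.

```python
# Sketch: a pre/post-deployment checklist as data. Step text is
# illustrative; real checklists come from your own configuration.

checklist = [
    ("pre",  "Confirm target org has the expected CPQ package version"),
    ("post", "Add picklist values A, B, C to the target field on Price Action"),
    ("post", "Execute scripts in the CPQ package settings"),
    ("post", "Smoke-test the quote line editor"),
]

done = set()  # steps that have been checked off

def remaining(phase):
    """Steps in a phase that nobody has checked off yet."""
    return [step for p, step in checklist if p == phase and step not in done]

done.add("Smoke-test the quote line editor")
print(remaining("post"))
# ['Add picklist values A, B, C to the target field on Price Action',
#  'Execute scripts in the CPQ package settings']
```

Whether it lives in code, a spreadsheet, or a runbook matters less than having one agreed place to check steps off.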
Lastly, I am going to talk about a data load script.
So this is a pretty much CPQ-specific document that is gonna be super helpful if you are doing migrations of your data from environment to environment.
So depending on the size of your deployment involving CPQ configurations, you may choose either to manually recreate your configurations in the next environment, if it's a small deployment that involves a handful of things, or, if it's a larger deployment, you may consider doing a data migration.
So, this was mentioned in the chat as being a challenge.
Upload order is important, and it's important that you get it right. There are those dependencies, which we've discussed, that exist between objects. So to make sure that your deployment goes as smoothly as possible, it's key that you deploy in the right order, so that you aren't blocked by the errors that arise when certain things are missing on the records that you're trying to upload. For example, you need to upload price rules before you can upload price actions or conditions.
Similarly, you need to upload quote templates before you can upload things like template sections. Those sorts of dependencies exist throughout a lot of the different CPQ configuration objects. So having a script that you can go to each time, to make sure that you're doing it in the correct order step by step, will save you time and is super helpful.
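The parent-before-child ordering described here is exactly a topological sort. As a sketch, you can derive a safe load order from a small dependency map; the map below covers only the examples named in the text and is not an exhaustive model of the CPQ schema.

```python
# Sketch: deriving a safe upload order from parent-before-child
# dependencies among CPQ objects. The dependency map is illustrative.

from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# object -> set of objects that must be loaded before it
depends_on = {
    "Price Rule": set(),
    "Price Condition": {"Price Rule"},
    "Price Action": {"Price Rule"},
    "Quote Template": set(),
    "Template Section": {"Quote Template"},
}

load_order = list(TopologicalSorter(depends_on).static_order())
print(load_order)  # parents always appear before their children
```

A deployment tool computes this ordering for you across the whole object graph; a written data-load script is the manual equivalent.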
Now, our next tip is about testing: embracing testing.
I think everyone here would probably agree that testing is a key step in any development and deployment process, and like Deepak mentioned, that doesn't change with CPQ.
In fact, there are types of testing that we find super beneficial when you're working in CPQ, doing CPQ-related deployments or any CPQ development work. So the first type of testing that I'll talk about is post-deployment testing.
You may also refer to this type of testing as smoke testing. Really, what I'm talking about here is, immediately after a deployment, going and testing key functionality, like using the quote line editor, or generating quote documents, or even just looking at key processes that touch CPQ that users are gonna be using and where there could potentially be issues: going and checking and making sure that all of those things are usable and are not blocked by any errors that may have come up over the process of deploying to the environment that you're going to.
This is important because, and I've seen this many times, maybe you've deployed CPQ configurations and a relationship between some of the data is off, or in the picklists on the CPQ objects we've entered a field API name slightly incorrectly.
When someone goes to, say, the quote line editor to start building quotes, they're completely blocked. They can't even save quotes, and for sales users who are trying to do stuff quickly and get things out the door, they're not gonna be happy; you're gonna get bombarded with messages asking for help. That never feels good. So taking that proactive step of going in, checking to make sure things function properly, and identifying any mistakes or issues and addressing them as soon as you can, hopefully before those end users get in there and start working, is a great idea and something we definitely try to do. The next type of testing that's super helpful with CPQ is user acceptance testing.
We find that UAT is key for CPQ projects, especially getting that feedback from sales users, who oftentimes know best how products should be set up or how pricing should work, and getting their input as soon as possible to help us as the admins or developers make sure that what we've built is correct and meets their needs.
This benefits us because it means we can more quickly build and produce what end users need, but it also empowers those users who are getting involved with UAT. It gives them extra chances for training, but also empowers them to be super users and go out and help other users who aren't involved with UAT as they're maybe learning the system, learning the products, whatever.
Getting those users involved with UAT is very helpful for CPQ when it deals with these high-use things like building quotes and working with products and pricing.
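The post-deployment smoke checks described above can be organized as a tiny harness: a list of named checks, each probing one key area, with failures collected into one report. The check bodies here are stand-ins; in reality each would exercise the org (open the quote line editor, generate a document) rather than return a constant.

```python
# Sketch: a minimal post-deployment smoke-test harness. Check bodies
# are stand-ins for real probes against the target org.

def check_quote_line_editor():
    return True  # stand-in: would create a quote and open the QLE

def check_quote_document():
    return True  # stand-in: would generate a test quote PDF

def run_smoke_tests(checks):
    """Run each check; report 'all clear' or the names of failed checks."""
    failures = [c.__name__ for c in checks if not c()]
    return "all clear" if not failures else f"FAILED: {failures}"

print(run_smoke_tests([check_quote_line_editor, check_quote_document]))
# all clear
```

Even this small amount of structure helps: the same fixed list of checks runs after every deployment, so nothing gets skipped under time pressure.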
So I am going to pause right here again. We've discussed both documentation so far
and also the need for embracing testing.
In the chat, I'd love to hear what other types of documents maybe you and your team use. As well as other types of testing you feel are important.
Okay. Here's one from TK.
For testing, TK mentions seasonal discounts and approval chains of command.
Allison, another comment here: we have one Excel sheet that combines all these documents, and I find it super important. We do all these types of testing, but also add automated regression testing for each release. Very nice addition there, Allison.
And then Nick: we use a runbook as a single checklist as the deployment progresses. All great comments here. Thank you for throwing some thoughts on there.
Okay.
Let's turn now and build on what Grant has discussed here. So next, let's consider what a deployment strategy looks like.
So I'm guessing, just by virtue of the title of this webinar and, you know, the sponsor here, we are probably all big fans of processes. Right? We drive towards process, and in the end, I would wager that we do that to mitigate risk.
One thing that, as consultants, we often think about, and Grant just mentioned it here too, is things like how we're driving user adoption and how we're providing value. Right? And I think that's something I wanna discuss here.
You might remember I was saying we'll need to consider how this looks for your particular organization, considering complexity and size. But a key item that I wanna call out here is: let's not have process just for process's sake. That's just bureaucracy.
Right? So really what we're looking for is how we can accelerate time to value while balancing that against risk.
So for example, if you are deploying a key product bundle (I think somebody had a question on product bundles earlier), that might need a little bit more testing, some post-deployment testing like Grant just discussed. On the other hand, if you're just adding a quote-level field that's just a view layer, we don't need the same level of process to go into it. Now, depending on your company and other teams, there might be a need to standardize across the board, and again, that is gonna be a business decision you'll need to make. But as you're considering deployment strategy, make sure you're not forgetting that we do this with the aim of ultimately providing value for the users. Anytime we can accelerate that, articulate that, and drive adoption, I think that's a way DevOps benefits.
The next item here is the cutover plan. I think this is worthwhile calling out. Just like any other functionality, but probably more significant here, we are deploying functionality that drives key business processes. There are salespeople actively building quotes.
They're probably actively negotiating with customers. Maybe a quote is in the middle of an approval process.
And making changes that impact that may not be well received.
So the main item here is, when you're deploying things like price rules, product rules, or product options in bundles, consider the impact it might have on in-flight quotes. For example, if there's an existing bundle and you're adding new product options, when somebody goes and clicks reconfigure, it'll change things. So you might want to inform the users in some capacity. If you're changing price rule functionality, that'll also impact in-flight quotes. So make sure you consider that: you might wanna do this during off hours at the very least, or, in addition, communicate to end users to make sure that they are informed.
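One practical way to act on this is to identify, before cutover, which in-flight quotes touch the thing you're changing, so their owners can be warned. The sketch below uses made-up statuses and a made-up `AppliedRules` field purely to illustrate the filtering; real orgs would drive this from their own quote data.

```python
# Sketch: flagging in-flight quotes that reference a rule you are about
# to change, so owners can be warned before cutover. Statuses and the
# AppliedRules field are illustrative assumptions.

IN_FLIGHT = {"Draft", "In Review", "Presented"}

quotes = [
    {"Name": "Q-0001", "Status": "Draft",     "AppliedRules": {"Volume Discount"}},
    {"Name": "Q-0002", "Status": "Accepted",  "AppliedRules": {"Volume Discount"}},
    {"Name": "Q-0003", "Status": "In Review", "AppliedRules": {"Partner Uplift"}},
]

def impacted(quotes, changed_rule):
    """Names of quotes that are still in flight and use the changed rule."""
    return [q["Name"] for q in quotes
            if q["Status"] in IN_FLIGHT and changed_rule in q["AppliedRules"]]

print(impacted(quotes, "Volume Discount"))  # ['Q-0001']
```

The output is the communication list for your cutover plan: who needs a heads-up before the change lands.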
The next item here. We have thus far talked conceptually about creating a deployment: what are we deploying, what documentation do we need, what sort of testing do we need, how do we communicate to the user, how do we accelerate time to value?
But also remember that once all this is done, we have to turn around and refresh sandboxes, which, if we're just talking base-level metadata, is super simple. Right? We just go into our production org, click a button, and it refreshes our orgs for us. But remember, as we said at the very beginning, we have CPQ data in here that functions as metadata.
That CPQ data, again, price rules, product rules, etcetera, will not automatically move into your sandbox. So as you're developing your deployment strategy, make sure that you're properly allocating time to deploying those records to your sandboxes again after a refresh. This might inform the cadence at which you refresh.
If you wanna stagger your lower environments, you may, you know, refresh one one week and another one a separate week. But again, the main item here is to make sure you are allocating enough time for that in your deployment strategy.
Now let's talk about deployment tools. And I think this is where things get a little bit more interesting and hopefully fun.
CPQ deployment tools are really cool, and I cannot emphasize enough how valuable they are just in terms of predictability and time savings.
Grant discussed some documentation earlier.
We discussed things like change logs, pre- and post-deployment checklists, and our data load order.
With a deployment tool, we reduce the documentation needs and complexity.
So for example, a data load order script becomes irrelevant. Right? A tool automatically accounts for the object dependencies and will move them over; it knows the order in which to deploy. We no longer need that, and this has been a particular experience of mine: I've spent many a time at the coffee shop just populating spreadsheets and doing lookups, making sure I'm deploying things correctly.
This happens in a couple clicks, which I think is a big time saver.
The next item: deploy with confidence. What does that mean? I'm embarrassed to admit I have spelled API names incorrectly for price rules and then spent hours trying to troubleshoot why something wasn't working. It's like finding a needle in a haystack.
For me, on a given Friday, I would love to know that a deployment is gonna take x hours, and for that to be a pretty good estimate, instead of spending time off hours and weekends trying to troubleshoot things and find out why they aren't working. The less manual work we have, the fewer errors we ought to expect, and for us, from our experience, that just meant more confidence in predicting time.
Lastly, we discussed on the last slide the sandbox refreshes. Right? That's also a pain: you have to do a data deployment every time.
Again, with a deployment tool deploying CPQ changes, you can go from production to lower environments as well. So just like you might click a button and refresh the sandbox, here, in a couple clicks, we also have the ability to seed sandboxes with CPQ data again.
Okay.
So far we've discussed our key considerations. I want to turn now and discuss a tool that we use here at Cloud Giants, that we've been using this year.
And just kind of go through the ways in which it has benefited us as well.
So we are a consulting shop. The way our business works is we bill our clients, and we wanna provide them value.
Gearset's CPQ tool, for us, has meant, and I've highlighted this before, much more confidence and time savings, which then means we can take that time and provide more value for our clients.
And maybe for you it looks different. Right? It means probably more predictability, instead of saying, like, I have a deployment, I need to block off my entire weekend in case things go wrong.
But let's discuss some of these items in more detail.
First, deploying managed package components is not possible with change sets. So what is a managed package component?
Grant mentioned this: CPQ can be viewed as an app that we're adding on. But with that come dozens of object model relationships and possibly hundreds, if not thousands, of record-level data records that also have dependencies.
I mentioned my embarrassing story a little while ago, about spelling a price rule API name wrong.
With a change set, you cannot deploy picklist values that you're adding onto price rules, because it says this is a managed package field: you cannot deploy it. With Gearset, we do have the ability to deploy picklist values that we've added in. Although it may seem small, considering the complexity it adds, and that it'd otherwise have to be accounted for in pre- and post-deployment checklists, being able to do this as part of the deployment is a huge value-add in my opinion.
The next item, Grant mentioned this earlier.
Deploying CPQ data as metadata, or with metadata, I think is a huge value-add as well. So what do we mean here? For those of you that have used Gearset before, we know that we can go in and, in a view, see all the metadata components that have been changed or added. I also mentioned at the beginning of this webinar that really what CPQ alleviates in configuration, it adds in deployment complexity: again, records existing and functioning as metadata.
Here, being able to view data as metadata makes life a lot easier for us. We can also view dependencies. So just like, say, if you're deploying a page layout and there's a field you forgot to add in, here too, if you're deploying a price rule, it'll point out that you probably forgot a price condition; you can see that, drill down, and add that price condition as well. So having those both in one screen, one UI, makes life a lot easier for us, and I think that's something we've also benefited from. Lastly, the pre-deployment problem analysis.
Users of Gearset may be familiar with this. In addition to the validation that Salesforce runs on change sets, Gearset will run a problem analysis beforehand that'll point out just the dependencies we might miss, which makes our deployments more successful. The same functionality also exists on the CPQ side as well. I just mentioned this a little while ago.
If I've added in a price rule but I forgot to include my price conditions or price actions, Salesforce is not gonna catch it; it doesn't know. But the problem analysis tool will point out the dependencies for your CPQ data components as well.
As we sort of conclude this, I think one other item I'll add on is that, being a consulting shop, we've also gotten feedback from other consultants that have used this tool.
On a given Friday, deploying and going through these monotonous processes, checklists, and just loading of data, making sure we're populating lookup IDs: going from all of that, one of our consultants was saying, what normally takes us four hours, I was able to get done in twenty minutes.
Right? So to me, that's more value added for clients, but it's also, for me, a better quality of life for consultants. Right? I don't know if any of us would say, I love the drudgery and monotony of data deployments and troubleshooting where I messed up, right? But simplifying that and getting more confidence has been really key for us, and it's something I think we would gladly talk more about.
But, Grant, I'll turn it over to you to go through the slide here.
Great. Yep. So just to recap before we open it up for questions, what we believe are the four most important tips for mastering CPQ deployments are: one, standardizing your documentation, so that your team has what they need in order to organize a deployment; two, embracing testing, so that you and your end users are set up for success; three, crafting a deployment strategy to fit the needs of both your users and the organization as a whole; then finally, investing in a DevOps tool to both decrease the time spent deploying and increase the quality of life for the people doing the deploying.
Alright. Great. Well, that brings us to our question and answer section here.
So many good insights there from Deepak and Grant. Thank you guys for delivering that to the group here. Hopefully everyone got some nice things to consider out of that. At this point, like I said, we are going to move into that Q&A session where we can dive deeper into any topics that we've discussed as part of this.
And field any questions that may have come out of there. For those of you who have to jump at this point, a quick heads-up that we do have a very brief anonymous survey that's gonna pop up in your browser when you leave the webinar, and it will also be sent to your email. We would greatly appreciate you filling that out. Your honest feedback is very appreciated.
With that said, let's get into that Q&A. So feel free to come off mute if you have any questions, or type into that chat. We've also had a few other topics being discussed behind the scenes here in the Q&A, or excuse me, in the chat. So we had Puneet, who talked a little bit about pre- and post-deployment checklists.
I thought, Grant and Deepak, if you guys wanted to, we could maybe start there and talk a little bit about what those have looked like throughout your experience working with different customers. We also had a question about the differences between CPQ advanced approvals and native Salesforce approvals, just a couple things to get us kicked off here. But let's open it up to any questions, and we can dive into any of those challenges too.
So, Deepak and Grant, I think maybe a good place to start might be talking a little bit about what you guys have seen in those pre- and post-deployment checklists, any context you wanted to provide to Puneet, who asked a question about that. If you guys have any examples of specific items on there, different things you've seen, that might be helpful.
Sure. So I think some of the key things that'll always be in a pre- or post-deployment checklist, if you're not using a deployment tool, are any of those manual steps relating to your CPQ configurations that have to happen in order to set up that solution in the next environment. Specifically, some of those items would be: adding picklist values A, B, and C to this field on this object; adding the API name for my new custom field to the target field on price action; things like that.
Those manual steps, updating the managed package items that you can't deploy through a change set, are something you would often see. Depending on the method you're taking to get those configurations from the sandbox to either the next sandbox or production, whether that be manually recreating each item or doing a migration, just having that called out in your script as well will be helpful.
You'd include the list of price rules that have to be created, or the list of bundles that have to be created. Or, if you are electing to do a data migration, you'd link out to a pre-made deployment order script, like the one we discussed, to make sure that you upload all of those records in the correct order. You can also include some non-CPQ things that you want to remember to do, like alerting users that the deployment is finished or activating flows. You can really make it your own, but when it comes to CPQ, the things you'll want to keep in mind are any of those manual steps that can't be part of a deployment package.
I did see someone in the chat mention executing scripts in the CPQ package settings; that's a great one that I would definitely recommend including.
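The deployment order script Grant refers to can be sketched in a few lines of Python. This is a hypothetical illustration, not Cloud Giants' actual script: the `SBQQ__` object API names come from the Salesforce CPQ managed package, but you should verify both the list and the ordering against your own org's dependencies before using anything like it.

```python
# Hypothetical CPQ data deployment order. Parents must land before the
# children that look up to them; verify these API names in your own org.
DEPLOY_ORDER = [
    "Product2",                   # products first: almost everything references them
    "SBQQ__ProductFeature__c",    # bundle features belong to a parent product
    "SBQQ__ProductOption__c",     # options reference products and features
    "SBQQ__ProductRule__c",       # rules before their conditions and actions
    "SBQQ__ErrorCondition__c",
    "SBQQ__ProductAction__c",
    "SBQQ__PriceRule__c",
    "SBQQ__PriceCondition__c",
    "SBQQ__PriceAction__c",
]

def deploy_all(records_by_object, deploy_fn):
    """Upload each object's records in dependency order.

    deploy_fn is a placeholder callback standing in for whatever
    upload mechanism you use (Data Loader, an API client, etc.).
    """
    for obj in DEPLOY_ORDER:
        for record in records_by_object.get(obj, []):
            deploy_fn(obj, record)
```

The point is simply that the ordering lives in one place, so the checklist can say "run the script" instead of re-documenting the sequence every release.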
I think we've all come across a weird error happening in CPQ where everything else you've looked at isn't resolving the issue. So you go to the package settings, you rerun the post-install scripts, and magically it's fixed.
It oftentimes doesn't hurt to run those after a deployment just to get ahead of any weird things that have happened with CPQ. The package as a whole, and the automation specifically, is somewhat of a black box, and you don't have a lot of visibility into what's happening. Rerunning those scripts can fix things we can't see on our end because it's a managed package. So I would definitely include that as well; that was a great call-out in the chat.
Thank you, Grant.
We've had a couple more coming in. We've had some comments about refreshing down into lower orgs, and a couple of questions about best practices for what types of orgs should contain CPQ configuration.
We've been speaking to those a little bit in the Q&A. Let's see, I see one here: are there any scenarios where we have to deploy CPQ configuration data manually? I think that is a great question.
Grant, I'm curious to get your take. Given what I'd call an average org, from our experience, is there anything CPQ related we would have to do manually? I'm hard-pressed to think of one. The one thing I'd like to call out here is a common CPQ use case: using a custom pricing lookup table. There we have a custom object that we create with records, and a price rule looks up to it to populate price. With Gearset's tool, that's also included; it can also be deployed.
So I haven't had a need to go in and do that manually for any reason. But outside of that, I'm not sure. Grant, maybe you can chime in: any other things we have to do manually?
Nothing comes to mind when you have a deployment tool like Gearset.
It really does just about everything you need for a CPQ deployment.
You're able to manage it all through the tool's interface, and that leads to a lot of those benefits Deepak mentioned: you're saving time and reducing the need for documentation, because there are so many fewer manual steps and therefore less human error.
So yeah, I can't think of any situation where you would need to do something manually rather than use a deployment tool, if you have one you can use.
We've also had a couple questions from Rajesh about automating sandbox refreshes.
I'd be curious to hear, Grant and Deepak, if you've seen any tooling that might help with that. On the Gearset side, we've talked a little bit in the chat about our external ID wizard, which certainly helps that refresh process with CPQ. But can you shed any light on tools you've seen or interacted with, or share any tips and tricks for making refreshes easier?
I think there are two ways I'm going to come at this. If you do not have a tool to help with your deployment or with a refresh, the data load order script that I mentioned is going to be very helpful, so that you can make sure you're, one, pulling your data down and, two, uploading it into your sandbox in the correct order. Anyone who's ever done that knows it takes literally hours; it could even take days depending on the other priorities you're juggling.
That leads, I think, to the question: how can we automate that? How can we make it easier? I don't know of any tools that automatically kick off refreshes, but I will say that the CPQ tool within Gearset is super helpful for moving your CPQ data down to a sandbox after you've refreshed.
Not too long ago, a colleague on the Cloud Giants team actually did this. He was spinning up a new sandbox, a developer sandbox, so it had no data, no CPQ data, and he needed to move the CPQ data to that sandbox so the team could start building. He used Gearset to deploy the CPQ configurations from production to that sandbox, and it took a matter of minutes, way faster than if you were to try to upload any of that manually. So, like I said, Deepak and I would highly recommend using Gearset for that. You do have to click a button to kick off the movement of those configurations to the sandbox, but overall, the time it takes is reduced significantly.
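As a rough illustration of the manual alternative Grant describes, a post-refresh export might start by building per-object SOQL queries for the CPQ records you need to pull down from production. Everything here is an assumption for the sketch, not a complete CPQ schema: the field lists are illustrative, and wiring the queries to an actual API client (for example, the simple-salesforce library) is left out so the example stays self-contained.

```python
# Illustrative subset of CPQ objects and fields to export after a refresh.
# Real field lists would be much longer; check your org's schema.
CPQ_EXPORTS = {
    "SBQQ__PriceRule__c": ["Name", "SBQQ__Active__c"],
    "SBQQ__PriceAction__c": ["SBQQ__Rule__c", "SBQQ__TargetObject__c"],
}

def build_export_query(obj, fields):
    """Build a SOQL query string for one object's export."""
    return f"SELECT Id, {', '.join(fields)} FROM {obj}"

# One query per object, ready to hand to an API client.
queries = {obj: build_export_query(obj, fs) for obj, fs in CPQ_EXPORTS.items()}
```

Even a small helper like this beats hand-writing queries each refresh, which is where the "hours or days" Grant mentions tends to come from.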
Thanks, Grant.
Got a couple more coming in here.
One from Honda: is this statement true, that CPQ metadata migrates from lower environments to prod, and the CPQ data needs to be refreshed from prod to sandbox as a post-deployment manual process?
It depends, I think. Deepak and Grant, maybe you want to take that and run with it. I'm sure you've maybe used Gearset; we do allow you to deploy those things upstream as well as downstream.
But what have you run into as it relates to moving CPQ from lower orgs up, and then what does the process look like to get that back down into some of those lower orgs? Any context you have around that? I can talk to that. I think Henry had a similar question, so I'm hoping we can answer both.
We need to think of CPQ data as another deployment from a lower environment.
Right? And the short of it is that we can't deploy CPQ data as metadata, so we need a different tool for that.
Now, once we've gone from a lower environment to production, we need to refresh our sandboxes.
There, if you don't have a CPQ tool, yes, you will have to manually query the data, make sure you're using lookup IDs, and then populate, or import, that into your lower-environment sandbox after the refresh has been completed.
So we'll need to deploy from a lower to a higher environment and then refresh from a higher to a lower one. Having a CPQ deployment tool just automates that and makes our lives easier.
Then to Henry's question.
Yes. If you don't have a full sandbox, I think only the product records that CPQ uses really move over; everything else will be lost. So you will have to manually export and import the data again. But if you have something like Gearset's deployment tool, that will make it easier for you; you can do it with a couple of clicks.
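The lookup-ID step Deepak mentions is the fiddly part of a manual import: record IDs differ between orgs, so every lookup exported from production has to be re-pointed at the matching record in the sandbox. A minimal sketch, assuming both orgs share a stable natural key such as Name or an external ID field (the function and index shapes here are illustrative, not any particular tool's API):

```python
def remap_lookups(records, lookup_field, source_index, target_index):
    """Rewrite lookup Ids using a shared natural key.

    source_index: {source_org_id: natural_key}, built from the export.
    target_index: {natural_key: target_org_id}, built by querying the sandbox.
    """
    remapped = []
    for rec in records:
        rec = dict(rec)  # avoid mutating the caller's data
        old_id = rec.get(lookup_field)
        if old_id is not None:
            key = rec[lookup_field] = target_index[source_index[old_id]]
        remapped.append(rec)
    return remapped
```

This is essentially the bookkeeping a CPQ deployment tool does for you automatically, which is why the manual route is so error-prone.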
Alright. Great.
Anja had another follow-up question asking a little bit about DevOps Center and what its support for CPQ looks like at the moment. Deepak and Grant, I might throw that over to you, if you've seen anything they've come out with. Of course, here on the Gearset side, we have seen that DevOps Center has come out and we're following their roadmap, but I'll start with Grant and Deepak. Anything you've seen on that side?
I have not seen anything. I don't know what has been done with DevOps Center relating to CPQ.
I would imagine that this is a very specialized piece,
this CPQ deployment, because it involves this data element, as opposed to standard deployments that generally involve just metadata.
So I wouldn't be surprised if this is something that hasn't been addressed by DevOps Center, but really, I'm not sure; it isn't something I've looked too much into. If anyone in the chat knows, I'd love to hear it, but I can't say.
Yeah. And what I've seen too, generally speaking, looking at that roadmap for DevOps Center, is that they're doing iterations of adding support for additional things. As far as I know, at the moment they don't have CPQ deployment support.
So I think that's probably the general answer to that, Honda, but of course, if there's any follow-up you might want, you can reach out and let us know. Okay. We have one more from Rajesh: can you list a few features unique to Gearset that you have used in your experience, which aren't present in the usual CPQ deployment tools such as Blue Canvas and Prodly, and that made you and your team feel the advantage of opting for Gearset? Again, I'll throw that over to Deepak and Grant if you have any thoughts. I could certainly come up with a couple of my own as well, but I'll let you take that one first.
Yeah. I think that's a great question.
I'll take a stab at it. I've used Prodly before, and for me it has been challenging; it can be a pain to have that installed in a Salesforce instance. I think having an external website and UI that you can go into, which queries that data, makes things a little bit easier.
In my experience, and it's been a few years now, Prodly also felt a little bit more cumbersome to set up, right? It also tacks on an added layer: you have your regular deployments, but then there's also a separate, albeit maybe easier to do, CPQ deployment. With Gearset, what I enjoyed is that it's one UI, and going back to the point of having CPQ data function like metadata, having it all in one UI is a big benefit here.
Blue Canvas, on the other hand, I have used I know that they made developments recently.
There have been limitations I've run into, particularly when you're deploying product bundles where there are many product options. I've had issues with it running up against, you know, max record limits, and I haven't been able to deploy. So my takeaway was, hey, this may be a good tool, but deploying product bundles is not really great here.
So again, for me, as I'm thinking through processes, it's asking: what can we streamline? How can I take the thought out of it as much as possible and simplify? For both of those reasons, these are both tools we've looked at and I've used, and my preference and lean would be for Gearset. Yeah, and I think from my perspective that covers a lot of it, Deepak.
Of course, Rajesh, if you feel like you want any more context, we can certainly speak more to it, but I think that was a great way of laying it out there, Deepak. And, Rajesh, thank you for the confirmation there.
But with that, we'll go ahead and wrap. I think if anyone has questions, like I said, we're gonna be hanging out.
Last thing I'll mention.
Of course, you can get in touch with our partners from Cloud Giants if you want to talk more with Deepak and Grant. We have their link here that you can see. Feel free to visit their website and learn a little bit more; these guys would love to stay connected with anyone who has an interest. Oh, we do have a couple more that have come in from Honda, so we can hang out here for a second, Honda, and speak to that: what's the billing adoption rate for your customer base?
So, say, what percentage of your customers are now using Salesforce Billing? Again, I'll throw that over to you, Deepak and Grant. What I would say on the Gearset side is that we do see a smaller slice of customers using the billing aspects of that CPQ solution.
To put a percentage on it, I might say something like twenty to thirty percent. Deepak and Grant, though, feel free to weigh in with what you've seen, if there's a different answer from your perspective.
I think there's been interest in Salesforce Billing, but I don't know of any projects we're actively working on at the moment where the customer also has Salesforce Billing. On the other hand, CPQ Plus is the majority; by that I mean CPQ and Advanced Approvals, both of them together.
My take on the Salesforce Billing tool is that it's a little on the younger side, and I think it leaves some things to be desired. It is really nice to be able to automate the invoicing process in Salesforce, but going into the ERP, the handoff can be a little bit complicated there.
But, you know, I've had experience implementing that on a project.
The customers have seemed to like it.
Alright.
We'll go ahead and wrap up then; we've got around a minute remaining. Thanks again so much to those of you who are still on the call. I really enjoyed all the insights and all the questions. Deepak and Grant, any last comments from either one of you?
I think great audience. Thanks for attending. This has been a blast, and we'd love to keep in touch. Alright, everybody.
Have a great rest of your day. Bye, everyone.