Description
Catch up on this webinar with Ali Daw (Senior Software Engineer at Gearset) and Jed Ingram (Account Executive at Gearset) as they cover how to guarantee business resilience through a complete Salesforce DevOps toolkit, and how breaking down the silos between DevOps and backup can benefit your organization.
They discuss:
- What a complete DevOps toolkit looks like
- Backup
- Reliable deployments
- Monitoring
- Compliance
- Security
Learn more:
- How to make a business case for a Salesforce backup solution
- The business case for Salesforce Backup whitepaper
- Velocity versus resilience: what’s more important in Salesforce releases?
- How to pick the right Salesforce DevOps solution for your needs
- Gearset’s Salesforce data backup solution
Transcript
Up next, we have Ali and Jed who are gonna be speaking to us about guaranteeing business resilience with a complete DevOps toolkit.
Hello, Ali.
How's it going, buddy?
Hey. Yeah. Great. Thank you.
Thanks.
Hi, everyone. Hope you've been enjoying the summit so far. So today, I'm gonna be talking about guaranteeing business resilience with a complete DevOps toolkit. But before we get going, I should probably introduce myself.
So my name is Ali, and I'm a software engineer by background. I've been working on DevOps tools both inside and outside of Salesforce for about nine years now. And more recently, I've been leading the team at Gearset working on our backup tool. So that means I spend my day talking to users, trying to understand the challenges they face, and helping the team prioritize what they work on.
And then occasionally, I do get to write some code. And I am joined today by my colleague, Jed.
Hey. I'm Jed, one of the account executives here at Gearset. I've been working in the Salesforce DevOps ecosystem for around two years now, speaking to a wide array of customers trying to improve their DevOps setup. My day to day very much consists of speaking to users, understanding their requirements and use of Salesforce, and building and running technical demos right through to proofs of concept.
Back over to you, Ali.
Thanks.
So at the start, I mentioned a complete DevOps toolkit. So what does complete DevOps look like? Well, it might be easier to explain it in terms of what incomplete DevOps looks like first.
So DevOps means following or at least working towards a bunch of established best practices. That includes things like unit testing, version control, a branching strategy, and then ultimately building some kind of continuous integration or even continuous deployment system.
Now most teams have a maturing DevOps process, and it doesn't really matter where you are on that journey. What matters more is the ultimate destination that you want to get to. So when people are figuring out what they want their DevOps workflow to look like, it's quite common for them to consider backup separately to that, or in quite a lot of cases to not even consider backup at all.
So when we talk about complete DevOps, we're really talking about a backup solution that's closely integrated with your other DevOps processes.
So I'm gonna spend the next few minutes going into a bit more detail about why this is the case and the benefits of thinking about backup and DevOps together. And then Jed will be back to show us a bit of what that looks like in Gearset.
So I'd like to explain the challenges that are involved in adopting a complete DevOps process and why that hasn't happened for a lot of people. Within the Salesforce ecosystem at least, backup adoption is lagging behind DevOps adoption. And I think one reason for this is that companies think there's no risk of data loss when working on the Salesforce platform.
That would be nice if it were the case, but unfortunately it's just not true. I'll go into a bit more detail about that later.
Another reason is that most companies have not been affected by a serious data loss incident. You might read about them or hear about them, but just because you've never had a serious incident doesn't mean you can't have one in the future.
Another challenge is that backup and DevOps get considered independently of one another. The tools that are available often focus on one or the other, so it's common for companies to use different tools to do these jobs. And that also means it's common for the responsibility for these two areas to lie with different teams or even different departments.
The issue with that is that there are more layers of communication, and that limits knowledge sharing and ultimately makes you slower to react to an incident.
So it might seem quite basic to justify the value of a backup solution, but I think it provides some useful context, so I'll go into a bit more detail. The most compelling reason for having a backup solution is to protect your business in the event of a data loss incident and ultimately provide business continuity.
But what do we really mean by data loss here? It can mean a bunch of different things. It can mean a hardware or service failure. Now, a platform like Salesforce manages that risk for you, and whilst they're extremely good at mitigating incidents and reacting to them if they do occur, those incidents have happened before and they can happen again. It's a very low risk, but it's still there.
The other types of data loss can include data corruption, and that might come about because of user error. In case you don't recognize it, this is a scene from the movie Office Space, which is a favorite of mine. So that user error might be someone wanting to delete a record intentionally but deleting the wrong one, or not understanding that the change they're making might have unforeseen consequences.
Another way that can come about is through intentional or malicious action. In my conversations with people looking into backup solutions, I've heard all sorts of horror stories about someone leaving the company, maybe not on good terms, and on their last day intentionally changing or deleting things in the live environment to cause harm. That's an unfortunate thing that can happen.
Another aspect to this is, actually, I should probably explain this first. I've got a small child, and I watch a lot of Pixar movies. This is a scene from WALL-E, and it's the bit where some robots that were originally designed to do something useful have gone wrong and caused more damage. That is a very tenuous analogy for data integrations that have gone wrong. Most production orgs will have all sorts of integrations that happen automatically. They might be triggered by certain events, or they might be a nightly scheduled task that goes and makes some changes and updates fields.
And that's great if it works as it's intended to, but it's quite easy to introduce bugs in this. And especially if a human's not involved, it might be a long time before you realize that something isn't working as it should do. And at that point, you've caused a lot of disruption to the org.
So many companies don't have a backup solution in place. And if you don't, then you are at risk of these events occurring and not being well placed to deal with them. So what's the impact of data loss?
Well, it's hard to make general comments here because the impact is highly specific to your organization and the nature of the incident when it happens. But it's worth taking a moment to consider what that impact could look like for you and your business. Imagine your users come in and they can't find the data that they need, or maybe they're working with the wrong data without even realizing it.
Temporary disruption to normal operations is bad enough. That's where a data loss incident causes problems, but then you recover from it. But there's also the risk of permanently losing data. And if that data's business critical, then that's about as bad as it can get.
But, ultimately, any form of disruption is gonna cost the business money. And the longer it takes to recover from that, the greater the cost. And those costs can be thought of quite tangibly. You know?
How much does my company make per hour? How many hours will be disrupted? Multiply those together and you have a scary number. But there are also less tangible costs in play as well.
If you have an incident, your company's reputation could be damaged or you could lose client confidence, and that's a lot harder to quantify.
So we really want to measure backup solutions by their restore capabilities.
So what do I mean by that? Backup solutions need to offer a bunch of powerful tools to be effective. That might include monitoring, so you can identify an incident occurring. When you're trying to understand the issue and rectify it, you'll need search and analysis tools to identify exactly what's changed, and then to find and retrieve the data that you need to fix the issue. And when you're actually moving that data around, it can have complex relationships with other objects and fields, so you need to be mindful of that when you're restoring things. You need to be moving a logical unit that still makes sense.
So the goal really is to be able to recover from an incident as quickly as possible. That minimizes the cost and the disruption, and that involves the time I'm describing on the right hand side of that clock diagram. So that's the incident occurring at the top, and then it's the time spent restoring normal function in your organization.
So that's not only the time spent identifying the issue, it's the time spent testing and moving that release through your pipeline, and that's the bread and butter of DevOps. So you're starting to see how backup and DevOps are very, very closely linked.
But on the left hand side of that clock diagram, we're also thinking about the time between your most recent backup and the incident occurring.
Now this is data you can never recover. Say you're using the native data export tool inside of Salesforce that offers weekly exports, which is great, and it's better than nothing. But if you consider how many changes your org sees in a week, that's the potential data that you'd be losing between that backup and an incident occurring.
So a robust backup solution is gonna store the data in a separate environment to the live data. That's a core tenet of backup solutions in general. Storing your backed up data inside Salesforce gives you a single point of failure: if there is a service issue, then you won't be able to get at your data at all. So tools like Gearset will store your data on AWS servers using enterprise grade encryption, and that'll be accessible even if your org isn't available for some reason. I should point out that the original diagram here was meant to show all of your eggs being in one basket. One egg has escaped from the basket, but don't let that distract you.
So you also need to think about how you're gonna handle your data. This is one of the dangers of implementing your own backup solution. Where are you gonna store this data? How are you gonna store it safely?
How are you gonna control who has access to it? How are you gonna track where else it's been used? So if you're downloading from Salesforce, that weekly export and saving it, you know, on your local machine, that's quite a big risk. It's only a matter of time before someone leaves a laptop on a train and you have a serious data breach on your hands.
So it's a lot to consider with a backup solution.
So now I'm gonna take you through five different reasons why bringing backup within your DevOps tooling and processes is gonna make the whole system more effective. And the goal here is to give you a truly resilient process for building on the Salesforce platform so your business can get the most out of it.
So first off, we've got data and metadata.
Now DevOps and backups both relate to managing and protecting what's in your org, and data and metadata are intertwined.
They're gonna interact in complicated ways. So, for example, bugs in your metadata config can cause corruption of data. If you delete some metadata, then that's also gonna delete the data that's held in that metadata object, for example. And without the metadata being correctly configured, you can't restore data, because it wouldn't be the right shape for the place that it's going. It's also worth bearing in mind that metadata controls the permissions used to access your org's data.
So integrating backup and DevOps makes it a lot simpler to deal with data and metadata holistically. For example, you can do things like back up your data immediately before doing a big release, just to minimize that time.
So the next point I wanted to make was around reliable deployments. Moving metadata or data from one place to another is a deployment. In a common DevOps workflow that could be deploying from your dev org into Git, or from a sandbox into production, and restoring from a backup is no different. So a good backup tool needs a powerful deployment engine that really understands Salesforce data and metadata.
Another nice benefit here is that DevOps deployments are routine operations. They're done day in, day out.
Data incidents should ideally be a lot less common.
And recovering from an incident is stressful enough. You're under pressure. The business is impacted. People are upset. The phone is ringing.
If you're the one doing this, then you're the person that has to fix it. And, ultimately, you'll be making some changes to production, which is never to be taken lightly.
You don't wanna be adding on top of that stress, the stress of remembering how to use an unfamiliar tool or an unfamiliar workflow to do that.
So a single tool for DevOps and backups means you and your team will be familiar with it when it matters the most.
I mentioned monitoring earlier, and I think monitoring is a really powerful tool. It gives you visibility into the live state of your org.
So in the normal course of things, if nothing's going wrong, you can establish what normal looks like for your org, which is really useful in itself. Some objects might see thousands of changes to records each and every day, and that's fine. Other objects might see a couple of deletions every other week. It really means you can start to spot and home in on unusual activity. If one of those objects I mentioned before suddenly has a thousand deletions in a day, it's a sign that something strange has happened and it warrants some attention.
So the really nice benefit to this is you can start being proactive rather than reactive to an incident. You can start investigating strange behavior quickly. And the more quickly you start investigating, the smaller the scope of the damage. So if you imagine those data integrations that run nightly, if one was left unchecked for a month before you start to investigate it, it's gonna do a lot more damage to your org than if you'd investigated it the next morning.
Similarly, the fewer changes that have been made, the easier it is to narrow down the root cause of an issue. So if you've got the net changes in a day or two versus the net changes over a week, it's a much smaller playing field to start picking apart and figuring out what's actually gone wrong.
Compare that to not having any sort of monitoring in place, where chances are, if there's been a data incident, the first time you're gonna hear about it is sometime later, when someone finally gets around to looking up an account that they know should be there and phones you up saying it's not working. That's a lot less helpful than being able to diagnose an issue based on recent changes and recent deployments to your underlying data model, rather than just some observed behavior.
Now complying with data protection legislation is never gonna be easy, but having a single hub for recording and managing changes makes fulfilling audits a lot simpler. Backup tools give you a history of who changed what and when in your production org, and your DevOps tools will give you a history of why and how these changes were made. So tying them all together is just gonna make a lot more sense.
If you're using production data to seed sandboxes, then you can start tracking the history of other places your production data has been exported to. And you can also use automated data masking tools to redact sensitive data. So you're really keeping the sensitive stuff in production where it belongs and nowhere else.
There are trickier things out there that vary quite a lot by industry and the regulations that you need to comply with. So, for example, one is the GDPR right to be forgotten.
That means that one of your organization's users may insist that you purge all information related to them from your live system, but you may also need to purge them from all of your backups as well.
Similarly, various other regulations involve retention policies.
These policies might specify either a minimum or a maximum time a record can be held by you. Now achieving compliance with these things can be tricky, especially when backup and DevOps overlap. So if you're using a tool that can help you with both, you're in a much better position.
And the final point I wanted to make here was around security.
Now, a separate backup tool from your DevOps tool doubles your exposure to risk. You've got twice as many tools to learn, twice as many users to manage, and you've got to think about their permissions and their hierarchies. Twice as many licenses to manage, twice as many proofs of concept to go through, twice as many legal reviews. So it's much better to have a single trusted vendor that you can use for both DevOps and backups.
So, hopefully, that's given you an overview of thinking about DevOps in a more complete manner. And now I'm gonna hand over to Jed. He's gonna show you what that looks like in practice within Gearset.
Thanks, Ali.
Great overview there.
I'm now gonna show you, as Ali says, how to put this into practice using Gearset.
So to set the scene a little bit here, I'm a Salesforce product owner, and I lead a small team of Salesforce admins and developers. We've been using Gearset for our metadata deployments for around three years now, and we've recently decided to adopt Gearset data backup to ensure we've got a solution in place in case of data loss.
As you can see, I'm currently logged into my Gearset account, and the first thing you'll see is, the tool that we use for our metadata comparisons and deployments.
On the left, you can see all of the additional features, and you've got data backup about three quarters of the way down.
As product owner, I took responsibility for creating the two jobs that you can see on the screen, one on production and one on UAT. And I also assigned the relevant user permissions.
The job that you can see on the left is running against my production org, capturing both the metadata and the data daily.
In the job configuration, I've actually set up some really slick smart alerts, and these notify me when strange behavior happens in this org.
So you can see I've got three set up at the bottom, and they capture when, changes or deletions happen to the most important objects.
This gives me a really good overview of my org health, and I like to keep track of that daily.
Funnily enough, one of my junior developers has just sent me a Slack message saying that they've been working directly in production trying to fix a bug. They've accidentally deleted a custom field, and worse still, they can't remember which field they deleted.
In turn, this would have removed the data values on the associated records.
Not only does the loss of business critical data have a direct impact on our operations, but it can also damage our reputation and client confidence. So I really need to get this sorted ASAP.
Alongside the Slack message, as you can see here, one of the smart alerts was triggered inside Gearset, and it notified me of the change via email. I get a quick overview of the backup summary, things like changed records and deleted records. And I can also see the smart alert at the bottom saying three records have been changed in Gearset DevOps Summit.
I'm just gonna jump over to my Gearset data backup job again and click into the details.
In a second at the top, you'll see a graph that shows the changed and deleted records over the last couple of days.
So, again, just really useful for keeping track of my org health.
Underneath, I can see all of the corresponding runs, including things like run times, completion status, and a high level overview of the records that have changed.
Typically, in situations like this, I don't restore the metadata or the data directly back into production. I typically follow our incident response plan, which ensures I understand the cause and scope of the data loss before running a restore. I'd then proceed to running a restore to a local sandbox environment first, using this option here.
As I'm confident I understand the cause of the data loss, I'm going to restore the custom field back into my prod environment. So I'm just gonna kick off this metadata comparison.
What we're doing here is we're comparing the metadata between the snapshot in time, so the backup, and the current state of my org. The way we do this is we log in. We download that metadata. We then cache and compare the metadata.
One of the really nice things that Gearset does in the background here is we run a semantic diff, which essentially breaks down all the metadata into bite sized pieces, which allows us to really understand the true meaning of the metadata.
If you look at a custom object as an example, we'll break that down into all the constituent custom fields, and then we can see things like dependencies, what it's used by, and even things like profiles and permissions.
So as a quick overview, on the top you've got five tabs: changed, new, deleted, all, and selected items.
I'm just gonna jump over to the new items, and I can see the custom field here on Gearset DevOps Summit.
As my developer said, if I'm unaware of what that change or deletion was, I can do things like view the changed by and changed on date to narrow it down.
I'm gonna select the custom field here, and you can see that one component's now been added to the selected items tab on the far right.
At the bottom, because I've got this custom field selected, I can see how the pick list looks in my production snapshot on the left and in my current live state of production on the right.
As I'm the product owner, I'm keeping this in line with the pick list view, but if I were a developer, I might wanna switch between a couple of different views that I have access to at the bottom.
If I expand the custom field, I can see things like dependencies, what it's used by, profiles, and translations.
To ensure this deployment is successful, I'm gonna make sure that I include the related layouts, I'm gonna include the translations, and I'm also gonna include all the FLS (field-level security) that's been changed or deleted.
You can now see my selected items tab has got twenty two components, and they're the components that I want to restore.
I'm gonna hit next, and Gearset is gonna check this deployment for me. And I think as my colleague, Mark, mentioned earlier, Gearset has got a deployment success rate of ninety three percent, and that's purely down to this problem analysis that we do behind the scenes. You can see I've got zero suggested fixes, zero warnings, and zero code quality errors or warnings. So I'm just gonna skip over this.
On screen on the left, I get a quick summary of all the components that I'm planning on deploying. I'm gonna add a friendly name and some notes as well. And this is really, really useful for keeping a full audit history.
If I use any ALM tools, I've got integrations with Jira, Azure, and Asana at the bottom. But as my team is not quite ready for them, we're not gonna use that today.
I've got the option to do a pre validation at the bottom, which will ensure that validation runs successfully before deploying to production.
And I've also got the option to schedule things like deployments. So, obviously, a common time to deploy for us is later on in the evening or when there's not much activity going on inside of Salesforce.
Fortunately, there's not much activity going on inside my org at the moment, so I'm just gonna hit deploy now.
In a second, we'll see all of those components individually deploying. If there are any errors or warnings, they'd be reported by Salesforce on the screen. But, hopefully, we'll get a green tick in a second, and that metadata will be restored directly back through to my production org.
Perfect. I get a quick summary on screen here, and I can also download things like the full report.
Full reports aren't too important in my role, but my CIO, Sarah, is really interested in all of these sorts of things. So every time a deployment is made within Gearset, she'll be pinged via email, and she can check out these reports.
She can see things like who the deployment was made by, the source, and the target location.
You've also got deployment times, deployment notes, and deployment details at the bottom of the second page.
This PDF report is stored in your deployment history section within Gearset. So, again, it's just a really good way of keeping an audit history and going back and looking through all of the deployments and restores that you and your colleagues have made.
Perfect. Just before going back and restoring that data, I'm just gonna jump across to Salesforce here. You can see the Gearset DevOps Summit object at the top, and I can see all the custom fields underneath. There's no pick list at the moment. I'm hoping if I refresh this page here, that pick list will now be populated because of the restore. So at the bottom, I can see the test speed pick list is now back in my Salesforce production org.
The next thing I want to do here is restore the data. So, again, back over to my data backup. I'm gonna jump into the details.
And rather than clicking on the compare and restore metadata option like we did the first time around, I'm gonna view the details.
And that's gonna give me an object level view of all the objects that have changed, with the associated records on the far right.
So I can see the three changed records there. I'm just gonna explore them a little bit further to ensure they're the right ones.
Perfect. So I can see the changed records tab is highlighted at the top, and I can see there are three records here.
I can select any of those records. And on the left hand side, you can see how the records and the fields looked in the previous run, compared to this run on the right.
I can see the value in the pick list on the left hand side, but it doesn't exist anymore in this run.
I'm gonna select the three records, and that'll give me two options at the bottom. I can either do a quick restore or I can configure a restore.
The quick restore is really, really simple to use, and it allows me to restore those three records directly into production.
The configure restore is a much more powerful tool, and that'll allow me to take those three records plus any related data and push that back through to production.
I just need to do a quick restore right now. So, yes, it'll prep a restoration plan for me automatically.
And if I'm happy, I can go through and restore.
What that'll do is restore those three records back through to production once the green ticks appear on the right. Sorry, on the left.
And that's then done. So now those records will be back in production. It might take a little while for you to see them in Salesforce, just as Salesforce is catching up, but they will go back into production now.
Perfect. So in just under ten minutes, I've added an alert. Sorry, I've been alerted to a custom field deletion, reviewed my incident response plan, validated and restored the metadata deletion, and restored the record changes, all while working in a user friendly tool without any complex tooling, with a full history and reporting available across the whole team.
If you'd like to learn more about Gearset data backup, please visit our website to start a free trial or arrange a tailored demo. Thanks all. Enjoy the rest of the summit. Back over to Ben for Q&A.
Thanks a lot, guys. That was fantastic. I know the pain of trying to restore data and metadata using native tools, and it is not pretty. So that was great to see. So I've got a few questions for you. First off, where is the data backup actually stored?
I can take that one if you want.
That's a really, really good question. It's definitely the kind of question that you want to be asking when you're thinking about a backup solution. For anyone who's not aware, data sovereignty is a really, really important factor for lots of industries. Data sovereignty means you might be storing your data in the cloud, but where is that cloud physically based, or where is the data center that backs that cloud physically based? So to answer your question, within Gearset, we use AWS data centers. They're the same data centers that back a lot of Salesforce's own infrastructure.
And currently, we can offer storing your data inside Europe. That's eu-west-1, if you like that kind of thing, which is a data center based in Ireland. And also a data center in the USA, us-west-2, I think. Again, if you like that sort of thing. So they're the two that we offer at the minute.
It's not trivial to add more, but if you do need to be storing your data somewhere else, please let us know. It really helps us prioritize doing that if we know what the demand is for different regions.
Great. Thanks, Ali. Next question. Should I build, plan, and test a recovery plan of action? If so, how do I go about this?
Yeah. I guess you should absolutely plan ahead. It's horrible having to deal with an incident. It's a very stressful time.
You've gotta understand exactly what's happened before you can go about fixing it. So it helps to have a plan in place, or what some people call a run book. Obviously you can't plan for every single eventuality, but it gives you a lot of steps that you should follow and reminds you of the processes. That's things like nominating an incident commander, communicating to your users that there's an issue, and taking a backup again before you make any changes to your production environment, so you've got a record. If something goes wrong with your restoration, then you've still got a known point to revert to rather than making things worse.
All of these things are common between any kind of incident, so it's definitely worth considering that upfront. Even people's phone numbers, if the person who knows how this thing works needs to be on the call, that kind of thing is really important to think about upfront, because in the midst of it, it's harder to think rationally. So, yeah, I think there's some good advice in our resources. If you go to the Gearset website, there are some resources there.
There's a backup white paper, and that'll have some information about the kinds of things you should be considering.
Cool. Thanks, Ali. Next question. How often do I need to back up my org?
It'll vary... oh, sure. Jed.
Cheers, Ali. It really depends.
Every twenty four hours works for most companies, although some people do need hourly backups, so we do have that available if that's the case. With Gearset, that's high frequency backups. It also can't hurt to back up immediately. So one of the things we do offer is the back up now button, and that's very useful if you're about to do some kind of major release to production. You can take a snapshot on demand, and that'll capture both your metadata and your data. So if anything goes wrong with that deployment to production, you've got a point in time which you can recover to.
Great. Thank you. We've got three minutes and a few more questions. Do I need to back up all the objects in my org?
I can take this one. So instinctively, it seems like a good idea to back up everything in your org, but it isn't necessarily. Doing this can actually have a negative impact. There are objects of secondary importance that tend to change frequently and provide little value when backed up.
For example, objects like AuthSession can often see a lot of churn, and they don't store business critical data, so backing them up might not be worthwhile. So it's definitely worth having internal conversations, understanding what objects are most important to your business, and then backing those ones up to ensure you've got everything covered.
Great. Thank you. And can the target AWS storage be under the client's AWS instance, or just Gearset's?
Really, it's just Gearset's. Behind the scenes, it's sort of a proprietary format that we use to store that data, and there are a lot of constraints we have in order to make the features available. Storing quite a lot of data over time and then being able to access it is a tricky beast. And it's worth understanding, for us, why that's something that you want: whether it's something you'd feel more comfortable managing yourself, or whether it's around security. There's definitely a conversation to be had there.
So for example, within Gearset, without getting too technical, we use encryption keys, and we make those completely self-service for our users to delete if they want to. So if your main concern around security there is that you want to know for a fact that, if you were to move on or you needed to remove all the data that Gearset has held, you can do so, then you can actually delete that encryption key and render any data that we've stored on your behalf completely useless. So it kinda depends on the reason there, but currently, it's something that Gearset manages for you.
Great. And last quick one. How many records can Gearset roll back in one job?
That's a tricky question.
In the order of thousands comfortably, easily tens of thousands. Again, it kinda depends, because, as Jed was alluding to, if you start restoring the records in one object, but they relate to another object that has records, which then relate to another object, Gearset's really good at chasing down all those dependencies, but it obviously increases the number of records. So it kinda depends on the number, whether that's, what's the word I'm looking for, the parent object's number of records or the number of records it references. But tens to hundreds of thousands are a thing we've done before.
Great. Thanks, guys. There were a few more questions, but that's all we got time for.
Cool. Thanks a lot, Ben.
Thank you.