Description
Understanding the importance of data backups is crucial for any Salesforce organization. In this webinar, Charlotte Humberston and Laurence Boyce discuss why Salesforce backups are everyone’s responsibility and how to effectively manage them.
- Explore common misconceptions about Salesforce backups and data protection
- Learn about the shared responsibility model and its implications for your organization
- Discover actionable steps every team member can take to ensure effective backup strategies
- Understand the costs associated with data loss and the importance of a robust disaster recovery plan
Learn more:
- Salesforce backup: The complete strategy for data protection and recovery
- Gearset’s Salesforce backup solution
- Salesforce backup and recovery best practices for a reliable backup process
- Who’s responsible for your Salesforce backups?
- Why you need to backup your Salesforce data and metadata
- Backups for Salesforce ebook
- Success story: Granicus
Relevant videos:
- Innovate without fear – unlock the hidden potential of backups
- Strategies for safeguarding your Salesforce Org
- How to choose a backup solution for Salesforce
- The ultimate guide to Salesforce backups
- How to perfect your Salesforce data backup and recovery strategy
- DevOps Launchpad Live: How to build and scale your team for DevOps excellence
- DevOps Launchpad Live: How to improve your Salesforce deployment success rate
Transcript
Hi, everyone.
Welcome to today's DevOps Launchpad Live webinar. My name's Charlotte. I am the editor in chief of DevOps Launchpad, which is our free training and development platform for all things Salesforce DevOps. It's powered by Gearset, but the content on there covers all kinds of DevOps topics, including backup, which is what we're speaking about today.
And today, you are in the capable hands of my colleague, Laurence Boyce, who is one of our backup technical experts here at Gearset, and he'll be taking you through today's webinar. If you've got any questions as we go through, please pop them in the Q and A, and we'll either get to them as we go through the webinar or there'll be time for questions at the end. So, without further ado, over to you, Laurence.
Super. Thanks, Charlotte. So yeah. Hi, everyone. I'm Laurence, a sales engineer here at Gearset.
My role is really leading the technical side when working with customers: evaluating their current processes and challenges, and advising on the appropriate solutions for their specific use cases to achieve the right technical outcomes.
I've had a number of years working in the SaaS backup space, so I'm familiar with many of the challenges that Salesforce teams have with regard to data and metadata backup and the potential solutions to address these.
As Charlotte said, as we go through, please ask questions in the Q and A, as the chat isn't enabled, and we'll answer those at the end of course.
So to start off, I thought we'd explore some of the common misconceptions around backup, and why the need for backup is so often underestimated by teams working within Salesforce.
The first of these is that many people think Salesforce is simply too big to fail.
It's true that as a cloud solution, Salesforce, just like other SaaS providers, does have a responsibility to maintain the uptime and availability of the platform.
And generally Salesforce do a fantastic job of this, ensuring that your data is available to you when you need it.
Outages are very rare, but have happened.
Permageddon in twenty nineteen is perhaps the most memorable example.
This was where any team that had integrated Pardot into their orgs found that their permissions model was corrupted.
Salesforce, rightly in that instance, decided to protect their users' data as the first priority.
So it actually deleted all affected permissions.
But unfortunately, this meant that admins had to rebuild their orgs' profiles and permissions manually, which caused all sorts of problems for those affected teams.
And that brings me on nicely to perhaps the second common misconception that Salesforce itself holds a backup of your data and metadata.
Unfortunately, this just isn't the case. While a lot of data and metadata is held in Salesforce, Salesforce itself only acts as a data processor.
This is because Salesforce subscribes to what's called the shared responsibility model.
This means that while Salesforce are responsible for the security, availability and uptime of the platform, you are responsible for ensuring that your data stored within Salesforce is protected.
This then means that you and you alone are responsible for backing up your data and ensuring that you comply with corporate and industry regulations.
And depending on which industry your business is in, these could be quite extensive.
Healthcare and financial services are two very obvious examples but around the world more and more regulations are coming into play, which could impact the data that your business holds in Salesforce.
Outages and catastrophic events do happen, but it often goes unnoticed that, rather than platform-level outages, the biggest threat to the security and availability of data in your org in fact comes from inside your business itself.
In Gearset's most recent State of Salesforce DevOps survey, a huge sixty-seven percent of Salesforce teams reported that they experienced data loss just last year.
Data or metadata losses can happen due to human errors, corrupted data, or integration issues.
In fact, a very recent scenario occurred where a customer's integration had inadvertently corrupted thousands of contacts.
Thankfully, they had a backup in place, so they could roll back those contacts to a previous state, resolve the issue with the integration and redeploy it. But in any situation like this, it's worth noting that employees and the business have a responsibility to limit the impact to their company.
And the cost of Salesforce data loss is increasing.
This increases at about eleven percent year on year according to research.
This is because we're all storing more and more data in Salesforce itself.
And with that data, we're becoming increasingly more reliant on it.
At a high level, the breakdown of costs associated with data loss will look something like this pie chart on the right hand side.
Thirty five percent of the costs are direct to the business.
These are immediate impacts of the lost Salesforce data.
Fifty two percent of these are indirect.
Stalls to your development or end users when you experience that data loss or downtime.
And opportunity costs are an additional twelve percent. It could be lost leads you would have generated or sales you would have closed during that period.
Along with the impact of reputational damage on your future business.
And don't forget about metadata.
Many businesses also underestimate the impact of metadata loss, which can be just as much or even more of a problem.
Salesforce data and metadata are intrinsically linked, meaning that metadata corruption or deletion can have severe knock-on implications for your data.
Losing metadata then also wastes developer time, as they'll have to go back and rebuild these fundamental blocks that shape your Salesforce org.
And while your development team are focusing on this, all your other work will be delayed.
Potentially causing missed project deadlines or delays in delivering value to your Salesforce end users or even customers.
So with all this in mind, you want to be confident that you can restore data and metadata quickly and accurately after a data loss incident, rather than relying on anyone else on the platform to do it for you.
So if the responsibility for backups doesn't fall to Salesforce, it's a fair question then to ask, who should take charge?
Many people think that, based on organizational structures, the responsibility for backup falls to one individual or a particular team.
This is a problem for a couple of reasons.
Firstly, this creates a bottleneck, which could cause real issues in a critical scenario.
Time is of the essence to ensure minimal business downtime and to ensure business continuity.
So you wouldn't want just one person or one group of people to be aware of and be able to action any disaster recovery plan.
Secondly, it's best practice for anyone working within Salesforce to take an active interest in ensuring the security of their orgs. So everyone should understand the potential impact of the work that they and others are doing, and what they should do if things don't go exactly to plan.
I'm gonna run through a few examples of people within an organization and their respective responsibilities when it comes specifically to backup.
Business leaders are responsible for the reputation of a business and pioneering the direction the organization goes.
The trust of customers and shareholders, and associated revenue or investment, can easily be shattered with a single incident.
So these leaders need to be confident that all steps have been taken to mitigate against such risks.
It's also then a leader's responsibility to ensure that their teams have the tools they need to be compliant with all regulations, and be able to restore any losses quickly with minimal impact.
IT teams should also have responsibility, and this often comes because they ensure the company stays compliant with regulations.
And the way this ties to backup is that backup can be configured in many ways, and it's on IT to ensure this is secure.
IT teams should also be in charge of undertaking regular maintenance to ensure that a solution remains fit for purpose, particularly as new regulatory or compliance guidelines are introduced.
And Salesforce teams, of course: they're often the team best placed to manage the day-to-day Salesforce backup and restoration processes.
That's because Salesforce teams often understand the complexity of Salesforce data and metadata best, and are also responsible for delivering value to end users of the platform.
In addition, they'll most likely be first to notice or be notified of an issue, and be aware of the potential causes of it.
More importantly, they generally will be called upon to restore any lost data.
Imagine delivering the news to your shareholders or stakeholders that there is no backup of the customer or company data that has been lost, nor a way to easily restore that lost data.
It goes without saying then that this is not an exhaustive list, and all parties involved should collaborate closely and regularly to ensure the backup processes you follow are effective, but above all reliable.
So now onto the most important part, perhaps, of today.
What can you do to play your part in ensuring effective Salesforce backup for your organization, no matter your role?
I've got a few tips for things that you can do.
Firstly, raise awareness that this is a problem.
Many people we speak to just aren't aware that Salesforce doesn't back up your data or metadata.
So proactively making others in your business aware of the risks and taking action before an incident occurs will ultimately result in better business outcomes.
Following this, and maybe alongside it, advocate to make this a priority.
This is because many organizations are perhaps reluctant to invest in backup. But when you consider the potential cost of a data loss, breach or downtime, it's time, effort and budget well spent to get a reliable solution in place.
You might get pushback from stakeholders who argue, "We've never had a data loss or corruption."
But ask yourself, or better still ask them: how would you or they know that that's occurred?
Additionally, treating backup as part of your overall DevOps process, rather than as an afterthought, is a great option.
DevOps is the accepted model of development on Salesforce, and extending that out to backup as well will improve your team's overall performance.
DevOps is about continuous improvement, so integrating backup as part of your overall DevOps process ensures that nothing gets out of sync, and it's considered as part of your overall development strategy.
Additionally, backup and restoration capabilities encourage a more agile approach.
The knowledge that production and other environments are always safely stored in a backup and can be restored quickly promotes a frequent release cycle, allowing your business to begin reaping the benefits of agile development, and ensures a shorter time to recovery in the event of an incident.
It also allows for more secure collaboration. Using a tool integrated with your overall process enables restore permissions to be delegated to anyone in the team, or just those who require them, speeding up the process and minimizing impact to your business.
And this is all very well and good, but even then once you have a process, make sure you have a plan and practice it.
Once you have that disaster recovery plan, ensure that everyone is aware of the role they have to play in it.
Companies that have a plan are significantly more likely to bounce back quickly and ultimately minimize the impact on their own users or customers, especially when they have also practiced it.
As we mentioned previously, you're more likely to experience some kind of data loss than not, so failing to prepare for this scenario is really not an option for any organization, especially those using Salesforce.
Once you've worked out a solution, then actually have a practice run.
Exercise that disaster recovery process to ensure that you're familiar with the method and the required actions if an incident were to occur.
The last thing you want in a highly stressful situation is to be left using a tool where you're unfamiliar with the process to restore the mission-critical parts of your software stack.
So exercising your business continuity process is not only best practice, but in many regulations it's actually made mandatory that this occurs periodically, so we'd advise everyone to follow that too.
We actually had a great webinar a few weeks back on disaster recovery planning, which we'll share a link to when we send out the recording for this session. So keep an eye out for that one too.
So in summary then, we've covered why the need for backup is underestimated, who should be responsible, and the steps that you should be taking to secure your Salesforce orgs.
So to summarize, the risks are significantly bigger than most people think, and not having a backup solution in place is often just not worth the risk.
Backups are your responsibility, no matter what role you have in a business, so ensuring that this is in place is critical for everyone.
Backup is also a fundamental part of a mature DevOps process and should be integrated into it rather than kept separate.
And also, when you have a process and you have a plan, make sure that it's practiced, so that you know the steps required before you have to do it in a real-life situation.
If you'd like to know more about backup, I'd strongly recommend that you download the free Backups for Salesforce ebook, and there's a QR code here on screen. I'm gonna leave this up for a minute.
The ebook contains lots of really useful information about how to set up an effective backup solution which can seamlessly integrate into your Salesforce development process.
You can, of course, also get in touch with anyone here at Gearset, and we'll be happy to advise you on a suitable solution, and that solution will be based on exactly your specific requirements.
And don't forget that we offer free training on DevOps launchpad.
So you'll find the Backups for Salesforce course particularly helpful on there.
I'll leave those QR codes there for another ten seconds or so.
Fantastic.
Okay.
So I appreciate everyone joining today. We've actually got a bit of time for questions, so please use the Q and A function to ask these. I also understand that the chat is now back up and working, so feel free to use the chat there as well.
So Charlotte, I think you've been monitoring the chat, so are you able to see if there are any questions?
Yep. No problem at all, Laurence. Thank you for that. And really interesting content, and there's a good tip there that I think people can take away and start acting on now. We've got a question from Rob, which is: can the backups be restored to scratch orgs for validation?
Fantastic question, Rob. Thanks for asking that one. So, I think a key part of restoration is of course seeing what that data restore is actually going to look like. So with backup solutions, you know, that would really be best practice, to be able to do that. From Gearset's perspective, absolutely.
We enable you to restore the data, let's say, in a simulation first, to maybe a different environment, to see what the restoration would look like. Is it going to have any knock-on implications, that restoration?
Before you then go and restore back to the original place that it came from. So backup restoration to other orgs, absolutely. And we would be very happy to show you that in a bit more detail if that could be of interest for you, Rob.
Perfect. Thanks a lot, Laurence.
We had a couple of other questions.
Firstly, is there a way with the backup solution that you can be notified if there's been any kind of data or metadata loss?
Understood. So this probably comes back to the point of asking yourself, or someone else: how would you know?
So, again, it's really best practice for any backup and recovery solution to be able to proactively notify you. And the way that's done with Gearset is by setting what are called smart alerts.
That means that based on a set of criteria that you define, you'd be able to set, for example, "when more than ten accounts are deleted, notify me". And if that was ever triggered, you would be proactively notified via not only email, but also directly into your Microsoft Teams or Slack channels as well. So you get those notifications directly in those platforms that you use most commonly to communicate. Typically, being notified is difficult without any tooling, but Gearset really streamlines that process so that you're only notified if there's a genuine incident.
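To illustrate the idea behind that kind of deletion alert, here's a minimal sketch using the standard Salesforce REST API and a Slack incoming webhook, rather than Gearset's smart alerts themselves; the instance URL, token, object, threshold and webhook URL are all placeholder assumptions:

```python
import requests
from datetime import datetime, timedelta, timezone

# Placeholder assumptions: an already-authenticated session and a Slack webhook.
INSTANCE_URL = "https://yourorg.my.salesforce.com"
ACCESS_TOKEN = "00D...your_session_token"
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"
THRESHOLD = 10  # alert when more than ten Accounts were deleted in the last hour

def recently_deleted_accounts(hours=1):
    """Count Account records deleted in the last `hours`, via the queryAll
    endpoint, which includes soft-deleted rows still in the recycle bin."""
    since = (datetime.now(timezone.utc) - timedelta(hours=hours)).strftime("%Y-%m-%dT%H:%M:%SZ")
    soql = f"SELECT COUNT() FROM Account WHERE IsDeleted = TRUE AND SystemModstamp > {since}"
    resp = requests.get(
        f"{INSTANCE_URL}/services/data/v59.0/queryAll/",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={"q": soql},
    )
    resp.raise_for_status()
    return resp.json()["totalSize"]

def notify_if_needed():
    deleted = recently_deleted_accounts()
    if deleted > THRESHOLD:
        # Post a simple message to the team's Slack channel
        requests.post(SLACK_WEBHOOK, json={
            "text": f"Warning: {deleted} Accounts deleted in the last hour - check your backups."
        })

if __name__ == "__main__":
    notify_if_needed()  # run on a schedule, e.g. hourly
```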
Brilliant. Thanks a lot, Laurence. A question from Andrew. What are the most common restore issues that you see after data loss?
Yeah. Thanks, Andrew. Great question there. So a couple of ones that could be, let's say, typical would be if, for example, you're restoring data that's from historical records. So it could be you've realized you accidentally deleted a contact or an opportunity or a case three and a half years ago, and you need to pull that back in.
Obviously, Salesforce orgs develop very quickly.
And over time, you could have lots of different changes to the particular data structure that data was deleted from. So a common issue could be, for example, you're restoring data where one of the fields might be a picklist value that now doesn't exist anymore, or maybe the record type doesn't exist anymore, just because of the nature of how your Salesforce org has developed over that time.
So that's probably a common challenge that is seen, and it more often happens with historical data.
If it's more recent data, you know, those sorts of challenges are greatly reduced.
And really, it only comes back to maybe you have some validation rules, triggers or workflows that are blocking data being added into the org. And that's where, with a solution, you know, I can speak for Gearset, we enable you to disable and then re-enable any validation rules, triggers, workflows, etcetera, directly from within the restoration tool before the deployment takes place, meaning that you don't hit any of those blocks.
So probably invalid record types and picklist values, those sorts of things, would be the most common for historical data, and automations getting in the way, I think, most common for recent data.
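As a sketch of that first issue, here's one way you could check backed-up values against the org's current schema before attempting a restore, using the standard describe endpoint; the object, field and record shown are purely illustrative, and this isn't how Gearset implements it:

```python
import requests

# Placeholder assumptions: an authenticated session and a backed-up record
# you intend to restore, with an Industry picklist value captured years ago.
INSTANCE_URL = "https://yourorg.my.salesforce.com"
ACCESS_TOKEN = "00D...your_session_token"

def current_picklist_values(sobject, field):
    """Fetch the active picklist values for a field from the org's current schema."""
    resp = requests.get(
        f"{INSTANCE_URL}/services/data/v59.0/sobjects/{sobject}/describe",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    )
    resp.raise_for_status()
    for f in resp.json()["fields"]:
        if f["name"] == field:
            return {v["value"] for v in f["picklistValues"] if v["active"]}
    raise KeyError(f"{field} not found on {sobject}")

backed_up_record = {"Name": "Acme Ltd", "Industry": "Telecoms"}  # from an old backup

valid_values = current_picklist_values("Account", "Industry")
if backed_up_record["Industry"] not in valid_values:
    # The value was retired since the backup was taken: flag it for review
    # (or map it to a current equivalent) before attempting the restore.
    print(f"Industry '{backed_up_record['Industry']}' no longer exists in the org.")
```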
Does that answer the question?
I think so. Yeah. I'm sure Andrew will let us know. If not, please let us know in the chat or the Q and A.
A follow-on question, I think from both of those previous ones really, from Charlotte, another Charlotte, who's just asked for a bit of clarification on whether you recommend restoring to a sandbox first and then using DevOps to deploy to your live org.
So it really depends on the type of restoration you're doing and the sort of, let's say, urgency and criticality of when you need that data back. It'll always be safer to restore to, let's say, a full copy sandbox, which is a direct clone of your production org, first. Absolutely. So you'll be able to see whether there will be any knock-on implications.
Once that's done and verified, you wouldn't then go from, let's say, UAT into production. You would go back to that original backup and just re-point the restoration to production. So you wouldn't necessarily need DevOps to deploy to the live org, but it would come from that backup that was originally in place there. So it's probably recommended, if that's something you can do, that you actually have a copy of that full production org as well.
Great. Thanks a lot, Laurence. Another question from Owen, sorry if I pronounced that incorrectly.
Who's asked: aside from data, what are some common situations where a metadata restore would be necessary?
Yeah. So metadata restoration can be required.
And of course, that is more commonly because maybe someone's gone into Salesforce and they've changed particular elements. They might have updated a custom field. They might have accidentally deleted a flow.
And as Salesforce has a really highly dependent structure, that can have some severe knock-on implications downstream.
So a great example could be: you've been updating your Account object and you've removed a field.
Brilliant.
Okay. Maybe that was what was intended for a particular change.
Of course, that's gonna have a knock-on impact on any layouts that were using that field, maybe any flows that were referencing that field, and any other parts of Salesforce that were referencing that field.
So in that situation, you might actually be better off restoring what you've deleted, because then you would have everything back to how it was before any issues took place. There are some really common restoration issues that we see, based on the intricate nature of how Salesforce is coupled and the dependencies around the whole schema there.
When it comes to Salesforce metadata as well, I think it's a key part to bear in mind when you think about any sort of backup and recovery solution.
For data or metadata, ensure that any solution that you look at not only has full backup capabilities, but the ability to restore your entire Salesforce org and all Salesforce metadata items within it, because that fundamentally is the, let's say, worst case scenario, and ensuring that you're confident that the tool can restore those building blocks of your org should be seen as fundamental.
Great. Thanks a lot, Laurence.
Another question, from Charlotte again. No matter how often you back up, there's always going to be some data loss. How would you know what has been lost in the interim period?
Yeah. Great question, Charlotte. So I think there's a few elements here. So of course, no matter how often you back up, there will be some data loss. So smart alerts will start to help.
These will enable you to be immediately notified of, and to see, any data loss or corruption that occurs.
That's a good start. However, ensuring that you minimize the time between backups is also really important, to ensure that as little information, or as little data as possible, could slip into that category.
So with that in mind then, shortening that space between backups is as critical as anything when it comes to your most important business objects.
So it could be your accounts, your contacts, opportunities, cases, etcetera.
So having a solution that can back these up regularly, without requiring you to go and do that manually, maybe even on an hourly cadence for your most critical objects, should be seen as something that can greatly help with that. So maybe a daily backup of your entire org, as well as an hourly backup of your most critical twenty or thirty objects, would ensure that we minimize that data loss that can happen in the interim between backups.
If a loss were to occur then, you'd see it with smart alerts, and you'd see it with the automatic differences that are presented by backup and recovery solutions. But it's probably a great thing to also bear in mind if you're ever evaluating tools: how easy is it to see these differences?
And what does the process look like to restore? So of course, there may always be some data loss if a backup hasn't occurred, but shortening that recovery point objective to get as little data loss as possible, and having a way to identify and restore it, would be best practice.
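Purely as an illustration of the daily-plus-hourly cadence described above, here's a small sketch of how such a schedule could be expressed; the job names, object list and structure are assumptions, not Gearset's configuration format:

```python
from dataclasses import dataclass

# Illustrative only: one way to express a backup cadence and its
# recovery point objective per group of objects.
@dataclass
class BackupJob:
    name: str
    objects: list[str]        # "ALL" means every object in the org
    frequency_hours: int      # worst-case data loss window for these objects

schedule = [
    # Daily backup of the entire org (data and metadata)
    BackupJob(name="full-org-daily", objects=["ALL"], frequency_hours=24),
    # Hourly backup of the most business-critical objects, shrinking the
    # window in which a loss could go uncaptured
    BackupJob(
        name="critical-objects-hourly",
        objects=["Account", "Contact", "Opportunity", "Case"],
        frequency_hours=1,
    ),
]

for job in schedule:
    print(f"{job.name}: every {job.frequency_hours}h -> {', '.join(job.objects)}")
```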
Thanks, Laurence. That's really useful.
Brings me on nicely to the next question that's come in. So there's obviously best practice in terms of keeping that period between backups as short as possible, but are there any other backup and recovery best practices that you'd recommend people bear in mind when they're looking at or evaluating a backup tool?
Okay. So overall best practices for Salesforce backup and recovery.
Sure. So I think the first one would be a solution that sits off platform from Salesforce.
The reason that that's critical is ultimately you don't want that backup of your data being in the same location as the primary source of your data.
The reason for that is if your primary source is unavailable, you want your backup data to be available.
So if you have that on Salesforce, you would also not be able to access it. So having an off-platform backup and recovery solution is, I'd say, probably the number one most important thing to look at.
Secondly, ensure that that platform not only has a backup and recovery process for your data, but also supports all metadata supported by Salesforce.
That then means that you have a robust process and plan in place if the worst were to occur, so that you could restore everything: those fundamental building blocks of your org, the metadata, as well as the data that goes in it.
I think also, when you're looking at tools, have a look at that alerting piece that we mentioned earlier on, you know, how are you going to know if a data loss occurs, and make sure that you have the confidence that if a loss does occur, you have the ability to notice that.
And also what I'd say is whenever you're evaluating tools, test them.
That's probably the most important part of any of this: get hands-on, because these tools fundamentally are used in business-critical and highly stressful situations.
They've got to be intuitive.
They've gotta be easy to navigate, easy to follow those restore paths, and that's best done via testing.
I have a belief that with all software, it shouldn't need a manual. You shouldn't need training.
Think, has anyone seen the WhatsApp user manual? No. Of course not, because it's so intuitive. So you need a platform that's not only intuitive, but that you can test easily, to know that before buying and committing to a backup and recovery solution, you have the confidence it can meet your requirements.
And I think finally on best practices, have a good look at the security protocols that those platforms adopt. Make sure they're following best practices such as ISO/IEC 27001, as well as other regional requirements that might be in place for your organization.
So I think there was a lot in there, Charlotte, but maybe: being off platform, data and metadata backup, smart alerting for knowing if an issue's occurred, having a good test of the platform before buying, as well as evaluating the security, I'd say would probably be the top five.
Brilliant. Thanks a lot, Laurence. I think we've got time to squeeze in maybe one or two more questions.
So one from Erwin: does Gearset's backup functionality extend to data that is streamed into Salesforce Data Cloud?
So Gearset will be able to back up and restore data and metadata that is hosted on force.com. Everything hosted within the force.com, Salesforce 360 platform is able to be backed up. Anything external to that, it really depends, and I appreciate that's really a terrible answer from me: it depends.
But what's probably best there, Owen, is if we have a quick chat offline. Let's look at your exact use case, where the data is going to and from, what type of data it is, and we'll be able to give you a better answer to that, if that's okay.
Brilliant. Thanks a lot, Laurence. I think just time for one more.
So we mentioned how fundamental backup is for the entire DevOps process.
How can people, or whole teams, learn more about the other important parts of the DevOps process, including backup?
Cool. So, of course, backup is one of the key pillars of DevOps. But here, actually, on screen, that bottom-right QR code there, that's DevOps Launchpad. And the DevOps fundamentals course and certificate is a great starting point. It's free training on there, and it includes the backup course alongside lots of other useful content, which would be a fantastic intro to DevOps. There's also more advanced modules on there too for those looking to further expand their DevOps knowledge. So for anyone looking to learn more about overall DevOps, that QR code bottom right, I'd say, would be a great start.
Brilliant. I think that's all that we have time for today. So, yeah, thank you everyone for joining us, and thank you, Laurence, very much for your time. We'll be sharing the recording of this session and all the resources that Laurence has gone through, but just to reiterate, if there's anything else that we can help with, please do get in touch and we'll be really happy to help.
Thanks so much.