Description
Discover effective methods to optimize your Salesforce data management processes. This session focuses on the importance of archiving in data life cycle management. Learn how to manage your data storage limits and ensure compliance with regulatory requirements.
In this webinar:
- Explore the significance of data life cycle management and archiving
- Understand the impact of storage limits on Salesforce performance
- Learn how Gearset’s archiving solution can automate data management
- Discover strategies for compliance and data retention
- See a demonstration of the Gearset archiving interface and its capabilities
In this series:
- Episode 2: Strategies for safeguarding your Salesforce Org
- Episode 1: A lifecycle approach to Salesforce data management
Learn more:
- Gearset’s solution for Salesforce data archiving
- Salesforce data archiving best practices — cut costs and boost performance
- How to reduce and manage your Salesforce data storage costs
- How to efficiently monitor and track your Salesforce storage usage
- Salesforce backup: The complete strategy for data protection and recovery
- How to build a practical Salesforce governance framework
- Salesforce data governance explained: Why your org needs it
- How to ensure Salesforce data compliance across your organization
- Success story: Granicus
Transcript
Thanks so much for joining.
This is the third part of our three-episode series on the data life cycle, where we've hosted webinars talking around end-to-end Salesforce data control and the strategies alongside that. In the first two episodes, we covered what a data life cycle management process is, why the need for it exists, and the steps involved. We also then covered the strategies for safeguarding your Salesforce org. So today, as it very much says here on this slide, we're gonna be looking at methods to optimize your Salesforce data management processes.
But before we get started, a quick intro. For those of you who I've not met before, I'm Lawrence. I sit in the technical team here at Gearset, working either with new customers to Gearset looking to understand some of the capabilities of the Gearset platform, or with existing customers who might be looking for additional capabilities from the full suite of DevOps solutions that Gearset provides.
So for episode three then, so far in this series, we've taken a tour of the data life cycle management process.
And here, we'll really be completing that data life cycle journey and discussing the methods to optimize your process through steps four and five here, the archival and permanent destruction of that data.
But it's not just a desire to follow best practices that makes these final stages of the data life cycle management process important.
Salesforce teams are also subject to the burden of dealing with storage allowances.
All Salesforce customers are limited in the volume of both data and files that they're allowed to store within their instances. And this limit roughly correlates to how much they spend with Salesforce.
And once you're hitting those org limits, you'll see alerts like "storage limit exceeded" or "data storage limit exceeded".
And, additionally, once your Salesforce orgs reach capacity, the platform will become increasingly slow and unresponsive.
And as such, when customers are coming towards their storage limits, they're presented with a few options.
The first option when you're coming towards your limit is to delete data. The second option is to purchase additional storage from Salesforce, and the third option is to do something else.
And for many customers, regulation and compliance, along with unknown future costs and that performance degradation we mentioned, mean that options one and two aren't viable.
So this leaves doing something else with the data, and that's where an archiving solution might come in, which we'll touch on later.
But before any of this, we should first consider how you check your Salesforce storage limit in your org before you face these types of decisions.
So this might be a screen familiar to many people. Of course, here we're in Salesforce setup.
If you navigate to the top left and type "storage" into the Quick Find box, you'll have the option to select Storage Usage from the dropdown.
Within Storage Usage, you can see your current data and file usage, as well as, below that, the largest objects that are taking up that storage space.
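If you'd rather check this programmatically than through Setup, a minimal sketch in Anonymous Apex might look like the following. It assumes your org exposes the DataStorageMB and FileStorageMB entries through the OrgLimits API.

```apex
// Minimal sketch: read current data and file storage usage via OrgLimits.
// Assumes the 'DataStorageMB' and 'FileStorageMB' limits are exposed in your org.
Map<String, System.OrgLimit> orgLimits = System.OrgLimits.getMap();
for (String name : new List<String>{'DataStorageMB', 'FileStorageMB'}) {
    System.OrgLimit l = orgLimits.get(name);
    if (l != null) {
        System.debug(name + ': ' + l.getValue() + ' of ' + l.getLimit() + ' MB used');
    }
}
```

Logged somewhere on a regular schedule, those same two numbers give you a rough view of how your storage is growing over time.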
Now I bet some of you have been here before, but unless you check this regularly, there's really no way to tell when you might run out of storage space and how much your storage has changed in recent months or whether anything new or unusual is happening in the org.
So because of that, Gearset built this. This is the data dashboard.
This solution is currently in pilot, but it's available to all of our backup customers.
Gearset's data dashboard uses your historical backup data to show how your storage has changed over time.
It calls out your fastest-growing objects and helps you detect, for example, when an object you didn't expect to grow suddenly grows at a speed faster than others.
And the main graph on the right hand side allows you to switch between your data and file storage.
But let's say this example here is your org and you've exceeded your Salesforce storage limit.
You can then also see that tasks, for example, are the fastest growing object for the last twelve months, and you might want to do something about it.
But in terms of that doing something about it, there are a few options when you get to this sort of situation.
Of course, all of those options include somehow solving that problem of reducing your Salesforce data storage and thus not having to purchase additional space. But how do we go about that?
Firstly, categorization is key. So a lot of the data being stored in your Salesforce environment is probably irrelevant for your day to day operations and may never be used again.
You can then categorize your data to identify all of the outdated records, attachments, or even duplicated files that can be removed.
Some ways to do this would include perhaps making use of record types or optimizing your custom settings.
Once your data is then categorized, you can delete the unnecessary data either manually or automatically.
From a manual perspective, you can delete those unwanted records by selecting them in Salesforce and then pressing delete. Or, for a quicker process, you can use the mass delete option.
But you will need the right permissions, of course, to do that deletion.
However, the mass delete option only handles up to two hundred and fifty records at a time and only allows records in certain standard objects to be deleted.
So because of those limitations, some teams look at a more automated option.
And some Salesforce AppExchange apps, potentially paid services, allow you to mass delete standard or custom object data without having to do it manually. But they might come with some limitations, such as the types of records that can be deleted or how many in one go.
But you could, of course, also, from an automated perspective, use Apex and maybe create a scheduled Apex class which runs periodically to automatically delete data or files.
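To give a sense of what that might involve, here's a rough sketch of such a class. The object being cleaned up (closed Tasks) and the two-year cut-off are purely illustrative assumptions, so you'd swap in whatever matches your own retention policy.

```apex
// Sketch of a scheduled cleanup job. The object (Task) and the two-year
// cut-off are illustrative assumptions; adjust to your own retention policy.
global class StaleTaskCleanup implements Schedulable {
    global void execute(SchedulableContext ctx) {
        // A single transaction can only delete 10,000 records, so a real
        // implementation would usually hand larger volumes to Batch Apex.
        List<Task> stale = [
            SELECT Id
            FROM Task
            WHERE IsClosed = true AND CreatedDate < LAST_N_YEARS:2
            LIMIT 10000
        ];
        delete stale;
    }
}
// Run once from Anonymous Apex to schedule it nightly at 1am:
// System.schedule('Stale task cleanup', '0 0 1 * * ?', new StaleTaskCleanup());
```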
But one of the downsides of this is having to run and maintain the code yourself, and any mistake here could prove very costly.
Additionally, your primary challenge might be file storage specifically. So in the storage usage screen, if files were the problem as opposed to data, you could save some space by using file compression.
And this method involves downloading and then compressing attachments into smaller files before reuploading them back into Salesforce and deleting the original bigger files to reduce that storage usage.
And this works in theory for companies that have either very minimal numbers of files or have the time to be going through this entire process. But even if you did have enough time, this method requires some level of technical expertise, and there are usually limits on the size of attachments. So then you might just split those files into smaller ones.
So that solution's likely to only really be temporary, as there's a lot of time and effort in constantly doing that.
But all of these solutions have a common issue.
To safeguard end customers, compliance and regulatory requirements are becoming more and more prevalent, and the impacts of breaking these becoming increasingly severe.
And somewhat universally, these regulations define how long data must be retained for and, ultimately, when it should be permanently purged afterwards.
So these methods of, perhaps, manually deleting data might not be the best choice because of these requirements from a regulation and compliance perspective.
So while we're there, let's take a look at the compliance part in more detail.
Of course, Salesforce underpins core business functions and revenue streams, whether that's sales targeting or marketing campaigns.
So because this data is so important, access to it is critical.
But for any team that creates and stores and uses data in their Salesforce org, which I'm sure, given the nature of this series, is everyone, ensuring your data is stored securely is critical for aligning with these compliance or regulatory requirements.
And, additionally, I'm sure many of our orgs contain sensitive data, whether around our contacts or business partners or maybe even internal employees.
So this sort of data should be treated and managed with special attention.
And these regulatory or compliance requirements apply across pretty much all sectors, from healthcare and central government to financial services and pharmaceuticals.
And we see these in regulations such as GDPR or HIPAA and many, many more.
But, critically, failure to comply can lead to severe legal and financial damage.
And it's not just financial penalties.
Reputational risk is at stake here too, as a data breach or noncompliance incident can really damage your customer relationships and public trust.
So in summary, as Salesforce holds so much of the valuable data about our partners and customers, compliance and regulatory requirements are not just a set of rules we need to follow. They're critical for safeguarding data, ensuring privacy, and building trust.
Unfortunately, however, although compliance is a core business requirement, it's sometimes seen as a blocker to business transformation.
Forty-four percent of teams reported that they failed an audit, either internal or external, in their cloud environments last year.
So to navigate this, instead of having to be reactive when these audits come up, what we're seeing is more and more teams, especially in the Salesforce context, leading with a security and compliance mindset at the heart of decision making, core to everything they do to help mitigate potential issues down the line.
And to navigate all of these complexities, this then comes back full circle. Especially for Salesforce, steps four and five of that data life cycle management process are, of course, essential.
And this is also where the doing something else option comes in, that third option we discussed earlier when you come towards your data storage limits.
So an archival process removes data that's no longer required in your org but perhaps can't be permanently deleted, maybe due to compliance requirements or a desire for it to still be accessible if required.
And I know we've spoken about a few benefits of an archival process already, but there are numerous others, especially in the context of Salesforce.
It's not just the data and file storage limits, but stale or out of date data can influence reporting and analytics leading to uninformed decision making.
Proactive archiving also reduces the risk of inactive data being exploited or leveraged by attackers to gain access to your Salesforce systems.
And automating the process of removing data from Salesforce will avoid those unwanted performance impacts that we mentioned earlier, such as slow page load times or slow reporting.
And, also, when data needs to be deleted, archiving processes should be able to automatically purge that data for you, so it's not yet another manual task.
So let's see this in a real world situation then. Cincinnati Works, like many nonprofits, relies heavily on Salesforce to store and manage critical data.
And over time, a third-party app began generating a large volume of records, creating a serious problem, and that problem, of course, was Salesforce storage limits. If left unchecked, this could have led to costly storage uplifts or the risk of data integrity being lost, as they'd have had to manually delete data.
And, of course, for a nonprofit with limited resources, finding a scalable and efficient solution would be critical to ensure continuity.
So those requirements really were to extend the life of their Salesforce storage and avoid those expensive storage uplifts, as well as automating the process of removing the data so as to free up their admin teams to focus on supporting their mission.
So what could be a solution then? That's where Gearset archiving comes in, providing an unlimited storage container off platform from Salesforce to safely house data, importantly with the ability to automatically delete data from Salesforce and avoid the requirement of purchasing additional storage.
So let's take a quick look.
I know we've had a tour of some aspects of Gearset already in this series, but here, we have navigated from the metadata deployment side through the pipelines as well as the sandbox seeding and backup into archiving.
Archiving then provides me with a visual interface to see all of the maybe multiple production environments that I have archiving jobs running on.
Each tile represents an org, so I'll get to see straight away the volume of both data and files that Gearset has offloaded for me.
And by jumping into any of these, I get to set a series of policies.
A policy is a set of criteria that tells Gearset what to go and archive, and these can be run on a schedule. So for example here, I've got my expired cases policy that runs daily at twelve.
Of course, I want to get very granular as to what is gonna be removed. So Gearset enables me to either set filters based on field values or use a SOQL query. So in this example here, my expired cases policy is looking at non-active accounts where the SLA expiration date is older than a year.
And, of course, this enables me to set multiple policies to be able to have the periodic cleanup of various data depending on the specific data in my org. So, of course, this can be tailored exactly around my use cases.
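To give a flavour of the kind of SOQL filter such a policy might use, here's a rough sketch. The Active__c and SLAExpirationDate__c custom fields are just the illustrative fields from this demo org, so substitute whatever holds that information in yours.

```sql
SELECT Id
FROM Case
WHERE Account.Active__c != 'Yes'
  AND Account.SLAExpirationDate__c < LAST_N_DAYS:365
```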
But what about if we need to restore some data back into the org, or we just simply want to see some of the information on the archive data?
The search option here enables me to search across any of my archived records. So, for example, if I need to investigate something regarding the contact Hussein Fisher, I can simply search for this in the archive.
Here I can see that I had a task previously related to this record, and I can, of course, restore it, or, for GDPR purposes, I can delete this record permanently.
Additionally, within the history, I get to see a full breakdown of any time a policy is run.
And by jumping into these policy runs, I can see exactly what was archived.
So here, for example, I have a hundred tasks.
But for some of my other policies, potentially, this one here where I've archived accounts, of course, this will have also archived any child objects and relationships too.
So not only do I have the ability to see the details for any of these records, but I can, of course, restore them as well. And when I'm doing this restoration, it's critical that if I'm restoring accounts, I can also restore their cases and any related child objects and relationships too.
And, of course, for those folks who are familiar with Gearset's backup restore, this UI will look super similar to you. That's because it uses many of the same back end processes.
So this enables me to restore data back into Salesforce in exactly the state it was pre archival.
Additionally, we talked about that fifth stage of the data life cycle management process, the ultimate deletion or the purging of that data.
So Gearset enables me to set retention rules based on the specific compliance or regulatory requirements that I'm looking to adhere to.
So for example here, I have my German expired accounts that I must keep for seven years.
So after they're archived into Gearset after one year, I set Gearset to permanently purge them after a further six, ensuring the full seven-year retention is met and I stay compliant without any manual intervention required.
So Gearset archiving provides me with an unlimited storage container for both data and files off platform from Salesforce so I can automatically clear up that storage usage, avoid any performance degradation, and, of course, those nasty bills that can come for additional storage.
I can then view or browse any of that data once it's gone into the archive, but critically also restore that data back into Salesforce if ever I need it.
Additionally, not only does this hit the fourth stage of the data life cycle process, which is the archival, but, ultimately, the retention policies cover the destruction or permanent purging as well.
So we've talked about a number of strategies here in terms of how to optimize your data life cycle management processes, and that was really what we were aiming to cover today in this third episode.
But in summary, we've looked at what a data life cycle management process is, why the need for it exists, and the steps involved in episode one. We looked at strategies
for safeguarding your Salesforce org two weeks ago.
And today, we've looked at some of those ways to optimize your Salesforce data life cycle management process in the final stages of its life cycle.
And, of course, Gearset can support with many of these stages of the process, from protecting your org with data and metadata backup, to using the data deployment or sandbox seeding capabilities to ensure that your testing in dev sandboxes is using production-like data, but, of course, with anonymization or masking as well.
You can also stay on top of your storage and understand how it's growing over time using the brand new data dashboard.
And, of course, you can optimize the reduction of data on the platform, ensuring you stay compliant with archiving, which covers both the removal and permanent purging of data from Salesforce.
And if any of this has been interesting to you, and I hope it has, we've got the option for you folks to book a tailored demo around exactly what you're looking to achieve with your data life cycle management.
So I'm gonna leave this QR code on the screen, which should link you out to be able to book a tailored demo if that's of interest.
We do have a few minutes, though. So if anyone would like to ask any questions, please put those in the chat.
One's coming already.
I'll just give everyone just a minute or two to ask any more questions before I answer any.
Okay. So we have one question, which is regarding where the data is. Is it off platform? So yeah, absolutely. The data is stored in Amazon AWS.
So it leverages the same data centers that Gearset uses for backup, seeding, or the mass data deployment processes. That Amazon AWS storage is hosted in either Canada, the US, Europe, or Asia Pacific, and every customer gets to choose that data jurisdiction.
So I hope that helps, Ryan. Yep. Thank you very much. That was good.
Fantastic.
Brilliant. Well, thank you everyone for joining. We'll wrap it there because I'm conscious folks might have another call in couple of minutes or so. We really appreciate you joining not only today, but throughout the entire three part series.
So thank you very much for that. If you have missed any, the recordings will be sent out to you for each of those. And if anything is of interest, please do reach out to us. But, hopefully, it's been educational.
With that, we just wanna say thank you very much again for joining the Gearset event.
And we hope to see you again soon.