Description
Discover how to unlock the power of advanced monitoring and debugging in Salesforce with Nebula Logger! Whether you’re a Salesforce admin, developer, or architect, this session at DevOps Dreamin’ covers how to implement and customize Nebula Logger for Apex, Flows, and Lightning Components to proactively track the effectiveness of low-code and pro-code solutions. Join Jonathan Gillespie, creator of Nebula Logger, as he runs through how to use Nebula Logger to:
- Proactively track and respond to issues with real-time monitoring
- Set up automated Slack alerts
- Set up async process reporting and advanced log retention
Learn more:
Transcript
Can everybody hear me okay? Oh, that's loud. Alright. Good deal.
I think we are ready to get started.
So thanks everybody for being here.
Today, we're gonna be talking about how to use the tool Nebula Logger to add advanced monitoring and debugging capabilities into Salesforce.
I am Jonathan Gillespie. I'm the creator of Nebula Logger that we'll be looking at today.
I currently work at Salesforce. I've been there, next week it'll be three years, and I've been working in the Salesforce ecosystem for about twelve years now.
In that time, I've done a lot of different roles. I started as an analyst, became an accidental admin, moved on to doing development, technical architect, consulting, and now engineering at Salesforce.
So I think in that time, there's been a lot of incredible improvements to the platform: a lot of new features that we get out of the box that we can provide to end users, and a lot of improvements to the capabilities of what we can build on the platform.
And a lot of improvements and new tools to support those processes.
I think that there are still some aspects of implementing Salesforce that bring some challenges, and one of those is observability.
So no matter what language or platform we're building on, if we are building software or if we're supporting software, we're gonna have times where we need to know, is it working the way we expect? And when it's not, do we understand why? Do we have the context to troubleshoot, and can we explain what it is doing?
I think that this is a big challenge for everyone, and Salesforce is no exception to it.
It could be that we ourselves are trying to troubleshoot something and have questions about what it's doing. Do we know for sure that a particular scheduled job ran? Do we know what data we got back from an integration? Do we understand the errors that are happening?
So to help us understand these kinds of situations and to provide insights in what's actually happening, we need to have ways to know what's truly happening in our system.
So, observability is how well we implement our system to let us answer those questions, and logging is one of the ways we can do it.
So again, any platform, any language, logging is a common software engineering technique to provide us these insights.
And it's essentially us giving ourselves messages and context that we can look at later and understand what is actually going on.
So it helps us with a couple things. One is understanding the system itself. There's been plenty of times where I am the one actually building something. I know for sure it works a certain way. When I've run it, it's completely different.
So we make assumptions because we were involved, or we make assumptions because we read the documentation and we think we understand it. But what happens at run time can sometimes be vastly different.
So logging gives us a way to understand those things. It also gives us a way to understand what are our users actually doing. Again, we try and build things intuitively. We try and predict how people will interact with the system. Somebody always finds something to do that you don't expect.
They enter data a certain way that we didn't anticipate. They click something that you didn't think that they would click on.
And trying to understand how they got to a certain point can be critical.
So as you use logging in your system, it can give us higher confidence in what it is that we're building and supporting, because we have not only answers now, but ways to get answers to other questions down the road.
And I think overall, there's a bullet point I took off that I wish I had kept: when you have something like logging in place, it can drastically reduce your stress. It can make our work-life balance better, because we're able to give these answers and have less stress around them.
Not only that, we spend less time troubleshooting and more time on value adds, so our bosses are happier, our business is more productive, and we're able to handle change more quickly.
So when it comes to Salesforce logging, there are some problems.
In a modern Salesforce implementation, you're typically working in a mix of Apex, Flow, and JavaScript. The technologies have changed a little bit over the years, but these are the most common ones today, and none of them really provides true, cohesive logging.
Apex is the most robust of the three, and in Apex we have the ability to use System.debug() as a way to log. That's an incredibly helpful tool. I still use it myself, and it doesn't go away with something like Nebula Logger that we'll look at today, but the platform has some limitations with what we can do with it.
So if I'm using System.debug(), I can write a message for myself. I can put it somewhere strategically located in my code and know what a variable's value is, or be sure that it's actually executing.
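For reference, a minimal System.debug() sketch (the record and query here are purely illustrative):

```apex
// Plain Apex debug statements: these only show up in debug logs while a trace flag
// is active for the running user, and only at or above the configured logging level.
Account acct = [SELECT Id, Name FROM Account LIMIT 1];
System.debug(LoggingLevel.INFO, 'Loaded account: ' + acct.Name);
System.debug(LoggingLevel.FINE, 'Full record: ' + JSON.serializePretty(acct));
```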
But the way the platform works with it is you can only set up logging for twenty four hours for a particular person, so enabling logging becomes a bit tedious to maintain.
The logging data that gets generated can be truncated, and it can also just be automatically deleted.
So trying to set up scalable logging that we can actually use can be difficult.
Once we do have this data, there's also not a lot of functionality on the platform to actually support analyzing it.
There's not a lot that we can do to make something actionable based off of that data.
So it is there. It does have its benefits, but it's not something that really scales for us.
From there, flow is also another one that has some limits.
It doesn't really have true logging out of the box. There are things that the platform tracks for us. There are some log objects that you can report on or query, but there's not a way for us to just add a message into our flows ourselves. And when it comes to application logging, that's something we're gonna wanna do. We're gonna wanna know it got to a certain point. We're gonna wanna know that some variable had this value. We're gonna just wanna know, what error we got.
So this gap in flow is a big one. We have the ability to do debug mode, which is incredibly helpful for troubleshooting, but we don't have a way to add messages ourselves.
And then lastly, JavaScript or Lightning Components in particular.
JavaScript has two options. It has the console.log() functionality.
It's standard in modern browsers, a way for us to print out values from JavaScript and have them show up in the browser console.
That doesn't help us though if we're Salesforce admins, Salesforce developers, especially if we're working with a company that's global or not everybody's together.
The reason is console logging only outputs to the user's browser. So unless I'm sitting with a user or I'm doing a screen share with them, I don't have a way to see that data.
That also presents another security challenge: maybe I don't want the user to see that data. So logging to the console doesn't help us much, and it isn't very secure.
Both JavaScript and Apex do have the option for event monitoring.
Event Monitoring is a paid add-on from Salesforce. It's a whole other topic in and of itself, and it does provide some great observability for Salesforce.
But it is a paid add on. I don't know the price. I can't speak to the price, but I can say it's expensive.
And it's not something that we all have the budget to use.
So, it has its limitations on what we can log with it and not everybody can reliably use it.
So about seven years ago I started working on my own logging tool to try and solve some of these gaps for myself.
Five or ten years ago, the platform did not have some of the common tools that are available today. There's a lot of open source projects in the ecosystem overall for things like trigger handlers and other areas.
But at the time, nobody really had a centralized logging solution.
So I started working on one for myself.
The goal was really just to help myself out. I was also trying to learn Apex and the platform as a whole, so it was a way to kinda learn.
And it really sat there for a couple years. I didn't do a whole lot with it after I did some of the initial build. But about three or four years ago, the popularity really took off.
I started seeing a lot of people adopting it, a lot of people, following it on GitHub, and eventually, Salesforce started using it as well.
So the trajectory has grown a lot. The popularity has skyrocketed, and there are now dozens of people that have contributed to it. The feature sets have grown drastically over the last couple years.
It's become something that's very scalable.
So a couple aspects I wanna make sure everybody understands. I'll pause for effect.
It's completely free. Everything is open source. There's no licensing costs.
I complained a little on the last slide. I don't wanna pay for event monitoring if I'm a Salesforce customer. I do wanna have logging because it's a fundamental concept of software engineering.
So this project is fully free. So if you do have a need for logging, and I think everybody in the room probably does, I think you should consider using Nebula Logger as your option.
It's very customizable. It uses platform events to have reliable logging in Salesforce, and it has a lot of features to try and make it as secure as possible.
So, I think or I know in this room, we have a mix of people. There are several people in here, at least a handful, that have used Nebula already that are using it today in their production orgs.
There's also some people, I think, that have never heard of it before this session.
So, I do think its popularity has grown a lot, but there's still a lot of people that don't know that it exists, or there's people that do use it but don't know all of the features it's capable of.
So if you have never used it before, it's designed to be very easy to get up and going. You have two options.
The first is an unlocked package, the second is a managed package.
The underlying metadata is the same for both. There's a couple of differences though in the features and functionality that you'll have.
I tend to generally recommend the unlocked package. There's a little bit of a heart there on the slide to give it some extra love.
The big difference is that with the unlocked package, you get full features. You have full control over the metadata that's installed into your org. You can make changes if you want, and you can see the metadata and all the actual source code.
And it also has a plugin framework that we'll look at a little bit later.
The managed package is sometimes better though. I can't say everybody should use the unlocked package, but the managed package can be good for other situations.
If you already have a logging tool installed and you wanna try and migrate tooling, using the managed package can avoid having overlapping names of metadata.
There are also ISVs: if you're trying to build your own package and you wanna build in some kind of dependency, or use some other advanced options, then the managed package can be your better option.
Both are good. Just a bit of a trade off on both sides. But either way, once you choose which one you wanna use, you're gonna do a single package installation.
Oops. There we go.
Once it's installed, it's designed to be very quick and easy to get started. There are some post configuration steps that you may wanna take, some things to customize and configure.
But just out of the box, as soon as it's installed, you can immediately start using it. So if you're an admin, you'll have the ability to see the data, and every user in your org can start logging.
So, very easy. On the left side here, there's an example: say I have a problematic area in my Apex code. In this case, maybe I have a DML statement that I know has caused problems in the past, and I'm not sure why exactly. Maybe it happens for some users but not others.
I can do something like add a try-catch block around it and then call a single method. In this case, I'm calling Logger.exception().
This is a method that tells Nebula Logger that I have an error, I wanna save it, and I wanna automatically rethrow it. So with basically one line of code and a try-catch block, you can start using it right away.
There are also several other methods in here, such as Logger.info(). The method name basically tells it the priority of each entry that you're adding.
So you could have error, warn, info, debug, and so on. That ends up being helpful for reporting as well. But either way, you're looking at one to two lines of code to get started in Apex.
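As a rough Apex sketch of that getting-started pattern (the class, method, and DML here are hypothetical; the Logger calls follow the description above, and exact signatures may vary by Nebula Logger version):

```apex
public with sharing class OrderService {
    public static void activateOrders(List<Order> orders) {
        // Lower-priority entry, useful for reporting on normal behavior
        Logger.info('Activating ' + orders.size() + ' orders');
        try {
            update orders; // the historically problematic DML from the example
        } catch (DmlException ex) {
            // Logs the error, saves the log, and automatically rethrows the exception
            Logger.exception('Order activation failed', ex);
        }
        // Entries are buffered until saveLog() is called
        Logger.saveLog();
    }
}
```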
From there, Flow, our second technology, same general idea. It's easy to get started with it. So in Flow, there are three invocable actions. You can see three, the top three there on the left side.
They are very similar in terms of what they're doing. A couple differences in the parameters. The first one is really just I wanna add a message somewhere. The second one is I wanna add a message somewhere and also see what a particular record looked like at that time.
Third one, same idea, but I wanna see what a record collection in Flow looked like.
So just different ways for you to give extra information and context to what it is you're logging in Flow.
A very common way to get started with flow is to start using this for flow fault errors.
So again, if you have a, potentially problematic area in your flow or you have an area that you wanna make sure is actually running and executing as you expect, you can start leveraging one of these invocable actions to add logging.
And then JavaScript, our third technology. Again, very easy, or it's intended to be very easy to get started. In this case, there's five lines of code that you need to add to do everything from importing it, adding something to log, and actually saving those logs.
And so again, this is another place where you can use this to start logging for a potentially problematic area if you know that some users are clicking a button and it's just not working for them. Very common in LWC, very difficult to troubleshoot because we're not always with our users.
So you can use it to help troubleshoot issues like that, if you're not sure what data the user is seeing or what some of the actual code paths are doing when you click it.
And from there, that's it in terms of the bare minimum you need to do. So, four slides, going from installing it to using it in Apex, Flow, and JavaScript. From there, there are some extra features that add more context. The first one is a tagging system. When we're logging, it's sometimes helpful for us to add additional context ourselves.
And sometimes that context can be as simple as an extra word or phrase that is meaningful to us.
And so it has a tagging system that you can use in all three technologies as a way to add that additional context.
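A hedged Apex sketch of manual tagging, assuming the builder methods addTag() and addTags() described in the project's documentation (names may vary by version):

```apex
// Tag individual entries with words or phrases that are meaningful to your team
Logger.info('Quote approval submitted').addTag('Pricing');
Logger.warn('Discount exceeds threshold').addTags(new List<String>{ 'Pricing', 'Approvals' });
Logger.saveLog();
```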
There's also the ability to automatically add tags.
You can automatically add tags using this included custom metadata type. These are all things you do declaratively: if you set up one of these rules and a particular message contains some string that makes sense to you, the tags will automatically be added for you.

Another way that we can add context and controls in Nebula Logger is with logger scenarios.
I think this is a more advanced feature, not something that initially everybody wants to use, but this has been incredibly helpful for teams that are working in very large complex orgs.
I've seen some orgs where they have multiple teams working in that org. So, very complex orgs. Sometimes they're separated into, say, a sales team and a Service Cloud team.
Or it could be that you manage multiple packages or Lightning apps in there, and it's just not always clear who owns what and who to contact for different areas.
So scenarios give you a way to identify and define those areas of your code, whatever segmentation or sectioning of your metadata makes sense to you, you get to choose your own naming convention.
And so that gives you first a way just to identify and report off of it.
It then also gives you a way to control logging in a very granular way. So again, say we have two teams working in a Salesforce org: one's managing Sales Cloud, one's managing Service Cloud. There may be times where I don't care about what's happening on the Sales Cloud side. I know it's running smoothly, we haven't launched any new features, it's fairly stable. But if I've just gone live with Service Cloud, I may wanna have more details there.
So by setting up these rules, I can increase granularity of logging in certain areas while keeping it off in the others. So it gives me a lot more control over the log.
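A small hedged sketch of the scenario idea in Apex, assuming the setScenario() method mentioned in the talk (exact behavior may vary by version):

```apex
// Name the scenario once per transaction; every entry afterwards is associated with it,
// so it can be reported on (and have its logging level controlled) separately.
Logger.setScenario('Service Cloud - Case Escalation');
Logger.info('Escalation rules evaluated');
Logger.saveLog();
```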
It is designed for Apex, Flow, and JavaScript, but there are a couple of Apex-specific features that I think are worth calling out.
The reason I like to call these out is because I think for a lot of us, Apex is where our complexity lives. Either we built stuff years ago before Flow could do certain things, or we just have a more code-heavy implementation.
So when working in Apex, we are all dealing with some very common data types. We are all working with s object records.
We're all doing DML statements on those records, and a lot of us are doing integrations with external systems.
So in this example, say I'm building an integration with an external system: I wanna make a callout to that system and get a response back.
So by using this fluent interface, I can call two additional methods on here. One to log the HTTP request and one to log the HTTP response.
So not much more code to get this going, but the end result is I now have more structured data that I can report on. I have specific fields for the HTTP request and other fields specific to the response.
So, you could just store all of this as a giant JSON file, unstructured data, but that makes it very difficult to do anything with.
So by using these methods, it gives us more structure around our data and more control.
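A hedged sketch of that fluent callout logging (the named credential and endpoint are made up; setHttpRequestDetails() and setHttpResponseDetails() are the builder methods described above, and signatures may vary by version):

```apex
HttpRequest request = new HttpRequest();
request.setEndpoint('callout:Example_Named_Credential/api/orders'); // hypothetical endpoint
request.setMethod('GET');

HttpResponse response = new Http().send(request);

// One entry that also captures the structured request and response details
Logger.info('Order sync callout completed')
    .setHttpRequestDetails(request)
    .setHttpResponseDetails(response);
Logger.saveLog();
```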
One more specific for Apex, we often use Apex as a way to do asynchronous automations.
These could be scheduled jobs where we need to run something once a day. It could be that we need to avoid certain Salesforce limits, and so we run it asynchronously. It could be that we don't wanna disrupt the flow of something that a user is doing.
And so by using async Apex jobs, we can get around a lot of platform limitations and improve overall performance.
But troubleshooting those can be a difficult challenge.
So we don't always know exactly what happened. If we go under Setup and look at the Apex Jobs page, we can see a history of what jobs have run, but we don't always have the context of what happened during that job.
So the information we get from the platform is a bit limited in that sense. We can see basically that there's an exception there and we see that a job ran, but that's about it.
So here with Nebula Logger, again, we're adding three extra lines of code into this code block. I have an example queueable.
The idea of this queueable is, well, if you were to actually run it, it would run nonstop. So don't do this in production, please. It's just an example.
But the idea is we get a couple things that we wanna track. One is every time we're doing an async job, there are these four different interfaces.
BatchableContext and QueueableContext, I think, are probably the most commonly seen ones.
But it's just basically some objects that Salesforce provides for us to give us more information about our async jobs.
So Nebula has a way to automatically capture that for you, and then again you'll see the async context details down below. So it's storing again that data in a very structured way.
And in this case, because it's re enqueuing itself, there are multiple transactions involved with this job running and understanding the flow of that, what's actually happening in all of those transactions can be difficult.
And so we can use Nebula Logger's features to set a parent transaction ID, and the end result is what you see at the bottom of the screen here.
So I ran this actual code here. It failed because in a scratch org you can only re-enqueue something five times.
But I can see that here, on an actual log record, and I can see that it ran four other times afterwards. And so I've got this great traceability now, following it throughout the entire life cycle across multiple transactions.
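A hedged sketch of that queueable pattern: capture the async context and link each chained transaction back to the original one. The class is hypothetical; setAsyncContext(), setParentLogTransactionId(), and getTransactionId() follow the features described above, and signatures may vary by version.

```apex
public with sharing class ExampleChainedJob implements Queueable {
    private String parentLogTransactionId;

    public ExampleChainedJob(String parentLogTransactionId) {
        this.parentLogTransactionId = parentLogTransactionId;
    }

    public void execute(System.QueueableContext context) {
        // Store the QueueableContext details on the log
        Logger.setAsyncContext(context);
        // Relate this transaction's log back to the transaction that enqueued it
        Logger.setParentLogTransactionId(this.parentLogTransactionId);
        Logger.info('Chained job transaction running');
        Logger.saveLog();
    }
}

// In the original (synchronous) transaction:
// System.enqueueJob(new ExampleChainedJob(Logger.getTransactionId()));
```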
So a couple things that are great in there for Apex developers.
So at this point, we've got it installed.
We've gone through the basics of getting it up and running in all three technologies.
We're using some extra features to give more context.
Where does this data go?
It goes into some custom objects.
Depending on which features you're using, it'll be in two to five of these.
There are some concerns around this that people will have. We'll get into that a little bit. But this was actually one of my original goals for the project. When I started on this, I had been working in Salesforce for about maybe four years or working on the platform for about four years.
I had spent a lot of time learning about custom objects, and at the time, I think it was Process Builder and Apex and other automations.
And I could do a lot of really cool stuff for my stakeholders, but I couldn't leverage any of that for myself.
So trying to understand what was happening in my system at any point was a challenge, and being able to do, or leverage platform features was not viable.
So Nebula Logger has been built around the idea of trying to store this data in your Salesforce org so that you can do, things like use list views, build reports, use dashboards, some of which are included with it as well. So this dashboard you see here comes out of the box.
So we're just there to try and help give you better insight into all of this data.
Pause for a second here.
If you're considering building your own logger, this one has a lot of data points for you.
Lots of people do it. I am also happy to help you build your own logger if you really want to.
But my goal with this project is to save us all time and give us all a solution that already works to do this.
So when I am logging with Nebula Logger, I'm adding my own message. I'm sometimes adding my own context and other data types, but then it's also gathering dozens of data points for me. So it's gonna automatically track, you can see sections on the far left there. It's gonna track details about the user, the user session, the Salesforce environment I'm in.
It's also gonna track details about every entry I add, what metadata actually generated logs, what exceptions do we have, and more.
So again, it's an ever-growing data model that's designed to capture the details for the most frequently asked questions for us.
One big section of that is monitoring limits.
So, anybody that has worked on Salesforce for a while knows that there are some challenges with limits. There are different types of limits in different spots.
And if you hit those limits, things will break.
Ask me how I know.
I've been shut down in Salesforce production as a Salesforce customer for hitting multiple of these limits. Not fun. Sometimes it can be a one-off thing, a single transaction fails. Sometimes these can block you for a day or more, depending on what issues you're running into.
So every time I log, Nebula's gonna capture two types of limits for me. You can see on the far side is organizational limits. Those are things that are either hourly, daily, or cumulative.
So for example, we can see in red data storage. You can all see that there.
So data storage is one thing. If you hit your data storage limit, especially in a sandbox, Salesforce will, soon thereafter, stop letting you save records.
In production, I think they're a little bit looser about it. But if it happens in a sandbox and you're in the middle of UAT, it can really mess things up in your testing.
There are other limits too that are specific to a single transaction. So things like DML limits, SOQL queries, CPU time, callouts, things like that, where as soon as you hit them, that transaction is going to fail.
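For context, these per-transaction numbers are the same ones standard Apex exposes through the Limits class; a quick sketch of reading a few of them directly (this is platform API, not Nebula Logger, which captures them for you automatically):

```apex
// Current consumption vs. the governor limit for a few common per-transaction limits
System.debug('SOQL queries:   ' + Limits.getQueries() + ' / ' + Limits.getLimitQueries());
System.debug('DML statements: ' + Limits.getDmlStatements() + ' / ' + Limits.getLimitDmlStatements());
System.debug('CPU time (ms):  ' + Limits.getCpuTime() + ' / ' + Limits.getLimitCpuTime());
System.debug('Callouts:       ' + Limits.getCallouts() + ' / ' + Limits.getLimitCallouts());
```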
So we need a way to be able to monitor these limits.
Everything in this screenshot came from me writing a very bad script.
I ran it a bunch and it was basically doing DML in a loop. It was doing callouts in a loop, and it was creating a whole bunch of data along the way.
None of this resulted in an error.
Everything is getting close to the limits, but we wouldn't have a problem yet. Perhaps next transaction or next time the job runs, you're gonna start seeing errors.
But by seeing this data ahead of time, we can keep an eye on it, and we can avoid these kinds of performance issues before they actually cause an error.
So, this all just happens for you automatically. You don't have to do anything additional to to see this data.
Oh, there we go.
For the Apex developers.
This is my favorite feature. This came out in February. Thought about this for about a year before I implemented it.
When I'm logging in Apex, we looked at the examples: I'm doing a single line. I'm calling Logger.error() or Logger.info() or Logger.debug().
And that's it. I'm providing some text. I might give some extra context and some variables, and that's it.
What I get in return is a whole bunch of fascinating info about my metadata.
So all of this would be shown in this example. This all comes from a single record that I generated.
So what I did here is I added in this try catch block.
I purposely made a method that fails to do DML because it doesn't have required fields.
And some of that is an easy thing to do and something that can be missed in code reviews or if you don't have proper unit tests. Things like this can slip through the cracks and get into production.
So when you come into Nebula Logger, you can see and understand what has happened.
So I can see a stack trace. It's automatically captured for me. I know exactly what Apex class ran.
I know the stack trace for what other code we're in in order to call that. And it's going to automatically capture and display these code blocks for me.
So the left side is where I added logging.
And in this case I was logging an error, including an exception.
That exception came from my own code. So on the right side, Nebula Logger automatically realizes that, and it automatically captures it for me. So in one place I can see my try-catch block where I added logging, and I can also see where exactly that exception originated from.
So in terms of support and quickly understanding what's going on in your system, this is a great way to get a sense of what you need to look at.
Alright. So at this point we've got a bunch of data being generated.
We're using a bunch of different features to add extra context.
And we have some concerns.
Right. As soon as we start having data, data management is a new priority.
So a big concern people always have, justifiably so, with logging.
What if we accidentally log sensitive data?
So out of the box, Nebula Logger includes data masking capabilities.
In particular, it will automatically obfuscate Visa and Mastercard numbers and US Social Security numbers.
There's nothing else you have to do. That just works by default.
But if you have additional types of sensitive data you wanna mask, there's an included custom metadata type, that you can see here.
And using regular expressions we can define new masking rules for us.
Once those are in place you can deploy those between environments and it just will run automatically for you.
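As an illustration, a masking rule boils down to a regular expression. This hypothetical pattern (not one of the built-in rules) shows the kind of masking behavior a custom rule could produce:

```apex
// Hypothetical example: masking email addresses in a log message with a regex
String emailPattern = '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}';
String original = 'Order confirmation sent to jane.doe@example.com';
String masked = original.replaceAll(emailPattern, '****');
System.debug(masked); // 'Order confirmation sent to ****'
```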
So that's one big concern people have when it comes to data management.
Next one: custom objects. As we saw, data storage can be an issue for you in Salesforce orgs.
Data storage is an ongoing concern, and so having a logging tool that uses that same limit causes concerns for people.
The best way to handle that is using some of this functionality built in with Nebula Logger.
So by default, every record that gets created has a date field that's set for fourteen days from whenever it was generated.
And then there's a scheduled job that's included that you can run. I generally schedule it just once a day, and it will then automatically go and clean up data for you. There are ways you can customize it as well if fourteen days is longer than you need, or if you wanna keep it less time or more time. All of that's configurable.
But all of it's also baked in for you. So especially in production where you don't want to risk hitting the limit, this can be a great way to do it. There's also a custom tab here that you can use to run it on demand.
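A hedged sketch of scheduling that included cleanup job once a day. LogBatchPurgeScheduler is the schedulable class included with Nebula Logger, but verify the exact class name and any constructor options in the version you install:

```apex
// Run the included purge job every night at 2 AM
System.schedule(
    'Nebula Logger - Nightly Purge',
    '0 0 2 * * ?',
    new LogBatchPurgeScheduler()
);
```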
From there, there's a lot of features built in. Hundreds of data points, a lot of methods that we don't have time, unfortunately, to cover today.
And I think all of those are great, and out of the box, it's designed to just work for you.
But we're sacrificing something when we add logging. Nebula Logger is free, but logging is not free. What do I mean by that?
We're asking our system to do not just what we've built to support our business processes, we're asking it to also store messages and context along the way.
That comes with overhead.
Even though we're not necessarily paying for the tool itself, we are sacrificing some data storage. We're using some CPU time.
It's running some queries behind the scenes, so we're consuming extra query counts.
And some of that just eats up resources and can then negatively impact our systems.
So, two big ways you can customize Nebula Logger.
One is user and profile specific.
So in some environments we may not care to log at all. We may just wanna be able to easily turn it off. Or we may only want to log for certain people.
So there are ways you can customize it. I think there are about twenty or so feature flags on the left side here, all of which you can customize per profile or per user. So you get a lot of granular control over what's happening for any particular person at any time.
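A hedged sketch of overriding those flags for a single user via the hierarchy custom setting. LoggerSettings__c, IsEnabled__c, and LoggingLevel__c are assumed names for the setting and its fields; confirm them in your installed version:

```apex
// Turn on more verbose logging for just one user (the Id here is illustrative)
Id targetUserId = UserInfo.getUserId();
LoggerSettings__c userSettings = new LoggerSettings__c(
    SetupOwnerId = targetUserId,   // hierarchy custom setting owner: this specific user
    IsEnabled__c = true,
    LoggingLevel__c = 'DEBUG'
);
insert userSettings; // assumes no user-level record exists yet for this user
```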
The other side is there are some system wide functionality or features.
These are, again, an area where some of the very large complex orgs using Nebula Logger can have issues.
That includes performance issues, features that just don't necessarily make sense for certain orgs, and other limitations that you want to keep in mind or try and avoid.
And so you can come in here and disable or customize those features.
If some data points just don't make sense for you, or you just don't care about them enough to sacrifice that overhead, you can disable them as you need to. Some orgs have used these two sections to customize it to the point that they are now able to log millions of rows of data per day. How do you log that much data? I'm not really sure.
What do you do with that data? I'm not sure either. But there are orgs that are using this to generate millions of rows of data a day. So in terms of scale, this is designed to work easily out of the box for small nonprofits with a solo admin or a small development team.
It's also intended to scale to Fortune 100 companies doing large enterprise implementations.
Alright. How are we doing on time?
So last area.
When I first started building it and it first started getting more popular, I was very excited. A lot of people were interested. A lot of people started contributing.
But then a lot of people also started making their own changes to it.
And the way they were doing this, they weren't necessarily wanting to contribute back to the project, and that's fine. It's open source. That's what it's there for. Take it. Do what you need to with it.
But then those people had trouble upgrading afterwards.
They made changes to fundamental core pieces of Nebula Logger. And then when I released new features or bug fixes, they didn't have a clean path to get there.
So this is one of the big limitations, or differences, between the unlocked package and the managed package.
There's basically an entire framework where, using Apex or Flow, you have ways to extend the functionality.
You can add your own fields. You can add your own logic that does different integrations or different functionality within your Salesforce org. Really, whatever you want to do with this data, if you want a clean way to extend it, this provides it. Sorry, getting a little hoarse.
So this is incredibly helpful if you do want to use it, but you know that you have some features specific to you.
This also helps a lot, from my perspective.
So I have some pieces of functionality that I personally wanted to build or that I didn't know a lot of people would benefit from.
Some of those features don't make sense for everyone though.
So in this case, I have four plugins that are officially available.
These plugins are, again, something I don't think necessarily belongs in everyone's org.
But you just simply install these on top of the unlocked package.
I think the two big ones to point out here: the first one is async failures.
This one is I think very helpful.
All of the stuff we've been talking about is instrumentation of logging, adding in those calls to Logger.info(), Logger.error(), all that.
That takes work.
It takes time and planning. It takes resources for somebody to do it.
But this async failures plugin is something that once you install it and enable it, it will automatically start logging errors for batch jobs and errors for, I believe it's screen flows.
So the platform has limited capabilities for us to do automatic error capturing.
There are just a handful of ways to do it right now, and this plugin provides that for you.
And then the last one, I'll mention on here too, Slack.
When I started working on this seven years ago, the company I was working at used Slack.
And Slack wasn't owned by Salesforce at the time. So even though I integrated it here, it didn't really make sense for Slack to be there for everyone. That's changed a little bit now that Salesforce does own Slack, and eventually this functionality will become part of the core package.
But for people that don't use Slack, it doesn't really make sense to have all this extra metadata in there.
So, this has been a great way for me to prototype things, for me to release limited functionality, and give everybody else a clean path to add your own functionality as well.
So I think with that... there's a missing slide here.
I'm not sure what happened to my thank you slide, but my thank you slide's done.
Alright. Well, I'll pause here for questions.
Does anybody have any questions? And I think we have microphones going around.
So every large org with significant customization or at least custom code I've seen has at least one, if not several loggers in place already. What is your path to world domination?
Little by little. Getting there. That example, especially if you have multiple loggers (let me go back here), is an argument for using the managed package. If you already have an existing logging system, and any of your metadata, your Apex classes, your custom objects, any of that, has the exact same name as the metadata in the unlocked package, then when you try and install the unlocked package, it's going to give you errors.
So the way a lot of people end up trying to slowly migrate is they install the managed package alongside their existing stuff, and then they either slowly replace their existing logging with it or they update their existing logging tool internally to call Nebula Logger.
Especially if you take that route, it can really speed things up in terms of having both running in parallel until you're ready to fully cut over.
Have you thought at all about defining some interfaces where maybe you could plug in behind existing logging systems?
Oh, interesting. What do you have in mind?
I'm not sure.
I don't have anything. Well, yeah. I don't have anything offhand for that. I think we should talk later because I wanna see what else you have in mind for there.
Would something like that help with something like an ISV that doesn't wanna require Nebula Logger, but wants to allow their package to issue logs to Nebula Logger if their customers have it?
Yeah. That that's a very common use case.
There are a lot of people, ISVs, OEMs, and others, that are doing package-based development as a way to release their metadata. It could be for their own internal use, or it could be for external customers, which is where it especially gets complicated.
There's some challenges around it. There's not a quick and easy option for it. I don't think there will be a quick and easy option, but there are strategies around it.
The strategy I've seen and helped a few people with before is, if you're building your own package, what you're trying to avoid is you don't wanna have a hard dependency on Nebula Logger. Typically, when you're building packages, if you do wanna have a dependency, it's required to be there.
But in these situations, where not everybody is using Nebula Logger as much as I want them to, you can end up basically building a set of three packages.
Let me think. Two packages. So in your own package, you would have some kind of logging interface and that would be kind of your default logging implementation.
And then, generally, you would make your own extension package, I believe is the official term. That extension package would depend both on your package and on Nebula Logger. And once it's installed, you'd have to give yourself a way to configure it and say, now use Nebula Logger.
So it's definitely a bit complicated, something I wish I had more time to go into today.
But there are multiple projects, including some open source ones, if anybody's interested.
There are multiple projects that have taken this approach, and once you get it up and running, it seems to work pretty well for them.
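A hedged Apex sketch of that pattern, with all names hypothetical: the base package ships an interface plus a default implementation, and the extension package (which depends on both the base package and Nebula Logger) provides an adapter. Each top-level type below would live in its own class file.

```apex
// Base package: the logging abstraction, no Nebula Logger dependency
public interface PackageLogger {
    void logError(String message, Exception ex);
}

// Base package: default implementation used when Nebula Logger isn't installed
public class DebugOnlyLogger implements PackageLogger {
    public void logError(String message, Exception ex) {
        System.debug(LoggingLevel.ERROR, message + ' - ' + ex.getMessage());
    }
}

// Extension package: adapter that can reference Nebula Logger directly
public class NebulaLoggerAdapter implements PackageLogger {
    public void logError(String message, Exception ex) {
        Logger.error(message, ex);
        Logger.saveLog();
    }
}
```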
Is there any plan to include plugins for the managed package? Because then you're automating the upgrades.
Yeah. Good question.
There's a lot of limitations when it comes to just managed packages as a whole. Things that once you put them out there, you can't really change again.
And so, most of my experience working on managed packages comes from this project. So when I started the plugin framework, I was very nervous about adding it to the managed package.
I have not made any drastic changes to the framework in at least a year, if not, probably closer to two.
So I do think I'd like to eventually make it an option there as well.
It adds more complication. There's some initial testing I would need to do to make sure it's safe. And then once it is available, people will inevitably want support for the official plugins as well. And I wanna be able to give that to them, but it does add some complications as far as maintaining the project.
In the longer run, right, it's easier for managed package because you get upgrades and everything.
Right.
So that's why, I mean, obviously there are a lot of advantages with the unlocked package. We can manage it, but then you're opening it up for all the developers to make changes however they want. Right. And this one is more controlled. That's the only point.
Yeah. And while we're on that, I think that's a really good point too for people trying to decide which version to use. With the unlocked package, a lot of companies want to be able to audit the metadata that's in there. They wanna see the actual source code. Some of them do wanna edit it, and that's an advantage. But a lot of them just wanna be able to see what's in there, and the only way you can see it is if you can also edit it in the unlocked package.
So it does pose a risk that somebody could unintentionally change something in there, versus the managed package, where the source code's a little bit more locked down. And a lot of people view that as an advantage.
So I do wanna eventually bring the plug in framework to those people that prefer the managed package, just not quite there yet.
There's a question over here.
Oh.
I was curious if you had provisions for transaction control. If you're saving SObjects and you need to do database rollbacks, how do you handle that situation?
Good question. This is some content I've covered before, and I cut out a lot of things for the time slot, so an important detail went missing here.
Platform events are the answer.
So platform events are, I think, a fundamental aspect of getting logging to work on the platform.
When we're talking about regular SObjects, if you try and log something into a custom object and then you throw an error, it could be a validation rule, it could be an Apex error, it could be any kind of error, your custom object data is gonna get rolled back. Platform events came out, I wanna say, in two thousand eighteen, maybe.
They're a game changer. They provide a way to publish data without having rollbacks impact it. You can customize this, but by default, Nebula Logger uses platform events to publish the data, and then asynchronously it gets transformed and normalized into custom object data.
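A rough illustration of why this works (the event object and field below are hypothetical stand-ins for the real platform event Nebula Logger publishes):

```apex
// Platform events configured to publish immediately are delivered even if the
// rest of the transaction later throws and all record DML is rolled back.
Example_Log_Event__e logEvent = new Example_Log_Event__e(Message__c = 'About to run risky DML');
EventBus.publish(logEvent);
// If an error happens after this point, record changes roll back,
// but the published event (and therefore the log data) survives.
```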
Good. All right. I think we're up on time. I'll be outside. If anybody has questions, happy to stick around and chat.
Thanks, everyone.