Panel: The realities of AI in Salesforce



Description

Join us for an engaging panel discussion on the realities of AI in Salesforce. Michelle Vose (Senior Salesforce Operations Specialist at Wolters Kluwer), Peter Chittum (Former Senior Director of Developer Advocacy at Salesforce), Ian Gotts (CEO at Elements.cloud), and Melissa Hill Dees (Salesforce MVP & Founding Partner at HandsOn Connect Cloud Solutions) delve into the practical applications, challenges, and future of AI integration within Salesforce.


Transcript

So one of the things that Eddie talked about was AI. I don't think any conference nowadays is complete unless we have an AI session. And it's worth remembering it's almost less than a year since ChatGPT hit the streets, and it's been a wild ride since then. Obviously, a lot of us have been using AI and GPT inside our own companies. I think the statistic I saw from Salesforce's analysis is that forty-one percent of organizations now have someone there using something like ChatGPT.

And, obviously, we've also embedded GPT into Elements.cloud, as a number of other ISVs and Salesforce have.

And I also led, or co-led, GPT Dreamin', which is a conference.

So I'm delighted to be able to have a discussion now with three people who are at the intersection of AI and Salesforce. So please welcome Melissa Hill Dees,

Michelle Vose, and Peter Chittum.

Hi, Peter. Just let the others join.

Right. So fantastic. So thank you everyone for spending time with me.

So let me start with you, Melissa. You presented an AI session at Dreamforce this year. So how much were you surprised by how AI has dominated this year?

I was astounded.

Truly, the rate at which it has dominated everything, right? Every conversation, everything at Dreamforce. And, you know, the point being, like you just mentioned, ChatGPT is, what, a year old?

So for anything to move with that speed has been completely unprecedented in my experience over the last seven years.

And I think we're at the very early stages. We're at the flip-phone stage of the iPhone, if you will. I mean, the recent ChatGPT release opened up so many new things that are possible.

I know from our experience, things which we had on our roadmap to build that would take six months reduced to six days or six weeks, and it's that level of compression. So, Peter, you've obviously spent a lot of time talking and writing about AI. Have you seen a change in the way you've worked, or in how your clients are working?

Well, so definitely, for me, the penny dropped when I took a prompt engineering for developers training, and for the very first time I could see what it actually meant to use one of these language models: not just to converse, but to use it as a tool to extract data, to summarize, and then to produce structured output from that, such as JSON, so that, as a developer, you could actually take that and do something with it deterministically based on what came out of the large language model.
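Peter's point about structured output is worth making concrete. Here is a minimal Python sketch of prompting for JSON and then handling the reply deterministically; the prompt wording, the JSON keys, and the canned reply are all illustrative assumptions, with a string standing in for a real model API call:

```python
import json

def build_extraction_prompt(transcript: str) -> str:
    # Ask the model for machine-readable output instead of free-form prose.
    return (
        "Summarize the support call below. Return ONLY valid JSON with keys "
        '"summary" (string), "sentiment" ("positive"|"neutral"|"negative"), '
        'and "action_items" (list of strings).\n\n'
        f"Call transcript:\n{transcript}"
    )

def parse_model_reply(reply: str) -> dict:
    # Deterministic handling: validate the shape before any downstream use.
    data = json.loads(reply)  # raises if the model drifted off JSON
    missing = {"summary", "sentiment", "action_items"} - set(data)
    if missing:
        raise ValueError(f"model omitted keys: {missing}")
    return data

# A hypothetical model reply, standing in for a real API call:
reply = ('{"summary": "Customer asked about invoice access.", '
         '"sentiment": "neutral", "action_items": ["Send portal link"]}')
record = parse_model_reply(reply)
print(record["action_items"])  # ['Send portal link']
```

Validating the shape before use is what makes the downstream handling deterministic: if the model drifts off-format, the code fails loudly instead of silently acting on bad data.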

But for me, that really inspired something that I hadn't expected, which is I've just become totally passionate about machine learning as a discipline. And so for me, the biggest change is I've been spending time learning the core foundational pieces of machine learning, going back to the basics of logistic regression and neural networks, and really trying to get the foundational pieces of what it means to do machine learning and AI today.

So I think the interesting shift that I've seen is that people started with treating this as Google on steroids: I can ask it stuff. And that's where you end up with hallucinations, because, unfortunately, it's like the eighteen-year-old intern who is not necessarily right, but never in doubt. And therefore we have all the problems associated with that. But I think we're kind of missing the point. Peter, some of your research has gone into that, which is: think of it not as a great way of asking it questions, but, if we give it all the puzzle pieces, it's brilliant at solving puzzles.

So you give it a whole bunch of data and ask it to apply its reasoning to that data. We're not expecting it to draw on its knowledge of the world, but actually to make sense of this data for me.

Absolutely. And I think that is the core piece, which is: what's the data that you're starting from?

And if you're coming from a generic large language model, you know, that is wrapped up in a web application or a mobile app, you're gonna get a relatively generic response from that.

But I think where it seems that people are getting the most out of generative AI, with the least amount of cost, is where they can take, maybe, an AI or LLM service, like OpenAI's GPT services, or maybe one of the open source models, like Llama, for instance, and then fine-tune that, or use retrieval-augmented generation, perhaps; in other words, injecting some ground-truth knowledge about the problem set that you're trying to get it to work with, and then interrogating that model, so that the model can then find the right words, in other words, to help you out.

But, you know, the less specific your data is... I think you can coax it into specificity, but you may find challenges. And I think the confident-teenager analogy is about as good a one as I can find myself.
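The retrieval-augmented-generation idea Peter describes can be sketched very minimally. Everything below is a toy assumption: two made-up documents and a naive keyword-overlap retriever standing in for a real embedding search, just to show the shape of grounding a prompt in your own data:

```python
# Toy document store; a real system would hold your own knowledge base.
DOCS = [
    "Refunds are processed within 5 business days.",
    "Premium support is available 24/7 on Enterprise plans.",
]

def retrieve(question: str, docs, k: int = 1):
    # Rank documents by word overlap with the question (a real retriever
    # would use embeddings and a similarity search).
    q_words = set(question.lower().split())
    return sorted(docs, key=lambda d: len(q_words & set(d.lower().split())),
                  reverse=True)[:k]

def grounded_prompt(question: str, docs) -> str:
    # Inject the retrieved ground truth ahead of the question.
    context = "\n".join(retrieve(question, docs))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {question}"

print(grounded_prompt("How fast are refunds processed?", DOCS))
```

The model then answers from the injected context rather than from its general knowledge of the world, which is exactly the "give it the puzzle pieces" pattern the panel keeps coming back to.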

Look, let me try and bring this to life with some examples.

We've obviously, as Elements, got all the metadata for an org. Like tens of thousands, hundreds of thousands of metadata items, and all the dependencies and the complexity scores.

We gave it that as the input. So we're not training the model; we're simply saying, take this data set, you know all the dependencies, and we ask the question: how many hours would it take to change all the things associated with the Stage field on the Opportunity object? And it came back and went, ninety-six hours. I mean, okay, can you show me your working?

And it had gone down all of the different paths, from looking at the things which were related to that field, which were the Apex classes and the flows; it knew the complexity. We'd given it a table of how much time it took to go and change any of those types, and it did the math. So instead of us having to write some calculated algorithm, which would have taken us weeks to do, it did all that for us. We're not expecting it to know anything about the world.

We're just saying, make sense of the data we've given it. It's not training the model on our data.
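The arithmetic the model performed in that example is easy to picture. A sketch with invented numbers: a per-type effort table and a dependency list like the context handed to the model, with the summation it effectively carried out:

```python
# Hypothetical effort table and dependency list, like the context the model
# was given; none of these names or numbers come from a real org.
HOURS_PER_TYPE = {"ApexClass": 4.0, "Flow": 2.0, "ValidationRule": 0.5}

dependencies = [
    {"name": "OppStageHandler", "type": "ApexClass"},
    {"name": "Stage_Update_Flow", "type": "Flow"},
    {"name": "Stage_Check", "type": "ValidationRule"},
]

def estimate_hours(deps, rates):
    # Sum the estimated effort over everything that references the field.
    return sum(rates[d["type"]] for d in deps)

print(estimate_hours(dependencies, HOURS_PER_TYPE))  # 6.5
```

The point of the anecdote is that the model assembled this calculation itself from the supplied data, rather than anyone coding it; "show me your working" is the check that it really did walk the dependencies.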

And that's the sort of... I mean, the ROI on that is enormous. So, Michelle, you're a business analyst. You must be the most excited of all the people on the call here, because this is surely where... it's not gonna replace you, but it's got to be your copilot, your best helper.

Well, when you began... I'm a little biased. We began implementing Einstein for Service in September. And I have to say it's been extremely educational in the process of doing that, to see what the capabilities of AI are and how it can just continue to learn and progress in order to save our agents time, which ultimately is the end goal. Right? We wanna be able to get our customer inquiries and respond to them in the quickest way possible.

And so far, the results have been successful. We're still in a soft launch.

But we'll be active by the end of the year with classifications.

So I'm super excited. Every time I talk about it, I get really excited when I share it with the users, and, you know, we're getting a lot of good feedback. So, absolutely, I think AI is gonna be very impactful for our users regardless of what industry we're working in.

You know, there are a lot of different... we have reply recommendations and classifications and automation, with natural language processing included in all of that. There's just so much to talk about.

And I am very excited about it.

So that's almost the client-facing side of things, where you're using it inside Salesforce. And there's clearly a lot of power there, but obviously there's the balance of concerns about what we're doing with that customer data.

So how are you handling that? And, Melissa, you're gonna jump in as well, because I know you've been thinking about that.

I mean, I don't think currently we're having any concerns with the customer data, because in the industry, the business that I work in, we are very much one-and-done. So we are handling our data internally within Salesforce, and there's no sharing of data across, let's say, a digital experience page where customers can potentially get their invoice information, or, you know, a retail-type thing. We're not using that. So for us in particular, it's not an issue.

I love that, Michelle, what you were saying earlier about across the industries, because, of course, I'm focused on the nonprofit industry, which always has to do more with less. Right? And so for AI to be able, like you say, not to displace jobs, but to make things work better, faster, more consistently, with more information, more actionable information from the business analyst side, is huge. And in fact, we're an ISV as well, and so as a partner, HandsOn Connect has implemented AI into helping customers create descriptions.

Right? That's huge. Not everyone writes like Shakespeare.

Not everyone writes in a way that encourages and motivates people to take action. And so to be able to leverage AI to do that. It's not any personally identifiable information; it's not anything that anyone's concerned about. But it is an opportunity to enhance what, say, the description of a volunteer opportunity looks like.

I also love, and I've really been looking, in another business situation, at the layer of trust that Salesforce is providing with the large language models: being able to use the data that's in Salesforce that may be a little bit more sensitive than what we're currently using it for with our application. Obviously, I'm not the engineer, and Peter can probably speak to this better than I can, but how that layer of trust works means we can leverage all the good of AI without compromising the security of the data itself.

Right? So to Melissa's point, she was talking about agents, or individual users, being concerned about replacement. I think it's really important to give that feedback to the agents and help them understand, you know, the use case for it and the impact it will have on their daily responsibilities, and help them understand that it is truly not going to replace them. It's simply gonna make their job easier. So it's really important to have a good dialogue with your users.

I presented something just recently called FOBO: fear of becoming obsolete. And I start by saying your job's not gonna get replaced by AI; your career is not gonna get replaced by AI. But it is going to get replaced by someone else who exploits AI better than you. And I'm trying to encourage people, maybe not as much as Peter has, to at least start thinking about what AI means: get hands-on and start asking, what does a good prompt look like? How do we actually refine prompts? How do we turn them into templates so that people are then reusing them inside our organizations? And we're refining them and thinking of them like code; we're actually looking after them, because it essentially is code.
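Treating prompts like code can start as simply as keeping them as named, versioned templates with explicit parameters. A small sketch; the template text and field names are invented for illustration:

```python
from string import Template

# A prompt kept as a versioned artifact rather than retyped ad hoc.
SUMMARY_PROMPT_V2 = Template(
    "You are a sales assistant. Summarize the last $window days of "
    "correspondence with $account in at most $max_words words, and list "
    "open action items as bullets."
)

# Filling the parameters at call time keeps the wording consistent
# across everyone who reuses the template.
prompt = SUMMARY_PROMPT_V2.substitute(window=90, account="Acme Corp",
                                      max_words=150)
print(prompt)
```

Once a prompt has a name and a version, it can be reviewed, refined, and rolled out across a team the way any other shared asset is.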

I think that's a really important point, Ian. You know, I've been around long enough, and maybe some of you have too, but I remember what it was like when the internet landed and the dot-com boom happened, and anybody who could spell World Wide Web could pretty much find a job.

I fully believe that in the next two to five years, there's gonna be a huge demand for people with some skills around AI. And some of that is going to be on the machine learning side, but some of it may also be more about the users of AI, and understanding how to effectively get a result out of an AI.

Some of it is also gonna be about AI strategy, because, as we all know, AI is not just generative AI, large language models, and the like. AI already exists today in a whole lot of different forms.

And, you know, the pull of people going into generative AI is also gonna create a demand for people who know other types of machine learning as well. So I think that message, that there will be work out there, absolutely resonates. There will be huge numbers of opportunities for people.

And again, I'd like to piggyback on that. I think one thing we also have to consider is that just because there's a feature or functionality of AI doesn't mean it makes good business sense for us. So it's really important that we choose the solutions that make sense for our business.

Yeah. It's about finding that sweet spot, isn't it? I think that sweet spot is almost the intersection of three circles. One is: is there a really strong ROI? Is there a huge benefit?

The second circle is: is it actually achievable? If it requires ten gigabytes of perfectly clean data, it's probably not achievable. Is it simple enough? And then the third interlocking circle is: are the results good enough, enough of the time, for us to trust it? That's the intersection of those three circles, and I think, as business leaders, we need to help our teams find those sweet spots and then really drill in and get good at making those work. Rather than going, well, I had a go and it kind of didn't work. Maybe you picked the wrong use case there.

Plus, Ian, I would further that by saying: remember how fast this is moving. Right? How fast it's moved in the last year.

I got to beta the Salesforce email prompts. Right? That system. And I'm actually a fairly decent writer.

Right? And the way that I craft emails and the way I put them together, I do a good job of it. And so I was very disappointed in how that functionality worked straight out of the box initially. Right?

That was, I don't know, two, three, four months ago, maybe, you know, just before Dreamforce.

Whereas they have updated and changed it, brought it so much forward, that it's a completely different beast.

And the summaries, the AI you can use for summaries within the sales context now in Salesforce, that's huge. That's not something I can do quickly. And as an executive who really needs that executive summary, instead of having to dig back through every communication that I've had with that customer or potential customer over the past two years, that is huge. So just because you do it once and you're kinda like, meh, you know, don't write it off. Three weeks from now, check back and see what it's doing, because it may be completely different and so much better.

Well, and that's to Melissa's point. I mean, one of the things that I've recently learned is that AI is continuously changing, and Salesforce is always making updates to it. So as they continue to improve it, we don't have to wait for those release cycles to get these updates.

So, you know, we're we're very excited, of course, I'm gonna say that a lot. I'm very excited.

So the, like, email reply recommendations, those types of things, that is something we're gonna look into. The whole idea being that we would be able to condense a lot of our customized email templates and just have little pieces that they can pick and choose from to create the needed template, versus having eight hundred templates because everything is so customized and specialized.

And so less maintenance. Yay. Right. Exactly.

I wanna circle back to your Venn diagram that you had us visualize, Ian, because I had a similar one, I would say.

But the one bit I would add is a circle in there that I would call low cost of error.

Because, you know, and I think this is also where a human in the middle comes into focus as well.

Right now, if you're a business where you're maybe trying to automate with some kind of artificial intelligence, especially generative AI, you know, you need to be really careful that if something goes wrong, minimum harm is going to occur.

So it's a term that I heard pretty early on in my journey, but it's one that really resonated with me: that low-cost-of-error thing.

Yeah. Maybe that third circle I described as "are the results accurate enough" is probably what I was trying to get at, which is actually: are you happy that the results are good enough to be applied at scale?

Can I go back to one point? I think, Melissa, you said, and Michelle as well, that AI is getting better. I think also we're getting better at asking the questions, getting better at prompting. Think about delegating a task to that eighteen-year-old intern. If you just said, can you book me a hotel?

Why, where, when, what? I think quite often the prompts have been too vague, and therefore the answers that come back are too vague. I mean, we've got prompts that are three or four hundred words long in terms of what we're prompting. As we get better and understand this, now you go, what, three hundred words? Why would you type that when the result's going to come back as two hundred words?

Well, if we are reusing this, and in fact we've got thousands of agents using this, it's worth spending days optimizing that prompt to get it really, really good. And I think if the results aren't very good, sometimes we just need to look in the mirror and go, well, maybe it was us not asking the question properly, rather than going, you didn't do a very good job. And I think that's the other thing we're seeing: we're getting better and better at understanding how to ask the questions to get the right answers back.

And I think that's a skill we're gonna have to get into: how do we manage a prompt like an email template, and how do we take it around a release cycle? So we've got some new skills to learn. But the other thing I'm seeing is that AI is forcing us to go back and look at some of the existing skills and do a better job: data governance, architecture, business analysis, documentation, all those sorts of things which we probably didn't do a great job of. If AI is now reading them, we need to be a lot better at them. So we shouldn't be thinking everything about AI is new.

There are lots of things we actually need to reinforce, which are our current skills.

So, back to you, Michelle. We're talking about using AI, but I'd almost see three very different sets. There's AI outside Salesforce, whether it's creating images or writing articles. There is AI as you described it, which is inside Salesforce, driven by a prompt. But there's also the whole area of how we manage Salesforce, and therefore, from a business analysis perspective, there are areas where it can help us. So are you seeing any of those being used in practice yet?

Well, we are working on very much a crawl, walk, run for our implementation.

And I think for really any business, it's really important, like you were talking about, to take that time.

Take the time to get it right, to reduce the error potential.

I mean, really, it's just about taking the time. And part of taking the time is to evaluate what you're trying to accomplish, document that, and get buy-in not only from your management team and your other administrative support, but from your users.

And that's why we choose to do a soft launch for each piece: we're able to get a small subset of our senior users and get their feedback on what makes sense and what doesn't. Even our page layouts are really important to our users, because they help them know where to go and how to use it.

So you've just described a business change project, which, yeah, we've got to approach as though it's a major transformational change that's coming through. And I think as part of that process, sometimes you come to a stop point where the answer is just no. Like, it doesn't make good business sense for us.

We don't have the correct licensing to accomplish a specific task.

And I like to give a no with a reason.

So my goal is always to help anyone involved, the stakeholders, understand why the answer is no.

It could be: no, it's not possible. No, not right now. There's a whole variety of no scenarios, but it's really important to help them understand why.

So what do we think the key challenges are with implementing AI on the platform at the moment?

Understanding it, honestly.

There are so many moving pieces to it.

And and like we talked about constant change that's happening.

So, you know, we implement something, and then a new piece of that comes out. We have to reevaluate again to see whether it makes sense to add that piece or move forward to the next project.

I think one of the biggest challenges we have right now, or I personally have: automating based on AI scares me a little bit, because of the error situation. Right? I mean, you talk about the speed of things. The faster things move, the more errors you can generate before anybody even realizes it.

Right? And I'm very much not risk averse at all. Right? My team wishes I were more risk averse.

But it seems a high level of risk to me, at this point, automating anything based on AI.

I mean, and I would love... I wonder, because you're not doing that already. Right, Michelle? I mean, Peter, Ian, we're not automating something to happen. Right?

We're not doing a cause and effect thing based on AI at this point. Right? Correct. Yes.

So we're working with, like, field automation, recommendations, things like that: working with existing data rather than creating something based on what has happened or how a case is handled.

You know? Like, I know that in, you know, in Salesforce, there is the possibility of creating a knowledge article based on a case.

We're not utilizing things like that.

So that is definitely something that we would have to keep an eye on, and really dive deep into whether the design makes sense for us. And, you know, there's always the potential for invalid or incorrect information, which kills the data accuracy that we're always talking about. Right?

I mean, is there anything at this point, Ian, that you would recommend building automation on AI for?

Not inside Salesforce. I mean, I think the human-in-the-loop piece is so important. Yes. But we've had this before.

I mean, you create dashboards, but you wouldn't want the execs to go, okay, I will automatically implement whatever the dashboard says. The dashboard is there to be interpreted. And I think we're still at that point inside Salesforce. It's now generating something for me, whether it's an email that's going out, or a recommendation, or something, and a human can go, okay, let me make sense of that and make sure we asked the question properly.
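That human-in-the-loop stance can be expressed as a simple routing rule. A sketch, assuming (hypothetically) that the generation step returns a draft plus some confidence score; the point is only that nothing is auto-sent:

```python
def route_draft(draft: str, confidence: float, threshold: float = 0.8):
    # Low-confidence output is flagged for closer review; even
    # high-confidence output still waits for a human to approve it
    # before anything actually goes out.
    if confidence < threshold:
        return ("needs_review", draft)
    return ("ready_for_agent_approval", draft)

status, _ = route_draft("Suggested reply to the customer...", confidence=0.55)
print(status)  # needs_review
```

The dashboard analogy holds: the AI output is an input to a person's decision, not a trigger for an action.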

Yeah. And I mean, the current models right now just don't have the ability to let you interpret why they came up with an answer. Now, it is being worked on; Anthropic published a paper on AI interpretability.

There are companies like Arize which are trying to look at observability in AI models.

You know, so I think it'll come. But for anything where an end user might turn around and need to understand why, or an auditor, or, even worse, a regulator, to put an AI at the sharp end of that spear right now I think would be incredibly dangerous for any company, in fact.

But let me turn it around a different way, which is: let's not worry about what we're doing with data inside Salesforce; think about the business analysis type role. We've now got the position where you could give it a customer call, a discovery call, thinking about business analysis now, or you give it the statement of work or a policy document about how something works, and it will automatically build the process maps, and do a pretty good job of it. Now it's done the first cut, so it's done all the heavy lifting.

You've now got a process map in the UPN format. Great. Okay. Now I can apply my business analysis skills and go, okay.

Well, yeah, I know the discovery call said that, but actually now I can make some changes. At least the bulk of that work's been done. And then once you're happy with that, it will automatically write user stories, something which is time-consuming and boring, but it will write really good user stories. And our experience of having used this for several months is: we go, well, that's not a very good user story, and then we go back and go, oh, actually, it wasn't a very good process map. That's why. So I think it works where there are some standards at each end: there's a UPN standard for process maps, and there's a very clear standard for, say, user stories.

It does a really good job of that because it's not relying on its knowledge of the world; it's reasoning. And then the last piece it can do is look at your whole org and tell you which bits of metadata could be reused, because it's really good at looking through tons of data. So I think this idea of using it for pattern matching, or using it to drive things across standards, works quite well. But at every step you've got a business analyst in there saying, okay, am I happy?

Let's move to the next step. I think the scary bit would be if someone said, well, if you just have a decent discovery call, that should build Salesforce for you. I'm not sure we're ready for that. In preparing for this panel, one of the things I was trying to imagine: are we at a point where... a common problem I hear about amongst developers is, I'm stuck using this flow and would much rather have some Apex. And even vice versa: maybe a customer had a developer implement something but would rather maintain it themselves, and looking at some Apex, could I turn that into a flow?

I think we can't be far away from some kind of model that would actually do that translation one way or the other, or both. So, yeah.
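The user-story generation Ian describes is essentially a mapping from process-map steps to a fixed story format. A toy sketch, with invented steps, of the transformation the model does the heavy lifting on (from far richer, messier inputs than this):

```python
# Invented process-map steps; a real map would come out of discovery.
steps = [
    {"actor": "Sales rep", "action": "log a discovery call",
     "outcome": "the call is linked to the opportunity"},
    {"actor": "Sales manager", "action": "review stage changes",
     "outcome": "forecasts stay accurate"},
]

def to_user_story(step):
    # The standard "As a / I want / so that" shape at the other end.
    return (f"As a {step['actor']}, I want to {step['action']} "
            f"so that {step['outcome']}.")

for story in (to_user_story(s) for s in steps):
    print(story)
```

It also explains the failure mode in the anecdote: a weak user story usually traces straight back to a weak step in the process map.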

Yeah. So I think there's huge leverage there: if you've got a thousand agents, or ten thousand, or however many end users you've got, the leverage you get in terms of getting AI into their hands is huge.

But the risk is also relatively high, in terms of needing to get it right. Obviously, the benefits around, say, the business analysis piece are huge, a huge uplift there, like thousand-x productivity gains, but you still need those senior people involved in it. So again, back to: you've gotta pick those use cases and work out what works for you in your particular environment. I think the clear message, though, is this isn't going to go away. Everyone who actually harnesses it and understands how to exploit it is going to be at the front of the job queue when people say, have you got skills in this area, and can you demonstrate them? So it's great that people are starting to play with it and starting to experiment.

So, I mean, I know my crystal ball is as murky as yours, but where do you see the future of AI in Salesforce?

Where do you see this going next?

I mean, I don't see it going away, to be honest with you. It's just gonna continue to grow.

You know, and industries continue to have their own use cases.

So I think AI is gonna become more specialized based on the use case of each industry.

Kinda like what Peter was saying, as far as, like, we had the dot-com boom. Right? And now I think AI is gonna build similarly.

You know, I think this is a prediction I would make across software in general: in the same way that we moved from mainframes to client-server, from client-server to web-based applications, from web-based applications to mobile, I think there's gonna be a whole new discipline of user experience that evolves out of this that we don't even understand right now. In the same way that... like, I was looking, just to see if there was anything there, at whether Apple had anything in their Human Interface Guidelines, for example, that they publish. And there's nothing, but I have to believe that Apple's gonna enter this game really soon. And when they do, there's gonna be something in there about human interface guidelines for how to interact with an AI. I'm sure of that. And I think Salesforce will be right there pushing that ball along too.

I think that's huge, Peter. And in fact, you know, if you've talked to me for five minutes, I've told you my theory on design and how we've done such a poor job of making things intuitive for end users. You know, retention is such a huge question mark: it's too complicated, I can't do this, whatever.

And you look at ChatGPT, or I look at it and what it originally looked like, and it wasn't pretty.

You know, and that's what I've preached: it needs to be enjoyable and pretty and fun to use. But ChatGPT was just so simple. I mean, you literally type in a question, type in what you need, and it comes back to you. That's the kind of user interface and user experience that... if that can be translated?

You know, why would anybody do it differently? I mean, it doesn't need lots of bells and whistles. You just need to be able to ask a question and get the information that you want. I teasingly say to my husband, you know, I had him before I had Google.

Right? And I would ask him. I would say, you know, Mitch, when was the cathedral at Ely built? And he would tell me; he would know, for some unknown reason.

And that's what ChatGPT has become for us. And I think that's the way people expect it to work. So AI, I think, is gonna have a huge impact on the user experience.

I'm not sure how yet. I don't know. I don't wanna, you know, put too many predictions out there and what that might look like, but I'm excited about it.

I mean, we're talking about AI in a professional environment. Right?

I think that ChatGPT, and AI in general, is gonna change the education industry dramatically as it continues to grow.

Because students have access to so much technology.

So whether the teachers use it, the administration uses it, or the students use it, I think it's just gonna change a lot of how the education system works.

But I also think it's gonna reinforce things... I mean, we're all taught how to delegate as we start to build teams.

This is delegating to something which actually takes every word literally.

So you have to be a lot more accurate about that. And a post just came out on Salesforce Ben about asking the right question. If you had five minutes with Marc Benioff, or Ryan Reynolds, or pick your hero, and you only had five minutes to ask questions, what are those questions going to be? And how do we ask the right questions? And how do we ask them with enough level of detail that we get some decent answers?

And I think that's a skill which is going to become more important, because the answers you get back will be driven by the level of the question that you're putting in. Right. Choosing the right questions: you want to make the answers impactful to what you're trying to accomplish.

Whether it's with your hero or, within the profession, trying to get answers.

Doing a discovery, Michelle. I mean, that's, Exactly. Asking the right questions. That's what you do when you're working with a new customer, and as a business analyst, you know that. So is the AI agent going to be the discoverer then? Like, you might ask it something in very natural language, and the agent comes back and says, do you mean blah blah blah or blah blah blah? I mean, you could definitely imagine that being very capable very soon.

Mhmm. Yeah. But I think it also changes the nature of the way we, as software vendors, work. Because we've now discovered, well, my one little story: we had a team of four that spent six months building Analytics 360, which is the front end to our application, which is dashboards and reports.

Now AI has got a code interpreter, and what it actually does is, you ask the question, and it will write the Python code to go and do the analysis. It will then come back, and if you say, I want it as a bar chart, it will write the code for that too.
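To make that concrete, here is a minimal sketch of the kind of Python a code-interpreter-style tool might generate for a request like "count the permission sets per profile and show it as a bar chart." The data and column names are made up for illustration; this is not the panelists' actual application.

```python
import pandas as pd
import matplotlib

matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical export of org metadata (invented sample rows)
df = pd.DataFrame({
    "profile": ["Admin", "Sales", "Sales", "Support", "Admin"],
    "permission_set": ["PS-A", "PS-B", "PS-C", "PS-D", "PS-E"],
})

# The "analysis" step the tool writes first: aggregate the raw rows
counts = df.groupby("profile")["permission_set"].count()

# The "bar chart" step it writes on request
ax = counts.plot(kind="bar", title="Permission sets per profile")
ax.set_ylabel("count")
plt.tight_layout()
plt.savefig("permission_sets.png")
```

The point of the anecdote is that the vendor only supplies the data in a readable format; code like the above is generated per question rather than built as a fixed dashboard.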

So that piece of work, six months, four people in the team. We replicated that in an hour.

Because we provided the data. That's a nineteen-thousand-two-hundred-x productivity gain, and we suddenly went, hang on, we don't need to write a report and do all that analysis every time someone asks for a report or a dashboard. We are now responsible for providing the information in a format that AI can read, plus some suggested questions, so you can explain what you want, and after that, people go for it. So a customer comes to me: oh, I really wanna know how a permission set works, or, Melissa's got this permission set and I want to provide the same set of access. What should I do?

We've got the data to provide those answers. So it suddenly changes the whole UI: no longer is it, we'll produce another dashboard and another report and another report. I think the same is true for Salesforce.

We'll start to see another UI layered on top.

Yeah.

But there may be a different UI for getting the data in.

Right. And, I mean, adding on to that: it's not specifically what we're asking, it's how we're asking it. It's not what you say, it's how you say it. That's something that I live by in life as well as in business. And I think it's important to have that communication skill, to be able to ask the right question, not just voice your question.

So, again, it's back to a lot of those soft skills, which are critically important for all of us, which is why I encourage everyone to go and get your business analysis certification. It doesn't matter whether you're an admin, a developer, an architect, or a BA: those skills, I think, will be important as we go forward.

Finally. So I'm conscious we're at the top of the hour. Thank you so much, all of you, for spending the time with us and for your insights.

And I'm very happy that so many of you have joined us for this DevOps Summit. The recordings of all the sessions will be live, so you can go and rewatch them. So thank you again to everyone who attended, and thank you so much to the panel for joining me.