Practical AI for Restoration
A 3-hour, vendor-neutral bootcamp for restoration leaders
This AI bootcamp is designed to help restoration leaders build a practical understanding of AI — what it is useful for today, where caution is needed, and how to evaluate AI thoughtfully as part of real operational workflows.
This session provides a structured way to think about AI in restoration: how it behaves, how it’s being applied, and how to make informed decisions as adoption continues to expand.

Watch to the end of the webinar recording to access the link to claim your IICRC CE credits.
Welcome, everyone. My name's Leah. I'm the director of product and content here at Encircle, and I'm so grateful you joined us today. Welcome to our first ever AI boot camp at Encircle. We had over twelve hundred people sign up, so we're so grateful you took time out of your busy day to join us for this three-hour boot camp. Why are we here? If you're anything like me, you're probably feeling a certain way about AI at this point in time. I find it very noisy. Every time I open up LinkedIn or my email, there's somebody reaching out from a company offering a new service or new software, and I'm overwhelmed by the number of new solutions and big promises out there to make my life easier. Or I'm seeing big warnings about the dangers and risks of AI and why I shouldn't use it. It's a lot. I joke about needing an AI solution just to help me understand all the AI solutions that are out there and cut through the noise. That's really why we put this webinar together today. It's really a mini conference: you can see here we've got seven incredible speakers lined up for you today. They all come from different backgrounds and experiences, but they all share the same passion for turning down the noise around AI when it comes to restoration. We're gonna stick to facts, teach things that are really meaningful, and give you some real, tangible takeaways and learnings so that you aren't as confused about AI as I am today. So, I'll just switch slides here. Like I said, it's a mini conference, seven sessions, with some really great speakers. My first guest is always on time, always a professional, and always ready to go. So I'm going to welcome Taylor Carmichael. There she is. Welcome, Taylor. I'll do a quick intro, and then I'm gonna throw things over to Taylor.
So Taylor is the director of systems for Southeast Restoration. Many of you might recognize her face; she's been on some Encircle webinars before. Southeast is a full-service contractor specializing in insurance repair and restoration. Founded in nineteen ninety-nine, Southeast has seven locations across Georgia, Alabama, South Carolina, and Tennessee. At Southeast, Taylor leads the integration of technology and operations across the cleaning and restoration industry. She's got a master of information systems and over a decade of industry experience. Taylor focuses on building practical, scalable systems that reduce friction, improve communication, and turn data into meaningful insights at Southeast. She is a frequent contributing author with C&R Magazine, and she is a speaker at the upcoming RIA convention. Taylor is super passionate about helping restoration teams, sharing her knowledge and helping them adopt technology in ways that strengthen day-to-day execution while laying the groundwork for long-term business growth. Alright, Taylor, I'm gonna leave you. You've got twenty minutes. Take it away.

Alright. Hello, everyone. Thank you, Leah, and hello, Encircle community. Thank you for having me today to talk about a subject that I am passionate about. But before we talk about all things AI, and AI doing some really impressive things, I do want to ground this conversation in a real moment from our organization at Southeast Restoration. Early on, we started experimenting with AI in preproduction in our sales department to help us move faster, from summaries to scope narratives to better loss descriptions. On paper, yes, it was a great use case. Where could we go wrong? But what we learned very quickly was that AI didn't fail because of the technology itself. It failed because our inputs into that AI were not consistently defined. Handoffs to other departments weren't always clear.
And ownership across next steps was not clearly defined. The AI output certainly sounded confident, but it wasn't always accurate. Instead of saving time, our preproduction sales team had to stop and validate what the AI produced, and really, that's worse than not using AI at all. Last year, that experience taught us something very important: AI does not create clarity. It exposes the lack of clarity. If the process isn't defined, AI is going to make assumptions. If ownership isn't clear, AI is going to fill that gap. And if the data isn't trusted, AI is just going to amplify that problem, and amplify it even faster. That's why we here at Southeast lead with this principle: clarity before capability. AI is not the starting point. It's the reward for doing the foundational work first. Everything we are going to talk about today builds from that specific lesson.

Now, I want to start with this next one because it usually gets a laugh: if you or a loved one has suffered organizational chaos from unmanaged AI, you may be entitled to financial compensation. It sounds like one of those late-night class-action lawsuit commercials, but if you slow down and think about it long enough, there is real truth in it. What we're seeing across the industry isn't that AI doesn't work. It's that AI without intention creates chaos. A lot of organizations jump straight to the tools. They automate the wrong workflows, layer AI on top of unclear processes, or expect technology to fix discipline issues and habits. Instead of removing friction, AI has the potential to create more of it. But the reality is, again, AI is not the starting point. It's the reward for doing the foundational work first. When the foundation is solid, AI can be incredibly powerful. When it's not, AI just makes the mess move a whole lot faster. And today, we're not going to talk about which tools to buy.
We are gonna talk about what that foundation actually looks like in restoration and how to decide where AI truly belongs. When people say that AI feels overwhelming in restoration, they're not wrong. Our reality is that our work is unpredictable. It's time sensitive. Every job is different, and decisions often have to be made before all of the information is available to us. At the same time, our data lives in too many siloed systems. So when we talk about AI needing good data, leaders are already wondering, well, which data, and from where? Our field teams are stretched thin. They're asked to move fast, document thoroughly, and now learn new tools on top of that. If AI adds even one more step, it certainly can feel like a burden, not a benefit, to them. And leaders are being asked to decide faster, with less confidence, using fragmented information. So when AI is introduced to an organization without that clarity, it does not feel like progress; it feels more like the noise Leah was talking about earlier.

Now let's talk about some common mistakes and assumptions. This is where most AI efforts go sideways. We automate broken handoffs instead of fixing them, so the same confusion just moves faster. We add AI on top of unclear processes, expecting intelligence to compensate for a lack of definition. It can't. We expect the technology to fix discipline issues. It can't. We expect the technology to fix documentation gaps, inconsistent inputs, or missteps. That's a leadership problem, not a software problem. We roll out tools without clear ownership, where no one is accountable for the output, the accuracy, or the outcomes. This is what happens when we skip that very important decision discipline. When any of these on your screen are true, AI doesn't reduce friction; it only amplifies it. And if you remember nothing else from today, I want you to remember this one slide.
Before you apply AI anywhere in your organization, you have to slow down just long enough to ask the questions you see on your screen. Is the process documented? If it only lives in someone's head, AI will guess, and it is likely going to guess wrong. Is the ownership clear? If no one owns the input or the outcome, AI is not gonna fix that either; it is just going to make the ambiguity faster. Is the data trusted? Not perfect, but trusted, because AI will confidently amplify whatever you give it, good or bad. Would I make a decision from this output? If the answer is no, you should not automate it. And finally, this is important: would I defend this in a courtroom? In restoration, that question matters. If you wouldn't stand behind it with a carrier, an attorney, or a regulator, AI has no business touching it yet. If you can't answer yes to these questions, the right decision isn't better AI. The right decision is better clarity.

And here's what that looks like for us at Southeast Restoration. We certainly did not start with AI everywhere. We started with sales and documentation, where friction was already high for us and quality and consistency mattered most. One of the biggest wins last year was loss descriptions. AI helped us produce more complete, well-structured documentation that answered adjuster questions before they were asked of us, reduced unnecessary back and forth, and helped claims get approved faster. We also saw stronger documentation overall from both carriers and customers, and cleaner handoffs between departments. Scope voice dictation was another key improvement. Field teams could speak notes on-site and instantly convert them into structured documentation instead of spending hours typing later on into the evening. Because we were intentional about where we started, and we had that foundation, AI improved clarity instead of adding noise. We focused on the problem first, not the tool.
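For readers who like their checklists executable, Taylor's readiness questions can be captured as a simple go/no-go gate. This is purely an illustrative sketch of the decision discipline she describes, not code Southeast runs; the class and field names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class AIReadinessCheck:
    """The five questions to answer before automating a workflow.
    Names are illustrative, not from any real system."""
    process_documented: bool     # or does it only live in someone's head?
    ownership_clear: bool        # someone owns the input and the outcome
    data_trusted: bool           # not perfect, but trusted
    would_act_on_output: bool    # would I make a decision from this output?
    courtroom_defensible: bool   # would I stand behind it with a carrier or regulator?

    def ready_for_ai(self) -> bool:
        # Every answer must be yes; a single "no" means the fix is
        # better clarity, not better AI.
        return all(vars(self).values())

# A workflow like loss descriptions, after the foundational work is done:
loss_descriptions = AIReadinessCheck(True, True, True, True, True)
print(loss_descriptions.ready_for_ai())  # True
```

The point of the `all(...)` check is that the questions are conjunctive: four out of five is still a no.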
And we definitely did not get this right the first time. One of the biggest roadblocks was the temptation to chase shiny AI tools before we had fully closed our process gaps. The tools looked promising, but without a solid foundation, they created more questions than value. Governance and security also slowed us down, and in hindsight, that was not a bad thing. Those pauses forced us to be more intentional about where AI had access and how it was being used. What I wish I had known earlier is that AI, again, is only as good as the foundation underneath it. Clean data, clear workflows, defined roles: those have to come first, and there's no shortcut around that. And, honestly, the hardest part wasn't creating the technology and artificial intelligence agents. It was the change management behind it: helping our team trust the output, adjust bad habits, and adopt new ways of working. That took more time and leadership than building the technology itself.

If there's one practical takeaway I want to leave you with today, it's this: build something AI can actually help you with. Before you implement any AI solution at your organization, start by identifying where friction really exists. Where is time wasted? Where is information lost? Where are teams forced to rework the same things over and over? Then document those processes clearly and consistently, in a way that systems can understand, not just people. Clarity here is what makes automation possible. Organize your knowledge base so information is findable and accessible. If your team can't find it, AI can't find it either. And only after you've done that should you explore AI tools to enhance what you have already improved. When you do the work in this order, AI doesn't feel overwhelming. It feels supportive. It reinforces good systems instead of exposing broken ones. AI works best when it is invited into something that already makes sense. Thank you, guys.

Thank you, Taylor.
We had a couple of questions come through that I'm just gonna address quickly before I throw some questions over to you, Taylor. There were some questions about getting the recording and copies. Yes, we will be sending a recording after this; you'll get a link to a website that has all of this on demand. And there was a question about IICRC credits. You'll get a link at the end as well where you can fill in a form, and we'll get you those certificates. Okay. Taylor, now that we've got those out of the way, I've got some questions specifically for you. How do you, or your business, know when you're actually ready for AI?

That is a fantastic question. We are ready for AI when we can clearly and confidently answer the question: is the process clear? When we automated something last year, it was documentation, loss descriptions, and scoping, and at the time, our process wasn't clear. AI guessed, and it guessed wrong, several times, until we realized that we had internal process gaps, internal documentation issues, and bad habits that we had to clearly define at the leadership level. Once we did that, we realized we were ready to tackle certain AI challenges. But really, we had to slow down long enough to ask ourselves, what is the problem we're trying to solve? That's the first thing. It cannot be everything; everything cannot be the answer. Once we defined specific friction points in our organization: is the process clear? Is there documentation? Would I trust the output? If you answer yes to those questions, then certainly give AI a try.

So just to paraphrase, it sounds like you're saying that AI is not a magic silver-bullet solution to all sorts of bad habits. Is that fair? That is absolutely fair. We get a lot of questions on whether AI is gonna replace our people. No.
I'm a firm believer that it's not. Even as a technologist, someone who is researching and deploying AI solutions at our organization, I'm a firm believer AI is not gonna take our jobs. Our people are our greatest asset. Think of it like this: our people are our thought leaders, and artificial intelligence is purely a thought partner that comes alongside them.

That's great. You talked about how you're a leader in your organization, and your role is to bring these technologies and solutions to the table. What about a smaller company that doesn't have seven locations across the southern US? Where should a small company start? They're probably feeling like me, bombarded with this tool and this tool and this solution. So where do you think they should start?

I genuinely feel like you have to look at your departments. I'm sure you have market development, sales, emergency services, preproduction, accounts receivable. You need to look at those specific areas of your business and define your friction points. I know I keep saying the word friction over and over, but that's just because I'm a firm believer in it. Once you look at your broken processes throughout the life cycle of a claim, that is when you can say, okay, I have a sales problem. I have a problem with getting claims approved. I have a problem with getting claims approved on time. Then start drilling into why that is. Is it an internal issue where we need to define what the handoff looks like? What does the documentation look like? Can our documentation be better? Maybe your documentation isn't the best, and that's why you are experiencing elongated claim cycle times. But I just encourage you: look at the areas of your organization and really start defining those friction points.
At a recent leadership team meeting, I asked our general managers: what is one friction point at your location, and how could AI come alongside it? Because, yes, I know, I don't necessarily live in the real world; I'm behind my computer. But you just need to ask the question: where can AI help you, general manager of sales? Where can AI help you, person responsible for finances? Just slow down long enough to find that friction and make sure it's well documented and that you have clear processes around it.

Right. A question just came in from Carlos. Taylor, you may not be able to answer this, and I can probably answer it a little bit too in the context of this overall session, but I'll ask you first. When you're talking about AI, are you mainly referring to the LLMs, like Claude, ChatGPT, or Gemini, for things like client emails, or are you talking about field-specific AI systems as well? So I'll let you answer that, and then I'll give some additional context for what's to come.

I'm talking about both in a sense, but my brain is really focused on artificial intelligence agents and creating those at your organization. Once you have defined your documentation and your knowledge base, and I'm sure you have SOPs, wouldn't it be nice if a field team member could go to a chatbot, ask it a question, and have it reference your well-documented SOPs and the friction points you've already defined? You can build AI agents around that, where the agent only references your organizational information. But as it relates to generative AI, yes, use that for adjuster communication, to help you write emails better. If you're just getting started, I do think that's a great place to start. With that, though, I would encourage you to think about governance. Don't copy and paste sensitive information into free versions of any AI tool.
You don't want any of your data being used to train the large language model, so that's just something to pay attention to and be mindful of.

That's great. And yeah, Carlos, really great question. As we go through the rest of our speakers today, we're going to reinforce this message that AI does not just equal ChatGPT or an LLM. There are many branches of artificial intelligence, and we're gonna talk about each of those specifically. Some of our speakers are gonna give you tips and tools for interacting with LLMs in a better way: how to do better prompting to get a better output, and how to validate and test those prompts. We're gonna hear from people who have expertise in those specific areas, and we're also gonna look at what Taylor talked about: how you go from interacting with a chat, with ChatGPT, for example, to building an application, something purpose-built, whether that's local to your business or, like Encircle, building a more enterprise-grade AI solution. Which leads into the next question, Taylor, which I think is for me, but I'm gonna throw it over to you after I answer it because I know you have an answer too. Will AI be implemented within the Encircle platform, and are there any future plans to program it within Encircle? Great question, and I'd love to tell you about it. Yes, and you're gonna hear from three of our AI experts from Encircle today. They're not gonna be telling you about any specific product; they're gonna be talking about how we've responsibly built AI into our platform and give you a bit of a sneak peek of the things that are coming. But, Taylor, we're almost at time, and I would love it if you could talk about this, because I know Southeast is testing one of our features for us right now. So if you wanted to just talk about how you're piloting it. Yeah.
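Taylor's idea of an agent that answers field questions from your own SOPs boils down to retrieval: find the relevant document first, then have the model answer from it. Below is a deliberately tiny, model-free sketch of just the retrieval half, using plain keyword overlap. A real build would use embeddings and an LLM, and the SOP names and snippets here are invented for illustration, not real Southeast documents.

```python
import re

# A miniature "knowledge base": SOP name -> SOP text. In practice this
# would be your actual documented procedures, loaded from files.
SOPS = {
    "water-mitigation-drying": "Set drying goals, place air movers per affected "
                               "square footage, and log daily moisture readings.",
    "equipment-checkout": "Scan equipment out to the job in the tracking system "
                          "before it leaves the warehouse.",
}

def tokens(text: str) -> set[str]:
    """Lowercase words only, so punctuation doesn't break matching."""
    return set(re.findall(r"[a-z]+", text.lower()))

def best_sop(question: str) -> str:
    """Return the SOP whose text shares the most words with the question."""
    q = tokens(question)
    return max(SOPS, key=lambda name: len(q & tokens(SOPS[name])))

print(best_sop("How often do we log moisture readings?"))
# -> water-mitigation-drying
```

The design point, regardless of how sophisticated the matching gets, is the one Taylor makes: the agent's answers are grounded in *your* documented processes, which is why the documentation has to exist and be findable first.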
Absolutely. First, I'm very thankful to even be a part of this conversation as a restorer that is piloting the Encircle AI scoping tool for field dictation, structured narratives, and documentation summaries. It's helping our field team members do voice dictations as they navigate the property, capturing their narrative of what they're seeing. It also looks at the documentation they are entering into Encircle Hydro and produces documentation summaries at the end. That has been incredibly impactful and beneficial in helping our team members be more effective and more efficient at the same time.

Amazing. We've got a couple more minutes. Jason asked, and Jason, I'm gonna throw this question to Taylor specifically for her perspective, but we are gonna get more in-depth into prompting with Joe, who's coming up in a little bit. Jason asked: do you have any good prompts for AI to help with mitigation and restoration?

I personally don't prompt it in that way. However, we do have defined prompts that we have instructed our teams to use, specifically for our five functional departments: market development, sales, mitigation, emergency services, etcetera. What I encourage them to do, if they're unsure of the prompt, instead of "say this better," "say this nicer," "say this more firmly," which are all very high level, is to develop a persona. What I mean by that is, in your prompt: act as a proficient mitigation and emergency services professional who is triple IICRC certified, and write this email, reply to this adjuster in this way. It is already developing that persona for you: oh, okay, this is how I should respond using this persona that I'm adopting, essentially. So what I encourage people who are new to AI and new to prompting to do is just to develop a persona. Who do you want to be today? That's great. Thank you so much, Taylor.
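Taylor's persona advice is easy to turn into a reusable template, so teams don't rewrite the "act as..." framing from scratch each time. The sketch below just assembles prompt text; the persona wording follows her example, but the helper name, structure, and sample email are my own invention, not a prompt Southeast uses verbatim.

```python
def persona_prompt(role: str, credentials: str, task: str, context: str) -> str:
    """Build a persona-first prompt: tell the model who to be
    before telling it what to do. Purely a string template; pass
    the result to whatever LLM you use."""
    return f"Act as {role}, {credentials}. {task}\n\n---\n{context}"

prompt = persona_prompt(
    role="a proficient mitigation and emergency services professional",
    credentials="triple IICRC certified",
    task=("Reply to this adjuster professionally and firmly, "
          "citing the documentation already provided."),
    # Hypothetical adjuster email for illustration:
    context="Adjuster email: 'Please justify the three additional air movers.'",
)
print(prompt)
```

Compared with a bare "say this more firmly," the persona line gives the model a consistent voice and standard of expertise to write from, which is exactly the effect Taylor describes.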
This was so helpful. I think my biggest takeaway is that you need clarity before you can apply AI on top of it: get your ducks in a row, get your documentation in order, get your data organized. That's gonna be the foundation, because AI is not magic and is not going to magic-wand you out of bad habits and bad processes. So thank you so much, and I'm sure we'll talk soon. Thank you.

Our next session is with Nick Hindle. Nick is the chief information officer for PuroClean. Welcome, Nick. Good to see you again. PuroClean is celebrating its twenty-fifth anniversary and just surpassed five hundred locations across the US and Canada. And I just saw today that they've been named a twenty twenty-six top franchise by Franchise Business Review. Nick leads information technology at PuroClean. He is an experienced technologist with degrees in computer engineering and postgraduate certifications in software engineering, computer architecture, and operating systems. In addition, Nick has over fifteen years of experience serving the technology needs of restorers. He's held senior positions not only at PuroClean but also at Next Gear and at Encircle, and now, of course, he's the CIO of PuroClean. Nick is going to talk to us about evaluating AI vendors. Taylor looked at how to make sure you're ready and how to get your foundations right; Nick is now gonna talk about how, in his role at PuroClean, he evaluates AI vendors and makes the right decisions.

Thank you, Leah, and good morning to everybody. I'll actually just follow on from something Taylor mentioned, because I think it's really important: you must know what problem you're trying to solve. If you don't, you won't ever realize that you got to the end. I think that was a very good point that she wanted to highlight.
So let me just start off with this. There will be a hundred what-ifs that you go through, and it's very important that you don't get overwhelmed by trying to solve everything in one pass. I often use these three items when I'm going through any evaluation, and in general too. First, I like to say that every decision has a half-life. As we make decisions, we make them based upon the information we have, but we have to recognize that a decision only has a lifespan. If we find more information that negates that decision, we should change it. So what if I get more information while I'm evaluating a vendor? Take that information and make a better choice. Second, indecision is the worst decision. I learned this many, many years ago from somebody who said there are three kinds of decisions: good, bad, and indecision, and indecision is the worst decision you can have. As a leader within PuroClean, I'm surrounded by lots of people looking for direction, and indecision leaves people in an unknown state. They don't know which software vendors they should be working with, or whether they're the right software vendors. So sometimes we have to make decisions knowing that the decision may not be perfect, and I'll refer you back to the first item: every decision has a half-life. And the last one is: don't rush. When you're talking to your AI vendors, they will want to close the deal. That's their job. But your job is to make the best decision you possibly can within a reasonable amount of time, after knowing, going back to what Taylor said, what problem you're trying to solve. Once you have all of these things together, you can make a more informed decision. The important piece you need to think about is: where is your business?
And a decision for me, one that I make on behalf of PuroClean, may not be applicable to somebody like Taylor in her role at Southeast, or to somebody running a smaller local operation, or a larger organization. But here's how I would think about time frames. Twelve months out, you can generally predict what your direction is going to be. Thirty-six months out, you're thinking about growth strategies and ensuring your business is on the right trajectory: what kinds of things do you need to pivot? And seventy-two months out, it's probably very, very difficult to predict where you're going to be. So don't pick a partner that ties you into a timeline where you don't know what's going to happen. Okay? This piece is really important. Know where your business is, know where you're going, and know what is going to be necessary for you to be successful in each of those phases.

So these are questions you will just go through. Okay? AI is a technology; it's a segment of technology. And you're going to ask these questions: What should I buy? Who am I gonna buy from? Why should I buy it? But the important questions are: Will my life be easier? Will this help me grow? Is what's being told to me just a sales pitch, or is it reality? Can you show me? When you make a decision to follow through with a software vendor, that vendor needs to be a partner to you. If they cannot be a partner to you, they're probably not the right vendor to purchase from. Because when you have that relationship with a vendor, it's not that they just got you to sign an agreement. You need a partner as you go through that twelve-month, three-year, potentially five-year cycle. Okay? And the areas a restoration provider is probably focused on are the ones on the screen. I'll go back to what Taylor said: know your problem.
Know where you want to attack it, and do it incrementally. She actually said pick a department, and she's a hundred percent right. Pick the area you want to tackle first, be successful there, and incrementally propagate that success through the rest of the business. If you try to do everything, you will absolutely fail. You cannot do it all, and that's just not a sensible decision.

So, I'll give you an example here at PuroClean. We do evaluate all of our vendors, using some criteria around this. We have actually built, essentially, a vendor management program. We call it SPAR: Standard, Partner, Approved, and Restricted, and those are the different categories we place our vendors into. Whether you have a formalized program or not, you will probably go through an evaluation process for your vendors. That is vendor management. You may have a platform you do it with. You may do it on a piece of paper. You may sit around the table and say, what did you think? Thumbs up, thumbs down, let's go. But I would highly recommend that when you make a decision, you document it and you know the reason you went forward with that partner. So, again, here at PuroClean, we have a very formalized process, and we evaluate everything: what are their support capabilities, what do their road maps look like, what is their delivery, do we have not just testimonials but people we actually know who have been using the product, can we pilot it? Taylor is actually in a pilot with the Encircle team around scoping; I'm incredibly aware of what that functionality is, and I'm quite excited. When it comes time, I will probably pick up the phone and call Taylor and say, what did you think? Not, hey, Leah, tell me what you think I should be reading. I'll actually pick up the phone and ask.
You've got peers in the network, and you've got experience that is out there, and you should take advantage of it. And you should always, always, always make sure it's the right fit for you. It might be a great fit for Taylor, but not a great fit for Nick. Every decision has that half-life. Okay? But think about doing some actual vendor management: set the criteria, live by that criteria, and ensure your vendors meet that criteria. One of our fundamentals is non-negotiable: every vendor must be SOC 2 certified for us. If they aren't, we say, you've got a fantastic solution to a problem that we have, but you aren't certified, so I'm sorry, we just cannot move forward. Define those criteria and ensure your vendors meet them. Okay?

So, lastly, everybody should be able to go away with something, in my mind, so I've put up these items, which are what we think about. I have a wonderful team that helps ensure we can be successful, but we do follow these very simple, very useful items as we do an evaluation. What problem does it solve? That's the fundamental. If you don't know what problem you're solving, you cannot find the right partner to help get to the solution. Technology is changing so rapidly. It's wonderful; it's actually wonderful to see. The problem there is this: if I sign a five-year deal with somebody, in one year's time, am I already behind? And in five years' time, am I five years behind? Don't get tied into a lengthy agreement where you can see other opportunities run by you that you should be able to get access to. So don't sign a five-year agreement is the basic point there. This next one we often call the hidden cost.
When you take on a new technology, or you transition from one technology to another, there's a hidden cost: retraining, onboarding, getting everybody familiar with it, and ensuring it's actually gonna help you grow your business. A partner is very different to a vendor. A vendor sold you something and then said, you're on your own, good luck. A partner is somebody who will help you overcome the hurdles you will absolutely experience in the first three, six, nine months of that rollout. So make sure you pick not just on the technology, but on how that vendor will be a true partner to you in your business. And I am not saying, by the way, to change every twelve months, but you will know in twelve months if you've got the right partner, and the right partner will deliver year after year after year. So, as I said earlier, ask others. Don't just go by the testimonials; actually pick up the phone. I'm actually at the convention this week, and I ask other people in the same space as us, what do you think of this? We share that experience with each other so that we can make better decisions. Hey, I was looking at this tech. What did you think? I thought it was good, but it didn't help me solve this problem. And I can go, I actually don't have that problem; maybe it's a good fit for me. Speak to people, get their input, and make the best decision you can. And finally, the last one, I think: does this vendor support your growth? You may only be looking twelve months out, as I mentioned earlier, or thirty-six, or even five years. But does this vendor support a three-year growth plan for you? If they do, think about it in these terms. Are they a good partner, not just a vendor, a true partner? Do they have a product road map that aligns with where you are going? How frequently do they hit the milestones on that road map?
You will probably make a decision on a promise: hey, we will deliver X in twelve months. Okay, I really need X, so I'm going to sign the deal. What if they don't? When they don't, did they just negatively impact your three-year growth plan? So go back and ask others: how often did they hit the metrics around their road map? How often did they, in quotes, pivot? I love the phrase pivot. In some ways, I think of it as, yeah, we didn't deliver on time, so we're going to come up with a different reason. Pivot. I know, I've been there; I've said those words myself. But make sure that you've got a true partner. That's really the fundamental. And this is another phrase I like. I'm not sure who said it, but: any fool can make it sound complicated. If they cannot express it to you in terms that you understand and can comprehend, so that it fits into your business plan, it's probably the wrong partner for you. So if it doesn't make sense to you and you need more time, then wait. And if we go back to the three items right at the beginning: every decision has a half-life. You can make a decision, and if you are willing to make that decision knowing that you could get it wrong, but you're ready to make a better, more informed one as you learn more, that's good. That's what you should do. But don't be indecisive. Your team will not appreciate not knowing what the direction is and the time frame for it. And then the last one is don't rush. If it doesn't make sense to you and you need more time, then wait. Find a partner, not a vendor. Thanks, Leah.

Thank you, Nick, and I very much appreciate you taking the time out of your day. I know you're right in at the Verisk show right now, so I'm excited to hear what you learn there. I do have a couple of questions for you. You talked about a non-negotiable at PureClean being SOC 2 compliance for a vendor. Can you explain what that means to the audience?

Sure.
So SOC 2 is essentially a demonstration of a commitment to security. Basically, that's it. SOC 2 is an AICPA compliance certification that demonstrates that you meet certain elements within your organization. This is important because for the clients that we have, predominantly TPAs and insurance carriers, it's a non-negotiable. They require PureClean to be SOC 2 certified, and we are. But there's always that element, and the most fundamental one is around security. They will always ask this question: is my data secure? That's it. Is my data secure? That's going to be one of the fundamental questions. Now, our franchisees are putting data into a partner's platform, and I am accountable for that platform. So if there's a breach in that platform, I need to know, and I need to know that that partner will communicate. Things happen; they always do. But any platform should have a SOC 2 certification for us. It's a non-negotiable element, and it's for that reason: our clients expect it from us, and we expect it from our partners. And ultimately, policyholders' personal data is held in those systems, and it needs to be secured.

And, Nick, how have your requirements around SOC 2, or around your vendor selection generally, changed or shifted with AI vendors and AI solutions?

The fundamentals have not, because as those underlying technologies change, the requirements in SOC 2 in particular are changing to support that as well. We get audited every year. Our partners get audited every year, and they get audited on different things. And there are five principles; I can't remember them all. Perhaps I should. But the important thing is: are you continuing to keep up with the changing pace of technology and also the changing pace of regulation? That's really important as well.
So it doesn't change as a fundamental, but the granular elements within those components will have to adapt over time.

One more question I have for you, Nick. At PureClean, you have a lot of relationships; I mean, you're at the Verisk show right now with TPAs and carriers. What's the temperature like between restoration shops and TPAs and carriers around the use of AI? Is everyone cool with it? Are there any red flags or risks that carriers and TPAs are looking out for, anything that makes them cautious or wary about a restorer's use of AI?

I think the majority are very excited by it, and excited by the fact that it can bring a lot more efficiency. As Taylor was saying, the collection of notes, having more comprehensive information collected in a more rapid cycle, I think that's really good. The areas of technology where they are going to be very interested are around scoping and estimating. Having the ability to get better-quality estimates is really important to them, and it's important for this reason: the efficiency of their claim cycle, their claims review, is increased. When that efficiency increases, their costs actually go down. And believe it or not, insurance carriers like to have money. So anything that helps them keep money in their pocket is a good thing, and anything that helps them increase value and reduce cost at the same time is also a good thing. So they're actually very, very interested in certain areas, not just image recognition and things like that, but areas around estimating contents and so on, so that you can be far more efficient in that collection of information.

Okay, great. We've come to the end of our time, Nick. Are there any final words, anything you want to make sure people take away from your conversation today?
I think, as I say: be willing to make a decision, and ensure that you get a partner, not just a vendor. Make sure that you know what you're getting into. And as always, I want to thank Leah and the Encircle team. What a tremendous team they are. If you have any questions, you can get those through Leah to me, and I'll be happy to respond as well. Appreciate it.

Thanks so much, Nick. We'll talk soon. Next up, we're going to move into a more Encircle-specific conversation with Ronuk. Ronuk, I'm terrible with names; I have a hard-to-pronounce last name myself, so I just never even try. Is it Raval or Raval? How do I say that?

Raval, I think, is what I go with.

Alright. Well, just a quick intro of Ronuk. He is the chief technology officer and cofounder of Encircle. Ronuk has been programming and building for as long as he can remember. He's a graduate of a very prestigious program, the computer science program at the University of Waterloo up in Canada. Immediately after graduation, Ronuk met Paul Donald, who's the CEO of Encircle, and they founded Encircle in August of twenty twelve. So for nearly fourteen years, Ronuk has been overseeing the architecture, development, and engineering of the Encircle app. Over that time, Ronuk has seen the company grow from the three employees working in Paul's garage to where we're at now, which is over a hundred and twenty-five employees, and from our early beginnings as a contents inventory tool to what Encircle is today, the industry's most trusted application for field documentation. And now Ronuk is leading the team to build Encircle AI, which is going to bring our team and our platform to the next level, transforming it into greater workflows and greater efficiencies for the restoration industry. So Ronuk is going to take over, and he's going to talk to us about how to "think."
And I put "think" in quotes because that's part of Ronuk's presentation: to think like AI. So I'll pass it over to you, Ronuk.

Cool. Thanks, Leah, and thanks to everyone for coming to this webinar and giving us this platform to do a vendor-neutral presentation and really try to spread good information across our industry. So, yeah, I wanted to talk about Think Like AI. Really, my life has been AI for two, two and a half, maybe three years at this point, and I've been heavily immersed in it. What I wanted to do was try really hard to build an intuition for how AI works and give you a working knowledge so that you can effectively use, deploy, and debug AI, and then maybe even understand what kind of struggles you might run into scaling AI as you scale your deployment across your organization. There's a thing that Nick said in the previous presentation that I'm going to harken back to. He said something along the lines of: any fool can make things complicated, and if they can't explain things to you in a way that you understand, then what do they know? I'm paraphrasing there. But hopefully, by the end of this presentation, I won't be the fool; I will be able to explain things in a way that you can understand. Very quickly, we're going to cover analogies to describe LLM behavior, how LLMs actually generate a response, how errors creep in, and what you can specifically do about it. And the last thing to highlight on this slide is that picture there with the robot. The pictures here are going to be essential to my explanation, so pay attention to them. So let's jump right into it. To start: my presentation is titled Think Like AI, but that's a lie. It uses the word AI, but AI means a lot of things. Specifically, I want to talk about large language models, LLMs, which are the key innovation in this wave of AI.
Someone had asked about what kinds of AI we're talking about, and mentioned ChatGPT and Gemini and Grok and Claude. Those are examples of the low-level models where we've unlocked new capabilities in this generation of AI. They're driving a lot of innovation, and those are the large language models. There are also applications; we've talked about scoping, which is something I've been very deeply involved with. Applications can use other kinds of AI: perhaps some older techniques, perhaps some techniques that are more reliable or more robust, things like classifiers. I'm not going to get into the technical nitty-gritty here, but I want to talk about the kind of AI that is currently dominating the news cycles, which is these large language models. So: not all AI. Let's talk about LLMs specifically. The title also lies because it uses the word "think." One of the key takeaways I want people to have is that LLMs don't think, at least not the way you do. We use words like think, reason, and plan because they adequately communicate the idea or the approach the LLM is taking. So when we say, oh, the AI is breaking down the task into smaller subtasks and it's making meaningful progress against those subtasks, we liken that to reasoning, because that's how a human would reason through a complicated task. But that's an analogy, and analogies are useful, but they have limitations. Right? Taylor, in her presentation earlier, mentioned one tactic: give the AI a persona. Tell it, hey, you are somebody that has ten years of restoration experience. And that works; we found that it improves the quality of the AI response. But it's absolutely not how you would talk to a human.
I'm often reminded of The Matrix scene where he downloads all of the information and then comes out of it with "I know kung fu." That's how the AI is working there, but it's not how a human would be cast into that role. A human would absolutely say something like, no, I don't have ten years of experience; you can't just tell me I have ten years of experience. But AIs are shameless. They will absolutely take on that persona, and off they go, for better or for worse. So the takeaway here is: be careful when applying the analogy the other way. We use the words think, reason, plan, but if you approach your model with the way humans think, reason, or plan, and then try to use that to predict AI behavior, you're going to run into some issues. So be careful. Alright. But analogies are useful; they're a way to make technical content more approachable. So I'm going to start with my own analogy, and I want to talk about chat. Chat is the fundamental interaction paradigm that these large language models operate under. Essentially, they take the dialogue that has been presented so far, they see the very last thing that the user asked them to do, and then they generate a response: a completion of what the next sequence in that dialogue should be. If you do that back and forth, you get a chat. And the analogy I want to drive home is that any single chat response is kind of like asking the large language model to cross a river. The picture here has the robot on one riverbank, and it needs to cross this river. It can take some steps along the way, there are some stones there to help it along, and it's trying to get to the riverbank at the bottom. The LLM needs to choose a path and which stones to step on along the way to cross this river. So a chat is a river. If we go to the next slide: if a chat is a river, a single step is a word.
Technically, the jargon would say it's a token, but I'm going to use "word" here for simplicity. Every step that the large language model takes is a permanent part of the output: that word is now part of the output. And most importantly, a large language model cannot step backwards, and you're going to see how that single insight explains a lot of the errors that creep in a little later in this presentation. So a chat is a river; a step is a word. The width is the complexity of the task. A simple task is a small stream: models can often cross it in a single step, and there's no harm if the LLM stumbles a little or steps in the water, because it can still make its way across the stream. Simple tasks might be: summarize this piece of text, or tell me the sentiment of this piece of text. Is it positive or negative? What does the user think? Those are simple tasks for LLMs these days. Whereas a complex task, and we talked about scoping and estimating, that's a complex task, is a raging torrent. An LLM might have to take many steps to cross that river, and if it goes down the wrong path, it may not be able to recover. So a chat is a river, steps are stones, and task complexity is the... whatchamacallit? I lost my train of thought. The width. There we go. And then lastly, model quality is the stone size. Better models have bigger stones, and bigger stones make it easier to cross wider rivers. The rate of model quality improvement has been accelerating. It seems like every day we hear of a new model, and it's really hard to keep track of the current state of things. There's Gemini 3, which used to be 2.5. There's Claude Opus 4.6, which is somehow different from 4.5. There's who knows what. They're coming out faster and faster, it seems like every week. But they are improving.
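The word-by-word, no-stepping-back behavior described here can be sketched as a toy Python loop. Everything in it is illustrative: the tiny next-word table stands in for a trained model's learned probabilities, and a real LLM scores an entire vocabulary of tokens at each step, but the shape of the loop is the same. Pick a word, append it, never remove it.

```python
import random

# Toy next-word table. A real LLM learns scores over a full vocabulary;
# this tiny dict just makes the loop shape visible.
NEXT_WORD = {
    "the": ["first", "river"],
    "first": ["man"],
    "man": ["to"],
    "to": ["walk"],
    "walk": ["on"],
    "on": ["the-moon", "the-sun"],
}

def generate(prompt_words, max_steps=10, seed=0):
    """Generate word by word. Each chosen word is appended permanently;
    there is no operation here (or in a real LLM) that removes one."""
    rng = random.Random(seed)
    output = list(prompt_words)
    for _ in range(max_steps):
        candidates = NEXT_WORD.get(output[-1], ["<end>"])
        word = rng.choice(candidates)  # one irreversible step across the river
        if word == "<end>":
            break
        output.append(word)
    return output

print(" ".join(generate(["the"], seed=3)))
```

Run it with different seeds and the walk may end on "the-moon" or "the-sun": once the loop has committed to "walk on," some continuation must follow, which is exactly how a completion like "the first man to walk on the sun" gets locked in.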
What was hard for a model a year ago might be easy for a model today: bigger stones. Finally, the last thing in this analogy: good guidance is kind of like prompting. A good prompt nudges the LLM into picking a better path, avoiding pitfalls, not going down wrong routes, recognizing that a stone that might look good in the moment is actually not the way to go. In the picture here, there are some signposts that say this stone's wobbly and this stone's slippery, and that's the value of a good prompt. It reinforces what good steps look like, so that you can get some predictability out of the large language model. Okay, that's the analogy. Let's talk a little bit about how things go wrong. The word hallucination gets thrown around a lot. In reality, as the models are getting bigger, hallucinations are becoming less and less likely. It's not so much that models hallucinate; it's actually what we'll cover on the next slide, misalignment, that's becoming the bigger problem. But to cover this briefly: large language models are trained to take the next step even if they are at a dead end. Once they've taken a step, once they've put a word in the output, they are trained and incentivized to generate the next word. They can't go backwards. I have an example here of a sentence fragment: "the first man to walk on the sun." Let's say the LLM has already decided to take those steps, to say "walk on the sun," even though it's impossible to walk on the sun. It's very hot. The first completion that I got (and I had to go back to a model from almost two years ago to get a true hallucination) was "Arthur Burns," a totally made-up name, a totally made-up person, and then it concluded with "in nineteen sixty-nine," which is the moon landing.
So it made up a person, which is a hallucination, and then it tried to tie it into a real fact, but in a completely wrong way. That's a true hallucination. But as the models have gotten bigger (the other two examples are from much more recent models), you can see that the second model tried to walk it back by putting out more words. It's backpedaling a little by saying, oh, actually, no one can walk on the sun; forget I ever said that. And the last example goes in the direction of, I'm just going to make this sound like a joke: they did so at night, when the sun wasn't there. So be it. That's hallucinations. Let's talk about misalignment. This connects really well with something Taylor said in the AI readiness test: if the process isn't documented, then the AI will guess to fill in the gaps. In this example, I've asked the AI to tell me a good story in one sentence, and the AI spat out a story. I'm not going to read it all (you can), but I personally don't think this is a particularly good story. I don't think it's compelling; it's not what I look for in a story. And by virtue of me just relying on the LLM to know what I mean when I say "tell me a good story," the LLM falls back to whatever "good" it was trained on, and that's not necessarily what you're looking for in "good." So by virtue of not having a process, by virtue of not giving the AI a persona and concrete examples of what you expect in the response, you're relying on the training data (and these off-the-shelf LLMs are trained on copious amounts of data) to fill in the gaps, and those gaps might not be what you expect in the output. That's misalignment. And the last part of this presentation: what you can do about it. Specifically you, the individual interacting with this large language model.
You might be trying to automate a part of your workflow, or you might be trying to come up with a prompt that you could share around your organization to build a repeatable or automated process to do things better. I will caution, though, that the "you" here is easier as an individual: the amount of engineering and testing that you'll have to do to scale it up, to navigate the rivers that have to be crossed over and over again consistently, becomes harder and harder as you try to scale the use of any AI tool across an organization. So this is really for the individual you, not the collective you. But what can you do? You can make better prompts. You can give better guidance. There are some best practices; I have listed some bullet points here of things you can Google to make better prompts, and there's a lot of guidance out there. But in summary: giving the AI examples of dos and don'ts, good examples and bad examples of the kind of outputs you're looking for, is really effective. Giving the AI a persona, as Taylor put it, or what's called a mixture of experts, giving it a set of personalities to adopt when formulating the response, works really well. And then, putting ground truth in the prompt itself. So saying: here are the relevant sections of the IICRC S500 that you should consider. Instead of relying on the AI to pull it out of its training data, putting it in the prompt itself is very, very effective. The last tip I'll leave you with is to reframe the task in ways that you know are represented more in the training data. Like I said, these AI models are trained on datasets essentially the size of the Internet. And across the Internet, if the standard unit of drywall is four-by-eight sheets, ask your question, align your prompt, with what would frequently show up in the training data.
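Those three tips (examples of dos and don'ts, a persona, and ground truth pasted straight into the prompt) can be combined mechanically. Here is a minimal, hypothetical sketch; the function name, example text, and placeholder reference are illustrative inventions, not anything from Encircle or the S500:

```python
# Sketch of assembling a prompt that bakes in a persona, concrete
# good/bad examples, and ground truth pasted directly into the text.
# All the strings below are placeholders for illustration only.

def build_prompt(persona, examples, reference_text, task):
    parts = [f"You are {persona}.", ""]
    parts.append("Reference material (treat this as authoritative):")
    parts.append(reference_text)
    parts.append("")
    for good, bad in examples:
        parts.append(f"Good output: {good}")
        parts.append(f"Bad output: {bad}")
    parts.append("")
    parts.append(f"Task: {task}")
    return "\n".join(parts)

prompt = build_prompt(
    persona="a restoration project manager with ten years of water-damage experience",
    examples=[("Category 2 loss; affected drywall flagged for removal.",
               "Some water damage was observed.")],
    reference_text="[paste the relevant standard sections here]",
    task="Summarize the attached moisture readings for the adjuster.",
)
print(prompt)
```

The point of assembling prompts in code rather than retyping them is repeatability: everyone in the organization crosses the river with the same signposts.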
So in that example, don't ask for the drywall in square meters. Alright. And then the last set of things to talk about. As I said earlier, the complexity of the task is the width of the river. What you can do is break the task down into smaller pieces: make the task itself smaller, make the river smaller, to increase the probability that the AI can cross it reliably and correctly. Planning is the idea that you ask the AI to generate a plan first. Instead of it going off to generate the response, you iterate on the plan: you can review it and make adjustments before it goes off and starts to do the thing. This is also where a human in the loop comes in, which is something we've spoken a lot about; if you come to Encircle webinars, you will hear "human in the loop" a lot. And reasoning, aka thinking, is something that is built into newer models: as part of generating the response, they have an implicit step where they do subtasks. Usually there's a little dropdown you can unfold to see what intermediate steps it's taking. It's internally deciding how to break the river into smaller rivers to cross, and you can see how it's crossing those smaller rivers to come up with the final response. Okay, I think that's all the slides. Let's do a quick recap. Analogies aren't perfect, but they are useful for building an intuition. LLMs generate responses word by word; they can't go back, but they do have an overall goal in the response. They take into account what you've said so far, and they're getting to a point. Hallucinations and misalignments become more likely the more complex the task gets, and the more divergent the responses will be, especially at scale.
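The plan-first, human-in-the-loop pattern can be sketched as two stages: one small call to produce a plan a person can review, then one small call per approved step. The `call_llm` function here is a made-up stand-in for whatever chat API you actually use, so treat this as a shape, not an implementation:

```python
def call_llm(prompt):
    # Stand-in for a real chat API call; returns canned output for the demo.
    if prompt.startswith("PLAN:"):
        return ["List affected rooms", "Summarize moisture readings", "Draft the scope"]
    return f"(model output for: {prompt})"

def run_with_review(task, review):
    plan = call_llm("PLAN: " + task)               # small river #1: just the plan
    approved = review(plan)                        # the human-in-the-loop checkpoint
    return [call_llm(step) for step in approved]   # one small river per approved step

results = run_with_review(
    "Write a mitigation scope for a Category 2 water loss",
    review=lambda plan: plan[:2],  # here the reviewer trims the plan to two steps
)
for line in results:
    print(line)
```

Each call crosses a narrower river than "do the whole scope at once," and the review step is where a person catches a bad plan before any downstream work happens.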
Even if you get it working for a small set of examples, if you try to deploy it across your organization in uncontrolled environments, you really have to be concerned about that. And then finally, we covered some tactical approaches to get the most out of your usage of LLMs. That's all for me.

Thank you so much, Ronuk. There are a couple of questions that have come in. Carlos, I'm going to ask yours first. When using LLMs, is it better to create an agent, feed it IICRC training data, and use that agent to continuously answer questions until the next big update or version, or to use specific prompts in a single chat interaction with the LLM?

It depends on the use case. I would say the big difference between what I consider an agent and a workflow is who's driving the conversation. In a workflow setting, it's you, the human, driving the conversation to a specific goal. Whereas in an agentic setting, the agent takes over and decides what to ask you next. It's pulling data out of you, but also out of various tools it may have access to. So the first level of answering this is: who has control here? Second to that, I would say: ground the LLM with whatever standards or other material applies, whether that's SOPs, rules you follow in your organization, or just things that are unique to your context. The more you can put that front and center, the more effective the response will be.

Great. As CTO at Encircle, I think you're the most qualified person to answer this next question, and I'm going to guess the answer is "it's complicated," but I'll let you answer it. Has Encircle selected an AI platform to work with, or are we, the restorers, expected to choose one ourselves?

No. We would not expect our restorers to bring their own AI to it.
I think the magic of Encircle is focusing really deep on what the people in the field need, and knowing that restoration contractors are there for a job to do, which is not wrangling with technology. I've been there since day one. Things like offline mode, which was unheard of when we built it, are part of our DNA, because we recognize that fieldwork is messy and we can't make our users jump through hoops because of technical constraints. So having them try to bring their own AI is something I would certainly be opposed to. We would rather do the hard engineering work ourselves and then package it up in a way that's easy to consume.

And I think you've answered it there: Encircle isn't choosing one specific LLM to work with. We have built what we're calling Encircle AI, a fully engineered system of a bunch of different AI tools that we can reach for to solve problems and provide applications. Is that fair?

That touches on one of my first slides: we will use the best tools to solve the specific problems that we have. Sometimes that means working with a vendor, sometimes that means going it alone, sometimes that means combining two approaches into a hybrid. I will say, and Diana is going to talk a lot more about this, that we understand we work in a very regulated industry. Confidentiality and privacy are top of mind for everyone. We are SOC 2 certified, and all of that. So even the vendors that we choose have to go through a rigorous selection process.

Okay. I think I have time for one more question. Zach asked: if his business has very clear SOPs and processes, what are some practical ways he can insert AI into that workflow to automate a process?

Oh, it depends on the specifics, really.
There's so much noise in terms of what people will tell you, all sorts of ways it could work. There are low-code builders these days where you can try to talk your way through a problem. The models keep getting better, and the model companies keep trying to come up with new ways to integrate themselves into your workflow; most recently, Claude Cowork comes to mind. So it's very dependent on the exact process itself. Oftentimes, the challenge becomes not that the AI can't do it, but how to connect the AI to the relevant systems in a way that's secure and not patched together. And that's also very important. So, yeah, this would be a conversation over a beer or something, I think, not a webinar.

Zach, I can give one specific example that Taylor mentioned at the beginning: what Southeast Restoration is doing. They looked at a very specific problem, which was creating a summary of a walkthrough: dictating a damage description into their phone and then injecting AI into that very specific workflow to create an output that saves them time. And coincidentally, that's a tool we happen to be about to release in the Encircle platform, where you can summarize videos to really reduce that time, with things like scoping and estimating as well. Ronuk, this was amazing. I feel like there are a lot of people on here who would love to have a beer with you and pick your brain about AI and LLMs, so thank you for sharing your expertise. Laura, you asked a question about hands-on training. We could definitely get back to you about specific Encircle training and how to use Encircle. We have our customer success managers. We have weekly live training sessions.
We have our learning hub, where you can learn asynchronously. So we have lots of ways to get you trained, and we'll reach out, Laura. Alright, Ronuk, thank you very much, and I'm sure I'll talk to you tomorrow.

Yeah. Thank you.

Great job. Alright. Next, I'm going to bring in Joe Ledbetter. He is a restoration consultant and AI architect. He's also a thirty-year restoration veteran and a self-taught agentic engineer. Agentic engineer: those are not words that I knew went together until the last twelve months, but here we are. For the last year, Joe has been working alongside Ronuk and our entire development team at Encircle, creating and building Encircle AI, a fully engineered, enterprise-grade engine that's going to power some truly game-changing features and applications over the next months and years. So after a year of hard work, we are at the precipice of seeing some of the first applications emerge from Encircle, including automated mitigation scoping, instant item descriptions, and the video summarization I just mentioned. These are tools that will save restorers hours of time without disrupting how they work in the field. And when I think of the Venn diagram between somebody who is really passionate about restoration and somebody who's equally passionate about AI and technology, Joe's Venn diagram is a circle. They overlap completely. Joe is so passionate about restoration and bringing AI solutions into restoration workflows, and he's going to share the restorer's guide to AI, with lessons from thirty years in the trenches. So, Joe, I'm going to hand it over to you. You are muted, Joe. We've got an audio issue. Say it again. Go ahead.

All set? There we go. Thanks again, and welcome, everybody. As Leah said, I'm a thirty-plus-year veteran in restoration, so I know the pains that all of you deal with intimately. They've kept me sleepless at night, much like many of you.
And I did that for basically my entire life, up until AI started to become commercially available. I had played with machine learning and some other automations prior. But around twenty twenty-one, twenty twenty-two, all of a sudden we had commercially viable LLMs. And I was like, wow, these things are really cool. I wonder if they can help solve these problems that we have as restorers. That led me on a massive journey to where we are today. I started off as a hobbyist, and now I'm one of the key people responsible for developing our AI tools here at Encircle. But, man, I learned a lot of things along the way, and they were really painful. And as a restorer, I know that all of you are going to AI tools trying to solve these problems. I know you are; that's what I did. And most of you, if you haven't already, are going to hit some of the same failures that I found, that I struggled with, that led me into difficult places that didn't actually create a solution. They created more problems. And so that brings us to the point now, in twenty twenty-six, where, as Leah mentioned, I'm a self-taught agentic engineer. No formal education. But I took the problems that we have as restorers so seriously, and I wanted to solve them, and AI gave me the capabilities to start solving them. Funny thing, though: not being a technologist, I had no foundational understanding of anything Ronuk spoke about in his presentation. I was just trying to cross the river, as you can imagine, with a prompt. Here I am trying to solve this big, complex thing, how do we develop a scope, and I was trying to do it with a prompt. And just so you know, that doesn't work. At all. So I'll save you all the hours and hours of trouble.
You're not gonna build a scope with a prompt. It's not possible. It's not even attainable by the best technologists. Right? However, there are a whole lot of great things you can do with prompting, very specifically. But here are the big things I was failing at. First and foremost is leakage of data. When you're using the free tools, and even sometimes the paid tools, you're exposing your customer's information if you start uploading things like estimates. If you start uploading your SOPs, well, guess what? I hate to say it, but they're no longer your SOPs. Now that ChatGPT has your SOP in it, any restorer who goes searching for an answer about anything restoration, your SOPs start to show up. And here's how I know that. I was trying to do a simple task where I wanted to compare two estimates. Right? No big deal. I put them into ChatGPT, and out came what, at the time, was a great output. It wasn't until about a year later, while I was doing other research, that that file showed back up in the output of an unrelated search. That scared the crap out of me, because I instantly realized, whoa, this isn't like anything we've ever used. You can't just put stuff in there. Another main issue, and we're gonna talk about it in a minute when we get into prompting, is that you start to build these things called guardrails. And when you build so many guardrails into your prompt, meaning levels of instruction, you actually end up making a box. You don't know it, because you're restorers, not technologists, so you don't really realize what you're doing, but you make this box for the AI to work within, because you've given it all these parameters. But now the AI can't actually do what it's intended to do, and your output will often be a failed output.
It may look and read and appear correct and seem logical, but it's actually not the right answer. And that's a big, big problem that, again, as a restorer, you wouldn't even understand: you've put all these instructions in place, you think you're getting better answers, and you're actually getting farther and farther from the truth. That's a really big issue, a really big mistake that I made myself. So, again, when we get into prompting we'll talk about it, but don't use too many guardrails and instructions, because it'll actually work against you. The other thing that's really scary is, and listen, I'll just be honest, I'm a horrible speller. You know what that means? That means I don't write that well. I'm not a good writer. So if I read something that is well written, it carries a level of authority in my brain. Like, oh, this was clearly written by somebody intelligent, or something intelligent. And that builds a blind trust when you're using large language models, because large language models write better than all of us. No competition. And so when you start to ask it things, the responses seem really good, amazingly impressive, and you're like, dude, if we don't get on this AI train, we're gonna be left behind. That's the sentiment it leaves for all of us. Right? But the problem is, most of us as restorers are not good writers. And so it was really easy for the LLM to fool us all and say, hey, look at how well written this thing is. It's this blind trust of assuming that good writing equals correct facts, and that is the farthest thing from the truth. The other failure I had was the DIY tech. I appreciate, more than anyone else, all the restorers who are trying to use AI right now and trying to do it themselves. That was me five years ago. I completely understand that.
However, if you're building it yourself, here are the things that you don't know that you don't know. You don't have a foundation in technology. You don't actually understand how the LLM operates. And more importantly, you're missing a whole lot of education to actually build the thing you're attempting, and something that Nick had referenced: most restorers, when they go to do this, don't really understand how to explain, or even whether they've identified, the problem itself that they're trying to solve. And when you don't know the problem that definitively, when you start to build anything, not only are you lacking a foundation in technology, but you're lacking a foundation in the actual problem. And you may say, well, but this is the problem that I deal with. Sure. You may internally understand the problem, but are you able to communicate it when building technology? Right? These are the main mistakes that I kept making, not being a technologist, but being a restorer. These are the same failures that you are also going to run into. And I just wanna highlight the data leakage thing for a brief minute. Don't be the restorer who ends up in court a year from now because you put Mrs. Wethead's contents, information, and file into an open LLM. I hate to say it, but one of you is going to be that victim. So please don't. Just don't put data that's not yours into an LLM that you don't have control over. Please don't. With that being said: prompting. Right? Let's talk about prompting. Well, prompting, truthfully, was twenty twenty-four. What I mean by that is the LLM models are advancing at such a rate that I would be failing you if I sat here and said, hey, here's how we build great prompts, guys, and you have to have a role, and a definition, and an explanation of what you expect as the output, and so on. No.
That's an entire class and skill set that, as a restorer, I don't encourage you to go develop. The LLMs have advanced far enough today that the best way to build a truly good prompt is to ask the LLM to build you the prompt. Don't try to become a prompt engineer. But here's the secret sauce. When you're asking the LLM to build you the prompt, I don't want you to type. I do not want you to type. I want you to use your voice and talk through the prompt, talk through what you want built. And the importance of that is context. Context has far greater value than any sort of prompt engineering you might come up with on your own. See, it's the context that gives more depth to the problem you're trying to solve and the solution you're trying to create. So how do you build a better prompt? You ask AI to build the prompt. You give it some basic information: here's what I'd like the output to look like, here are the challenges that I'm faced with. And then give a verbal dump of information. Don't hold back, and don't be afraid that you said things out of order. AI doesn't care. But the context it gets from your voice is far greater than anything you could possibly think to type. With that being said, this methodology allows you to be the editor and not the author. So instead of you trying to figure out the right prompt, you just get to look at the output it gives you and say, I like this output. I think this is a really good prompt. Right? And then iterate. And here's another really good tip. When you wanna make the output of any prompt or use of AI better, I'm not telling you to threaten the AI, but here's a good thing to say in your conversation with an LLM: hey, I need you to do a better job at what you're doing, or I'm just gonna go use ChatGPT. Or if I'm using Gemini, right, I'm gonna position it against another LLM.
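To make the "ask the AI to write your prompt" idea concrete, here's a minimal sketch. The function name and the prompt wording are illustrative, not any particular product's API; the point is simply that your task, an example of a good output, and your raw spoken context all ride along with the request:

```python
def build_meta_prompt(task: str, good_output: str, context_dump: str) -> str:
    """Assemble a request that asks the LLM to write the prompt for you.

    Instead of hand-engineering a prompt, you describe the task, show
    what a good output looks like, and paste in a raw voice-to-text
    dump of context. The model writes the prompt; you act as editor.
    """
    return (
        "You are an expert prompt writer. Draft a reusable prompt that I "
        "can give to an LLM to perform the task below.\n\n"
        f"Task: {task}\n\n"
        f"What a good output looks like: {good_output}\n\n"
        "Raw spoken context (unedited; use what's relevant, ignore the "
        f"rambling):\n{context_dump}\n\n"
        "Return only the finished prompt."
    )

# Hypothetical usage: paste the result into ChatGPT or Gemini, review the
# prompt it writes back, then iterate as the editor, not the author.
meta_prompt = build_meta_prompt(
    task="Draft a daily drying-progress update for an adjuster",
    good_output="Three short paragraphs: readings, actions taken, next steps",
    context_dump="okay so we pulled two dehus yesterday, subfloor in the "
                 "master is still reading high, carrier wants an update...",
)
```

The voice-to-text dump goes in verbatim; cleaning it up first just throws away context.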
Now, I know on the surface that sounds really infantile and silly, but it's not. And let me explain why. All LLMs, all of them, were trained on human-created information. Everything that lived on the Internet, everything in books, everything in music up until a few years ago was created by humans. And you know what comes out of all of that content? Human psychology. And what do humans have? They have a thing called fear. Now, I'm not saying that the LLMs have emotions or feel fear. What I am telling you is that when you bring up that you're gonna go use a competitor, you're basically using psychological warfare on the LLM, and it picks up on it. And the output is actually better. I would encourage you to try that. But more importantly, when you're working with prompts and building this stuff, do not use the free versions. Please only use paid versions. That's a good way to protect your data, protect your SOPs, protect your proprietary processes. Right? There's just a simplicity to using the paid version. It's worth it. The challenge is, how scalable does any of this even become? Someone in your operation could write a really great prompt based on the instructions we just talked about. But is that even scalable? What are you gonna do, share the prompts across all your technicians? You can't get your technicians to take pictures. You think they're gonna store prompts and know when to use them? Probably not. Prompts just don't scale. They work for a one-off task for an individual. Right? But, again, this depends highly on user skill: an individual prompt versus a business application that upskills your entire team. A one-off prompt has a high risk of hallucinations, as Ronuk mentioned. And the worst part about these hallucinations: Ronuk's examples were very obvious. Nobody's walked on the sun. Right?
Like, we all kind of understand that. Here's the hallucination that none of you are gonna catch: when it starts speaking policy language, somehow, some way, in the assessment or the email that you wanted it to write, and you read it and you're like, oh yeah, that's good. No, it's not. It's total garbage. It had no reference for it. That's where you have issues, unlike an application layer for a business, where the validated process becomes repeatable and you get good results. A prompt can't scale, but a process can scale. Right? The other thing individual prompting is good for is creative writing. It's good at creative writing, but don't be the restorer who uses ChatGPT to write an email response to the adjuster, attaches the file, and sends it off. Wonder why? Because that response that you had ChatGPT write reads as a very professional document, unlike the unsupported file that you just sent to the carrier that's missing pictures, missing two days of readings, missing context. The adjusters will see that, and they're not dumb. They're gonna be all interested in your narrative because it was well written. And then they're gonna crack open your file and be like, what? This doesn't match. And now your file just goes away. It just gets set to the side. Be very careful with how you're using AI. Right? Because it can be a benefit to an individual, but it's almost impossible to scale those benefits across the workforce unless there's purpose-built architecture. That is the key to repeatable, successful execution of an LLM. There's an architecture involved. And for all the restorers listening out there, we're not talking about the architect that builds the home. We're talking about technology architecture. Right? And the technology architecture that has to take place is something far beyond most restorers' understanding.
But that's what's required to make something work in your enterprise for everybody. It's very different from a single standalone prompt. That's why prompts can be helpful at solving a problem. But look at the problem of scoping. It's a thousand problems you have to solve, so a prompt's not getting you a scope. I wanna reiterate that, because I know so many of you are out there saying, hey, write me the scope, and you're trying so hard, and you're getting fifty percent hallucination, and you just don't realize it. In order to get complicated things done with AI, there must be an architecture that the operation runs through. Big takeaways at the end of the day: the LLM is a writer, not a restorer. You are the professional. You are the ground truth. Ground truth is something we speak about in technology as a foundational truth. Right? You, the restorer, are in the field. You know what is real and what is not. AI does not know what is real and what is not. It can write well, but, again, as Ronuk points out, it doesn't think and it doesn't reason, for the most part, as you would understand it. Right? The liability is real. Using AI actually increases your liability. Why? Because you as a restorer already carry an enormous amount of liability. Every time your technicians leave the building, it starts from when they leave the building till they get back. They endanger the entire company with liability nonstop. Right? Raise your hand if you've had a technician get into an accident with a company vehicle. Raise your hand if you've had a technician talk about mold to a customer when there was no mold, or maybe there was mold, but you didn't really wanna talk about it. We are exposed to liability situations nonstop. If you think that using AI is gonna help you there, I caution you.
It's totally gonna create more liability if you don't have somebody, like a Taylor, at least understanding the technology within your business. So that raises another question. By a show of hands, how many of you are ready to hire another employee just to maintain technology? Yeah. Probably not. Right? So, again, you really need to think through your use of AI. And the last thing that I wanna bring up, and this is me contradicting myself a bit, is build versus buy. Taylor's company has spent an enormous amount of money to have somebody as smart as she is help create clarity in their technology and their company. Right? Purple Clean has a CIO dedicated to the technology for their company. These are big organizations investing a lot of money to do things the right way. Okay? But for the majority of restorers out there, that's not us. We're not the big hundred-million-dollar companies. Right? We're all doing a couple of million. That's the average restorer at best. I would encourage the average restorer to be a restorer and not a technologist. You hire subcontractors for a reason. If you wanted to build a true solution, that true solution is gonna take hundreds of thousands of man-hours to actually build. It's the equivalent of the DIY method for homeowners. By a show of hands, how many restorers tell their customers they could do it themselves? Right. None of you do. That's not a thing. Why? Because we take pride in the fact that what we do is complicated and difficult, and it is. A homeowner doing this themselves is probably not the best idea for the average homeowner. This applies directly to technology. As a restoration contractor, you shouldn't be seeking out how to build technology. Right? And I think that's important to understand. And I'm not discouraging any of you from experimenting or trying to use AI.
In fact, I want you all to go and use it as much as you possibly can, but I want you to be cautious. I want you to be extremely cautious and understand that the outputs may not be right, and often won't be, that it's gonna be difficult to scale, and to watch how much time you're about to spend just getting people to use AI, because it can actually hurt you more than it will help you. Thank you so much, Joe. I'm gonna reiterate what I said at the beginning about the Venn diagram between somebody who's passionate about restoration and passionate about AI: Joe's Venn diagram is a circle. They overlap. And I think, Joe, you've touched on some nerves, because we've got a whole bunch of questions, but, unfortunately, we don't have time to answer them all. One that has come up a few times is your golden rule about not putting customer data into a free version. Can you expand on that? The question is: is subscribing to the ChatGPT business plan the same as what you were talking about, that it's safer and better overall and reduces liability? How do you keep your data private in an LLM like ChatGPT? Yeah. So having the paid version of any of the LLMs is ultimately a good, simple solution for protecting your data and your customer's data. However, I would still encourage you not to put customers' personal information out there in any way, shape, or form. And here's why. This is how LLMs work, by the way. All it takes is for one customer who's quite savvy to go to ChatGPT in the middle of her claim and start asking questions. Did my restoration guys do it right? Here are some pictures of their equipment. Here's this. Here's that. It knows who the person is. It knows that it's Judy Smith.
And if you put Judy Smith's information into the AI, when Judy Smith goes and starts searching and doing her own due diligence on her own claim, her information is gonna show back up in her face, and that's the explosion of danger. You really run a big risk of these things surfacing back up. Now, I'm not a technologist, so I'm probably not the right person to say exactly where the data-safety line actually lives. But I can tell you that using an LLM, even in the paid version, you're running a risk of personal-information exposure, and there's actual legal framework around that. I wouldn't suggest that you do that at all. And so you understand our position at Encircle: we take that so seriously that we purposely exclude all private information, names, addresses, phones, etcetera, from ever being input into any of our AI solutions. We actively remove all of that before it goes into processing with the LLM. And we have very secure LLMs that are not available to the public and are not being dumped into large public models. Even though we know our data is secure, we still remove it prior to moving it forward. I have time for one more question. I'm actually over time, but I'm gonna ask this one. For some of us who aren't super AI savvy or have only used prompt chats with, say, ChatGPT, can you explain the difference between that and an agent? And how does one go about setting up an agent? Yeah. That's a great question. Unfortunately, there's just not enough time to really dig in and give you a good answer. The short version: using ChatGPT is a chat, so it's a one-way thing. I ask a query or ask it to do a task, I send it off, and it comes back. An agent is far more autonomous. Basically, you establish the level of agent and its instructions.
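As a quick aside, the "strip personal information before it ever reaches the model" idea can be sketched with plain pattern matching. This is an illustration only, not Encircle's actual pipeline, and real PII removal, especially for names and addresses, needs far more than a few regexes:

```python
import re

# Illustrative patterns only; a production system would use a dedicated
# PII-detection service. Note that plain-text names (e.g. "Judy Smith")
# slip straight through an approach this simple.
PII_PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "[PHONE]": re.compile(r"\(?\d{3}\)?[-. ]\d{3}[-. ]\d{4}"),
    "[CLAIM]": re.compile(r"claim\s*#?\s*\d{5,}", re.IGNORECASE),
}

def redact(text: str) -> str:
    """Replace obvious identifiers before text is sent to any LLM."""
    for placeholder, pattern in PII_PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

note = redact("Judy Smith, claim #884412, judy@example.com, 555-867-5309")
```

Even a crude gate like this beats pasting a raw claim file into a chat window, but it's a sketch of the principle, not a substitute for proper data handling.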
So think of making a prompt for how your agent is to act, and then your agent just does it. And some of the most advanced agents today actually do much more without you asking for it. They're automatically, programmatically doing things. And there are all different types of agents that you can make. As a restorer, I would say just stick with your ChatGPTs or your Geminis of the world, using one-shot interactions with them. Building agents could actually expose your technology systems to a lot more risk. Right now, there just aren't good safety guardrails around an agent going over the data in your systems. You run a big risk of exposure there. Okay. Thank you, Joe. I hate to kick you off, but the music is playing, like on the stage after you've accepted your award, and I'm gonna have to yank you off because Diana is waiting in the wings. Diana's gonna take over from Joe and talk about this: you've created a great prompt, you've got a great solution. How can you test, validate, and evaluate that solution to make sure it's good? We've talked a lot about what good looks like, so Diana's gonna talk about how we validate that. Before she takes over, I will just do a quick intro of Diana. She is our senior platform product manager here at Encircle. She joined Encircle last year as the senior product manager on our platform team. She's been in tech for eight years, six years in product management. Her experience is in building platforms for automating workflows across different industries like banking, insurance, and media. She is excited about AI-driven productivity, but passionate about and committed to responsible deployment of AI and safety-first evaluations of technology.
So it's a good balance of enthusiasm and excitement about the possibilities of technology, but grounded in safe deployment and making sure that these are tools that work for our customers. So I will pass it over to you, Diana. Thanks, Leah. It's really great to be here with all of you. I've been working really closely with our AI services team, with Ronuk and Joe and our product managers, these past few months, spending a lot of time thinking about how to make Encircle AI reliable and consistent for you all to use. So that's very top of mind for me. Before I dive in, let me just clarify what this session is about. We're talking about validating prompts. We're not talking about the one-off chats you're having with ChatGPT or Claude or whatever system you're using when you're looking for help on something specific. We're talking about when you want to build something reusable. You're working with a prompt and you wanna reuse it for your own work, or for some internal team process, or maybe for generating documents. Anything where consistency and reliability really matter. I'm gonna leave agentic solutions out of this. That's a whole other beast from a validation and evaluation perspective, but we can get into it another time. So on screen, you can see we have a scenario to consider. You've just spent an hour crafting the perfect prompt for writing an assessment report. You've tested it on a bunch of examples, and it's spitting out something really professional and very thorough. The question here is: is it ready to use on all client work tomorrow? The reality is, most people are gonna say yes. There's always an excitement and momentum that comes with working with AI, because you can very quickly build a solution, and we all wanna put it immediately to work, because it'll save us a bunch of time, and it's just really exciting.
But the smart thing here, really, is to say, not quite yet, especially if you're using it on client work, and to do a little bit of extra validation. So we'll talk about why testing a few times isn't quite enough. Next slide. A problem we all have when it comes to testing our prompts is that we unconsciously pick out happy-path scenarios. This means we're probably grabbing jobs or data points that are very complete, very straightforward scenarios, what I call clean examples. So if we look at that assessment report scenario, maybe you're inputting data where you've got all the notes, all the documentation, everything on hand. It's a straightforward scenario where it's purely water damage, nothing else, and the job went exactly as planned. So the data that you're using as part of your prompt is gonna be very clean. The issue with this is that we're teaching our prompt to only handle easy cases. But in the real world, that's not how things usually go. So what does the reality look like? The business reality, go to the next slide, is that there are likely missing details. Maybe the team hasn't input all their notes yet, and there are still rooms being assessed. On top of that, there might be urgent deadlines you're working against while the conditions on the job are still changing, or there are some really complex requirements where it's not just water damage: you found mold, or there are structural issues that you've identified. So the gap here is that your prompt was built around the perfect scenarios, and it might not know how to handle anything that varies too much from them. So the next slide here is the fundamental issue that we need to solve for. Just because something worked once or twice or three times, does that mean it's ready to be used professionally on a daily basis?
So how do we bridge the gap? You've built something pretty cool and probably reusable, but you don't have the time to validate it the way a company dedicated to building software would. I know you're very busy doing your actual job. Joe mentioned this: you might not have the time to do the kind of validation that someone like us at Encircle would be doing. But there are some checks you can put in place that can help you think about the kinds of problems you can run into when you're building your prompts. So this is where the concept of smoke testing comes in. This is a term borrowed from software development: just checking core functionality. There are a few things that you should look for. The first one is consistency. The question you wanna ask is: did the AI give you roughly the same kind of response every single time? Not an identical response, but consistent quality and approach. So what you're doing here is you run it a few times, try some similar examples, and make sure that it's giving you the consistent information you're looking for. If one output is very professional and thorough and another output is kinda sloppy, then you've got a consistency problem. And, as with any tool or service that you use, not just AI, you want consistency and predictability in what it gives you. The second thing to look for is reality alignment. Does this match how work actually happens in the field? Compare the AI outputs to what you, as an experienced professional, would actually do or say or write. Would someone in my position write this? Does it sound like something I would say to a carrier or a homeowner or my team members?
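The consistency check just described can be automated in a few lines. Here's a sketch, assuming you wrap your LLM call in a `generate` function (hypothetical; any callable that takes a prompt and returns text works). It runs the same prompt several times and compares every pair of outputs with a rough text-similarity score:

```python
from difflib import SequenceMatcher
from itertools import combinations

def consistency_check(generate, prompt, runs=5, threshold=0.6):
    """Run the same prompt several times and compare the outputs.

    `generate` is whatever calls your LLM. Identical text is not
    required, but if any pair of runs is less similar than `threshold`,
    the prompt is not predictable enough for daily use. Returns the
    worst pairwise similarity and whether the check passed.
    """
    outputs = [generate(prompt) for _ in range(runs)]
    worst = min(
        SequenceMatcher(None, a, b).ratio()
        for a, b in combinations(outputs, 2)
    )
    return worst, worst >= threshold
```

A crude similarity ratio won't judge quality, but it will flag a prompt that answers the same question three different ways, which is exactly the red flag this check is after.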
What you're looking for is output that requires very minor edits, as opposed to a rewrite. Because if it needs rewriting, it's not useful for you anyway. For example, if we go back to the assessment report: it's writing a report, but everything in it is generic. There's not enough information in there. It doesn't reflect someone who has been in the industry for twenty years and the experience that comes with that. That's a reality-alignment problem you should watch out for. The next piece is edge-case awareness. This is when things don't go as expected. What does it do when things are messy, when the AI is out of its comfort zone? How do you test for this? In the first check, around consistency, we tried very similar examples. Here, you wanna start testing with incomplete or conflicting data, or unusual scenarios that it hasn't come across before. What you're looking for here is whether the AI knows that it's out of its depth. This is where good prompting will really stand out, because a good response would be the AI saying, hey, I need more information to complete this accurately. Whereas a bad response might be the AI just making up information to fill in the gaps. That's where the hallucination piece comes in. Depending on your workflow or what you're trying to achieve with your prompt, making stuff up can be really expensive. It can impact the work itself, it can impact cost downstream, or it can break down trust between you and your team. So you want the AI to know what it doesn't know. And last, around assumptions, visibility, and risk.
For this one, what you're looking for is: can you tell what the AI is assuming, and what happens if those assumptions are wrong? What is the risk associated with that? This can be as simple as building it into your prompt, asking it to list its assumptions, or even just asking afterwards, once it's generated the output. Then review them and ask: what is the business impact if any of these are wrong? The risk-calibration piece here is: if something is low risk, say you're using the prompt to generate something just for you personally, with no other eyes on it, then if it's slightly off, it's probably okay. Whereas if you're building a prompt to put together an estimate, for example, then any assumptions made in there are a lot riskier for you as an operator. Right? So the more business-critical the output you're asking for from AI, the more careful you wanna be around the assumptions. So then, I think this captures the main red flags to look out for when you're building your prompt. One: the AI misses important details that you would catch. If you find yourself reviewing the outputs and they're missing important information and you have to fill in the blanks, then it's probably not saving you any time, and it's not production ready. Another one: it requires a lot more editing than starting fresh, so, again, probably not helpful. And the next couple are very obvious, and you don't want it to get to this point, because the problem should have been caught by your testing before reaching your team or your customers. So if you are sharing a tool with your team and they use it a few times and they say, okay,
I don't wanna use this anymore because it's very unpredictable, then that's a clear sign that it's not ready for use. And, again, if it's going out to customers or external stakeholders who are questioning the quality of your work, that's a big problem. As Joe and Ronuk mentioned, AI is very good at storytelling. It's very good with words. But in the restoration industry, if the narrative doesn't match and align with the evidence, then it's definitely not good enough to go out. So if you're seeing any of these patterns, some tweaking might help, but the task might just be too complex. Joe talked about this a little bit: when something is too complex and requires more systematic evaluation to get it to the point of, okay, this is ready to use across my company, that's okay. Let that be the moment to say, this is not a thing for me to solve. This is something that maybe I should outsource. So everything we've talked about so far is really around what you as an individual or a small team might do. But I also want to talk about what an AI evaluation process should look like at scale, when a company is dedicated to building AI solutions, or what it should look like, anyway. At scale, like what we do here at Encircle, things shift. We're running our solutions against a much larger dataset. Rather than ten or twenty or thirty examples, we're running against hundreds, if not thousands. And we're trying to use real production data as much as possible, so that it reflects the messiness and the true scenarios of what's happening out there. If we don't have real production data, we very thoughtfully create synthetic data that can simulate it. And we use representative data, too, as part of the testing process.
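Before moving on, the two individual-level checks from a moment ago, surfacing assumptions and calibrating risk, can be sketched in a few lines. The risk tiers, the use-case names, and the prompt wording below are invented for illustration; what counts as high risk is your business call:

```python
# Hypothetical risk tiers; substitute your own use cases and judgments.
RISK_TIERS = {
    "personal_notes": "low",      # only you read it
    "internal_report": "medium",  # your team relies on it
    "estimate": "high",           # goes to a carrier; errors are costly
}

def with_assumption_audit(base_prompt: str) -> str:
    """Ask the model to surface its own assumptions alongside the answer."""
    return (
        base_prompt
        + "\n\nBefore answering, list every assumption you are making about "
          "missing or ambiguous information, one per line, each starting "
          "with 'ASSUMPTION:'."
    )

def needs_human_review(use_case: str) -> bool:
    """Anything above low risk gets a human check before it goes out.
    Unknown use cases default to high risk on purpose."""
    return RISK_TIERS.get(use_case, "high") != "low"
```

Scanning the returned `ASSUMPTION:` lines before you trust the output is exactly the "review them and ask what the business impact is" step, just made mechanical.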
Building the dataset itself is such a critical piece of being able to capture all the variables on which AI could fail in the solution. So we're being really thoughtful with our product managers, asking, hey, what are the variables we should be considering? Making sure those live in our dataset is really important. Then there's analysis at scale: it's no longer intuitive, no longer just a gut check. It's about having something systematic in place. At each step we identify what's off, we have a human in the loop labeling and categorizing the issues, and then we build tools to look for those issues in an automated way, so we can do this at scale and also monitor for issues in production. The last piece is that we have metrics tied to business risk. Every type of failure does not have the same impact. So we look at the business impact or user impact of each failure, and then decide the threshold for what is acceptable before it goes into production. I want to show you, at a high level, what you'd expect a professional AI evaluation process to look like. This is what we're doing here at Encircle. I'm not going to walk through each of these steps, but we do have a nine step process, with major decision points in place, that we follow so we can ensure what we're building meets the success criteria we've established, and we feel confident releasing it to you as the users. You'll notice that we build the solution, not the perfect solution, and then we start by building the dataset we can test against.
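The "labeled failures with risk-based thresholds" idea above can be sketched very roughly like this. The failure categories and the per-category limits are invented placeholders for illustration, not Encircle's actual metrics:

```python
from collections import Counter

# Hypothetical thresholds tied to business risk: higher-impact failure
# types get a stricter acceptable rate before release is blocked.
THRESHOLDS = {"wrong_total": 0.01, "missing_detail": 0.05, "tone": 0.10}

def release_ready(failure_labels: list[str], n_cases: int) -> bool:
    """failure_labels: one human-assigned category per failing test case,
    out of n_cases total cases in the evaluation dataset."""
    rates = {cat: n / n_cases for cat, n in Counter(failure_labels).items()}
    return all(rates.get(cat, 0.0) <= limit for cat, limit in THRESHOLDS.items())

# 100 cases with one missing detail and one tone issue: within limits.
print(release_ready(["missing_detail", "tone"], 100))  # True
# One wrong dollar total out of 50 cases exceeds the 1 percent limit.
print(release_ready(["wrong_total"], 50))              # False
```

The design choice worth noticing is that there is no single accuracy number; each failure mode gets its own acceptable rate, which is exactly the "metrics tied to business risk" point from the talk.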
In that process is where we refine, analyze, measure, and improve the solution to get it to a place where it becomes production ready, so to speak. We build out automated evaluations as well. We're not just building the initial solution; there are other things we have to build along the way for it to even get to production. There are a lot of steps involved. And when we're building tools to use at scale, they have to be designed to be reliable. So this is what we're doing today, and you should expect any vendor providing you AI solutions to be able to share an evaluation process like this, so you feel confident in what they're doing, and it's not just someone behind the scenes putting something together and saying, okay, you have to pay for this now, good luck. So let me leave you with a few key takeaways. Good validation takes time, and poor validation will cost you time and money, and potentially reputation further downstream. If you're going to build something yourself, put in the time to do good validation around it, and make sure you test on messy, real world scenarios, not just clean examples, so you're not teaching your prompt to handle only the easy cases. The goal with AI isn't perfection. It's never going to be a hundred percent accuracy. It should be predictable and reliable, so you can trust it with the work you're trying to optimize. And finally, set some clear thresholds for what good actually looks like for you. This keeps you from chasing the perfect prompt, or sharing it with your team prematurely, or using it with client work.
You wouldn't give a new employee access to customer files without training, so don't give your AI access without testing. It's not about being paranoid or slowing down innovation, because it is a really exciting time to be working with AI, seeing what the potential is, optimizing your operations and workflows and all that. It's about taking a more professional lens to it; it really is a tool that can serve a purpose in your organization. So do the level of validation that makes sense. Start with the checks we covered today, and if you're building something that needs to scale up, you can consider that in the meantime. Ultimately, just test it before you trust it. And that's it. Thank you so much, Diana. That was so helpful, I think, at an individual level, and for understanding what we should be expecting of our software vendors when it comes to responsible deployment and responsible use of AI in enterprise solutions. So thank you so much. I don't think we have any questions, and we're out of time, so that works out well. I can transition over to our next guest. We're just waiting for him to join the webinar. Thank you, Diana. We'll talk soon. I'm going to move over to James Larosa. I don't know if you've noticed, but we structured this boot camp, this AI webinar, in a specific way. We started with the boots on the ground, the restorers who are working in their businesses every day and looking at how they can implement AI into their businesses. We then transitioned to some technologists, looking at the enterprise level, how Encircle is building AI, and how that can translate into your individual use of AI.
For our last two sessions, we're going to transition to actual solutions and how AI is already being used to drive real improvements for restoration contractors. So I'm going to hand things over to James. He's the CEO of Restoration Growth Partners, an all in one digital growth partner exclusively for restoration contractors. They deliver ROI first digital marketing that helps businesses grow faster and smarter. They provide a comprehensive suite of services, including what James is here today to talk about: AI search engine optimization, Google Ads, local service ad management, local and traditional SEO, review generation, and transparent reporting through an ROI tracking portal. I'm super interested to hear what James has to say. As somebody who does content marketing, I know everything is constantly changing in SEO and AEO, and I can only imagine what that means for the restoration contractor: how they get visibility in their markets and how people are searching. So really excited to hear what James has to say. I will hand it over to you, James. Thank you very much. Great. So thank you, everybody, for attending, and thank you for having me here. I'll hop into it. I'm going to try to go a little faster through this presentation because I think we'll have a lot of Q&A, so I want to leave some time for that. The first thing I want to talk about is how the way homeowners are finding restoration companies is changing. This is more of a high level overview, but the entire market is shifting. What I first want to start with is that we have all these different AI agents out there: ChatGPT, Claude, Google Gemini, Grok, and hundreds of others. The interesting thing is there's a lot of talk that Google is going to die, that Google is dying, and that is not true.
Looking at the data from twenty twenty four to twenty twenty five, the amount of searches on Google has grown by twenty one percent. So while these other platforms exist, Google is still growing. Now, on the flip side, these platforms are also growing. From twenty twenty three to twenty twenty five, thirty eight percent of people in the United States are using AI in some way or another, and it is rapidly growing year over year for a multitude of use cases, but especially for finding local services, local restaurants, things like that. I think the main point for everybody here to take away is that digital marketing is fragmenting. What I mean by that is we still have what I call the king, which is Google, but we have all of these other platforms underneath it. This fragmentation is not a bad thing. It's actually a very positive thing for all restoration companies, because all of these platforms are new, you can have what I call the first mover advantage. You can submit yourself now, do the things you need to do to show up number one, and reap the benefits moving forward before your competitors do. The analogy I like to use: ten or maybe fifteen years ago, if you invested in Google, and if you look at your competitors, a lot of them did, they're still cemented in the number one and number two spots, reaping the benefits of taking that first mover advantage. So on the next couple of slides, we're going to get into how you actually take that first mover advantage. First, we're going to cover advertising in the AI era, meaning running paid advertising. Then we're going to talk about how you show up organically without having to pay for it. So again, I'll reiterate my last point: Google Ads.
This means both Google pay per click and local service ads, and it is still going to be the number one paid lead source for restoration companies. The reason is that a massive number of people go to Google, especially for home services, so this is still going to be your bread and butter to do right and get results from. That said, among all these different platforms, some are starting to come out with advertising options and some are not. The biggest one starting to test the waters with paid ads is ChatGPT. They're testing this with a number of more enterprise level businesses, but that will trickle down to people like us, restoration companies, and so on. The very interesting thing about ChatGPT ads, completely different from Google Ads or even Microsoft Bing's search engine, is that ChatGPT, for right now (this can change), is not based on keywords. They're using the AI to understand the customer's search intent and then map that to the appropriate ads. So it's going to be a very different experience moving forward with all these platforms. Hopefully it becomes easier, but again, there's just a lot of opportunity here. Other platforms like Perplexity are already testing things like sponsored follow-up questions, which are also ads. One of the biggest players in the space, Claude, is actually taking a no ads approach. So all of these different platforms are taking different approaches to paid advertising. And one more point I want to make about paid advertising is the early mover advantage again.
Because there are not going to be a lot of restoration companies, and not a lot of marketing companies, testing ChatGPT ads when they first come out, this is a great opportunity for everybody here: less competition means your cost per qualified lead is going to be significantly cheaper than running Google Ads or paying third party vendors for leads. It's going to be significantly cheaper on these other platforms. Great. Next, the organic side of digital marketing: showing up organically, showing up number one, without having to pay for it. There's a lot of talk about SEO, which is search engine optimization for things like Google and Bing, versus AEO, which is AI engine optimization. The beauty of this is that around eighty to ninety percent of optimizing your business to show up in platforms like ChatGPT is taken care of by traditional SEO. If you're doing things like having a very fast loading website and high quality content on your website, that is already the foundation of showing up in these AI platforms. I think one of the key differences between foundational, traditional SEO and AEO, optimizing for these AI search engines, is the way you write content. For the majority of Google's history, it has been keyword based: figuring out what keywords to use in content, how to structure it, where to put it. AI doesn't necessarily care about keywords. What it cares about is high quality information it can trust, because that is how it makes its recommendations to the people searching. So an example of this, being the answer and not just a result, is creating content for almost every single situation your customer could encounter.
That's things like burst pipes, ceiling leaks, flooded basements. It's not enough anymore to just have a water damage page on your website. You have to have subpages covering every situation they could potentially go through. The third thing here, and to me, if you take anything away from this call, this is the main point: reviews for your restoration business. Whether you're the business owner, the operations manager, or in any other position at a restoration company, getting reviews is the number one thing that will help you succeed on Google and in these AI platforms as well. And here's the good news. You might be in a market where you have one hundred reviews and your top competitor has one thousand, and you say, well, James, this is great information, but how am I ever going to catch up to these people? They're a bigger company, they have more job volume, so they get more reviews. One of the main things about the algorithm is that it favors consistency and freshness over a high volume of reviews. So if you're consistently getting reviews on your business profile, even on places like Facebook or the Better Business Bureau, that is going to matter more than the total number of reviews for your business. And last but not least is the Google Business Profile. This is the most important part of Google. It's going to generate the highest number of leads at the lowest cost. So make sure it's up to date with pictures, with posting, with an FAQ built out on your business profile, and obviously reviews tie into that as well. These AI platforms are putting a heavy emphasis on the Google Business Profile when recommending local businesses, so staying up to date with it is extremely important. Great. So we've talked about both the paid ad side and the organic side of things.
Now I think it's important to take it one step further, which is capturing the reporting data and the effectiveness of your campaigns, and then making decisions to improve those campaigns. One thing I like to point out: whenever we talk to potential clients, they say, well, I get a hundred phone calls a month from my website; my website's great. Then you ask, out of those hundred phone calls, how many are qualified leads? And how many of those actually turn into a job? In my experience, that is where the data falls off for a lot of restoration owners. A phone call is not a marketing qualified lead. A phone call could be somebody calling looking for a job, a vendor trying to sell you something, or simply a spam call. So this is why it's extremely important to actually qualify every lead using AI. And this is not a super technical thing to do. It's very simple to set up. Basically, you have an AI and you train it on the data points that matter to you. As a restoration business owner, what makes a qualified lead? For our system, it's three main points. Number one, are they in your service area? Number two, are they looking for services you provide? And number three, can the person calling give you the authorization to start work on the property? Renters do not count here. It has to be a homeowner, or a property manager who doesn't have an established relationship with a restoration company, needs help, and has the authority to give you that authorization. Once you collect this data and connect phone calls to marketing qualified leads, then you can start identifying your best performing channels. What pages on your website are driving the best results?
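The three-point qualification above can be sketched as a simple check. In a real setup an AI would extract these fields from the call transcript; the field names, ZIP codes, and service list below are placeholders for illustration:

```python
from dataclasses import dataclass

SERVICE_AREA = {"60601", "60602", "60603"}          # placeholder ZIP codes
SERVICES = {"water damage", "fire damage", "mold"}  # placeholder service list

@dataclass
class Lead:
    zip_code: str
    requested_service: str
    caller_role: str  # "homeowner", "property manager", "renter", ...

def is_qualified(lead: Lead) -> bool:
    """The three checks from the talk: in service area, service we
    provide, and caller has authority to authorize the work."""
    return (
        lead.zip_code in SERVICE_AREA
        and lead.requested_service in SERVICES
        and lead.caller_role in {"homeowner", "property manager"}
    )

print(is_qualified(Lead("60601", "water damage", "homeowner")))  # True
print(is_qualified(Lead("60601", "water damage", "renter")))     # False
```

Once every call is labeled this way, "calls per month" becomes "marketing qualified leads per month," which is the number the rest of the reporting chain depends on.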
What keywords are driving the best results? Then you can make optimizations on top of that. And last but not least, if you're running paid advertising, you can feed this data back in, specifically to Google. You can tell Google, hey, these are phone calls, these are marketing qualified leads, and this is the revenue from those leads. Then, once you have enough revenue data, you can start optimizing for dollars in, dollars out, which is eventually where you want to get. You don't want to optimize for clicks. You don't want to optimize for calls. At the end of the day, you want to optimize for dollars in and dollars out. Great. And here is everything I just said put into practice. This is a company we work with based out of Chicago, Romexterra Restoration. By utilizing AI, scoring these leads, capturing the revenue, and deploying the optimizations I talked about on the website and the paid ads, we were able to bring their cost per scoped water damage or ESR project to under five hundred dollars. This is not a lead. This is not a phone call. These are actual projects entered into their CRM, acquired for under five hundred dollars each. When you're able to get all this data, you can make really powerful decisions to improve the campaigns and ultimately the bottom line of your business. Great. And here are five quick action items I'd like everybody to take. Number one, run a free audit on your website. You can go to Website Grader, and it will give you a nice general overview of your website: what's good, what's bad, and whether it's optimized for AI platforms, because there are certain optimizations you can do. Number two, build answer based content. A great example is: my kitchen ceiling is starting to leak, what should I do?
This is the natural way people are starting to search, especially with these AI search engines: they just speak to it naturally, like they would to a human, so you need to make content that serves that purpose. Number three, obviously, get reviews, and get them consistently; that's huge for both Google and AI. Number four, track marketing qualified leads; you need to know which leads are real and where they're coming from. And last but not least, at the end of the day, you need to use that data and actually action it. An example is feeding it back into things like Google Ads and increasing the effectiveness of your campaign based on the data that matters. Cool. And three quick takeaways, then we'll get to the Q&A if we have time. The search market is fragmenting. This is a positive thing for everybody here, not a negative one, as everybody can get the first mover advantage and submit themselves to these new platforms before their competitors do and leave them playing catch up. Also, with paid ads, these new platforms are going to be surprisingly cheap, in my opinion, for the near term, much cheaper than running ads on Google or buying leads from third party lead vendors. And then, obviously, track your marketing qualified leads and ROI, and make business decisions based on that data. Thanks so much, James. I was definitely learning a lot. I'm deep in the throes of learning specifically about content and how we can show up in these AI searches, so definitely super interesting. We do have a bit of time for Q&A if anyone has questions for James. Oh, one just came in: when do you believe ChatGPT will be doing ads? So it's rumored they are already testing it. When it will be available to people like us, because it's currently available at the enterprise level, I'm not really sure.
I'm going to assume it's going to be at least sometime this year, as they're a growing business, they want to increase their revenue, and ads are a part of that. Alright. James, if somebody wanted to start down this path of modernizing their funnel for AI tomorrow, what's the first thing they should do? The first thing, and this is the easiest, is optimize your Google Business Profile. So professional pictures. There's an FAQ section you can fill out. Get your reviews, and make sure you're posting at least a few times per week. A lot of people don't know it, but the Business Profile has a mini blog posting feature. Putting that high quality content on there, along with your website, will really help you in the near future. Great. And I'm just going to jump back a slide, so if anyone wants to reach out to James to ask additional questions or leverage his services, his contact information is on the screen. When the webinar ends at three o'clock today, there will also be an option to check a box to have James reach out to you if you're interested in learning more. So, any other questions before we move on? It doesn't look like it, so I will transition over. Thank you so much for joining us, James. We will move on to our last but not least presenter of the day, Jacob Cleveland. He is a partner at Breezy AI. I came across Jacob through a recent article he wrote for C&R Magazine. Like I said, he's a partner at Breezy AI, which is an operating system designed specifically for restoration businesses. He helps contractors eliminate missed calls, deliver consistent multi channel customer communication, and automate the administrative workload that slows cycle time and delays accounts receivable.
Before joining Breezy, Jacob served as the CTO of one of the top ten Servpro franchises in the US, so he's got that boots on the ground restoration experience as well. He began his career as a consultant to B2B enterprise companies, helping align sales, marketing, and product strategy. Today, he brings all of that experience together to help businesses modernize their operations and compete in a rapidly changing market, which is, I think, one of the key themes from today: everything is rapidly changing, there's a lot of noise out there, and we want to try to turn that noise down a little. So I'm going to turn it over to you, Jacob, and thank you for your patience in waiting around. The floor is yours. Thanks, Leah, and thanks to everybody for sticking around all this time. I really appreciate it. Like Leah said, my name is Jacob Cleveland. I'm with Breezy AI. About a year ago, almost to the day, I was exactly where many of you are. I worked at a restoration company, and I was researching how to implement AI into our business. I attended webinars just like this one, and I looked at every tool I could find. Next slide, please. From smart glasses for remote coaching to AI powered mobile training games, I really did look at it all. But then I realized I was thinking about it the wrong way. So I shifted my focus and started with the outcome I wanted, then worked my way backwards into the technology. For me, what it really came down to were two things. Next slide, please. Increased revenue capture and decreased labor cost. So today, I'm going to walk you through each of these goals and my thought process when choosing a vendor, and then I'll give you some real world use cases and practical tips on how you can achieve these same goals in your restoration business. Next slide, please.
When I started to drill down into how to use AI to capture more revenue, I started with what James just spoke about: how to make our phones ring. Our franchise was in a rather competitive market, probably like a lot of y'all, and unfortunately, some folks in the franchise system can't control their own website content, so our ability to drive organic SEO was limited. Our main lever to make the phones ring was paid search. For us, a water loss would cost about three hundred and fifty dollars every time that phone rang. So before I started pumping money into more marketing, I wanted to know how well we were doing at capturing the leads that were already calling. What I discovered was terrible. I listened to our after hours calls first. We were using a call center, probably like a lot of y'all, and the customer experience was horrible. Folks were getting annoyed with the lengthy script the call center staff were working through, and our leads were dropping off before they ever even got to us. Then I started listening to our business hours calls. Honestly, they weren't much better. So the conclusion I came to was that before I dropped a bunch of money on making my phone ring, I needed to ensure I could capture those leads when they came in. That's when I decided to look at AI intake. Around this time last year, I started to think: customer service was one of the first use cases of AI, and AI voice had finally gotten to the point where I felt I could trust it with my customer interactions. Then we dug into our missed call numbers, and that's really when I knew I had to do something. Next slide, please. This slide shows real data that we at Breezy see with almost every customer that comes to us, and it mirrors my experience almost exactly. Breezy's data shows that even in top performing restoration businesses, before AI, about twenty five percent of calls go unanswered.
They hit voicemail. An average restoration location gets about four hundred and fifty calls per month, not including spam or hang ups, which are on the rise. About twenty five percent of those go unanswered; that's about a hundred and thirteen calls a month. From our data, we see that roughly one in ten calls, ten point one percent to be exact, are for service. That's eleven high intent leads per month we were missing out on. And the average water loss is somewhere around four thousand dollars in revenue potential, depending on your market. When you multiply that four thousand by eleven, that's forty four thousand dollars a month, over half a million dollars a year, we were losing just by not picking up the phone. That's when I realized the fastest way to increase our revenue with AI was rather simple for us: answer every call. At the end of this presentation, I'll walk you through how we evaluated vendors, and everyone in this webinar will also receive a PDF containing step by step instructions. I do agree with all the presenters here about the challenges and privacy issues of using consumer grade tools, but I also know the struggle of trying to meet payroll. So if you'd like to build a DIY system to automatically log every voicemail and missed call, summarize those calls with AI, and alert your team in real time so no lead slips through, check out that PDF you'll all receive. Next slide, please. It's not going to be any big revelation to anyone here that in restoration, our largest expense is our people. A lot of folks in the ether of marketing and AI news are talking about how AI is going to replace people. In all honesty, it likely will replace some of us. But how I think about decreasing labor cost is more about increasing the efficiency of team members.
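The arithmetic above checks out; here it is laid out step by step, using the figures from the talk:

```python
calls_per_month = 450
unanswered_rate = 0.25
service_rate = 0.101               # 10.1 percent of calls are for service
avg_water_loss_revenue = 4_000     # revenue potential per water loss

missed_calls = calls_per_month * unanswered_rate   # 112.5, roughly 113 a month
missed_leads = missed_calls * service_rate         # about 11.4 high-intent leads
monthly_loss = 11 * avg_water_loss_revenue         # 44,000 dollars a month
annual_loss = monthly_loss * 12                    # over half a million a year

print(annual_loss)  # 528000
```

Plugging in your own call volume, miss rate, and average job value gives a quick back-of-envelope case for (or against) fixing intake before spending more on making the phone ring.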
There are three main strategies I've seen dramatically reduce labor cost through increased efficiency: data capture, speed, and scale. Next slide, please. I've already spoken about using your phones and intake to capture more revenue, but another giant benefit we found with AI intake was clean FNOL data, first notice of loss data, every single time. Every time an employee has to call a customer back to get the correct email address, or to ask whether it's self pay or insurance, that's labor cost. AI can pull in real estate data, so you get the year built, the square footage, and the listed owners, all of the questions that slow down intake and eat up your team's time. Now, in the field for years, and Encircle is great at this, good software principles included what was called one thought per screen. Think of this like TurboTax: instead of giving you a long checklist of tasks, it chunks the work so you have one thing to do per screen. AI allows that linear task list to be driven by more variables than simply the previous answer. In practical application, this means AI can help field staff determine complex next steps from a whole bunch of data points: moisture readings, how many days the dehumidifiers have been in place, whether it's self pay or program work, and so on. It can also turn unstructured data, like field notes in sentence form, into structured data. This is important because, as more and more insurance companies use AI to approve or deny claims, having your job files structured correctly can make the difference between getting paid today and getting paid three months from now. All of these efficiency gains result in less time spent per employee, which leads not just to reduced labor cost but also to increased capacity per employee. Now, one of the biggest opportunities for efficiency is actually much quieter and happens in the back office.
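As a toy illustration of turning unstructured field notes into structured data: a real system would use an AI model for this, and the field names here are invented, but this regex version shows the shape of the transformation:

```python
import re

note = "Master bedroom drywall reading 28% MC, 3 air movers and 1 dehu placed day 2."

def structure_note(text: str) -> dict:
    """Pull a few structured fields out of a free-text field note."""
    moisture = re.search(r"(\d+)%\s*MC", text)
    air_movers = re.search(r"(\d+)\s*air movers?", text)
    dehus = re.search(r"(\d+)\s*dehus?", text)
    return {
        "moisture_pct": int(moisture.group(1)) if moisture else None,
        "air_movers": int(air_movers.group(1)) if air_movers else None,
        "dehumidifiers": int(dehus.group(1)) if dehus else None,
    }

print(structure_note(note))
```

The point of the structured version is that downstream systems, including an insurer's claim-review AI, can check fields like `moisture_pct` directly instead of parsing prose.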
If you watch how an office coordinator or a JFC, a job file coordinator, spends their day, a surprising amount of time is spent not on making decisions or helping customers but on merely moving information: creating a job in one system, copying it to another, sending emails to notify the next person, and on and on. It's repetitive, and it's constant. And as your work volume increases, that workload grows almost perfectly in proportion. You double the jobs, and you often feel like you need to double the admin support just to keep everything organized. This is where AI and better data capture can make a real practical difference in your business. When your job information is clean and structured, the system already knows what's happening. And once the system understands the state of the job, it doesn't need a person to manually push every step forward. Just like what we discussed for field staff, the data itself can trigger the next action. A job gets created, and a task is assigned automatically. Drying hits completion, and the next visit is scheduled. A scope is approved, and billing prep begins. Instead of somebody typing, retyping, emailing, and chasing, the workflow simply advances on its own. The role of the office team then shifts from acting like human connectors between software to focusing on exceptions, decisions, and customer service. It's not very flashy, but it is incredibly effective. By removing hundreds of small manual touches that quietly eat up hours every week, the same team can comfortably handle far more job volume without adding headcount. Next slide, please. Everyone on this webinar will likely nod their head when I say about ninety percent of what we do in restoration is communication. We send the same emails, tell people the same thing that was in the estimate they didn't read, and on and on. This is where AI becomes really practical, because most office communication follows predictable patterns.
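The "data triggers the next action" idea above can be sketched as a simple state-to-action table. The job states and actions are illustrative, not any particular product's workflow:

```python
# Illustrative mapping from job-state events to the next automated action.
NEXT_ACTION = {
    "job_created": "assign_intake_task",
    "drying_complete": "schedule_next_visit",
    "scope_approved": "start_billing_prep",
}

def advance(event: str) -> str:
    """Return the next automated action for a job event; anything the
    table doesn't cover is an exception routed to a human coordinator."""
    return NEXT_ACTION.get(event, "route_to_coordinator")

print(advance("drying_complete"))   # schedule_next_visit
print(advance("customer_dispute"))  # route_to_coordinator
```

The important design choice is the fallback: automation handles the predictable transitions, and everything unexpected lands in front of a person, which is exactly the "office team focuses on exceptions" shift described above.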
It’s the same type of updates, the same questions, the same responses. And when something is repeatable, it’s a perfect fit for AI. You don’t need new systems or complicated integrations to do this either. If you’re using Gmail, the AI is already built in. Gmail now includes a Help Me Write feature powered by Gemini, it kinda looks like some diamonds on there, that can draft emails for you directly inside the compose window. Instead of starting from a blank screen, you click the button and type some simple instructions like write a short, friendly update telling the homeowner we’ll pick up the dehus tomorrow, or draft a professional status update for an adjuster. It can generate clean drafts in seconds, and then your team just reviews and tweaks, right, keeping that human in the loop, as they say. You’re not replacing the person. You’re eliminating the slowest part, which is writing everything from scratch. If you’re not using Gmail, you can use ChatGPT. There’s actually a section on how to do this in the PDF I’m gonna share. And it gets even more useful when you apply it to everyday restoration workflows. Right? You can paste field notes and ask AI to turn them into clear summaries, right, keeping that personal identifying information out. You can have it clean up bullet points into a professional email. You can ask it to shorten a long explanation, right, into three sentences. Not a lot of us have time to read long explanations these days. A lot of teams also build simple internal prompts for common situations so staff can generate consistent messaging in seconds instead of minutes. Right? And that brand consistency is very important in that communication. And over time, that consistency can actually improve the customer experience because now your messages are clear and professional instead of rushed and full of typos or grammar issues. But what this really does is it shifts your team’s role from typing to thinking.
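A simple internal prompt library like the one just described might look like this sketch. The situation names and template wording are made up for illustration:

```python
# Hypothetical internal prompt templates for common customer communications.
TEMPLATES = {
    "homeowner_update": (
        "Write a short, friendly update to the homeowner: {details}. "
        "Keep it under three sentences and avoid jargon."
    ),
    "adjuster_status": (
        "Draft a professional status update for the insurance adjuster: {details}. "
        "Use a neutral, factual tone."
    ),
}

def render_prompt(kind: str, **details: str) -> str:
    """Fill a shared template so every staff member sends consistent messaging."""
    return TEMPLATES[kind].format(**details)

# Staff would paste the rendered prompt into Gemini or ChatGPT, then review the draft.
prompt = render_prompt(
    "homeowner_update",
    details="we will pick up the drying equipment tomorrow",
)
```

Because everyone fills in the same templates, the tone and structure stay consistent across the team, which is where the brand-consistency benefit comes from.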
They’re not staring at a blank email writing the same message for the hundredth time. They’re reviewing, personalizing, and sending. Now for storm response. Right? We just got through the big winter storm. Well, this is a huge opportunity. I was talking to a CEO of a large restoration franchisor, and he told me that after they implemented AI overflow in their call center, they captured more leads in one minor storm event than they had ever captured during any storm event in the history of the company. And that’s because AI can scale up well beyond the capacity of any single office or call center. And then beyond capturing the lead, and this is actually something we’re rather proud of at Breesy, AI can score your leads based on many, many data points and then rank them from most valuable to least valuable. So, you know, during a storm event, you’re capturing the most amount of revenue possible using the finite resources you have at hand. Now I’m gonna shift gears here for a minute and talk about something a little bit less attainable at the moment, but will likely become ubiquitous in the industry over the next few years, and that is AI-powered field guidance and supervision. There are already companies like KnowHow that are doing this using pre-uploaded content and surfacing that information through a chat interface for field staff. But some of the really cool things that are on the horizon for twenty twenty six, likely in the later part of the year, is equipping field staff with smart glasses, right, that allow AI trained on millions of job photos and data to then guide an employee with a base level of training to tackle tasks that previously required specialty training. This is already happening in fields like veterinary medicine, and it’ll help owners and restoration leaders tackle the current labor shortage. In addition, can you hear me? Are you all having a hard time hearing me? I just saw it hop off. I’ll speak a little louder.
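Circling back to the lead scoring idea for a moment, here is a toy weighted-ranking sketch. The weights and fields are invented for illustration and are not Breesy’s actual model:

```python
def score_lead(lead: dict) -> float:
    """Toy lead score from loss type, insurance status, and property size.

    All weights below are illustrative, not a real scoring model.
    """
    weights = {"water": 3.0, "fire": 4.0, "mold": 2.0}
    score = weights.get(lead.get("loss_type"), 1.0)
    if lead.get("insurance"):
        score += 2.0
    score += lead.get("square_footage", 0) / 1000.0
    return score

def rank_leads(leads: list) -> list:
    """Order leads from most to least valuable so finite crews chase the best first."""
    return sorted(leads, key=score_lead, reverse=True)
```

During a storm surge the point is triage: with more calls than crews, even a rough ranking means the most valuable jobs get attention first.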
Maybe you can hear me now. In addition to guiding field staff, one of the huge benefits of this technology is its ability to record, sort, and tag billable service items. If there’s no picture of that third trash bag, you’re not gonna get paid for it. Right? But what’s been holding this technology back from widespread use so far is really two things. Battery life. Right, current consumer-grade smart glasses have a rather short battery life, which makes them unusable in a real-world work environment. And the second is industry-specific software designed specifically for our needs in restoration. Both of these challenges are starting to be addressed by some very capable hardware and software manufacturers. So if you’re like me, you’re probably going to keep an eye out for what’s to come towards maybe the middle or end of this year. Next slide, please. Now I remember on my first day on the job, I was riding out with the crew, and it was a huge shock to me when I realized that the customer had to agree to us starting work before they ever knew how much it was going to cost. It was also a shock to the customer. Restoration is the only industry that I know of where this happens. This creates a lot of downstream friction. Right? Huge outstanding AR. It’s a major problem in the industry. I actually had it in my notes to talk about how no one had solved AI estimates yet. But after hearing what Encircle is working on, I, like y’all I see in the comments, am very excited to see what they built. Now dispatch. Right? AI can create meaningful speed gains in dispatch. That’s because most delays don’t come from big problems. They come from small manual steps that happen all day long and compound. Someone answers the phone, writes down details, checks the calendar, calls or texts, waits for a response, on and on. Each step only takes a few minutes, but when you stack those together, they start to slow everything down.
Meanwhile, trucks are sitting idle and jobs aren’t starting. At scale, that waiting time is expensive. Right? But when you pair AI with good data capture, a lot of that coordination becomes automatic. If the system already knows the job type, location, urgency, crew availability, it can instantly recommend or assign the best crew. Instead of a lot of back and forth, dispatch happens in seconds. The dispatcher isn’t juggling logistics. They’re reviewing and handling exceptions now. AI also reduces downstream delays by making smarter decisions upfront. You can optimize routes, cluster jobs by geography, and this is even more valuable during a storm event, and sequence work more efficiently, so crews spend less time driving and fewer trips are wasted. Even shaving twenty or thirty minutes of travel time per crew per day adds up to hours of extra productivity each week. That’s real capacity without needing to scale up your people or buy more vehicles. In practice, AI doesn’t really replace dispatch. It removes the small bottlenecks that slow everything down. So work starts sooner and jobs move faster. Right? Next slide, please. So when we were evaluating AI vendors, we really wanted to look for a handful of major things, right, specific criteria. Our number one was we knew they had to be restoration focused. There are a lot of general AI tools, general AI answering services, likely your telephony provider, right, even offers one. But what we wanted was one that was trained on IICRC guidelines and knew the industry. Next, after listening to our calls, I knew whatever AI we chose had to be empathetic, not robotic. Right? When our customers call us, they’re freaking out. Their most valuable asset is in danger. So I wanted an AI that could empathize and provide friendly support without sounding like it was reading a script. Next, scale. Right? We asked, can they handle a storm event?
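Stepping back to the dispatch discussion for a second, the crew-recommendation step can be sketched as a simple filter-and-rank. Locations are flattened to a single number for simplicity, and every field name here is hypothetical:

```python
def best_crew(job: dict, crews: list):
    """Pick the closest available crew with the right skill, or None.

    A None result is an exception the dispatcher handles by hand,
    mirroring the review-and-handle-exceptions role described above.
    """
    candidates = [
        c for c in crews
        if c["available"] and job["type"] in c["skills"]
    ]
    if not candidates:
        return None
    # Locations are simplified to one dimension for illustration;
    # a real system would use drive-time estimates.
    return min(candidates, key=lambda c: abs(c["location"] - job["location"]))
```

The same filter-then-minimize shape extends naturally to more variables, such as urgency, crew size, or current job load, which is how richer data capture translates directly into better dispatch decisions.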
Do they only do after hours, or can they handle twenty four seven with intelligent call routing? Right? Can they take it when a storm event hits and we get five, six hundred calls in a weekend? Right? Then we looked at security and reliability. I know that’s been a big topic today. Right? You wanna know how they’re handling your data. Encircle is SOC 2 Type 2, right? You really wanna look for something like that. That shows that they are prioritizing security. And then do they have redundancies and failovers in place? Right? Sometimes AWS goes down. Right? Sometimes Google Cloud goes down. Are they reliant on just one vendor, or can they ensure that your business stays operational no matter what? One thing we didn’t look for, because we just didn’t know it was possible, was a vendor that could also offer their own telephony system. So office phones, right, things like that. These days, you can actually bundle your telephony costs and the AI together, and this allows you to record and analyze every call for business insights. Right? So this also includes inbound, outbound, and interoffice. And lastly, this was a big one for us. Do they integrate with the tools we already use? Right? Internal communication tools like Teams, Slack, our job management platform. This was important because this is really where AI moves from being a call taker to an orchestrator, helping move jobs efficiently through our workflows. And it was that exact journey that led me here today. So I personally went from being a Breesy customer and seeing the impact on our business to being a Breesy employee, helping other restoration leaders increase revenue and decrease labor costs. Next slide, please. And honestly, I love this stuff. Right? I love helping other restoration leaders be successful. So if y’all have any questions about AI or anything that you think I might be able to add value to, it doesn’t have to be about intake or anything like that, please feel free to shoot me an email.
I’m always here to help in any way I can. And thank you for sticking around all day to hear me talk, and I hope this has been helpful in some small way. Thanks so much, Jacob. Yeah. Super, super interesting to hear your journey and hear where you think things are headed and what the benefits are. We do have one question that just came in. Does the AI have unnatural pauses in conversation? Well, that’s a great question, and it allows me to talk about something to look for in the next, probably, three to six months. Right? So it’s something you’ll notice in certain vendors, and I am very happy to say that ours is not one of them. Luckily, we have some pretty amazing people that have about fifteen million AI voice interactions under their belt. They did AI voice for Intuit before. We get people saying, yes, ma’am. Thank you, ma’am, all the time. We love that. But in some vendors, and I highly encourage you to do your due diligence, right, what you’re gonna notice is an elongated pause. That’s because of speech-to-text technology, which is really all we have to work with right now in AI voice. And so it takes the speech, translates it to text, then converts the text response back into speech. That creates latency. Right? And a part of that latency also comes from do they have on-premise hosting? Are they using cloud hosting? Right? So all of that contributes. But what was just released a few weeks ago by NVIDIA, and there are some other folks working on it, Deepgram, FABI, things like that, is called speech-to-speech AI voice technology. And so it removes that middle layer, and we’ve had the ability to try out some of the very, very early alpha versions, and it’s phenomenal. You don’t even know that you’re not talking to a person. So I actually have a demo line. So if anybody wants to shoot me an email, I’ll send you the number so you can try out our AI.
We actually got the best compliment we could ever receive, which was somebody said, I don’t think it’s real. So, yeah, I’ll give you that line. Try it out. I don’t know, Jacob, if you saw the comment, but it got a laugh out of me: plot twist, Jacob is AI. You’d be surprised how many times I hear that. So yeah. And we love hearing that. It just goes to show that we’re doing something right. So, yeah. Thanks, everybody. Well, thank you so much, Jacob. That brings us to the end of our webinar. I just have a few wrap-up items. So I’ll say goodbye to Jacob. So thanks again, Jacob. Appreciate it. And I think that is it for us for today. I’m gonna stop sharing. Thank you so much for spending your time with us today. We really appreciate it. It was three hours of your time, and I don’t take it lightly that you’ve spent that with us.

What You’ll Learn
- How AI systems work and why confident outputs still require human-in-the-loop oversight and critical thinking
- How to work with AI more effectively, including practical prompt fundamentals and boundaries
- Where AI is already being applied across restoration — including marketing, intake, documentation, and coordination
- How to assess which workflows are ready for AI and which are not
- How firms are approaching AI strategy — what to prioritize now and what can wait
- How to evaluate AI tools responsibly, including data security, liability, and governance considerations
- How restoration companies are preparing for AI as a background operating layer rather than another stand-alone tool
Meet the expert panel

Jacob Cleveland
Co-founder
Breesy.ai

Nick Hindle
Chief Information Officer
PuroClean

Ronuk Raval
Chief Technology Officer
Encircle

Dianna Lee
Senior Product Manager, Platform
Encircle

Taylor Carmichael
Director of Systems & Instructional Design
Southeast Restoration

Joe Ledbetter
Restoration Consultant & AI Architect
Encircle

James Larosa
Chief Executive Officer
Restoration Growth Partners
Get complete and consistent field documentation every time.
Learn how Encircle can help you and your field teams ease the documentation burden.