Encircle AI: A preview of the future of restoration workflows
See Encircle AI in action
In this on-demand session, Shaun Coghlan, Kash Mirza, and Mike Aho give you an exclusive first look at how Encircle AI is set to transform restoration workflows. They preview five features — practical tools that will enhance field documentation, speed up contents inventories, and transform the way you scope a job.
Disruption without interruption
Encircle is taking a purpose-built approach to AI innovation – embedding it into the workflows you’re already doing, so you can unlock speed and accuracy without disrupting your teams in the field.
Good afternoon, and welcome everyone to this exciting webinar. We'll give everybody a moment to get in and get settled, and then we'll kick things off. Alright, we're going to get rolling — I see lots of people are still arriving, so we'll get started, but welcome as you're joining. We're really excited that you're here for this Encircle AI preview of the future of restoration workflows. My name is Leah Vusich, and I'm the Director of Product and Content Marketing here at Encircle. About a month ago, we kicked off this AI conversation by talking about our vision for AI in the restoration industry, and we promised a preview of what we're actually working on. That's what we're here to do today. There's a lot of interest in this topic — we had over 1,200 restorers registered for this session, and we know that many of those registrations are teams of people watching together in one room. That number is amazing, and we're so excited to share what we've been working so hard to bring to market. Quick introductions: Shaun Coghlan is the Director of Product Management here at Encircle; he steers the ship on all of the features we bring to market. Kash is our Senior Product Manager; he leads our contents squad as well as some of the field tools he's going to talk about today. And Mike Aho is our Product Manager for our water mitigation Hydro product. So I'm going to stop talking and disappear. I'll be answering questions on the back end — these guys know that if they see me appear, they're talking too much and we need to move on, or we have a question to answer. Over to you, Shaun. Perfect, thanks, Leah. I'm super excited to be here today. I've done quite a few of these webinars.
And like Leah said, we did our first AI webinar just over a month ago, with a huge reception. The fact that we have well over 1,200 people registered for this one shows the enthusiasm for using technology to improve the restoration industry, and for how Encircle can achieve that. So let's quickly look at our agenda for today. First, I'm going to do a quick recap for those who attended the webinar at the end of September — at the end of my part, I gave an overview of our vision for how AI can be used within restoration and within Encircle's functionality. We'll summarize that, and then talk about data security and privacy. That's been really important to us since day one, and we're building it into everything we do with AI as well. Then we have a list of five features to go through. I hinted at these at our last webinar, and we're going to show you where we're at with each one. We're really excited: we're enhancing the field documentation workflow, we're improving the contents workflow, and we really believe we can transform the scoping workflow with what we want to show you today. One thing I want to address up front is the first question I think a lot of people will ask: "Hey, Shaun, when are these features coming out?" I'll summarize that for everyone right now. We've been working on the foundation for each of these technologies in our product for the last year — understanding the AI that exists, whether that's machine learning, LLMs, photo reading, and data extraction from photos — and how we can apply it to the specific restoration use case. Now we're getting to the point where we're starting to develop the experience we want to provide to you.
So we're going to show you the experience we're building for each of these features. The goal is that throughout the first part of 2026, all of these features will be rolled out. The field documentation enhancements will be part of your Encircle subscription, since these are core workflows for you. The contents pack-out and the scoping tool, once they're ready, will be available as add-ons to your Encircle subscription. We're not going to get into the specific details today; once these features are ready, we'll run more webinars and what's-new sessions going into 2026 and cover the specifics of each feature then. So let's move on to our approach. Encircle's philosophy from day one has always been about helping you become more efficient and effective in the field. A core part of that job is documentation, and that's really where Encircle came from. A thesis we've always had is disruption without interruption: we don't want to interrupt your workflows. If we boil the restoration workflow down to something as simple as possible, this is what we get. A job comes in — from a TPA, a private job, direct carrier work, however you get it — and you need to meet the homeowner, the property owner. The first thing you do is assess and document what you see. Then you produce the scope, understand the estimate, and do the work. And it's actually a cycle: as you do the work, you may expose more things that need to be done, which you need to document. Does that affect your scope and estimate? You change it and keep going until the property has been restored to pre-loss condition. That's how we've approached it.
For the most part, Encircle has focused on documenting the loss and helping you solve that: organizing the data into rooms, generating reports, and letting you share all of it — with the property owner, a third-party estimator, an internal national estimating desk, an adjuster, or a TPA — to help prove and show everything. That's the generic workflow and what Encircle has been doing. What we want to do is continue that and enhance it with technology. This is the Encircle hourglass we've talked about. It's been our philosophy from the beginning, and we can now expand it because of technologies like AI that are available for us to take advantage of. Really, what we're talking about is that what happens in the field is what happens, and there's that proverb: you don't get paid for what you do, you get paid for what you document. Where Encircle comes in is letting you document the unstructured information you have in the field — photos and videos, 360 photos, notes, and readings if you're doing mitigation. Previously, Encircle would let you take all of that and package it up into really nice reports you could share with people. What we want to do now is take that to the next level: as the data gets funneled through our system, we can overlay AI, our experience in the industry, and industry standards, and help you transform that data into more and more outputs. But what's critical is that you are in the loop. You are in control. You decide when and what gets processed, and you decide what you want to do with it afterwards.
We're going to help you take all this data, turn it into valuable information, and put it into outputs. For us, the first step in this really transformative space is focusing on scoping as an output. On the input side, as we're going to show you, it's about how else we can improve those workflows, and we'll walk you through that. As I said, if we break down the circle of restoration and its workflows, we've really driven home the ease and accessibility of photo documentation. With the new features that Mike, Kash, and I are going to show you, we'll make that process even faster and more efficient. More transformatively, we'll take all that information and translate it into an actionable scope that you can use to generate an estimate, figure out what you need and where to send your teams, and communicate with other people with information to back you up. Our belief is that with this enhanced technology, we can further reduce the time you spend in this cycle on every job and let you do the thing you need to do: helping the property owner get back to their life. They've just had a catastrophe, and that's your job. If we can add any technology that makes that smoother, easier, and faster for you, that's what we're going to do, and that's what we'll show you today. First, though, before we get into that, I really want to clear the air on data, privacy, and security. As per our terms and conditions, this isn't changing just because we're introducing AI into our app: you own your data. When you capture data, it is yours — that's what our terms and conditions say.
When we use AI, we are not taking your data and using it to train an LLM or some external system. We look at the outputs we create through our prompts and the technology we've built on top of AI to refine those outputs, but we are not sending your data off to another company for them to mine. You own the data; we're not using it to train someone else's technology. The other important thing is that you decide when it gets processed. You decide what data on a job you pull into your scope, what video you want us to summarize on your behalf, and what readings you want read — Mike will walk you through that. And as always, from a security perspective, we use end-to-end data encryption. From the time you capture data on your phone to when it syncs with the Encircle servers and shows up in our web app, all of it is encrypted, and any services that connect into the Encircle service are encrypted as well. We ensure your data is protected and private because you own it, and we take that seriously here at Encircle. Now let's move on to some of the exciting stuff. The first field documentation feature we'll walk through, with Kash, is video summarization. Kash, you've been at Encircle almost two years now, and you've done a lot of work improving our app. Can you tell us how you're applying AI to the work you're doing for our customers? Yes, of course. Thanks, Shaun. For video summary, I'm really excited to showcase this feature — if you just go to the next slide here. If a picture is worth a thousand words, you can imagine how valuable a video can be. But videos can be long, and they can be difficult to review in reports that already have a lot of data.
This video summary feature transforms your Encircle videos into a documentation powerhouse. It generates transcripts for any videos created within Encircle, but more importantly, it takes that transcript and transforms it into a clear, formatted summary of the video content. It outlines key topics, discussion points, and action items, and organizes all of it for easy review. What's more, these summaries can be converted into notes with a single click, meaning you can edit them afterwards and even include them in reports alongside the original video. And what I find so exciting about this feature is that it works not only with English — it supports French and Spanish videos as well. For a team with multiple languages, team members can speak in the language they're most comfortable with, knowing that the insights and the summary generated by Encircle will be automatically translated. So you can imagine the impact: instead of taking notes manually in the field, you can take a video, speak about what you're seeing, and Encircle will do the work of formatting and summarizing for you. I've heard from folks in the field that not only is it difficult to fuss around with your phone's keyboard on-site, but if a homeowner is present, there can be a negative perception that your team members are just on their phones, even though they're doing real work. This feature lets you document seamlessly, visibly, and professionally, without missing anything you might have missed just taking photos. Now I want to walk through what the experience is like actually using this feature, so I'll switch over here and share my screen. Awesome. What we have here is a video taken in the field.
In the video, Brandon is walking a small floor area and notes that the average moisture reduction rate, or MRR, in the living room is fifteen percent over the last 24 hours. This is just a small snippet of a longer video for demo purposes, but we've processed the whole video through our AI to create the summary. You can hear Brandon talking about general trends in the readings, and he goes on to identify the source of loss as a leak behind the fridge. Once the video is completed, you'll see the option to add a video title and then opt to generate a summary in either English or French, because, like I said, we support automatic translation. Once the video has been uploaded, you can see the summary being processed in the background. When it's done, you'll get a notification, and you can either tap the shortcut or open the video to see the summary itself. In the video view, the transcript is attached directly to the video — you can see it here. More excitingly, if you open the summary section, you can review the summary that's been generated automatically from the content. This really showcases how powerful the feature is, turning a simple video and its audio into clear, professional documentation. The result will vary with the content of the video itself, but you can see here that the summary describes the context of the video, includes the specific data points that were discussed, and breaks the topics down into easy-to-follow subsections.
And that's the exciting part: not only have you created this documentation, but with a single click you can convert the summary into a note, so it works the same way as your current documentation and integrates seamlessly with your reports and the workflows you're already doing. Very quickly, I want to showcase what this looks like with a Spanish video. You can see here we've got a video in Spanish, and once you've completed it, it's the same workflow: you finish the video and choose between generating the summary in English or in French. Once the summary is completed, uploaded, and processed, you can immediately view the AI-generated summary in the selected language — this one is in English. So every member of your team can now actively contribute detailed field insights regardless of language barriers, and Encircle handles the rest to enable clear, professional documentation and really multiply the impact your team can have. Best of all, as I've shown you, this feature is integrated directly into your existing video creation workflow, so it immediately unlocks a treasure trove of field data without any extra effort on your part. I'm super excited about the impact this feature will have. Back to you, Shaun. Perfect, thanks, Kash. This is something we've been talking about for a long time, so it's exciting to see it out. Ethan just asked: what about Spanish? As Kash said, the feature will take Spanish input and convert it into French and English. And Jonah asked whether we're doing voice notes — I'll talk about that in a second, Jonah, don't worry. You're getting a little ahead of the game here.
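The flow Kash demos — record a video, transcribe it, generate a summary in a chosen language, then convert the summary to an editable note — can be pictured as a small pipeline. The sketch below is purely illustrative: every name is invented for this example, and the "summarizer" is a stub standing in for the real transcription-and-LLM step, not Encircle's actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Video:
    title: str
    transcript: str
    summary: Optional[str] = None

def generate_summary(video: Video, target_lang: str = "en") -> str:
    # Stub summarizer: a real implementation would send the transcript
    # to an LLM to condense it and translate it into target_lang.
    # Here we simply keep the first sentence as a stand-in "summary".
    first_sentence = video.transcript.split(".")[0].strip() + "."
    video.summary = f"[{target_lang}] {first_sentence}"
    return video.summary

def summary_to_note(video: Video) -> dict:
    # One-click conversion: the finished summary becomes an editable
    # note that can sit in a report alongside the original video.
    if video.summary is None:
        raise ValueError("Generate a summary before converting to a note.")
    return {"title": video.title, "body": video.summary, "editable": True}
```

The key design point the demo highlights is that summarization happens asynchronously after upload, and the note conversion is a separate, user-triggered step — the AI never writes into your documentation without you clicking.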
Great. Thank you very much, Kash. Let me take over sharing again. There we go. Perfect. The next feature I want to talk about builds on what Kash and his team have been doing with video summarization: let's take that technology and apply more of it to our note functionality in general. What we want to do is add what you'd expect AI to do within a note. We're going to offer features that let you clean up and organize long notes you've written. We can improve spelling, grammar, and formatting. If you have a long note, we can create action items out of it. If you were at our webinar earlier this month, we talked about rich text fields that are coming out — the ability to format and add bullet points and checkboxes within a note — and we're going to take advantage of that feature with AI. And similar to video summary, if you have a note in the app, we'll let you translate it into English, Spanish, or French. So again, just like video summary, the impact we're going for is saving you time on editing and reviewing, clarity and consistency across teams, and helping you produce the professional notes you want in your report. Let me share the demo here. There we go. In this note — a mock note — someone has written a basic summary of what's been happening on the job. And in fact, Jonah, this one's for you. Don't tell anyone — we haven't really announced this yet, but we're working on a voice note feature as well, where you can dictate into our app and we'll transcribe what you say directly into the note.
Today, on some smartphones, you can do that with the keyboard, but we're going to embed it directly into our notes. That's coming early next year as well. Once you have this note — whether you've spoken it or typed it — you'll see our AI sparkles on the note, and you can click and choose what you'd like to do. If you want to summarize, exactly like Kash just showed using that same technology, we'll format and organize that long note into sections relevant to a restoration summary. Or you can fix grammar and spelling: break that giant dictated note into paragraphs, add appropriate punctuation, and correct the spelling. Thirdly, find action items. If you just talk or type into your phone — "okay, team, we've got to do this, this, and this in the kitchen," and then create a note in the bedroom, "we have to do this, this, and this" — we'll transform that note into action items using the new tick boxes from our rich text notes. And lastly, as Kash said, if your crew speaks and types in many languages — if they're comfortable in Spanish or French — they can type in that language, and you can then translate the note into English, French, or Spanish and insert the translation at the bottom or the top. So I think these note enhancements, like AI summary and everything we've been talking about, are about helping you in the field so you can focus on the job of repairing the house, and make documentation even easier. With that, we'll go back to the presentation. Alright.
Now I'm going to hand it off to Mike to talk about moisture readings. Mike is the guy on my team focused on mitigation, particularly water mitigation with our Hydro functionality. Mike, over the last year, you and the team have really worked on improving the experience within Hydro, specifically with readings, right? What can we expect from you going forward? Yeah, that's right. We all know that taking readings is a high-effort, high-volume task on the job. It takes a lot of time, and it's highly subject to human error. In the past year or so, we've launched a few things that make it nearly foolproof to take some types of readings. We launched the Tramex integration to make moisture content readings way easier, with one click. We also integrated with weather stations, so you can now take exterior readings just by tapping into the local weather station. Now we're taking it one step further, to let you take any type of reading you want with just one click. We've introduced what we're calling smart moisture readings. What does that mean? Basically, just by snapping a photo, you can now get all this extra information in one click. We know that having a photo is key extra evidence when you're taking readings. So now you can snap a photo of your meter, and we'll scan all the readings on the screen you've just photographed and populate the fields that feed your reports in Hydro. This automatically inputs all your data entries with no manual entry, and it works for all types of moisture readings taken with a digital meter. And the impact is that we can eliminate human error, because now there is no manual input.
So it's going to reduce the likelihood that you make mistakes in the field and have to correct your data. It's also going to encourage people to take photos. A lot of times, technicians might take readings and forget to take a photo, or take a photo and forget to input the readings. Now, with one click, you're getting all three of those data points, so it's going to be much harder for any reviewer to question your reports. Secondly, it's going to improve the accuracy and consistency of all your reports. If everyone follows the same process on every single reading, you always get the same result, and your reports become consistently the same — it's going to be much harder to find gaps in your data, and your reports will be far more consistent for reviewers. And finally, it's really going to speed up the documentation of readings in the field. It doesn't add any extra steps to your workflow — it actually makes things easier, and probably less work — and in doing so, you get more robust data into your reports for reviewers. So it's really a big improvement and a new, intuitive way of capturing readings in the field. Now I can show you what this looks like. If you're on the task list in Hydro — you're familiar with this page — let's take an affected-area air reading. You click on that task. In the current state, you would have your meter on the wall or material, or in your hand in this case, and you would manually log the temperature and relative humidity. With this new workflow, you just hit "add meter photo."
In the background, we figure out which readings you're looking for based on the screen you're on, and then you simply snap the photo. Like magic, we give you both of those readings — again, three data points reported with just one click. On top of that, we still calculate some extra ratios that help you assess what's going on in your job. Hit save, and it's as simple as that. This is going to be a really intuitive improvement for taking readings in the field, and we think it's going to shave off hours of time and reduce a huge amount of manual error. With that, I'll pass it over to Kash. Before we get there, Mike — a few people have asked this question: does this work on any meter, or only specific meters? Do I have to specify the meter beforehand? No, this will work on any meter with a digital screen. The meters that won't work currently are dial meters, such as a concrete meter that uses the traditional dial. We'll get there, but currently it does not work with those. Any digital meter, though — you won't have to specify; it'll just know what to take. Perfect. Thanks, Mike — I think that was the number one question coming in during that segment. Cool, let me take over screen share again. While I do that, a few questions came up on the note enhancements that I'll briefly mention. Someone asked about checkboxes — do they just act as bullet points, as we've shown in the past? No, it's an active tick box: once it's in your note and you move into the edit form, you can check and uncheck it, so it'll be fully interactive at that point. Someone also asked: are we only doing things for mitigation? What about contents?
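Mike mentions that Hydro also calculates "extra ratios" from a temperature and relative humidity reading. The webinar doesn't say which ratios Encircle computes, but a standard psychrometric value in water mitigation is grains of moisture per pound of dry air (GPP). As an illustrative sketch — not Encircle's actual calculation — GPP can be derived from a temperature/RH pair using the Magnus approximation for saturation vapour pressure:

```python
import math

def saturation_vapor_pressure_hpa(temp_c: float) -> float:
    # Magnus approximation for saturation vapour pressure over water, in hPa.
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def grains_per_pound(temp_c: float, rh_percent: float,
                     pressure_hpa: float = 1013.25) -> float:
    # Actual vapour pressure from relative humidity.
    e = saturation_vapor_pressure_hpa(temp_c) * rh_percent / 100.0
    # Humidity ratio (mass of water per mass of dry air), then convert
    # to grains of moisture per pound of dry air (7000 grains = 1 lb).
    w = 0.622 * e / (pressure_hpa - e)
    return w * 7000.0
```

For example, a room at 22 °C and 50% RH works out to roughly 57 GPP, which is the kind of derived value a drying log reports alongside the raw temperature and humidity readings.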
And the timing couldn't have been better — great question. Over to you, Kash. You've been here for two years, and not only have you done a bunch of work on the field side, but you've also been implementing changes to our contents workflow — not just this feature, but other things too, right? Yes, exactly. We've been doing a lot of work on the contents side, and that's why I'm very excited to showcase some of the upcoming AI features for contents specifically. If we just go to the next slide here — awesome. This is a feature I'm really looking forward to showing you. Anyone who's done a pack-out knows how tedious it is not only to box and pack up items, but to then document them and add a description. At Encircle, we've done a great job of making that field experience a lot more seamless, but this feature, AI descriptions, really brings it to the next level. As soon as you snap that first photo, an accurate, high-quality description begins generating in the background. It runs while you work, which means you can continue to take notes, take additional photos, and so on. As soon as the description is completed, it shows up right at the top of the view for you to review. Encircle's AI works for you to create a clear, searchable description, but it also pre-fills additional data like quantity and brand, and if it's visible, it can even read model or serial numbers. All of this generated data integrates seamlessly with your existing Encircle reports and workflows, so you don't have to change anything about the way you work to benefit immediately from this feature. Over the course of a pack-out, AI descriptions can save you hours of manual data entry, and they also improve the accuracy of your descriptions, ultimately making items easier to track, identify, and search over the life cycle of an item.
Now, on the next slide, I want to quickly show you what makes Encircle's AI different. As you've heard before, we've tailor-made our solution for the specific contents use case. Take something as simple as a mustard bottle, which I'm sure many of you have dealt with when going through fridges at a site of loss. If you process this image through a generic AI model, you'll get a description, but it's unfocused, and it would require further editing before you could add it to a report. It doesn't have the professionalism or the focus you'd need for it to actually save you time and energy. Encircle's generated descriptions, by contrast, are clear, concise, and focused on the information that's useful for your end goal: specific brand information, calling out clearly what the item is, identifying things like volume, weight, and size where available, and parsing the mass of data on a label or in the object's context to figure out exactly what's important for your use case. The goal of these descriptions is to produce something that works — something you can add to a report without any extra work, review, or hassle. Now I'll share my screen and show you what this experience is like. Just give me one second. Awesome. You can see on the screen a very common use case: imagine you're in the field, packing out a homeowner's collection of books and documenting them. With AI descriptions, as soon as you've taken that first photo, the description automatically begins generating in the background. Once completed, it's visible on the same screen for you to review, as long as you're online — no extra buttons, no extra triggers.
If you want to edit or change anything or add more details, you can still open the details pane and complete the same workflow you're used to with contents. And that's honestly the most exciting thing about this feature for me. Usually when I demo, I like to show all the new buttons and workflows, but that's the magic of AI descriptions: it just works. Encircle's AI does the work for you in the background so you can save time and focus on what needs to be done. As Shaun mentioned, we've been working hard on many enhancements to the contents experience, and what you see here is actually the new contents view that will be releasing soon, built from the ground up with this AI description feature in mind. With dispositions and boxes being sticky between items and accessible anywhere within the camera flow, for a lot of items you won't even have to open the details page. Your workflow becomes snapping the photo, letting Encircle AI do the work, and continuing with the rest of what needs to be done on-site. As I mentioned, if you're offline, the description will be queued and will process as soon as you reconnect to the network, so you can trust that this will work in any scenario. Now I want to finish by showing just how powerful the item description is. It's not just blindly describing what's in an image; it's contextualizing and identifying what's important. I know this is simplified for the demo, but these are actual inputs and outputs from Encircle's AI. For this book, for example, even though there's a lot of visual information on the cover, the description isolates the name of the book, identifies the author, correctly identifies that it's a paperback, and even highlights the color.
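The offline behaviour Kash describes — descriptions queue while offline and process automatically on reconnect — is a classic deferred-work queue. Here is a minimal sketch of that pattern; the class, method names, and the stubbed AI call are all invented for illustration and are not Encircle's actual code.

```python
from collections import deque

class DescriptionQueue:
    """Queue AI-description jobs while offline; flush on reconnect."""

    def __init__(self):
        self.pending = deque()   # (item_id, photo) jobs awaiting processing
        self.online = False
        self.results = {}        # item_id -> generated description

    def snap_photo(self, item_id: str, photo: bytes) -> None:
        # Every photo is queued; if we're online, it processes immediately.
        self.pending.append((item_id, photo))
        if self.online:
            self.flush()

    def set_online(self, online: bool) -> None:
        # Regaining connectivity drains anything captured while offline.
        self.online = online
        if online:
            self.flush()

    def flush(self) -> None:
        while self.pending:
            item_id, photo = self.pending.popleft()
            # Stand-in for the real AI call that returns a description.
            self.results[item_id] = f"description for {len(photo)}-byte photo"
```

The design choice worth noting is that capture and processing are decoupled: the technician's workflow (snap, move on) never blocks on the network, which is what makes the feature trustworthy "in any scenario."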
These are fairly straightforward examples of things that have clear labels. But where this AI shines is with items that are hard to identify in the field. So I'll show you a couple of side-by-sides that showcase just how powerful this feature actually is. You can see in this side-by-side that we have a device, and at first glance you can't really recognize what it is in the field. But through the AI processing, it was able to not only recognize that this is a CPAP device, but also pull out the specific brand details. It's able to contextualize exactly what this device is. In another example, we can see that this is some kind of controller, but the AI is capable of identifying that this is specifically a PlayStation 5 DualSense wireless controller. It's able to give you that much more detail, specificity, and accuracy when it comes to descriptions. And in terms of impact, this is huge. I've attended several pack outs myself, and I can tell you there are a lot of times where you encounter an object and you just can't figure out what it is. This is especially true for teams that have a language barrier on-site. With this feature, every team member can contribute to the documentation effort. I've seen it before where only a few trained members write the descriptions and become a bottleneck. This not only fixes that but really simplifies your training requirements. I'll show you more examples. This is a rice cooker, and you can see the AI was able to recognize it by contextualizing the image, taking the context of the buttons to identify that this is a rice cooker, branded by NutriBullet, with a digital display. This is really interesting.
You can see that it's processing a pair of shoes, and it's able to identify a lot more detail than you'd think. This isn't simple OCR. Nowhere in this image, for example, is the word Under Armour, but the AI is actually able to not only read the logo but identify it as the Under Armour logo and provide that information under brand. I'll show you another example: clothes. Clothes are a big one. The description identifies color, material, and any images or decals. You can imagine how useful this is when the homeowner calls asking for her husband's favorite shirt from his favorite show, and you have to search through a bunch of items that are just saved as "shirt." This gives you the identification and specificity that really, really helps. And lastly, decorations, cups, odd things in a household that are really hard to describe. This is one of my favorite examples: a cup shaped like a bird. How many times in the field would you just call this "cup" or "mug"? So this feature not only saves you time but increases the quality and professionalism of how items are documented and show up in reports. That gives more comfort to the homeowner, and as I've mentioned, it just makes it easier for you to find items over the life cycle of a claim. Overall, I think this is the kind of enhancement that really makes a big difference: big impact, minimal change to your workflow. As Shaun and Leah have said, disruption without interruption. It's not just about streamlining your contents experience; it's really like having Encircle as another actively contributing member of your contents team. And I think that's really, really exciting. So that's all for me, Shaun. Perfect. Thanks, Kash. A lot of good feedback and commentary in the chat. I know there'll be a lot of excitement around this feature while we get it going. A lot of questions about whether it works offline.
And as you said, we queue it up, and as soon as everything is synced, that's when we'll start generating the names. You just continue to do your pack out rapidly, and we'll provide those descriptions as soon as the device reconnects to our service. Exactly. That's wonderful. Alright. We're at 12:36, so let's move on to AI scope builder. I wanna be cognizant of everyone's time. We really believe in where we're heading. As Mike, Kash, and I showed you, there's a lot of work we can do in the field around improving how we capture and document a loss. In fact, documentation is Encircle's backbone. We pioneered the category of field documentation. We were one of the first and continue to be the leader in the mobile-first approach. Your team works in the field with phones in hand, and we wanna make sure the experience in the field is as easy and smooth as possible. That's our value proposition, and everything we've talked about is meant to enhance that: to make it easier and more efficient for your team in the field to document what they need to, move on, and actually do the work they need to do. Like I said, no one else sees the field the way Encircle does. By overlaying our experience in the industry, our conversations with customers, and the standards that are out there, we can apply AI and build purpose-built solutions for our industry that improve the workflows you do in the field. Now we wanna take that and ask: what do we do with this information? How do we take all this amazing data you've gathered and turn it into information you can use, whether that's creating an estimate, telling your team what to do, or figuring out the next steps on the job?
Really, that's our goal with the AI tool. The first thing we wanna do is send out a poll for everyone. The question we're putting out there is: how does your team usually determine the work that needs to be done on a mitigation job? So fill in the poll while you have time, and I'm gonna move on and start talking about the Encircle scope solution. Our scope tool, as I said, takes the data that's on the Encircle job and turns it into a ready-to-review scope, automatically. It takes your data, lets you select what you want to be part of the scope, and then produces this output for you to review. And that's critical. Something we've talked about from the beginning is that you are in control here. We are helping transform data into information, then allowing you to review it, edit it, and add more information as you need to, and then use that to do the job and all the workflows you need to do. With the purpose-built rule engine we've developed with AI, we take in your photos, and it's not just looking at the photo: we break down aspects of the photo, looking for materials, damage, and water staining. We really analyze the complete depth of each of those photos. We take in dimensions, whether that's from floor plans, room sketches, or other means. We're able to read the dimensions you add onto a job and translate them into our scope. This helps us understand the context and the affected materials. And all of this is backed by logic driven by the IICRC standard, so everything we output in your scope is validated and backed by IICRC logic. Really, the impact we're going for is to save hours on researching and drafting scopes. The goal isn't to replace anyone.
It's to let you use your time in the best way possible. We can do the research for you and present it back: based on the data you put in, this is what the job looks like. Then you can add your own SOPs on top of it, or your own carrier rules if there's anything unique to the carrier for this particular job. It allows you to focus on reviewing and validating the information instead of researching it. We believe this will speed up estimates, speed up approvals, and improve overall efficiency, because it's something you can run on day one or day two or whenever you need to generate the work you're doing. So now let's look at the output. I'm gonna show you one of our internal test jobs. You don't necessarily need a lot of data to produce a full scope, but it's really important to understand that the more information and the more of our features you use, whether that's Floor Plan, setting up Hydro, putting in readings, creating drying chambers, identifying dimensions, creating rooms, adding notes to each of those rooms, or adding an overall note, the more data we get, and the richer and fuller that scope is gonna be when we produce it. If we look at this job, it's very simple. It's a water loss. We have essentially one general note that outlines the project details. It's a water loss, an insurance job, and it was phoned in that a washer overflowed and impacted the laundry room itself and then the basement underneath the laundry room. That's a bit of detail from a content perspective, data within Encircle. We have four rooms that were produced, and we have a floor plan as well. Just let this load up; my Internet is slow today. There we go. So we have a laundry room.
Within the laundry room we have, I think, eleven photos, plus a storage room, a rec room, and a hallway. With that data, I'm gonna be able to show you what we produce. But the first thing I wanna do is quickly show you the actual process of creating a scope. It's gonna be as simple as selecting the rooms that you want for your job, hitting generate, and then you're gonna get a scope within Encircle. That scope is gonna be formatted, and it's gonna allow you to edit it. But really, what we wanna show you is what that actual scope looks like on a job. So I've done that here: taken this data and run it through our scope tool. As I said in the beginning, we've been working on the foundation and the technology in the background, and now we're starting to bring it into the app. We've been working with a handful of customers to ensure that the data we produce is very accurate against the job, and the ones we've shown it to have been really, really excited about the results. Once you generate the scope, this is what it's gonna look like. When we introduce this into the app, it's actually gonna be an editable document. You're gonna be able to review it, remove lines, add lines, and edit it yourself. But for now, we're able to produce a PDF, and this is it. Based on that one general note, about thirty photos on a job, and a floor plan with dimensions, this is the eleven-page scope we've been able to produce. We've broken it out into a variety of sections that, to us, make a lot of sense and mirror how the customers we've talked to, and our experience in the industry, lay out scopes. First and foremost, we give a quick overview of the timeline of the job: the dates that were added to the claim. We have a quick summary of what the loss is. So what's the project information?
What was the cause of loss? Is this a category three or four? All of this data is validated underneath, and I'll get to those details in a minute. The next part, which is really nice, is the project narrative. What we feel is very important about this part is that it's a paragraph, or a few paragraphs, so you're gonna be able to copy and paste it into a 24-hour report or share it with the TPA on that initial inspection of the job. It explains what happened on the loss and the details of what needs to be done. In this case, you can see it was a water damage event on October 26th. Mitigation work began on the seventh, based on the dates that were inputted on the job. It's category two, and just from those photos and one note, we can see that the affected area was where a washing machine overflowed and impacted the rec room. It's a really good summary of what's happened on the job. After that, we get a breakdown of our room-by-room summary. In this case there were three affected rooms: a laundry room, a storage room, and a recreation room. In each of these summaries, we reference the standards within the IICRC to justify why the room was affected and the work that needs to be done, really providing valuable insight and, again, justification for the work that needs to happen. Next, at a generic level, we pull in the IICRC and have a reference section that talks about the category, the class, and the next steps within mitigation in relation to the IICRC. Again, this gives you detailed information to help you explain why you need to do these things and justify it against the standards we work towards. The next thing that's really exciting is that we break this down into a room-by-room section. First, we have a summary for each of the rooms. Was the room affected?
What was the damage summary? What are the dimensions we've been able to pull from the information? The other thing is that we're very clear about the assumptions we're making. If you provided dimensions, we're gonna use those dimensions as part of the scope. If they weren't provided, we're gonna make some assumptions based on standard room dimensions, and you'll be able to edit that information afterwards. So we're really giving you a clear view of what's happening in each room. And I think where it gets really real is that we break down the work that needs to be done in terms of tasks. Again, these are recommendations we've drawn out of all the data you've provided, as well as leveraging the IICRC standards and general practices, making it very clear which line items need to happen within each room. In this case, we have a two-foot flood cut, because we had photos indicating that's what needed to be done. We noticed in the laundry room there was a washing machine and a dryer, so having the line items to detach and reset those is really important, and we help capture all of that. We have a safety section and an affected materials section for each of the rooms, which again break things down to help you understand what needs to happen on the job so you can tell your teams what to do in the field. And then we repeat this for every room. What you'll see here is a list for the laundry room, but that same list is unique for the storage room and unique again for the basement rec room. We're really customizing the details for every room and providing you a very quick summary, within minutes of generating the scope, to get an overview of everything. And lastly, once we've gone through each of the rooms, we provide equipment recommendations.
What's really important here is, again, that these are recommendations. We're leveraging the IICRC calculations. We're not on the site; the AI isn't on the site. We're taking the information you provided and using it to calculate what equipment you need. Right now, we're focused purely on the core equipment used on jobs: dehus, air scrubbers, and air movers. If you use things like Injectidry, the great thing about our scope is that it's editable. Again, you can add that to your scope and manually enter it in. The other thing we pride ourselves on is trust and transparency, so we provide the details of how we got to those calculations, leveraging the IICRC S500. We're very open and transparent about why we picked those numbers and how we calculated them for you. And if you didn't use Hydro, where you're able to set up drying chambers, we're gonna intelligently group rooms by floor, in this case a main floor and a basement, to create those chambers and then provide equipment recommendations against the different chambers. We can even call out the fact that this claim included an unaffected room, and we call that out specifically; that's why it wasn't referenced anywhere else. So we really think there's an opportunity here: we can transform the data you've inputted very quickly and easily in Encircle, and within minutes create this fully fleshed-out scope. We're really excited for this. There's an opportunity for a bit of a paradigm shift, where very quickly on a job you're able to understand exactly what needs to get done, and then you can tell your crews what to do as well as share that information with whoever you need to share it with. That's where we're at with the AI scope. We're really, really excited about it. We're gonna put up another poll.
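The equipment recommendations Shaun describes are, at their core, chamber-by-chamber arithmetic. The sketch below illustrates the general shape of that kind of calculation; the factors used here are illustrative placeholders, not the actual IICRC S500 chart values, and the function names are hypothetical. For real jobs, consult the current S500:

```python
import math

# Illustrative chamber-sizing sketch. The factors below stand in for the
# IICRC S500 charts and are NOT official values -- consult the current
# S500 before applying this to a real job.
LGR_DEHU_FACTOR = {1: 100, 2: 50, 3: 40, 4: 40}  # cubic ft per pint/day (placeholder)
SQFT_PER_AIR_MOVER = 60  # affected floor sq ft per air mover (placeholder)

def equipment_for_chamber(rooms, water_class):
    """rooms: list of (length_ft, width_ft, height_ft) for affected rooms
    grouped into one drying chamber."""
    floor_sqft = sum(l * w for l, w, h in rooms)
    cubic_ft = sum(l * w * h for l, w, h in rooms)
    # Dehumidification capacity scales with chamber volume and water class
    dehu_pints = math.ceil(cubic_ft / LGR_DEHU_FACTOR[water_class])
    # One air mover per room, plus more for larger affected floor areas
    air_movers = len(rooms) + math.ceil(floor_sqft / SQFT_PER_AIR_MOVER)
    return {"dehu_pints_per_day": dehu_pints, "air_movers": air_movers}
```

For a hypothetical basement chamber with a 10x8 laundry room, a 10x10 storage room, and a 20x15 rec room (8-foot ceilings, class 2 water), this sketch would recommend 77 pints/day of dehumidification and 11 air movers. The transparency Shaun mentions amounts to showing exactly these inputs and factors alongside the result.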
After seeing the output we're creating, we'd love to understand how valuable you think this tool could be for your office. So with that, I'll turn it over. Leah, I think you're gonna come back and see if there are any questions from the audience. Are there ever questions! I've been frantically typing in the background trying to get to everyone's, so apologies if I haven't gotten to yours yet. We've got about five minutes left. Keep typing in your questions, and if we don't get to them now, we will get back in touch. There are a lot of very specific questions around workflow: can we do this, can we do that? As Shaun mentioned, we've really been working on the foundational elements in the back end, and now we're building the UI, so it's great to see all the things you want incorporated. We, and by we I mean Shaun and his team, will take all of that feedback into account to make sure we get this feature right. And if it's not exactly the way you want it right out of the gate, or if it doesn't do everything, you saw a lot of my responses had some winky faces. We understand where the value is in this feature going forward, and we're just getting started. That's the message: what you're seeing today is just the start of the opportunities that present themselves. Shaun talked about the hourglass and all of the downstream impacts we can make with AI, and we understand our customers. So thank you for all of your feedback. Okay, I feel like I kinda just need to close my eyes and pick a question because there are so many. A lot of the questions around smart moisture meters have already been addressed.
One of the questions that came up, Shaun, is what AI are we leveraging on the back end for our AI solutions, if you're able to answer that? Yeah, I'll get into that, though I won't get into too many specifics; our CTO drives that. We leverage Microsoft Azure cloud services, and through the Azure AI portal we have the opportunity to use a variety of different third parties, whether that's ChatGPT from OpenAI or Claude from Anthropic. Part of our process, when we create our prompts, is to actually test them against a variety of different solutions and make sure we pick the one that works best for that output. Microsoft and OpenAI have a really close relationship, so a lot of what we've been working on leverages ChatGPT, but we have the ability to use a lot of the different options that are out there. A lot of questions around scoping, Shaun. Some of the questions are around Floor Plan: is a floor plan necessary to use our scoping tool? No, Floor Plan is not necessary. In fact, if all you have is a general note, or a note that has a bit of a description, we can take that and create a scope for you. The thing that's important to note is that the more information and the more clear data you provide us, the richer that scope is gonna be. But if you don't do a floor plan, that's fine. If you make notes that mark down the dimensions, we'll be able to understand and extrapolate that information and apply it to the right section within the scope. Although we encourage everyone to use Floor Plan, because to us, that's one of our first transformational features to help speed up the dimension process. Will this work only with new jobs, or will somebody be able to go back and run a scope on an old existing job? Yep, once we make this feature available, it'll be available across all of your jobs.
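The prompt-testing process Shaun describes, running one prompt against several model backends and picking the best output, is a common evaluation pattern. Here is a minimal sketch of the idea; the model callables and scoring rubric are stand-ins, not real SDK calls or Encircle's actual harness:

```python
def compare_prompt(prompt, models, score_fn):
    """Run one prompt against several model backends and rank the outputs.

    `models` maps a model name to a callable that takes a prompt and
    returns text; in practice these would wrap real API clients behind
    a common interface. `score_fn` rates an output, e.g. against a
    rubric or a reference description. Both are hypothetical stand-ins.
    """
    results = {name: generate(prompt) for name, generate in models.items()}
    scores = {name: score_fn(output) for name, output in results.items()}
    best = max(scores, key=scores.get)
    return best, results, scores
```

With stub models and a toy rubric that rewards more specific descriptions, the harness would surface whichever backend produces the richer output for a given prompt, which is the kind of per-feature model selection Shaun alludes to.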
There'll be a new tab on the web, or a new section on mobile, that you'll be able to use to go back, run existing jobs through it, and create a scope. Great. We had a couple of questions around particular use cases for scoping. We talked about mitigation; will it be relevant to fire losses, rebuild, that sort of thing? For our initial entry into this, we've really focused on mitigation jobs, specifically water mitigation jobs. A contents or rebuild scope is in the future. We're definitely gonna expand our scoping features, but we're really focused on that water mitigation scope at this point. So thank you so much. We are at time. Thank you to Shaun, Mike, and Kash for showing us the future of Encircle AI. Like I said, the theme is disruption without interruption. AI is a very disruptive technology, but restorers rely on their critical field workflows, and time is everything. So we're looking to use AI to disrupt and make your workflows more efficient, but we don't want to interrupt the things you're already doing and the processes you already have. Super excited about this. Thank you everybody for attending, and we will see you at the next webinar. Thanks, everyone.

Field documentation enhancements coming soon
- Video Summarization: turn video into notes
- Notes Enhancements: clean, translate, or summarize field notes
- Smart Moisture Readings: capture moisture readings automatically from meter images
- Item Descriptions: auto-generate contents item descriptions from photos
Transforming scoping with Encircle AI
Encircle is building a smarter, faster way to go from documentation to defensible scope—by turning the documentation you already capture in Encircle into a fully automated, room-by-room scope in minutes.
Estimate pushback almost always comes down to a disagreement on scope. A properly completed, clearly justified scope leaves little room for doubt, and makes every estimate easier to defend.
The impact:
- Cut estimating time by over 80%
- Recover hundreds of dollars in missed revenue on every job
- Eliminate hours of back-and-forth with adjusters
Scoping isn’t just another feature – it’s the launchpad for everything Encircle will build next.
By automating the foundation of every job, Encircle AI is setting the stage for a faster, more profitable future in restoration.
Meet the expert panel

Shaun Coghlan
Director, Product
Encircle

Leah Vusich
Director, Product Marketing
Encircle

Mike Aho
Product Manager
Encircle

Kash Mirza
Sr. Product Manager
Encircle
Get complete and consistent field documentation every time.
Learn how Encircle can help you and your field teams ease the documentation burden.