136: AI and EHS Software

January 28, 2026 | 1 hour, 2 minutes, 12 seconds

Do you have questions about AI and the role it’s playing in EHS? Is it a trustworthy source, and how would you know if it is? What are the benefits and implications? Jill sits down with three of HSI’s top executives who are leading the charge in AI technology development for EHS software. Join Jose Arcilla, Chief Executive Officer, John Hambelton, Chief Technology Officer, and Mike Case, Vice President of Product, as they break down what’s working with AI, the precautions you need to know, and how they’re creating tools that grow with the people who rely on them. You’ll hear how they’re designing tech that learns from real work, minimizes risk, and supports better choices, all while keeping humans firmly in control. From guardrails around ethics and human oversight to questions to ask before selecting AI-powered solutions, this episode pulls back the curtain on AI development, what’s coming next, and why it matters for EHS professionals.


Transcript

Jill James:

This is the Accidental Safety Pro brought to you by HSI. This episode is recorded in January of 2026. My name is Jill James, HSI's Chief Safety Officer. And today I want to discuss artificial intelligence as it impacts our profession. AI seems to be everywhere. If AI were a person, we'd be bumping into one every day. Sometimes as a helpful stranger who gave us good directions and other times as a know-it-all that took us down the wrong path. AI is moving so fast, it's nearly dizzying. Look over here. Look over there. It can do what? And some of it's fun. Some of it's frenetic. Some of it's kind of freaky. And for our profession, it can be all of that and a great aid. Yet how do we know what is a good aid? What can we trust? As EHS professionals, we excel in identifying and mitigating risk. That's our job. Yet with this emerging frontier, what are the risks we don't yet know because of the speed with which it's all moving? For all of those reasons and more, I have three guests with me today who are on the front lines of AI for the EHS profession, and they happen to be my colleagues. I promise you, this is not an infomercial or a sales pitch. You know that I wouldn't do that to this audience. Rather, I want you to hear how the sausage is made from the people who make it and the care with which they do the work. My guests today are Mike Case, Vice President of Product at HSI, John Hambelton, Chief Technology Officer at HSI, and Jose Arcilla, Chief Executive Officer at HSI. They're joining us today from Washington, Oregon, and Texas. Welcome to the show. Now, as is tradition with the Accidental Safety Pro, I'd like to start with your origin stories. Jose, can you tell us a little bit about yourself? How did you come to this work of helping the EHS profession, and specifically in technology?

Jose Arcilla:

Yeah, absolutely. And thank you for having me on the show. First of all, hopefully I don't confuse too many people with Jose and then the English accent. I grew up in the UK, and obviously going through school and college, I had a very big focus on computers, technology, the sciences. And so that really got me into working with technology companies. And so I spent all of my career really in technology, software providers, which naturally over the course of time led me to the US, of course, as a huge hub for technology innovation. And so that got me to Texas, where obviously it led to HSI. But there is a connection actually to workplace safety. My mom used to work at a hospital in the UK called Stoke Mandeville Hospital. It was in the town where I grew up, and it's world renowned for spinal injuries. And so at a very early age, I used to go to the hospital with my mom, they'd do movie nights, and I got to meet a lot of the patients there, and I got to speak to them. And whilst many were flown in from around the world, whether it was to receive treatment or to give their families a bit of respite, I remember talking to a gentleman who had had a workplace incident, and it stuck with me at an early age because it was one of those situations where you never used to think about workplace safety and compliance. It was a case of someone who had gone to work, was doing his job, had an accident, and he was left paralyzed from the neck down. And all of a sudden you start to think about your own parents. And my dad, who used to work in a factory, when he goes to work, what could potentially happen? And it kind of sits with you for the rest of your life as you grow up. So when the opportunity at HSI came up, given the nature of its work, it really resonated with me.

Jill James:

I have not heard that story from you before. Thank you for sharing that. That's beautiful. John, how about you?

John Hambelton:

Yeah. My career as a software developer started at a really young age. At the ripe age of 13, I was brought into a three-year grant project to flip the classroom, where we were teaching educators how to integrate technology into their classrooms in the state of Oregon. From there, I had the privilege of working for several web startup companies during the dotcom era. Following that, a brief time as an instructor and advisor for our local colleges, and then really growing in my career here at HSI, with the initial opportunity to pioneer emergency care online training, but really spending the last 10 years moving beyond the training and creating solutions addressing the complex data workflows and logistics of EHS professionals. Technology, education, and growth are my passion. Again, being able to teach teachers how to use technology and integrate it in the classroom, pioneering some of the early online training technology within this space and other K through 12 spaces, I really have a belief that technology can grow and develop individuals and groups, and there's so much opportunity today to continue to do that. At a young age, I wanted to be a doctor; that was kind of my little childhood hope and dream, to be able to go and save lives and help people be healthy. I'm really grateful that I was able to combine my skills in the technology space with those original goals, to be able to actually put work towards protecting our workforces and individuals and help people in the continuous effort of growing themselves. It's a super rewarding industry to be a part of, and I'm really grateful to be a part of it in this weird secondary way. I totally recognize my background is quite a bit different than the EHS professional's, but I've had the opportunity to meet with several EHS professionals during my career at HSI and have really come to appreciate the parallels between our work within risk management, compliance, managing a complex network of difficult control variables, and then always managing that balance of speed, quality, and cost. I feel like I share these issues with the EHS professional in the software development space, and I'm happy to tackle them in that context. So I'm grateful to be here and looking forward to the discussion.

Jill James:

Thank you, John. Since age 13, holy cats, I did not know that about you either. Mike, you and I have known one another longest. Let's see if you can surprise me, tell us about yourself.

Mike Case:

Sure. So actually, John and I have very similar backgrounds. I grew up with computers. The high school I went to in the mid '90s was one of the very first in the state of California to get wired for the internet. And I still remember logging in and checking the weather for the Minneapolis airport using a service called Gopher, and just thinking how cool it was that you could just get that information in real time. From there, like John, I worked at several internet startup companies in the dotcom era, and that was exciting and fun, and I learned a lot. But by the time 2006 rolled around, I was looking for something a little more stable, and I ended up taking a job with a company that was building online safety training. We had a catalog of online safety courses and a learning management system, and I worked with that company to help transition them through several technology evolution cycles, embracing broadband internet, and then mobile and the modern internet technologies that we take for granted these days. The common thread for me through all of it is that at the end of the day, what we do keeps people safe, helps reduce injuries, and I think makes a meaningful impact on people's lives. That's been kind of a positive thing about where I've worked for all these years. And now, having been adjacent to the EHS space for almost 20 years and spending time with customers and at trade shows, I feel like we know the space pretty well, and I think it's a pretty rewarding area to be in.

Jill James:

Yeah. Yeah. The reasons that drew you to the company are what drew me in as well. Meaningful work for the profession, for the people. Jose, I'm wondering, would you mind sharing with the audience where the HSI AI journey started?

Jose Arcilla:

Yeah, absolutely. Look, AI has been around ... And hopefully a bit of education comes in now. AI has been around for a long, long time. Yeah, you can go all the way back to the first mechanical machines ever made. For me, that journey, when I think about AI, really started in the 1950s, when the Turing test came out, asking how you can tell if there's artificial intelligence in some of these new modern technology devices that were coming out. And so long history, and I think growing up, certainly in the '80s with the advancements of computers, I always got really excited. You can call me a nerd, love gadgets, love technology. And so joining HSI, I spent a lot of my initial time working with John and Mike on the technology side. And as we were obviously growing through acquisition, as well as internal development, part of the conversation naturally turned to, well, how do we supercharge what we do for the customers? And that really led to the whole discussion around AI, because there was a lot of new advancement around large language models. And so obviously part of my job is hopefully to have some really good ideas, but obviously Mike and John are the ones that turn those ideas into reality. And so all the way back in 2023 ... I said that like it was a long time ago.

Jill James:

Feels like it.

Jose Arcilla:

But for AI, I guess it was. Back in 2023, we had undertaken a lot of work to really build this holistic platform, but we obviously needed to supercharge the capabilities of the safety professional, because I think we can all appreciate that safety departments are oftentimes, I should say, overlooked in terms of the budgets they need, the resourcing they need to really have a meaningful impact. And so you can digitize, you can build a lot of capabilities to really help them. But how do you build something that really makes them far more effective and efficient and then starts to turn the discussion of a safety culture onto being more proactive and predictive to really mitigate issues? And that's where AI came into the journey. And so we spent a lot of time looking at the possibilities of AI, how we could use our system, our data, to truly make it something that would help the safety professional. And of course, we partnered with other groups to help us learn about AI to really help bolster our capabilities. But it was always founded in the desire of, how do we empower the safety professional to do more, to be more effective, but to really move beyond the reactive into that proactive? And the rest really was just the determination of the team when they all saw the vision and the value of what we could really do for a safety professional.

Jill James:

Yeah. Good, good. I would like us to talk about what you were just ending on, Jose, about how we consider the EHS professional in what we're building. And so would some or all of you mind talking about like, how do we connect with the EHS professional? How do we get into their heads and figure out what they need? I mean, I know you guys all ask me a lot of questions, but let's talk more about how we do that so we're not just creating things in a silo. Go ahead, Mike.

Mike Case:

Yeah. So I'll go ahead and talk about our customer advisory board to kind of build on what Jose was saying. Back in 2023, when we were really starting to take a hard look at the new tools and capabilities that were rapidly advancing onto the scene, we had one of our customer advisory board meetings, and we used some of our time with our customer advisory board members to ask them about how they, number one, were already using AI. And it was interesting to hear, even back in 2023, how those safety professionals were already kind of exploring ways that they could use it to be more productive, but then to ask them where they would trust deploying AI into the everyday things that they were doing. And we definitely heard a lot of stories about accuracy concerns and being suspicious about the capabilities of AI to provide not just an answer, but an accurate answer. We've seen AI be confidently wrong many times now. So we listened to that. I think the common theme that came back was that they were looking for ways to use AI that expanded their reach, that helped them be more present, more available, even when they weren't physically able to be in a place, to be able to leverage their expertise to do things like find hazards or understand what the safe working method was for a job that was about to be performed, to help answer some of those questions, because they can't be everywhere all at once. We certainly recognize that the safety pro wears a lot of hats and is often kind of stretched thin. So coming out of those conversations, we started thinking about what are some ways that the AI technology we were looking at could help expand that reach, to help confidently extend the safety pro's knowledge and experience in ways that could make the jobs that workers were performing safer and more efficient.

Jill James:

Yeah. John or Jose, anything you want to add to that?

John Hambelton:

There are also a lot of low-hanging logistics that are not necessarily exclusive to the EHS professional, but are common to software and the normal logistics of dealing with risk management and data and the flow of data. This is very much a 2023 example, but it's one of the areas that I still feel really passionate about, and that is data summarization. It seems like such a simple thing. It seems like a technology that's table stakes across software these days. Even this conversation that we're having right now is being recorded, and AI summarizations are being generated as we speak. But for the EHS professional, just being able to simplify the logistics of the day-to-day, and we see those comments coming back to us, challenges like being able to share data with different audiences. We spend so much time gathering data, making sure that the right people see the data, making sure that it's all accurate. One of the things that we have not historically been able to scale is simply the ability to make really data-dense objects consumable. Let's just take an incident record, for example. There's so much data that comes into a single incident, and being able to make that consumable to executive audiences, for example, is something AI allows us to turn into a button click. So there are just a lot of common logistical pieces where we can trust those pieces because they're properly referenced, all of the source material is still there and available, but now we're able to scale the output of those. So we're getting both trust and speed, which has been super important for us in developing some of these initial features, as well as applying AI to our own internal workflows.
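
To make that summarize-with-references pattern concrete, here is a minimal sketch in Python. The `call_llm` helper and the incident fields are hypothetical stand-ins, not HSI's implementation; the point being illustrated is that every generated summary carries a record of exactly which source fields it was built from, so a reviewer can trace each claim back to the original record.

```python
# A minimal sketch of summarization-with-receipts. `call_llm` is a
# hypothetical stand-in for a real model endpoint; the pattern being
# illustrated is that every summary carries references to its sources.

from dataclasses import dataclass


@dataclass
class SummaryWithSources:
    summary: str
    sources: list[str]  # the incident fields the summary was built from


def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; returns canned text for the demo."""
    return "Worker slipped on a wet floor in Warehouse B; first aid only."


def summarize_incident(incident: dict) -> SummaryWithSources:
    # Pass only whitelisted fields, and record which ones were used, so a
    # human reviewer can trace every claim back to the original record.
    fields = ["location", "description", "severity", "corrective_action"]
    used = [f for f in fields if incident.get(f)]
    context = "\n".join(f"{f}: {incident[f]}" for f in used)
    prompt = (
        "Summarize this incident for an executive audience in two sentences. "
        "Use only the facts below; do not add anything.\n" + context
    )
    return SummaryWithSources(summary=call_llm(prompt), sources=used)


record = {
    "location": "Warehouse B, aisle 4",
    "description": "Employee slipped on a wet floor near the loading dock.",
    "severity": "First aid only",
    "corrective_action": "Wet-floor signage added; spill response reviewed.",
}
result = summarize_incident(record)
print(result.summary)
print("Built from:", ", ".join(result.sources))  # the receipts
```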

Jill James:

I mean, Jose, you said you were a nerd. I think we're a company of nerds here, and I think it's good nerding at this point.

Jose Arcilla:

Absolutely. Yeah. Look, Mike and John gave you great insights into how we're able to engage and understand needs and technology through things like the CAB. And obviously John is always looking from a technology perspective. I think a very small addendum to that is looking at the number of customers we engage with, looking at best practices, working with yourself, Jill, getting your insights from your background. There's also that component of listening to what's of importance, not necessarily connected to AI, and then figuring out how we can deliver that through the advancements of AI so customers truly get the benefit. Because sometimes you really don't know what you need, you just know you're trying to tackle a problem. And obviously our goal, Mike's and John's, mine, and the entire company's, is always, how can we solve that problem? How can we be more effective and more efficient?

Jill James:

Right, right. I mean, that's one of the key things we do when we meet with our customer advisory board. They may not be coming to us saying, "Hey, can you do X, Y, and Z with AI?" More so, we're listening to the problems, to your point, Jose, that people are trying to solve for, and then Mike's and John's ears perk up like, "Oh, that sounds like something we could solve for," using their expertise and background. So it's a fun interplay when we're together with customers. I started out in the introduction talking about how AI is everywhere and it can take you to good places and it can take you to not such good places. I'd like to have you share a little bit of cautionary tales, if you would. What are the problems with AI, or what should people know about? John, you were talking about data and data sourcing earlier.

John Hambelton:

Yeah. So really early on in applying AI to my own day-to-day, again, going ... It's the same story, it's the same technology. Very early on, we started taking advantage of AI note takers within our meetings. We're constantly meeting about anything and everything, and we had an opportunity to have several discussions where a team member was out and we got to provide summarizations to that team member later on. And again, this was several years ago. AI hallucination is still very much a thing today. It was worse back then. And one of the first times I actually shared a fully AI-generated asset with another team member, that asset, which I had not reviewed thoroughly, actually had a hallucinated call-out inside of it that was very negative about the person we shared that information with. I was very lucky to be able to go back, speak with that individual and point out, "Oh no, here's the moment in time where I made the mistake. Here's what was actually said." And we were able to resolve that. But that actually was a foundational learning experience for me in applying AI. Human in the middle is still very important. You're building trust in the AI and its outputs, but also, within your own workflows, you need opportunities to treat that output as you would human-generated data. There's going to be human error. There's going to be AI error. And there need to be checks and balances and receipts all along the way. We can't just expect the magic to happen and trust it wholeheartedly. At the same time, it can do a lot of powerful things. It can scale solutions to a lot of problems, and it can give us a lot of new opportunities to consume data in ways that we couldn't before, either because of cost, scale issues, what have you. But again, from a trust standpoint, and we apply this internally within our own day-to-day interactions with AI and within the development of the services that we build, there has to be a very regimented set of logs around what the AI is able to do. It has to provide evidence of why it made the decisions it did, come with those receipts, and then you can effectively come in and apply it to logistics where in the past you may not have trusted it. All of those things are growing, and the trust is growing on a day-to-day basis. So we absolutely have to take it from a risk-first standpoint, and absolutely think about trust as you're coming into it, but there are steps and there are logistics that you can bring in that will help you build that trust and prevent risk.
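
The "logs and receipts" workflow John describes can be sketched very simply. This is an illustration only, with a stubbed `generate` function standing in for the model: every output is recorded along with the exact prompt that produced it, and nothing leaves the workflow until a named human reviewer approves it.

```python
# A minimal sketch of the checks-and-receipts workflow: every AI output is
# logged with its inputs, and nothing is shared until a human reviews it.
# `generate` is a hypothetical model stub.

import json
import time

AUDIT_LOG = []  # in practice this would be durable, append-only storage


def generate(prompt: str) -> str:
    """Placeholder for a real model call."""
    return "Meeting summary: team agreed to ship the beta on Friday."


def generate_with_receipt(prompt: str, model: str = "example-model-v1") -> dict:
    output = generate(prompt)
    entry = {
        "timestamp": time.time(),
        "model": model,
        "prompt": prompt,   # the receipt: exactly what the AI was asked
        "output": output,   # and exactly what it produced
        "reviewed": False,  # human in the middle: not yet approved
    }
    AUDIT_LOG.append(entry)
    return entry


def approve(entry: dict, reviewer: str) -> str:
    # Only reviewed output ever leaves the workflow.
    entry["reviewed"] = True
    entry["reviewer"] = reviewer
    return entry["output"]


draft = generate_with_receipt("Summarize today's standup notes.")
final = approve(draft, reviewer="john")  # a person signs off before sharing
print(json.dumps(AUDIT_LOG[0], indent=2))
```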

Jill James:

Yeah. I mentioned the term data sourcing, and I know it's something that internally we talk a lot about as we're developing things, like, where did this information come from? In case a listener hasn't heard that before, can one of you maybe talk about what that means and maybe give an example of where we are sourcing things internally or how we do that?

Jose Arcilla:

Yeah. Look, I'm happy to take a crack at that, and then Mike and John can correct me. It's very typical that they do that. So look, as John was talking about, certainly in the very early days it was far more common, far more talked about, that the AI systems would have hallucinations. That's the term that was used, and it's still used today, for how AI provides a response. And that's really because the data it's pulling from, in essence, is the internet. And so whatever data exists out there is what it believes to be true. It can't make a judgment call on it per se. It's certainly not there yet today; it can't really interpret, is that data correct? Where are its data sources? And so that can cause risk and concern about the validity of the response you get from AI. It's one of the reasons, as John said, human in the loop is really, really important, because you want to make sure that whatever AI is presenting to you, you can look at, review, and approve any actions that you therefore want to take on what AI is presenting. Now, from our perspective, our goal is always to use known sources, known data, information that we know is factually correct. And so when we use AI, we use it on our library of content that we've built with industry experts. All of that content has been validated, whether it's through regulatory specifications or regulatory requirements. Again, people like yourself, Jill, experts who validate the content to be true and factual, and that becomes the source library of our AI system. So it's not just the information in the platform itself, but our content. We do pull in third party data from regulatory bodies and from partners who have validated sources of data, and that's how we're able to respond with AI with a level of confidence. But we still always like to have that human in the loop as an extra safety step. Because what we certainly don't want to do in the area of workplace safety and compliance is provide some kind of response to an incident or an issue, or recommend some kind of training to a customer and their employees, that is actually incorrect, because that leaves them exposed to all manner of issues, up to and including injury in the workplace, and we certainly don't want to do that. It's why we now consider data, that content, to be king again, because it's what feeds the engine. If anyone is going out and just using the internet as their source, who knows who provided that information or where it's coming from. It could be someone who thinks they're an expert creating content online about the best safety practices and procedures, which could be completely and factually incorrect.
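
As a rough sketch of what answering only from a validated library looks like, as opposed to pulling from the open internet: the tiny two-document library below is invented for the example, and the retrieval is deliberately naive keyword overlap. The behaviors being illustrated are grounding every answer in a curated, citable source and declining to answer when no validated source matches.

```python
# A sketch of answering only from validated, citable sources. The library
# and keyword-overlap retrieval are illustrative only; a real system would
# use proper retrieval and a model constrained to the retrieved passage.

import re

VALIDATED_LIBRARY = {
    "ladder-safety-001": "Inspect ladders before each use. Maintain three "
                         "points of contact when climbing.",
    "lockout-tagout-002": "Isolate and lock out energy sources before "
                          "servicing equipment. Verify a zero energy state.",
}


def tokens(text: str) -> set[str]:
    # Naive word tokenizer, good enough for the illustration.
    return set(re.findall(r"[a-z]+", text.lower()))


def retrieve(question: str) -> tuple[str, str] | None:
    # Pick the validated document with the most word overlap, if any.
    q = tokens(question)
    best = max(VALIDATED_LIBRARY.items(),
               key=lambda kv: len(q & tokens(kv[1])))
    return best if q & tokens(best[1]) else None


def answer(question: str) -> str:
    hit = retrieve(question)
    if hit is None:
        # Grounding rule: better to decline than to guess.
        return "No validated source found; escalate to a safety professional."
    doc_id, text = hit
    # A real system would hand `text` to a model constrained to that passage;
    # here we return the passage itself, with its source id as the citation.
    return f"{text} [source: {doc_id}]"


print(answer("What should I check before climbing a ladder?"))
```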

Jill James:

Right. Are you getting the junk food or are you getting the nutritious stuff, right?

Jose Arcilla:

Exactly.

Jill James:

Yeah. Yeah. Thanks for sharing that. I mean, it's really what we do to mitigate risk and build trust for our customers. Now, in terms of how AI has changed the landscape of what consumers expect and how we interact with computers, can one of you talk about what you've seen? I mean, you've all talked about your deep history, but what are we seeing today in terms of expectations when people pull up to software? What's changed?

Mike Case:

I'll take that one. I think it's been interesting to watch how chat type interfaces like ChatGPT are reshaping the way people expect to interact with software. To me, it's been kind of an interesting social experience to see how I've changed the way I interact with the internet. If I think back to a few years ago, when I wanted to learn about something, I would go to Google or my search engine of choice. I would type in a search query, I want to learn about what it means to be safe in this kind of situation. And I would get back a list of sites and I could then open up those sites and read one by one the information that each site had. It was kind of a slow process. Now, when I run that same search, Google will summarize, give me an AI summary of that information I'm looking for. And while it's of course good to go verify the sources and read the underlying source material, for a lot of things that I'm doing I don't need that level of accuracy. And so many of the searches that I run now, and with me just reading the AI summary, just a quick answer, maybe to a conversation that my kids and I are having at the dinner table, we'll look something up and I no longer need to read two or three different websites to get the answer. I just get the answer back in the response. Those chat interfaces are reshaping the way that we expect to interact with stuff. And it's making the traditional graphical user interface that most of us are familiar with, pointing and clicking to navigate around. It's make making that less important because we're retraining people to expect that they're going to be able to find the information they're looking for, not through a series of menus and clicks, but just by typing in your question and getting a direct answer to the thing you're looking for or asking an action to be performed and then seeing that action performed on your behalf. I think it's one of the more transformational technologies from a user interface standpoint that we've seen in a very long time. And it's exciting to see that play out as a technology enthusiast.

John Hambelton:

And in parallel to how you're interacting with data and interfaces and doing discovery, individuals now also have the ability to generate technology to solve tasks more than they ever have before, where you can have someone who may not necessarily know how to code or work with complex software be able to perform tasks for themselves or on behalf of their teams that they would typically have to either send to an engineering team, send to a technologist, or purchase a piece of third party software in order to resolve. A recent experience for me: I had a really large payload of data that I knew had answers to a question, but previously I would've had to write something in order to process that data and provide the outputs. I may have needed to purchase some advanced BI software to process and store all that data and then start to work through that individual piece of software's interface to gather insights. Now, through the same chat interfaces that Mike was talking about, I can simply provide the data and start talking through my requirements, start getting discoveries, working with that data, and chatting back and forth, either generating software or letting these interfaces, which have access to other software behind the scenes, help me through those problems, problems that before required either code knowledge or some other type of technical knowledge to solve. So that's another really exciting piece for us too. Our engineering teams are doing this on a daily basis now. Other departments within HSI, lots of exciting opportunities to process problems in new ways, because there's new technology just through a chat interface at our fingertips to be able to solve those individual problems. And that's only going to evolve as software evolves and integrates those capabilities. And we're thinking about that on the daily.

Jill James:

Payload of data. I'm certain no one has ever said that on this podcast before. That's fantastic. That's fantastic. Hey, I want to get into a discussion for our listeners about some things specifically that we've built and the care with which you all did that work, for reliability and risk avoidance and all of the things that we're talking about. But before we dig into that, I wanted you all to talk about how ... Jose, you started out talking about how we really got into this journey in 2023, but when you decide from all of your seats at the table to start building something, how do you know it's not going to expire tomorrow or the next month or whatever? How did you thoughtfully decide how you were going to do this to future-proof it, I guess?

Jose Arcilla:

Sure. So look, that's a really difficult proposition to try and answer. Certainly when sitting down with the development team and all of the heads, talking about, look, how do we take this new technology and start to build with it? For us, it wasn't necessarily about future proofing the feature sets or necessarily the tools that we're using or implementing. It was really about how do we make sure that whatever we build into the system is almost like Lego, it's interchangeable. If we're using the latest and greatest AI capabilities and AI engines today, well, I think we all have an appreciation that it's moving so quickly that could change in six months, a year, two years. So how do we make sure that we're able to transition to the latest and greatest technology? And so it's really having that conversation about re-architecting our platform to ensure that we built the appropriate layers and orchestration layers that say, "This is how we want AI to interact with our system, with our data," to make sure that we could interchange that as we needed to. So even with some of the technology we have, and I'm sure John is squirming in his seat listening to me explain it, for us, it's really a case of treating our own AI capabilities and features like third party systems. So even though it's ours, we build it, we treat it that way, because we know in the future our AI capabilities will change, and they already have changed since we started. This was really a learning exercise for us as well when we started all the way back in '23, but it could be, again, how are people going to interact with our system? As John mentioned, no longer using that graphical user interface, no longer typing and searching through lists, but really having a conversation, is what we have to keep in mind. And so we certainly spent time thinking about, well, someone else's AI agent could at some point be connecting to our system. When AI becomes a lot more of that personal agent, we could have other systems connecting to us to analyze our data or to get feedback from us. It kind of reminds me of a film, I don't know if you guys have ever seen it or if your listeners have seen it, called Her. I think it was back in 2013, and it was about a new operating system that came out that had an AI agent. And that AI agent transitioned from the desktop to the actor's handheld device and went everywhere with him. And it was more about the conversation. It was no longer typing. It was just talking to your AI agent, and your AI agent providing you with insights, having that conversation, giving you the data you need. It's an incredible film. If you get a chance, certainly watch it. But that's kind of where we are today. A few months ago, I was playing around with the new voices that came out on both ChatGPT and Gemini. And I was in the car waiting for my wife and I was playing around with it, I switched it on and I left it on listen mode. And of course, the wife comes in, jumps in the car seat and says, "So what are we doing now?" And the device piped up and said, "No idea, but I can make some recommendations if you want, because there's a great brewery just down the road." And I was like, "Wow, amazing." Of course, the wife told me to shut it down and said I should only listen to her. So yeah, interesting. But again, it just speaks to the fact that it's evolving so quickly, and what we do today is going to look very different tomorrow.
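
The "interchangeable Lego" idea Jose describes maps to a familiar adapter pattern. Here's a minimal sketch, with invented vendor names and stubbed responses rather than any real integration: the rest of the product talks only to a thin orchestration layer, so swapping the model or vendor behind it touches one adapter rather than every caller.

```python
# A minimal sketch of the interchangeable-Lego architecture: the application
# talks to one narrow interface, and concrete providers plug in behind it.
# All names here are illustrative, not actual HSI components.

from typing import Protocol


class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...


class VendorAModel:
    # Adapter wrapping one vendor's API (stubbed for the sketch).
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] response to: {prompt}"


class VendorBModel:
    # A second provider exposing the same narrow surface area.
    def complete(self, prompt: str) -> str:
        return f"[vendor-b] response to: {prompt}"


class Orchestrator:
    """The only layer the rest of the product calls; the model behind it
    can be replaced without changing any calling code."""

    def __init__(self, model: TextModel):
        self.model = model

    def ask(self, prompt: str) -> str:
        # Guardrails, logging, and prompt policy live here, once,
        # regardless of which provider is plugged in.
        return self.model.complete(prompt)


app = Orchestrator(VendorAModel())
print(app.ask("Summarize this inspection."))
app.model = VendorBModel()  # swap the Lego brick; callers are unaffected
print(app.ask("Summarize this inspection."))
```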

Jill James:

Yeah, that's beautiful. The subject matter expert in the car was your wife is what we're hearing.

Jose Arcilla:

Correct. And that's never going to be challenged by AI.

Jill James:

Right. That's awesome.

John Hambelton:

I really want to echo what Jose said here, because I think it's really important. Not only have we applied this to how we're integrating AI into our software and our daily workflows, but for any technology decision maker, any individual who's looking to scale themselves and solve problems with AI, being nimble has to be a top priority. We've gone to extremes in this, where we may have a problem that we want to address with AI, and day one we're putting multiple irons in the fire because things are moving so quickly. To Jose's point, the landscape is going to be different tomorrow. You have to be nimble for when your first decision may have been the wrong one, or that vendor may just not exist tomorrow, or that technology integration point may be disproven for now given accuracy or the current technology of the day. Being able to rapidly move from one place to the next has been really important for us to be able to take advantage of the early wins. So I just really wanted to echo that point. I think it's super important to realize, especially in this industry, which is so regulatory driven and compliance based, where there's a lot of red tape between the initial idea and the final implementation of the decision: have multiple items pending through that whole pipeline of decision making, so that if the first idea doesn't pan out, you've got the next one already partway through the process, ready for the next stage.

Jill James:

Yeah. Beautiful. Yeah. I really would like to share with the listeners the journey of one out of the many solutions that we have, and we have a lot. I mentioned in the opening how the sausage is made. I'm wondering if y'all could talk about the journey of our hazard recognition solution, the care with which you designed it, how you get good information, how we can trust it. How did that one happen? Maybe someone wants to start with what it is, to start with?

Mike Case:

I can kick that off. So Image Hazard Recognition is a tool that we've built that allows a user to take a photo of something and have AI view the image and identify potential hazards it sees. So think about it: I'm showing up at a work site, and before I get started, I just want to make sure that there aren't any hazards that I'm not aware of. I can take a photo, run it through this process, and it will give me a list of potential hazards that it sees, and then I can, from there, either confirm or reject, again, human in the loop, on the identified hazards. I think it's a great example of how we approach most problems. We want to start with the problem. In this case, we're thinking about, how can we expand the reach of the EHS pro? How can our site supervisor be at all the places work is happening at once? This solution comes from that problem and from recognizing that there's a new kind of opportunity with AI and this new technology to apply to that problem. And so from there, we work with the technology team, so John and his team, to say, "All right, we see an opportunity to solve this problem with AI. This is kind of the vision that we have in mind for how it could work. Where do we go from here?"
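
As a rough sketch of the confirm-or-reject loop Mike just described, with a stubbed vision model and invented findings rather than the actual product code, the workflow looks something like this: AI proposes hazards with confidence scores, and only findings a person explicitly confirms flow into corrective actions.

```python
# A sketch of the human-in-the-loop review flow for image hazard findings.
# The vision model is stubbed with invented findings; the point is that AI
# proposes and a person confirms before anything downstream is created.

from dataclasses import dataclass


@dataclass
class HazardFinding:
    description: str
    confidence: float
    status: str = "proposed"  # proposed -> confirmed | rejected


def detect_hazards(image_path: str) -> list[HazardFinding]:
    """Placeholder for a real vision-model call on the photo."""
    return [
        HazardFinding("Table blocking the entryway", 0.93),
        HazardFinding("Missing exit signage on rear wall", 0.71),
    ]


def review(findings: list[HazardFinding],
           decisions: dict[str, bool]) -> list[HazardFinding]:
    # Every AI finding is explicitly accepted or rejected by a person
    # before corrective actions or reports are created from it.
    for f in findings:
        f.status = "confirmed" if decisions.get(f.description) else "rejected"
    return [f for f in findings if f.status == "confirmed"]


proposed = detect_hazards("site_photo.jpg")
confirmed = review(proposed, {"Table blocking the entryway": True,
                              "Missing exit signage on rear wall": True})
for f in confirmed:
    print(f"Create corrective action: {f.description} ({f.confidence:.0%})")
```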

Jill James:

John, I can hear you want to speak. I wanted to talk about how this particular solution does the recognition that Mike is talking about. I remember when the two of you, John and Mike, you set a meeting on my calendar and you're like, "We have this idea of this thing we want to do, but we need your help." And they come to me and they're like, "We want to be able to identify hazards in a photograph. Do you have anything, like photographs that you could give us that you know for sure have hazards in them? And can you tell us exactly what's in the photograph so that we can ..." Am I sharing too much about how the sausage is made, guys, before I go on?

John Hambelton:

Not at all.

Jill James:

Okay. And I remember both of you were so excited to get these photographs from me. And from my career, I shared them. And John, you were so excited about one in particular, which was of a giant metalworking band saw. And what excited you about it was that the background in the photograph was all gray and so was the machine. And you're like, "Oh, this is going to be such a good ... " Jose said we're nerds, we are. But that's how the baby part of this particular thing started. And then John, please go on. I loved being part of it.

John Hambelton:

I knew you were going to mention that one. I was like, "Oh, please talk about the bandsaw one." And I still get kind of goosebumps over it, because there's magic there. The technology we were using was impressive when we first started that experiment, and it's so much better even now. What it was able to capture and differentiate was just incredible. We were kind of expecting, okay, this will catch the obvious, this will do the things that can be done at a moment's glance. But when we started seeing the outputs from the AI, and the feedback from a hazard recognition standpoint, and the bulleted list of things that it was going through, and then backing that up with, again, those data sources that Jose has talked about, those curated data sources that Jill, you and your team and our content experts have oversight on and make all those correlations with, that was one of those first real moments for me where I was like, "Okay, we've really got something here." My other really favorite part about image processing is it checks three of the boxes for me for making decisions about where to start inserting AI in your day-to-day processes, right? You want to give it tasks that you don't want to do. You want to give it tasks that are difficult to scale. And you want to give it tasks that have oversight, where you can still insert a human in the loop and grow it from there. Image processing, hazard recognition from images and image processing in general, checks all those boxes. Again, going back to that data payload, we'll say it twice in this podcast: EHS professionals have payloads of image data potentially coming in, whether that's video capture or daily inspections. The potential for a library of images to exist within an EHS professional's day-to-day life is immense, right? But they don't have the human capital, or even the desire, to go through and look at all of those and analyze them. And it's really, really difficult and expensive to scale. Where now, all of a sudden, not only can we process those and do things like recognize what's in the image, validate what is there, and even generate that as data so it's more searchable, like tell me what this picture even represents and turn that into searchable data, it's doing that at scale, and now it's even solving more complex questions that are relevant to the EHS professional, like hazard recognition. And then it's generating all this data and putting it in a way that you now have oversight on. So we've generated a lot of AI information, EHS professionals and users still have oversight, and now we can start doing more advanced workflows as we gain trust in that system over time. So today, we're just going to capture and recognize. Then we're going to start recommending actions to take based off of what we captured. And then in the future, we're going to start automating those actions, or assigning actions to other people automatically to go perform action against what has been processed. And then we multiply that complexity over time. And that's where we see all of this heading.
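
That trust ladder, recognize first, then recommend, then automate, can be expressed as explicit autonomy levels that gate what a pipeline is allowed to do. A minimal sketch, with invented level names and a stubbed workflow, not the actual product logic:

```python
# A minimal sketch of the trust ladder: the same pipeline runs at increasing
# autonomy levels, and each level is unlocked only as oversight builds
# confidence. Level names and gating are illustrative.

from enum import IntEnum


class Autonomy(IntEnum):
    RECOGNIZE = 1  # capture images and surface findings only
    RECOMMEND = 2  # also suggest corrective actions for human approval
    AUTOMATE = 3   # also assign approved action types automatically


def process_finding(finding: str, level: Autonomy) -> list[str]:
    steps = [f"Logged finding: {finding}"]
    if level >= Autonomy.RECOMMEND:
        steps.append(f"Recommended action for '{finding}' (awaiting approval)")
    if level >= Autonomy.AUTOMATE:
        steps.append(f"Auto-assigned task for '{finding}' to site supervisor")
    return steps


# Start conservative; raise the level as the audit trail earns trust.
for step in process_finding("blocked fire exit", Autonomy.RECOMMEND):
    print(step)
```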

Jill James:

Yeah. I mean, the scalability you're talking about, John, is exactly what got me excited as a professional, because anyone who's listening who's been an EHS professional for any amount of time knows that our eyes are uniquely trained to identify and see hazards. And we're not scalable; it's just our two eyes. And maybe we've trained a safety committee somehow to see certain types of hazards, or maybe we've trained some supervisors or managers to do that. I just saw this solution in particular, as our one example we're giving today, as scaling those eyes of the EHS professional, so that then as a profession, we can come in and do the exception management, to say affirmed, or, "Wow, I didn't even think about that or see that sort of piece." And it's exciting and fantastic. And also, I'd like to hear about the precautions to make sure we're doing things well and we're not leading people astray. What sort of safeguards do you all have in place when you're building something like this?

John Hambelton:

There's a whole new set of compliance controls that our teams are responsible for when building out these solutions. This was, especially a few years ago, one of the scarier parts about how technology and hype grow around something like this, because organizations are going to move forward, technology is going to move forward, before the compliance and oversight does. And thankfully, there have been people who've recognized that and invested a lot of time in helping us understand the additional controls that we need to put in place when bringing in technology like LLMs. There's a lot of common sense that happens as well. When we develop a piece of software, that software's going to go through hundreds of different controls from initial inception to production delivery. So all of those still apply to any AI technology. I talked about vendor management and all the security control oversight that has to happen with just bringing on a new partner. But in addition to that, there are very specific controls around AI, specific to AI, that we have to now consider as well. The industry is trying to keep up with it as best we can. ISO has come out with a standard, which is great progress. But anytime we implement AI into any piece of our technology, at an absolute minimum, we're making sure that it only has access to the data that it's allowed to have. We go through a stringent round of tests around hallucination, and there's a very big round of tuning that happens every single time we come in. We're always learning and adjusting how it interacts with those pieces to be as accurate as possible. Every single interaction, every single decision, again, comes with logs and receipts. So for any piece of data that's generated from the AI, we know why it generated that, and we can go back and ask that question. And more importantly, we can monitor deviations over time. One of the things that we're most proud of: when we really first started implementing these into our technology, we created a baseline, a standard set of questions, and anytime we changed our integration or changed how it interfaced with the AI, we would always check it against the baseline, and we could monitor deviation. Even as the LLM versions change, people are familiar with ChatGPT moving from version four to version five, anytime those changes happen, we're monitoring how that change, how that AI, is actually changing our baseline, and then constantly adjusting. So a moving target is a way to put it nicely, but yeah, there's so much additional control oversight, and it can feel really intimidating. And as you're making decisions, again, going back to, what decisions do I make in terms of implementing AI into any particular task? Give it the tasks that you don't want to do, give it the tasks you can't scale, but also give it the tasks where it can start its role with simple oversight, and then build your trust from there. You follow that pattern, and those are the patterns that we followed, and you're going to be able to walk along the way, get gains and efficiencies day one, but really grow them into those more hype-based, magical ones as time goes by.
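
The baseline John describes is essentially a regression test for the AI integration. Here's a minimal sketch, with an invented question set and a stubbed `ask` function standing in for the integration under test: replay the fixed questions after any change, whether a prompt tweak or a model-version bump, and flag answers that drift from what reviewers approved.

```python
# A minimal sketch of baseline-regression monitoring: a fixed question set
# is replayed after every integration or model-version change, and drift
# from the approved answers is flagged. `ask` is a hypothetical stand-in
# for the AI integration under test.

BASELINE = {
    "What PPE is required for grinding?": ["eye protection", "face shield"],
    "Who must authorize a confined space entry?": ["entry supervisor"],
}


def ask(question: str) -> str:
    """Placeholder for the real AI integration being monitored."""
    canned = {
        "What PPE is required for grinding?":
            "Eye protection and a face shield are required.",
        "Who must authorize a confined space entry?":
            "The entry supervisor must authorize entry.",
    }
    return canned[question]


def run_baseline() -> list[str]:
    # Return a deviation report: each question whose answer no longer
    # contains the phrases the reviewed baseline expects.
    deviations = []
    for question, expected_phrases in BASELINE.items():
        answer = ask(question).lower()
        missing = [p for p in expected_phrases if p not in answer]
        if missing:
            deviations.append(f"{question} -> missing: {missing}")
    return deviations


report = run_baseline()
print("Deviations:", report or "none; safe to promote the change")
```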

Jill James:

Yeah. Beautiful. Jose, I know that you're excited to talk about something.

Jose Arcilla:

Well, I was, but now I just feel like that was an advert for the entire development team, for when I keep squeezing them to do more and they explain why they have to slow it down. No, look, I jest, because what the team have done is just phenomenal. I kind of want to go back to that image hazard recognition. It's incredible to be able to take that image, to analyze that image, to the points you and John made, really identify everything, and then make recommendations to take action or provide those actions to the safety professional. It blows my mind. And it's one of those things, as I'm always testing it, because of course everything always gets loaded into my sandbox and I'm always prodding and testing, which is hopefully not untoward given my position. But I just, again, I said I was a nerd. I say that in a loving way for all nerds. It's just, we love the technology. But I know John and Mike, I'm sure, were frustrated, as were many people in the office, because I would literally walk around and flip over tables and take a picture. And the system would come back and say, "Yes, there's a hazard. There's a table blocking the entryway. Oh, and by the way, you're missing this sign on the wall." I'd be like, "I didn't even notice." It's just incredible. And I do just want to take the opportunity on your podcast to give these guys a big high five and a thank you, because it's incredible work. And more importantly, it's incredible work for the safety professional, because it really does turn them into superheroes.

Jill James:

Thank you. It's so fun to work with this team. I do have to say, it's so fun. And I think that, Mike, John, you're also patient with me. I know recently I brought you some idea ... It goes both ways. You bring me things, I bring you things sometimes. I mean, I don't know anything about technology, but I'm like, "Could you do something that would do this?" And you take me seriously. And it's so fun to work together on that. Yeah. So closing thoughts. As we established from the beginning, the pace of this change and the number of AI solutions on the market is dizzying. For our listeners, what questions do you all think safety professionals should be asking before they're buying? Or maybe if they're using something right now, what should they be asking in terms of risk mitigation? What should raise a red flag to them? If they're asking questions, what kind of answers should they expect? Things like that, cautionary tales, buyer beware, things that you absolutely would ask.

Jose Arcilla:

I'll take a first very minor stab at this, but then I'll turn it over to Mike and John. Certainly for me, before people start delving into this tool versus that tool, the first question is always, what am I trying to achieve? What is my goal? And obviously base the decisions thereafter on achieving that goal, because I think sometimes people can get too tied up, and I certainly do, in all the technology. What's the fanciest way? What's the newest way of doing something? When really it's about what am I trying to achieve and what is the best way of achieving that, not just for my goal, my role, my responsibilities, but for the good of my organization. And so that to me is always the first question, the why. I think thereafter, and this is where you started to hear from John about the sanity checks, the checks and balances, it's really about, who can I partner with that has the pedigree, that is going to be there with me for that journey moving forward? Because again, what you don't want to do is start down a path and then realize, okay, well, there's a dead end in my line of sight and I now need to pivot or go back and do something else, because that can be really frustrating, really painful. And of course it can certainly be very costly. And so for me, it's working with those partners that are going to be there for that journey, and those partners that are going to be transparent as well, because that's the other key thing. Certainly with AI, we're all learning at the same time. We're all learning at the same pace. And so making sure that you have a partner that's going to be there with you and talk about, "Look, here's what we're seeing. What are you guys seeing? How can we work together?" is absolutely critical.

John Hambelton:

With AI, again, we are in a season of a lot of excitement and a lot of promises and a lot of change. I don't think we can recognize enough Jose's earlier comments about how today is today and tomorrow's another day, and the landscape is going to be completely different tomorrow. And that scenario really has been playing out since the excitement around OpenAI's initial LLM offerings started coming into play and those became accessible to the everyday person. We're still navigating such an aggressive season of change that it feels intimidating to navigate, and there are a lot of promises, both fulfilled and broken. I think that, whether from a software development standpoint, where I think about organizational compliance and proper software development practices, or for the EHS professional who is implementing new ways to consume their data and keep their workforce protected and safe and healthy, there are, like I mentioned before, common sense practices; all the technology practices that you have today are available and completely applicable. You don't have to boil the whole ocean day one. You can build trust with this technology. You just need to respond by being equally aggressive in adopting it and experimenting with it. And yes, that takes time, and that does take cost, but we've seen here at HSI, within our own internal business practices as well as our products, that we're now seeing the payoffs of those opportunities, with really exciting things like we talked about with Image Hazard Recognition, to examples internally where we're just able to consume such large payloads of data and learn from them and keep our systems safer and bring up our development quality. All of these are things that AI is creating individual point solutions for, solutions that are starting to come together as more of a holistic augmented process. It's something I'm really excited about. I think it's absolutely applicable to the day-to-day EHS professional who is simply trying to scale out their oversight to keep their workplace safe and to make sure their people are working smarter, and there's just so much opportunity out there. And I absolutely think it's applicable right now.

Jill James:

Yeah, that's great. So what I'm hearing from all of you is to ask questions of the companies that you're working with or considering working with. Are you building to scale? How are you considering change in the future? Or is your niche really being a point solution? Where are you getting your source data? Where does that come from? How do you test what it is that you're building? Are you using subject matter experts for it? What sort of compliance gates are you using in development? And then if you're actually using something, to commit to using it, to your point, John, so that the data only gets better. Did I miss anything?

John Hambelton:

That's a great summary.

Jose Arcilla:

Nailed it.

Jill James:

Okay. All right. Well, gentlemen, I appreciate you coming on the show. Listeners, I hope you find this helpful. We don't get a lot of feedback from the podcast, but if you do have questions for us or something that you'd like us to dig into more with our nerddom here, we'll happily come back and do that. But Mike, John, Jose, thank you so much for being here today. Appreciate it.

John Hambelton:

Thank you, Jill.

Jose Arcilla:

Thank you.

Mike Case:

Thanks.

Jill James:

And thank you all for spending your time listening today. And more importantly, thank you for your contribution toward the common good. May our employees and those we influence know that our profession cares deeply about human wellbeing, which is the core of our practice. If you aren't subscribed and want to hear past or future episodes, you can subscribe in iTunes, the Apple Podcast app, or any other podcast player that you'd like. Or if you prefer, you can read the transcript and listen at hsi.com. We'd love it if you could leave a rating and review us on iTunes. It really helps us connect the show with more and more EHS professionals. Special thanks to Emily Gould, our podcast producer. And until next time, thanks for listening.
