How AI Could Impact Your Insurance Program [Webinar]

Join Daniel Webber and Chris Self, COO of MyEmpire Group, as they take a deep dive into the impact AI can have on your insurance program.

Key Takeaways

Set clear rules for AI use

  • Approve specific tools for business use
  • Define what AI can and cannot be used for
  • Document this in a simple internal policy

Protect your data

  • Do not upload sensitive or confidential information unless risks are understood
  • Check terms and conditions (especially free tools)
  • Assume data may be stored, reused or shared

Always apply human review

  • Treat AI output as a draft only
  • Require human checking and sign-off
  • Never rely on AI alone for advice or decisions

Use AI for efficiency, not decisions

  • Good for: drafting, summaries, admin tasks
  • Avoid: automated, end-to-end decision-making

Assess risk before using AI

  • Identify where AI is used in your business
  • Understand risks (privacy, accuracy, compliance)
  • Prefer tools within systems you already trust

Train your staff

  • Explain approved tools and restrictions
  • Set clear rules on data use
  • Reinforce accountability for outputs

Align with existing policies

AI must comply with:

  • Privacy obligations
  • Confidentiality
  • IT usage policies

Prepare for insurance impact

Insurers may:

  • Ask about AI use
  • Require controls and oversight
  • Introduce future restrictions

Poor governance = potential coverage issue

Review regularly

  • Update policies as AI evolves
  • Test controls and processes
  • Reassess risks over time

Transcript

Good morning, everybody, and thank you very much for joining us today for our session on AI and insurance. Very topical, obviously. A lot of change in this area.

We are very pleased to welcome Chris Self from MyEmpire Group, and I'll give him a brief introduction shortly.

As always, if you have questions throughout the presentation, you're welcome to put them in the Q&A field.

We will moderate as we go. If we think a question is topical, we'll answer it as we go. Otherwise, there'll be plenty of time for questions at the end.

As I said before, we're very lucky to have Chris Self here. He's the COO of MyEmpire Group. He is a passionate and highly experienced information security professional, and he leads a team of cybersecurity consultants.

He's got experience in a lot of industries: defense, government at all tiers, energy, utilities, aviation, finance, education, and retail.

He has accreditation across multiple ISO standards and experience with airports, IT, you name it. He's an absolute whiz, and we're lucky to have him here. He has a Master of Information Systems Security, making him by far the most qualified person on this presentation. So very glad to have you, Chris, and welcome today.

Understanding AI Tools

Before we get started, it wouldn't be an insurance and risk management presentation without our usual disclaimer. Obviously, the information here is general in nature. It's not intended to be personal advice. We can provide personal advice in the appropriate setting, but this is just general in nature.

We’re gonna jump straight in.

To provide a little bit of context around what we’re talking about here when we’re referring to AI systems and tools, there are many different types of artificial intelligence.

Specifically, what we're talking about today is machine-based software algorithms designed to simulate human intelligence: pattern recognition, decision making, and all that sort of stuff.

Chris, do you maybe just wanna go through some of the more common tools and how businesses might be interacting with these at the moment?

Yeah. Absolutely, Daniel.

So what we've got on the screen at the moment, as you can all see, is just a few headings or themes that we wanted to call out.

And just to break it up a little bit: there are AI tools everywhere now.

They're embedded in almost everything we do, and they're becoming more prevalent with all of the technologies that we interact with. What we wanna do here is just give you a bit of a thought provoker to lead you into what comes after this session. So company-provided tools: why did we call that out? It's always great to have something considered and approved inside your business. Whether you're a small business or a larger business, having something approved and communicated to staff is really important. We put Copilot there just as an example, but it could be one of the others listed there that are public AI platforms.

The public AI platforms are generally those easy-to-access tools: technologies you might subscribe to or connect to through a web browser, where you enter some prompts or some information to request the AI to provide a response back to you.

They're everywhere now, Dan. Like I said, they're being embedded in so much software. You'll see most tools have now got AI embedded into them. So it's really important we start thinking a little bit more broadly around how we use AI, because it's becoming front and center with everything.

The last one there is custom AI. We're in a position now where we can create our own AI models inside our organizations to do specific tasks that we describe to that AI tool. So we can build things like little AI agents that can help us do a manual task. Some of these things have been around for a little while, but they're just getting smarter now with the advancements of AI technology.

So that’s probably just a quick summary there, Dan.

Wonderful. And no doubt, when we're building custom AI models, we're probably using AI to do that anyway. Right?

More and more, absolutely. The development landscape, in particular for software applications, is really leveraging AI a lot now. It's so fast it's making some of the work we do as humans look, unfortunately, very slow. But we'll come to why humans are important in a moment.

Wonderful.

I'm just gonna throw a poll up on screen. If you'd be so kind as to answer this: either yes, no, or maybe.

The poll question is, do you use AI in your business?

You know, I'm assuming a lot of people are probably putting things into chat tools, getting help with emails, replying to clients and whatnot. So keep throwing your responses in there. We'll close that poll off in five more seconds, four, three. It's pretty evenly distributed. Yeah.

Two thirds, yes.

One third, no. And that's, you know, really interesting.

Look, it's something that we're just gonna see more and more of. And I think as we go through this session, hopefully the people that are using it can pick up some tips and tricks, and the people that are not using it perhaps might get a bit of comfort around it.

AI Ethics and Governance

So I think it's really important when we talk about AI to understand it from a development point of view, and we've taken this content from the federal government website, the Department of Industry, Science and Resources.

Governments are obviously aware AI is coming, and it's in our everyday lives. So they've put together eight ethics principles that basically guide the use and development of artificial intelligence, particularly in Australia.

Chris, can you just talk a little bit about this? Because I think the perception out there, right or wrong, is that a lot of AI is a bit of the Wild West. There's not a lot of governance. There's not a lot of controls. There's not a lot of laws around it.

Talk to us about how these principles that the federal government has published operate with general business.

Yeah, definitely. Great call. I think you're spot on around the legal context of AI at the moment. It is still viewed a little bit like the Wild West. However, in recent years that's changed significantly, not just here in Australia but globally, around trying to implement AI ethics in particular because of the impact. You'll see some of the terminology on this slide: how it can impact individuals, but also communities of people, because AI is starting to learn characteristics and patterns that could create undue bias. So these ethics principles are actually really pivotal, and they're published by the Australian government. It's great to see the federal government starting to push something out, even from a guidance perspective.

These are kind of replicated globally, so we're all starting to implement or provide this guidance across the world. The key part here is to make sure that AI technologies are behaving in the manner they were built and designed for, so that we are getting great productivity improvements, because that's the key here. AI is there to help us as professionals and practitioners.

It's not to replace us, but it's also to be done in a manner that is safe and controlled. And these principles are supported by ten guardrails that, Daniel, if it's okay, we can share with the audience later.

Yeah.

Those guardrails help us implement the intention of these principles. Because if we can get these principles implemented into our organizations and businesses, covering how we wanna utilize AI and, if we were developing AI, how we want to do it in a safe and appropriate manner, these principles are front and center. And the top two up there are probably really key, in the sense that you can see the word human is still written there.

One of the real big things here is to make sure that we still have that connection with what AI is doing, and it's not running completely autonomously without our intervention in some areas. So that's the purpose, I guess, of these principles: to start making sure we are thinking about the use and development of AI in a safe manner that is hopefully protecting both individuals and the community at large.

Yep.

And we'll share this presentation with everyone, so we're not gonna read it through word for word. You probably had a flick through while Chris was talking. We'll provide it along with other information to really help. And by all means, when you get this recording, if you've found value and you wanna share it with other people, you're more than welcome to do so.

Data Management in AI

So when we talk about data, Chris, I think this is where we probably start raising some eyebrows as we dig a little bit more. We've got three dot points here, and I don't wanna open too many cans of worms and frighten people too much, but I think it's important to understand what the data is doing and where it's coming from.

So if you would kindly just touch on those three points that we've got there. Firstly, how do the AI models get their data?

How do people and businesses interact differently between paid and free versions? And just the last point about the terms and conditions and what you've seen in your experience.

Yeah, definitely. If we use Copilot as an example, the AI tool has a model underneath it, a model that helps it analyze the prompt or the input that we provide. And that model does a few things.

The key part the model is there for is to translate our input into an appropriate output. So with a prompt, you ask it a question to get an answer.

The key thing with the models, and probably the important part of that question, is that the model needs to learn, and all of the larger providers of AI are doing a load of learning. They're getting data from everywhere to learn. So when we talk about the types of data they're getting, it could be, again, user input. As a user of the AI tool, we provided a prompt to it.

That could be one source. Or we provided a document that the AI model could assess and summarize.

There's a bunch of paid and licensed datasets out there that the models can learn from as well.

The biggest at the moment is probably just the Internet.

The AI model obtains a lot of data from the public Internet to use in its assessments, its reasoning, and its interpretation of our prompt. So there's data coming from everywhere. The important part is what we put into it as a business, and we'll talk about that a little bit further in a moment.

The other thing that's really important here is understanding the differences between versions of AI. Naturally, paid versions come with different feature sets than the free versions. One of the biggest things, and it's linked to the last dot point there, Dan, is that generally the terms and conditions are different.

The free version typically has much looser terms and conditions about how the AI model or tool will use the data that you've inputted, which means that we may just lose control completely. So we spoke about this a little bit earlier, Daniel. One of the things that we do at MyEmpire is risk assessments of technologies.

We were doing one of these for a client who was about to use a tool. The terms and conditions for that tool explicitly told the user that the vendor would use whatever data you entered into their AI model and share it with everybody in its community.

So the terms and conditions in themselves are really key for understanding how the AI is gonna be used, and then how your data is ultimately gonna be used, protected, or shared with others. So that paid versus free distinction is massive, linking into those terms and conditions. The other thing that's important with paid versus free is the level of control you can put around it. You may have zero controls with the free version, whereas you may be able to put more protective controls around the paid version, i.e., not allowing the AI model to be trained on your data, or putting in retention rules so that your data is destroyed after being used, for example.

So that's probably the big thing to consider when we wanna use AI: how important is the data we're putting into it, and how do we wanna make sure we protect it? Because there's definitely a big difference between those paid and free versions. And once the AI model has got access to your data, if you're on the free version and it's been trained on your data, elements of it will be in that ecosystem for some time.

Yep. In previous sessions, we've talked, you know, ad nauseam about protecting intellectual property and all that sort of stuff. If you're willingly and freely uploading it to these systems and then it's out there for public consumption, that's a scary proposition. So I would encourage everybody to familiarize yourself with the terms and conditions, within reason.

Best Practices for AI Use

But what we're gonna talk about is some ways to interact a little bit more safely with them. Chris really touched on it there. Some key considerations for AI use are what your data is going to be used for, what the implications are around privacy, and obviously the accuracy and transparency. Chris just mentioned that a lot of these models scrape the general web. And those of you that are alive and breathing know that not everything on the Internet is correct, accurate, or true.

So you can't just take it as fact.

And then there are the compliance and regulatory requirements. Now, this may not be an impact for a lot of people right now, but as we alluded to when we opened the session and talked about what the Australian government is doing, this is really in its infancy. Chris mentioned the word guardrails, which is a very appropriate term. What we're seeing now, and what will govern future use, is registration requirements coming not just from the federal level but from state level, industry bodies, and industry associations: how you can use AI, what you can use it for, and what is appropriate use of AI.

Chris, this is getting to the crux of where we're heading, and ultimately what this session is about: AI is a tool in your tool belt. It is not a substitute for your personal or professional judgment. So did you wanna talk a little bit more about using AI as a tool and what it should support?

I think you kind of nailed it, to be quite honest, Daniel. We need to be looking at this as a technology.

And as with everything, technology is a tool, an aid to support operations, to support our business, to support how we provide our services.

And what that means is using it for its purpose.

One of the big things we've definitely learned in recent times when thinking about AI is that it's as good as what you put into it. It's getting smarter, as we know, but it's as good as what you put into it. Like if you wanted to hammer a nail into a board, you're not gonna hit it with the end of a screwdriver. Right?

So your input needs to be right. It's a great analogy.

Right?

So it's similar here. If we're using the wrong inputs, we're interacting with our tool incorrectly. Using AI for its purpose, and having expertise at the output as well as at the input, will make sure that whatever is happening within the AI tool is gonna provide, hopefully, greater or enriched information. So it's really key, even at the input side, to be thoughtful of what we're doing with the tool.

The output then, of course, Daniel, as you said, is not a substitute for the professional. The expertise comes at the end, to make sure that the output is appropriately guardrailed, as we've mentioned before, through to a decision point. And we're not relying solely on AI, or any technology, to make a decision for us.

And look, like we sort of said, working with AI is a skill in itself: giving the right prompts, understanding how it learns, and what is going to get you the best outcome. But if I may go off on a quick tangent here, anybody that's been interacting with ChatGPT for the last six or twelve months is familiar with the way its output is produced, how it looks, how it reads, the types of language it uses.

And speaking from a business perspective, particularly when we recruit, you get cover letters, you get CVs, and you read one line, two lines, and you can immediately tell that it's written by AI.

There are also enough AI checkers online that it's very easy to find out what is human written and what is artificial intelligence. So if you're not someone that uses AI regularly and you get some AI-generated content posted on your website or sent to a client, whilst you may think it's great original content because it's the first time you've seen it, somebody that works with AI and has more robust systems in place for how they leverage it will look at it and know straight away that it wasn't written by you.

And as we go on here, we're gonna talk more about how this interacts with your professional negligence exposure. That's going to be key, because it's potentially going to raise questions from your clients and from the insurer, not just now but into the future.

Yeah, absolutely. Again, we wanna use it to support, to speed up productivity, to be more efficient, but there needs to be a point where the professional does their own due diligence.

And that should be a QA of the output. Put some quality assurance across that output. Make sure it is still tailored to the intention of the prompt. And as many of you know, it was almost two to one of the participants utilizing AI at the moment.

So with something like Copilot, as an example, it's embedded into documents, and you can use it to transcribe this session, for example, or a meeting that you attend using Microsoft Teams. From there, you can get the tool to provide you an output that lists all of the actions from that meeting.

It still needs someone to contextualize it to make sure it’s still valid, to make sure it’s still accurate before it’s just blanketly shared or distributed.

So let's do that really important part. The tool has done a great thing to speed up efficiency, like note taking. However, it still needs to have that connection with the professional to make sure it is still valid.

Yeah. And we've got a few questions in the chat around what sort of tools may be preferred over others in terms of the balance of the terms and conditions and how egregiously they're going to use your data, without obviously providing anything overly specific. Is the likes of Microsoft Copilot, for example, a paid version, going to be better, generally speaking, than perhaps some lesser known products in a free state?

In all honesty, it'll completely vary. And why I say that is, from a Ts and Cs perspective, the difference is more between the paid and free versions, but also some vendors have a greater focus on privacy and security, and therefore it's potentially baked into the free versions as well. Gotcha. Right?

So it is still important for us to assess and validate the technologies that we're using. And, again, even if you're not signing up through a commercial arrangement, you're still signing up to a license arrangement with a free version. Understanding those Ts and Cs is really important. Just because you haven't spent anything, it could still pose a significant risk to your business.

So I can't give you a clear answer, Daniel, other than that we should be looking at the Ts and Cs.

There's a question further on, continuing that conversation, around paid versions and specific tools.

The way I look at some of this stuff is the risk to a business, or to MyEmpire for the tools that we use, for example.

There's a lot to still look into when you're talking about how organizations or vendors have introduced AI into their technology stack. And I know we're talking about Copilot a lot, probably our fault because we raised it earlier on. (Put the worms back in the can, mate. That'd be great.) But Microsoft is a behemoth of an organization. Yeah.

Massive. And Copilot, their AI platform, is now integrated with so many things that we use or take for granted day to day with Microsoft: email, Word, those productivity suites, Teams for messaging, all of those types of things.

What I've used as a barometer for some of this, even just as a starting point: at MyEmpire, we use Microsoft. That's not a secret. We use it for some of our key information. We're already, to a degree, implicitly trusting Microsoft as an entity with some of our data. So when we look at the use of Copilot, the things I'd be looking at, to maybe consider it safer or more secure, are how the overall controls that I've implemented in Microsoft apply to Copilot and to our data, and making sure that when Copilot interacts with my data, that data is still contained within the Microsoft ecosystem where I've applied all of those layers of controls.

Again, I'm not gonna say anything is safe and secure; I think that's a foolish thing to say for cyber in general. But you are already entrusting some of these organizations, whether it's Google or Microsoft, with some of your data, your emails. You're just extending that perimeter inside that ecosystem.

So if the data is still contained within, you're probably leveraging many of the controls you've already implemented to protect your existing data. Versus saying, I don't use ChatGPT right now, and now I'm gonna just randomly start using it. Now you've introduced a completely new avenue of managing your data.

That's one of the areas I would suggest considering when we talk about how safe and secure a tool is. So, hopefully, that helps with what you've asked there, Lucas, because that's probably the key part.

And it depends on your Microsoft licensing. Again, it's not a Microsoft course, and I'm probably not the best person for all of this. But depending on your licensing with Microsoft, you'll get elements of protection inherently because of the Business Premium or Enterprise licensing that you can purchase, which would then interact with Copilot.

And I think it's fair to say that once a lot of this AI becomes more regulated, that's the sort of thing you're gonna see. You're gonna see the Office of the Privacy Commissioner probably step in, whether with separate or catch-all rules if they're not already there, on the way that data is protected, your privacy obligations, all those sorts of things, which will also apply to the likes of Microsoft and Google and others that store and keep your data.

And like we said to start with, we're really just at the tip of the iceberg in terms of what this is going to be.

So, yeah, it's an interesting time. I think it needs to be embraced, but with a certain level of skepticism.

So as we push on, we've kind of talked a little bit about this already.

The set-and-forget, or end-to-end automation, where you're relying purely on the AI.

Maybe you've got chatbots that do certain tasks and provide information, and you've built in, quote unquote, advice.

That's where you can get caught out. Because if things change, if industries change, if advice and standards change, and you don't update whatever it is that's automated, you have a significant issue. And that is where we come to professional indemnity.

So the human review, the checking, the signing off: they remain absolutely essential from a licensing and a PI perspective. I can tell you right now, if you receive a professional indemnity claim and your defense to the matter is, oh, well, that's what ChatGPT said, just forget it. The insurer is going to be so terrified, and not necessarily about the matter at hand. The question will be: hang on a sec, what else have you put through ChatGPT? Do we have a hundred other issues where you've relied solely on AI, such that your business now becomes virtually uninsurable? So retaining control over your work is absolutely key.

Insurance Implications of AI

So we've touched on some of these already, but we wanted to provide a bit of guidance around how to use it. A lot of these may or may not be common knowledge or go without saying. But I think, Chris, what we're pushing towards, like any good risk management within your business, is: how are you documenting it and tracking it, how are you using it, and do you have a written policy in place to give to your staff? So can you go a little bit more into the guidance that a professional organization should be listening to?

Yeah, definitely. The key part there is that connection to what the business deems acceptable and communicating that to its staff. You touched a little bit there on how we wanna interact with our staff and how it then becomes used. That is really, really important.

If the business is unaware of how it wants to use technology, AI being one of those components, it's really difficult to guide the use and communicate that to your members.

I know we're gonna speak about this a little bit more later, but one of the key things here is awareness and training. Again, we want to make sure that our people fully understand what's acceptable inside our business, which alludes to some of the points here about using personal information in AI tools, for example. We wanna give as much clarity to our people as possible so that they can benefit from productivity gains while still using this safely and still protecting what's important to us as a business. So when we go down this list, as you said, they're relatively straightforward. But it matters that it's clearly articulated and then shared internally that, say, we're gonna use Copilot and that's the only approved AI tool, as opposed to some of the others out there on the market.

If we're gonna use it for business purposes, we must use this one, because we've already assessed the risk. We already understand the risk associated with Copilot and Microsoft because we use those products for other things, as opposed to this new one that sits over on the far right-hand side that we've just heard about. That's really important. But, again, if it's not written down or it's not communicated, we're probably gonna open ourselves up to potential issues with data being shared around different tool sets.

Then you go into things like considering how AI information is being used.

Again, it’s really important that we still have that human element connected through this. So one of the conversations we typically have with some of our clients is, great. You’ve created this AI capability.

Who's validating it? And when does that validation happen, before it becomes information as opposed to just data, before it becomes a decision or an input to a decision, before that analysis occurs? That human connection is still really, really important. The other linkage, whenever we talk about technology use, and probably many have heard terms like IT acceptable use, the types of terms that organizations push onto staff, is connecting this with other organizational policies. Because, again, AI is one component, and connecting it to other organizational policies is probably really key as well. You're creating that loop of understanding of, again, what the business is expecting of you and then how you can go about using those tools.

Yeah. And I think that's really key, because businesses have, let's say, a privacy policy or acceptable IT use. If you've got a privacy policy that says we won't share your information with anyone, we subscribe to the National Privacy Principles, all of that, and then you go and upload a hundred thousand medical records to ChatGPT to get a summary of all the chronic illnesses that your patients have had.

It’s like, well Game over.

Game over. And guess what? Whether or not your policy expressly says how you do or don't use AI, that is still a privacy issue at the heart of it. And I think that's a really important point that you raised.

These documents, these risk management policies and procedures, don't need to be specific to AI, because AI is just one tool for how you go about your business. It's not just what systems and what AI you use, but how you use it and, like you said, how you validate it. And I think that's where the policies and procedures are lagging. I wouldn't call them analog privacy procedures, but they're not as robust as perhaps they need to be.

Yeah. Great connection back to just general technology, because the same should apply to the use of email.

For example, we should have similar guidance for staff about their use of email and how we share information via technologies like that. So spot on. The use of these guidelines is broader than just AI, but directly applicable to all our key technologies.

And we just need to consider that AI may be doing things with that data faster than other technologies previously.

Yeah. And, look, there's a great question in there about leveraging AI in terms of offsetting a traditional workforce, maybe downsizing, and what that means. It's more likely an employment practices type issue: whether the use of AI, making humans redundant in favour of AI, opens you up to future litigation.

The answer to that question is probably tied back to existing industrial relations laws and principles around redundancy and the like.

Certainly more of a legal question. And if there was an issue, it would fall under employment practices.

There'd be things like unfair dismissal. So, for that sort of thing, I'd probably consult Fair Work and make sure you're following your obligations there.

Whether the reason for the redundancy matters or not, or whether computers are taking over from people and we're heading to, what is it, Terminator 2: Judgment Day.

Got it.

Yeah, we've got Skynet looking over us. So it's going to be hard. And then, obviously, what happens if you completely muck it up?

What happens if something goes wrong with the AI implementation and then you need to get your workforce back? That's also part of the question.

Yeah. I think it's gonna be challenging, and I personally haven't experienced any employment practices claims in that respect. But the courts work on precedent. So once there is precedent, you'll sort of get a feel for what's gonna happen.

Hope I’ve answered that question okay. I hope I didn’t misread that too terribly.

And, hopefully, organizations have really considered their staff base in the first instance. Are they replacing expertise with AI, and have they really done their due diligence on the capability they may lose by replacing that expertise, that human element, with AI? Have they really focused on that, as opposed to "everybody's moving quickly with AI, let's use it as fast as possible"? Have you really thought about what else you would lose as a business?

Yeah.

And really gone back to that business process level as well.

Yeah.

And definitely, from an insurer point of view, we'll probably touch on that shortly when we wrap around these principles: how it interacts with PI and professional negligence, and whether insurers will have issues and perhaps underwriting questions. For example, at the moment, when an insurer asks about your workforce, they want to know, particularly in a blue collar sense, what your employee payroll is and what amounts you pay to subcontractors and labor hire personnel.

Because the makeup of your workforce directly impacts physical exposure on a construction site, for example.

In future, do I see a possibility where an underwriting question is, okay, how many employees do you have? What’s your revenue? And what percentage of revenue is handled by automation or AI? I think that’s entirely possible.

As to what it would do for the rating, couldn't tell you. It depends how many claims happen and what the trends are, both locally and internationally. But I think that's probably a pretty good segue into how we think PI insurers, and insurers more broadly with that employment practices example, are likely to respond. And we're already seeing it out of some of the London markets with AI related limitations and exclusions.

Particularly if you're a technology business, you will have underwriting questions that ask what your use of AI is, what tools you use, how you rely on it, and how you interact with it.

And the underwriters will be looking for specific things now and into the future.

And everything we've talked about here, how your business interacts with AI, making sure the human element is there to check it, comes down to governance, or the lack of it. How much control do you have over the end product? What policies, processes and procedures do you have in place to ensure that when you are relying on some form of artificial intelligence, you are vetting it, checking it, making sure it's accurate, and not just using another AI to check your AI and producing something that's entirely artificially created?

Do you have anything to add on that, Chris?

It's happening more and more, to be honest, Daniel. The evolution of AI agents, or bots, is moving quite quickly. I've definitely seen a fair bit, especially in the software development world, where agents upon agents will be doing different components of a development cycle.

So, again, all of the input is still driven by different bots: one AI agent will do a security review, another will do a code review, and a different AI agent will have done the original development of that code. You've got this cycle, and that's happening so much now because of the speed. The big question, again, is where do we fit in? And that's the big push, I think, from everyone, even the federal government, as you called out earlier.

But a lot of industry is still very prominently stating: use AI to support, but that human component, that oversight, those governance rules, need to be explicit and demonstrable through the process, so that we aren't, like you just called out, leaving AI to check its own homework and letting it run free. So, yeah, I think that governance point, the last one you called out there, is super crucial, because AI on top of AI is definitely happening.

Yeah. That's probably how you're gonna get through university soon, isn't it?

Yeah. Hopefully not. But Oh, jeez. Very closely.

It’s a worry.

Now this next part is probably the main cause for concern. Right? So we don’t know what we don’t know.

And everybody that has professional indemnity insurance does know, or should know, that cover is offered on a claims made basis, which means the policy you have in place at the time the claim is made is the one that applies to that loss. Whether you did the work six years ago, ten years ago, or a day ago, provided it fits within your retroactive date and you had that specified at the time, the policy in place when you find out about, or are notified of, the claim is the policy that responds.
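The claims-made logic above can be written out as a tiny worked example. All dates and the function name are hypothetical, for illustration only; this is a sketch of the general principle, not insurance advice or a real rating engine:

```python
from datetime import date


def policy_responds(work_date: date, notified: date,
                    policy_start: date, policy_end: date,
                    retroactive_date: date) -> bool:
    """Claims-made cover: the policy in force when the claim is
    notified responds, provided the underlying work was performed
    on or after the policy's retroactive date."""
    in_period = policy_start <= notified <= policy_end
    after_retro = work_date >= retroactive_date
    return in_period and after_retro


# Work done six years ago, claim notified during the current period:
# covered, because the work postdates the retroactive date.
covered = policy_responds(
    work_date=date(2019, 6, 1),
    notified=date(2025, 3, 10),
    policy_start=date(2024, 7, 1),
    policy_end=date(2025, 6, 30),
    retroactive_date=date(2015, 1, 1),
)
```

Running the same check with a `work_date` earlier than the retroactive date returns `False`, which is the scenario the asbestos and cladding exclusions below illustrate: the exposure sits with whoever holds the policy when the claim lands, not when the work was done.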

What that means, and I'll use two examples: asbestos and aluminium composite cladding. Two building products that were used fairly widely. Asbestos in particular was mined and used in all sorts of building products. And then guess what? It's really nasty.

What did the insurers do? They applied asbestos exclusions. So they put it right there on the policy: asbestos claims, not covered.

So the same thing has potential to happen with AI or a version or a specific risk that attaches to AI.

So your insurer, or all insurers if they wanted to, in twelve months time or six weeks time or whenever it is, can say: actually, we're not going to insure the risk of AI because we're worried about claims; we're going to exclude it. If you have been working with AI for five years and you have exposure, and the insurers decide they want to do that, you would have no cover.

I don't think that's going to happen, but what I do think is that there will be some tightening, particularly around oversight and sign off, and being able to demonstrate that you added some sort of professional touch, your own input, to whatever is being produced from a technology point of view. I don't have a crystal ball, unfortunately, and it really depends, going back to precedent and issues at large. What happens with insurers, not just locally but globally, is what we'll see with claims trends. So if we start to see a whole bunch of claims because accountants put tax returns into ChatGPT to get advice on where savings can be made or what the rules are around capital gains, and say, "Oh yeah, this is what ChatGPT said, this is what you can do"...

And people take action based on that.

Insurers may have a problem with that. So understand that whilst right now there may not be any restrictions, you should absolutely adhere to best practices, because you don't want to get stung retrospectively. You're better to be conservative now and make sure you follow whatever guidance is out there, whether that's from the Australian government or from businesses like MyEmpire Group who can produce best practice guidelines and things like that. And, yes, we are starting to see limitations around AI. They vary.

Nothing at a broad macro level at this stage, more so on specifics, and we're probably seeing them more on our technology risks.

But as policy wordings develop over time, they're the sorts of things you may see creep in there.

So, yes, insurers have responded in a small way, but nothing too crazy. It feels like a lot of the underwriting questions we're being asked are perhaps for insurers to build up a knowledge base themselves: to find out how their clients are working with AI and what their exposure may be. So it's not anything I could report back on and say, hey, this is what is going to be excluded, this is what is excluded.

It's more a case of: I feel like it might be coming based on what we've seen, and I don't know if that's in six months or six years, but it's something to really be mindful of, because it does have the potential to really impact your business.

Do you have anything to add? I know I've gone into insurance land there, mate, but you work in cybersecurity and interact with that sort of stuff all the time.

So from my side, Daniel, we do get into the conversation around how to support clients aligning to policy, or even thinking about what to do to get the right insurances.

Some of the work we've done in the past is actually supporting clients to properly complete those types of insurance questionnaires and working with them on the proper responses. But also on gaps in these really key areas: a few years ago, I remember seeing the big transition to "you won't get insurance if you don't have multifactor authentication." Yep.

I see something similar here with AI. It'll get to a point where we're losing elements of control, or attacks are significant enough, that we're forced to put something in place to control it. So one of the big ones, from my perspective, is understanding what that leverage would be. MFA, multifactor authentication, is the example; something similar is likely what insurers will push here. That's from a practitioner's perspective, not a legal one.

Yeah. And, I mean, it's just better if, when the insurer tells you to do something, you can say: I'm already doing it. Huge.

Look, we've got a question there: what are the PI insurers providing in terms of guidelines? The answer is really nothing, because they don't start doing things until it's an issue for them.

They're more reactive than proactive, I would say, because, by the same token, they don't want to jump at shadows when they don't understand the full extent of it yet. Like I said, there's no precedent. There's not a lot of case law to draw from in the SME space, whether for a technology business, a building designer, or whatever it may be. How are you more or less exposed?

How do we price for it? There'd be a whole piece of work being undertaken by the actuaries who set insurance pricing and make recommendations: what is the actual exposure? So I think it's just a watch-this-space.

Developing an AI Policy

So: following the AI principles, human oversight, transparency, accountability, data protection. That is what's gonna help you demonstrate that you're using AI responsibly.

Frameworks: I'm gonna get Chris, just before we go into Q&A, to talk a little more about MyEmpire and perhaps touch on some of the frameworks they can assist your business with.

But there are companies out there to help you. If you're thinking "this is a massive exposure for me, this is a huge area of development, we need some help," there are wonderful people like Chris who do this for a living and can help you.

So here's just a little visual takeaway to road map where to start if you're feeling overwhelmed by this AI thing. Firstly, I've put "define your AI policy."

Have a think about how you wanna use it, where you wanna use it, and what your general philosophy towards it is. Once you've thought about that, document it. Write it down: this is what we're gonna do, this is how we're gonna develop, this is what we're gonna use it for.

Understand your risk. If I'm going to use it for this piece of work, what is my exposure? Once I understand my exposure, I can find a way to manage that risk. So now maybe I'm gonna put it into Copilot or whatever, and I'm going to check it, or I'm going to have a senior staff member check it.

That is the way you’re going to manage your risk.
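The "approve specific tools, define permitted uses" step from the key takeaways can be captured in something as small as a lookup table. The tool names and use-case categories below are entirely hypothetical; every business would define its own:

```python
# Approved tools and the use cases each is cleared for
# (hypothetical policy entries, purely illustrative).
AI_POLICY: dict[str, set[str]] = {
    "Copilot": {"drafting", "summaries", "admin"},
    "InternalChatbot": {"admin"},
}


def use_allowed(tool: str, purpose: str) -> bool:
    """A tool/purpose pair is allowed only if the tool is on the
    approved list AND the purpose is on that tool's permitted list."""
    return purpose in AI_POLICY.get(tool, set())


use_allowed("Copilot", "summaries")      # an approved tool, permitted use
use_allowed("Copilot", "client-advice")  # approved tool, use not permitted
use_allowed("SomeNewTool", "drafting")   # tool not approved at all
```

The design choice is deliberate: anything not explicitly written into the policy is denied by default, which mirrors the "if it's not written down, it's not communicated" point from earlier in the session.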

For those of you that employ staff, it is critical to make sure that these policies and documents are shared with them and that you train staff on the appropriate use of AI just like you would train them on the use of any other system within your business.

And, obviously, test your controls. Make sure you're auditing like you would any other piece of work, checking it, and then reviewing regularly. This is an evolving industry.

What you decide today, if you don't review it again in, say, six or twelve months, the technology will probably have developed to a stage where your policies and procedures become redundant, because they just won't be relevant to whatever's available out there.

And, Chris, just to wrap up this part: after this, if you can go into a little more about MyEmpire and what you do before we get into question time.

Sure.

So you've got some wonderful, more technical tips for businesses on ways to manage risk there.

Yeah. What I love about the diagram you had before, Daniel, is it ties both of these slides together. What's great here is that this kind of process, while still quite high level, can be applied to risk in general.

And what I mean by that is: replace "AI" in "AI policy" there and just make it "policy." You can apply this kind of step process to anything you do. The key couple of things here are the boxes in the middle and the top row. If you can understand, and demonstrate through a register or proper documentation, what is important to you, how your business operates, and what happens if something affects those key processes, staff members or facilities, you can appropriately assess the risk to your business.

And from there, you can manage that risk. Those couple of steps in the middle, up to "train your staff," are so important, but they can be modified to suit cybersecurity risk, business risk, it doesn't really matter. What's really good with this slide is that it can be applied holistically.

And I think it's a really good callout you've made here with some of these boxes, because when you know what's important to you, that second box on the top, you can make decisions. You can invest appropriately in the right controls because you're doing it against defined risks to things that are important. So if we lost a process that was critical to our business and we couldn't operate for three days, does that mean we go out of business?

You start to learn those types of things by taking these little high level steps you've called out there. So thanks for coming back to that one. But when we go into the technical tips, it depends how far you wanna take some of these things.

Utilizing Frameworks for Risk Management

One thing I suggest with every client is: when you start, unless you've got an explicit need to be certified to something, use frameworks as an aid.

Use them to help build process and principles inside your business, and tailor them to suit your organization. So, even with that first one, leverage them, but adjust them to suit how you actually operate and what data or processes are actually important to you. Use these frameworks as guides. Unless you're getting fully certified to them, like you can with ISO 42001, use them as guides and let them feed into your business operations.

The biggest callout I'd refer back to, and we literally touched on it at the start, is understanding your risk and having appropriate due diligence practices: being able to understand how AI impacts your business, how the technologies you use, third party technologies, impact your business, and how your supply chain impacts your business. If I can leave you with one thing, understanding those components would greatly help. And, again, not just talking about cyber, but looking at some of the events that have happened in recent months.

Big data breaches, for example, have recently been occurring not necessarily because of poor practices, but through supply chain attacks.

So understanding how you're leveraging different technologies and different suppliers, and how that all translates to protecting your business, is absolutely crucial. Take that as one of the key takeaways from this.

Wonderful.

Now, just before we get into questions, Chris, we've probably got a couple of minutes here.

I mean, wow. The amount of knowledge in your head on something that most of us probably only found out about a year or two ago. It really feels like this is an absolute area of expertise, and I know it is, and certainly a passion of yours.

Overview of MyEmpire Group’s Services

Talk to us a little more about MyEmpire Group: what the company does and what you can help with.

Sure. I'll try and keep this short because I don't want this to be a sales pitch by any stretch, Daniel. In essence, MyEmpire started with my two business partners, the cofounders, because their previous business went out of business through a cyber attack.

In the space of two and a half weeks, they lost their entire business, which was about to be a listed business. They lost it in two and a half weeks because some bored truck driver decided to teach himself how to hack, compromised their environment, and from that destroyed their whole business.

So we're talking a many, many million dollar business, for context?

Yes, correct. About to be publicly listed. So the ethos for us is trying to prevent that from happening again, and that's how this originated.

And then when I joined the business, we built it into a business where we can provide a variety of support services to our clients. Essentially, it can be anything from helping you define your cybersecurity strategy or your risk strategy, all the way through to doing penetration testing on systems you might be running, and everything in between: making sure you are slowly but thoughtfully uplifting your cybersecurity posture. And AI, naturally being a technology, kinda gets roped into this. But in a really simplistic sense, we provide risk management support, and we specialize in cybersecurity.

So when people are thinking about how to protect their data and their systems, that's where we would sit down with you and go through almost those boxes you called out earlier, Daniel. Yeah. What's important to you?

And once we know what's important to you, let's work out how we manage that. And we do it through a variety of different services, which I'm happy to share through some of the materials after this. Yeah.

But that's probably the high level, non-salesy version: we're really here to help. We wanna partner with our clients and make sure their posture is appropriate, so we don't see repeats of what our cofounders experienced.

Yeah. Look, I've heard that story in detail, and it's really heart wrenching.

And I think that's the reality of business. There aren't too many things that can sink your business that quickly, depending on how you operate and what you have. But data, an online presence, most businesses have them. It's very easy to find yourself waking up without a business.

Addressing Professional Indemnity in AI Usage

We've got a question, and I might actually take that one offline, Andrew, specifically about how this impacts your professional indemnity. The general idea: if you're an IT professional or IT consultant providing advice or services around AI models, including developing them, that is a professional negligence exposure. So provided your PI policy doesn't have any of those AI exclusions, there shouldn't be any issue.

That's something we'll take offline; we'll have a look specifically and come back to you.

So do we have any other questions just before we we wrap up?

Over to you, Chris?

I’m just having a quick look at the list.

I’ll touch on two that I can see.

One's probably a little more targeted; it's around Microsoft Copilot.

Microsoft Copilot will have access to pretty much everything you give it access to inside Microsoft. Copilot's embedded into Microsoft Teams, Microsoft Word and Microsoft Excel, and, if utilized, would be able to access elements of data within those areas.

So, hopefully, that answers that first question.

Again, sorry to speak specifically about Microsoft, but Copilot is becoming more ingrained within the Microsoft 365 ecosystem, so it'll definitely be something Microsoft continues to push.

They're releasing new licenses that push this AI concept further; they're trying to leverage their investments in AI.

In terms of using AI, there's a question there about whether sole operators use it or whether it's more enterprise level companies with staff. Do you see trends in who's actually using AI?

Yeah, I think once it's understood, Daniel, the use of it is growing across all different business types.

I'd really just emphasize the things you've gone through in this presentation: understanding its risk to you, applying the right controls, and validating that those controls actually work. And, again, that may mean you need to speak to someone.

You're probably not gonna get too much information from Microsoft, so speak with someone to help create the right rule set for you. Yep. But I think, as you mentioned, for a single operator there would be benefits, because, again, you create the right prompts and the right use cases for AI.

I'm sure it'll speed up elements of the mundane tasks you need to do. But, to the points you've raised, you would still be responsible for the production of that output. Therefore, you still need a QA element, a quality assurance element, across that to make sure it goes out correctly. Yep.

It needs to be factually correct, as you called out in some of your tips and tricks. So I think there's some benefit there. You just really need to understand your use case, and then how to apply the tool to that use case to make things more productive, more efficient, or cleaner through some of the text it can manage.

Navigating Data Requirements for AI Tools

And then, this is the final question before we wrap up, so I really appreciate everyone sticking with us. When you use certain AI tools, there's minimum information they want you to put in, whether that's business name, emails, passwords, all that sort of stuff.

I mean, it's fair to say there's probably no way around that if there are minimum requirements to using it. That goes back to our original point around terms and conditions: if you want to use XYZ, these are the rules, and this is the minimum information you need to supply.

Yeah. That sort of stuff, Daniel, comes back to your end user agreements with the software provider. Generally, if it's a free version, they want some information because they'll use it for marketing purposes and things like that. With paid versions, you might have more control over the type of information you're providing them.

But that would be relatively high level, in the sense of contact information. Where it gets a little more nerve wracking is what you put into the tool yourself once you start utilizing it, and, as we went back to before, the T's and C's of how that tool, that vendor, is using the data. That, for me, is the key part when we're looking at tools: how that data is actually being used.

Yep. Yeah. Wonderful.

Conclusion and Future Engagements

Following this, everybody will receive a certificate of attendance. So if your association recognizes this for CPD, they'll do that. We'll get that out to you in the next two weeks or so.

Look, I wanna thank everybody for tuning in. We've had massive attendance here today. And, Chris, that's really a credit to you and the knowledge you've brought.

This has just been wonderfully eye opening and hopefully not too scary.

Honestly, if there's a lot of feedback and lots of questions after, and I know there are heaps of questions around the legal aspects of using AI, we're happy to continue to produce content and have further sessions. So, look, on behalf of us and all the people who attended, thank you for your time and expertise.

Yeah. It's certainly eye opening, but I've learned a lot. So we really appreciate it.

No, likewise, Daniel. The expertise you brought, and the topic you created for today, made for a really good session.

So thanks for having me along. Wonderful. Great. Appreciate it, guys, and thanks a lot. Take care.

Alright. Thank you. Bye.

Previous Webinars

In case you missed our previous webinars and would like to catch up, they’re available using the links below:

Insurance advice you can trust