
Room for All
The Inclusion in Hospitality Podcast
Welcome to Room for All – the podcast where we dive deep into the world of inclusive employment and explore the power of creating opportunities for people with disabilities.
Andrea Comastri, CEO and co-founder of Hotel Etico, Australia's first not-for-profit social enterprise hotel, and Saraya O'Connell, General Manager of Hotel Etico, will be your hosts as they talk about the importance of breaking down barriers in the workplace, with a particular focus on hospitality and other customer-facing roles, and how businesses can lead the charge toward inclusion.
At Hotel Etico, we believe that everyone deserves a fair chance to succeed, and we've made it our mission to not only provide jobs but to build meaningful careers for people with disabilities. On this podcast, we'll be sharing success stories, best practices, and inspiring conversations with industry leaders from the hospitality sector, the disability sector, other social enterprises, philanthropy and, of course, our own trainees, graduates and staff.
Whether you’re a business owner, an advocate for inclusion, or someone curious about the future of work, this podcast is for you.
So come and join us at Hotel Etico, or as we call it… the Hotel California for the heart. A place where once you have checked in… your heart will never, never leave!
So, let’s get started and open the doors for all.
Room For All - S2 E21 - Live at Social Traders Convene 2025 - James Gauci, Cadent
Ethical AI and Social Impact: A Conversation with James Gauci | Room For All Podcast
Join us for an engaging episode of the Room For All Podcast, live from the Social Traders Convene in Melbourne! In this episode, host Andrea interviews James Gauci, founder and leader of Cadent, a responsible AI engineering and governance consultancy. James shares his experiences from Moreton Bay, his journey in the tech industry, and the inspiration behind starting Cadent. Learn about the importance of ethical AI, the economic benefits of socially responsible technology, and the challenges faced by social enterprises. This episode is packed with insights on how AI can be designed for social good while ensuring no one gets left behind. Don't miss this enlightening discussion!
00:00 Introduction and Welcome
02:10 Guest Introduction: James Gauci
02:39 The Airport Mishap
04:38 James Gauci's Background and Company
06:53 Social Enterprise and Impact
11:19 AI Tools and Ethical Considerations
15:43 AI Regulation Challenges in Australia
16:00 The Dual Perspectives on AI's Future
16:23 The Role of Values in AI Development
18:00 Ethical Decisions in AI System Design
18:28 Cultural Differences in AI Ethics
19:08 Addressing the Digital Divide
21:20 AI's Impact on Social Inequality
22:20 Empowering People with Disabilities through AI
25:05 The Importance of Ethical AI
29:02 Final Thoughts and Reflections
Ethical AI for Social Enterprise Dialogue
[00:00:00]
[00:01:00] Okay.
We're rolling. We're rolling. Welcome. Welcome, everyone. Welcome, Saraya, to another... thank you... episode of the Room For All Podcast. Live. Live, another live. So, I mean live. It's not live. You are gonna edit out my stuff-ups is what we're going with. Oh, we will. We'll have nothing left. [00:02:00]
On that note, let's introduce our guests. Well, first of all, we are at Social Traders Convene in Melbourne, at the RACV. And yes, we have a long list of guests today for our live podcast show, and today the first cab off the rank is James Gauci. Welcome, James. Nice to meet you both, and it's a pleasure to be here.
Very good to have you first. So you've got, like, the best... I've gotta break the ice. You've gotta, yeah, ease us into the rhythm. We're rusty. That's right. That's right. The first and the last, I think, are not the luckiest ones. It's been quite eventful, carting all of this to Melbourne.
Do we wanna disclose it already? I think so. I knew I was gonna throw you under the bus first thing, James. So I'm gonna take away from your interview for two minutes. Oh, please. Just to, basically, put shit on Andrea. Because what did you do yesterday while we were at the airport, please? Well, wait, first: what is the most
expensive equipment we have? Me, uh... [00:03:00] Well, that's priceless. Priceless equipment. We have a lot of equipment. No, we've got lots, two huge bags. We had the bag with the RØDECaster, the cameras, the mics, lots of stuff. Everything in the bag. And we come off the plane, and we had some stuff to pick up from the
um, carousel. Carousel. And we had the main bag, with the mics inside and the cameras, so pretty much everything we needed, mm-hmm, on my shoulder. And we wait, and it was a bit heavy, so I put it down on the floor, and then the bags arrive. We go and get the bags, and then we walk to the taxi rank.
Oopsie daisy. We catch a taxi, we get all the way to the city, around the corner, we unload the bags and say, oh, where's the bag? I left it on the floor. You left the podcast on the floor. The whole thing on the floor. On the floor, in the middle of the airport. So the taxi driver, who's amazing, waits for us.
Then we go back. He was flooring it at 110 [00:04:00] kilometers all the way, when it was an 80 zone, all the way. I was on the phone with Qantas, on hold, for the whole duration. I get there, still on hold, and I ask and say, it's a grey bag. Oh yes, they've got it over there. So I go to the desk and they had already opened it up and they were, of course, sort of playing with it.
What did they say about leaving baggage around? That's right, they're gonna take it pretty seriously, aren't they? It's definitely frowned upon, yeah. I'm surprised they didn't blow it up, because there's all these electronics in it. Anyway, we just wasted four minutes of your time, James. Sorry. I had to let people know, 'cause he very rarely makes mistakes, but all is well.
That's not time wasted, in my view. How do you say? All is well and swell. Yeah, whatever, I don't know. James? Yes. Who is James? What do you do? Where are you from? Who am I? I live in Moreton Bay, in the City of Moreton Bay, just north of Brisbane. Great state. Yes, just putting it out there. Queensland.
I'm a Queenslander. It's the place to be. Yeah. And Moreton Bay's a really interesting region to be in. It's really fast growing. So I run a small consultancy. We're a responsible AI [00:05:00] engineering and governance consultancy. We're also a certified social enterprise. We've been around since 2021, and we're basically trying to build systems that are ethical by design,
using our capabilities and experiences that we've earned over, well, in my case, a 10-to-15-year career in technology. Nice. And the name of the company is Cadent. Cadent? Yes, that is the name of the company. So your tag said James Cadent. That's just your name now? Yes. Well, I can't get away from it, can I?
And where does the name come from? Cadence is a term that we use in agile project management in technology, and it's usually used to describe the rhythm with which we make a deployment to production. Okay. So when you are using Scrum, which is a methodology that we use, your sprint cadence is how long each sprint is.
Okay. But I've also got, from a long time ago, a bit of music in my background, and there are some other meanings to cadence in that environment. Okay. And so it just kind of had a few meanings for me, and it fit for us. And it was available online. It was, yes. Cadent.au was [00:06:00] available. So go and check it out.
Very good. Very good. And yeah, sorry, Andrea will probably lead this interview, because the fact that he's even letting me hit the record button is a step up with technology for me. He's like, you should see this. I'm a bit nerdy. A bit? I'm a bit nerdy, yeah. Um, so he'll have all the questions for you, and I'll just sit here pretending like I'm not stupid.
Oh, look, I'm of the mind that a diversity of opinions on technology is a requirement. It's not opinions, it's capabilities. He's not wrong either. Yeah, no, like, I'm a bit geeky, but I definitely don't profess to be up to scratch with tech. Technology just moves too fast for a 54-year-old.
It's just unbelievable. I mean, it's too fast for a 38-year-old too. Yeah. Well, there you go, guys. I can't even do technology and I'm 35, so... Yeah, right. You're the young one. And why do you do it as a social enterprise? Mm-hmm. What's your motivation? Well, I, [00:07:00] I've always been in volunteer roles and had positions on not-for-profit boards, and so I've always had a bit of a desire to make social impact with my life and with my work.
I was lucky enough, in the three years preceding starting the business, to be the head of digital and technology at a public sector program. Okay. Called PPQ in Queensland, which is the personalized number plates business. Okay. In Queensland. So I had a team of 15 people across software and infrastructure and cybersecurity.
And during my time there, we developed a bit of an internal practice of accessibility by design and security by design, because what we found was not only was it better for our stakeholders and for our customers, but it was also cheaper to do it by design than bolting it on at the end. Mm-hmm. And so that experience, as well as all of that experience in the not-for-profit sector, led me to start a business, somewhat [00:08:00] opportunistically, when I moved on from PPQ,
as a framing for making the kind of impact that I wanted to make. So values and purpose certainly came before the business did. It's just that I had some capabilities that I wanted to share, and I wanted to do that for a reason that kind of transcended profit. Very good. Yeah, very good. And who are your clients?
At the moment we've got a range of clients. Social Traders is actually a client of ours. Okay. And we work quite closely with them, especially the digital team. So the most recent updates to the Social Traders portal, for business and government members, have been some work that we've collaborated with Social Traders on.
We also do some AI systems for safety-critical industries. So, okay. Using AI to produce highly relevant safety talks for heavy industry site workers, using real-world incident data. So, without getting into the specifics of it, obviously getting that [00:09:00] wrong has some pretty serious consequences.
I would think so, yeah. And so ensuring that the values and the priorities of the various stakeholders involved are taken into account in the design of that system becomes dramatically more important, not just from a, you know, doing-good perspective, yeah, but from a practical perspective.
Okay. Yeah. And so what makes you a business for good, though? Yeah, that's a great question. So there's the how of what we do. It's ethical by design, accessible, secure. Mm-hmm. Yeah. And we use a couple of frameworks in our work: ISO 27001 for information security, and 42001 and 42005 for AI impact and AI risk management.
But beyond the how, we reinvest our profits towards community need. So if a charity or a social enterprise comes to us and says, hey, we need the right people to do our technology [00:10:00] project, we will provide them with pro bono or low bono services, in order to make sure that the best of technology is affordable to those kinds of organizations.
Nice. Who are usually structurally disadvantaged from accessing it. Well, that's it. I think social enterprises are disadvantaged in that way. I mean, it costs a lot. Technology alone costs a lot of money. But to be a social enterprise... you know, there's an analogy that I've used around the place in the past.
I liken starting a social enterprise to starting a small business, except it's a race and I've started 10 steps behind the starting line. Yes. So there's a bit of a handicap, you know. And not to martyr myself or anything like that, or other social entrepreneurs around the place, but it does... it's like doing it on hard mode.
Absolutely. It's not just the profit bottom line that you're trying to chase, it's actually something with impact. More impact, yeah. More economically valid. Yeah. In the modern sense. Yeah, absolutely. It is hard work. I really like that analogy. If you [00:11:00] start a business, you're 10 steps behind everybody else.
It's like going in a car with the handbrake on and sort of deciding, you know, when to release it, when to put it on. Do you know how many times I've done it in your car? 'Cause it's on the pedal, so it's so hard. I just drive off and I'm like, why am I going so slow? Whoops. That can't be good for the handbrake.
So, AI. What's the best tool? I know, I'm sure you're gonna say, ah, well, it depends what you're doing. Well, it depends. Yeah. Yeah, I'll give you the consultant answer: it depends. Um, what do you use? What do you like using? What do I use? So, ChatGPT, or Anthropic, or...? So, we're a Google shop. Okay. We've got Google Workspace.
Okay. And we're very liberal users of the Gemini suite of tools. Okay. And Google AI Studio. But we're also an AWS partner, so we use Amazon Bedrock in a lot of our systems. Oh, yeah. Do they have AI themselves? They do. So with Bedrock, they call them model gardens. It's like all these models are growing. Okay.
Right. But essentially it's a marketplace for AI, [00:12:00] and generative AI. So Amazon's marketplace for AI includes models from OpenAI and from Amazon and from Anthropic, but it also gives you the ability to run open source models. Okay. So increasingly we're seeing ethically designed and developed LLMs.
So the Swiss AI Institute, for instance. Okay. Just a couple of weeks ago, they released a frontier model which is trained exclusively on licensed and public domain information. Okay. So it excludes all the other crap. Exactly. It's kind of the diametric opposite to the way that OpenAI and Anthropic and Google and others have used information from the public web,
yeah, often without people's permission, to train their models. So the Swiss AI Institute has done it a completely different way. Oh, interesting. The performance is not the same as the frontier models right now, but it's only one generation behind, like it's only six or nine months behind. So that gives people like me a lot of heart that there are going to be ways, well, there are ways [00:13:00] today, to make more ethical decisions around the use of AI systems that don't compromise your values, so you're not stuck making a kind of least-worst-option decision, as we've been forced to for such a long time now.
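(A note for readers: to make the "model marketplace" idea concrete, here is a minimal sketch, not Cadent's code. With Amazon Bedrock's Converse API, the same call can be pointed at models from different vendors by changing only the model ID. The region and model IDs below are illustrative assumptions and change over time.)

```python
# A minimal sketch (illustrative, not Cadent's code): Bedrock exposes models
# from multiple vendors behind one uniform API, which is what makes it a
# "marketplace" for AI.
import boto3

# Region and model IDs are assumptions; check the Bedrock console for
# what is actually available in your account.
client = boto3.client("bedrock-runtime", region_name="ap-southeast-2")

def ask(model_id: str, prompt: str) -> str:
    # The Converse API normalizes the request/response shape across vendors.
    response = client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

# Swapping vendors is a one-line change.
print(ask("anthropic.claude-3-haiku-20240307-v1:0", "One line on ethical AI."))
print(ask("amazon.titan-text-express-v1", "One line on ethical AI."))
```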
So, yeah, see, I glazed over a little bit of that. I'm very confused. Like, I'm very, very... Ask me a question. Well, you had me at Amazon, and then I was like... so basically it's a marketplace for heaps of different... Oh, he's moved on from the marketplace bit. Yeah, I know. Yeah, I know that. But I'm back there.
This is what I mean, you were just, like, so into it. Yeah. So for example, I use ChatGPT, and the stuff that it knows about me, mm-hmm, is ridiculous. Bit creepy. Well, yeah, absolutely. Like the other day I was like, can you tell me about myself? And then it listed all those things and I was like, where is it getting all this information from?
Well, websites... Well, in that instance, OpenAI has, like, a feature of their product, which is that they will develop a memory about all of the conversations that you've had. Yeah. And it will develop a [00:14:00] schema, like an idea or a model of what you are and who you are, based on the language that's been used in your conversations.
Yeah. So if you've ever described yourself, or you've described your work, or you've described your personal life, it is developing a very intimate picture. Lovely. For me... I've never done the personal life, but it tells me everything. It's got an idea. So, but can you then trick it over time? Yeah. Can you keep saying, you know, Andrea is this amazing person, blah, blah, blah, blah, blah?
And then eventually it will spit that out, for me, for others? Yeah. Um, so it's relatively complex, because... Well, yeah, if you search about me, 'cause what I've put in there, I've done that, yeah, goes out public, right? Sometimes. So, okay. So it depends. This is where it goes outside the realm of AI and back into the realm of
law and product design. Yeah. And information security and privacy and things like that. So if you are not paying for the system, odds are, yeah, yeah, your data is being used as training data, [00:15:00] yes, for the public models. Yes. So if you are ever using... let's say, for instance, you're talking about a personal situation, and it's a medical situation, let's say.
Mm-hmm. If you are in a public, free version of the product, that information will go in as training data. However, if you are paying for it, and this is not always the case, but often, you have the option to turn that off. Have you done that? There's a feature in the settings that you need to turn off.
Don't you think you should have told me that? Yeah, we'll do that. Everybody should do that. And it's these kinds of hygiene factors that are easy to overlook, yeah, because we're just so used to consuming free digital services. It should be an opt-in, though. I agree. Yeah. And in Australia it legally is required to be, but it's not something that is often monitored.
It's very difficult to enforce, let's put it that way. Wow. Okay. Okay. See, I learned something. So is it gonna take over, AI? Yeah. Is it gonna be Skynet and stuff? [00:16:00] So there are two schools of thought, at each end of the extremes. We've got the Skynet, Terminator, the Matrix, the robots take over, and then we've got this other side, which is like utopia.
Nobody ever works again. Everybody, yeah, that's right, has a wonderful, blissful life. Let the machines do it, and the machines are doing everything, and it's just magic, and we don't really question it. It's definitely gonna be somewhere in the middle, in my opinion. But I think the thing that is easy to discount is our agency in that scenario, and that's the entire reason why I started Cadent.
Mm-hmm. It was because I wanted to be an agent for positive change as it relates to AI. Because I can see how powerful these technologies are, and it requires people with values, and pro-social values, to make the decisions that lead to good system design. Yeah. Not just in technical systems, but in social systems.
Um, yeah. And that's what's gonna change things. But is it enough? It's enough to have one person that does Skynet and we're doomed, right? I'm not... like, [00:17:00] if you'd asked me three years ago, I would've said yes. But the way that I've seen civil society respond over the last, especially over the last three years, but even people who've been in the AI game, and especially the AI ethics game, for sometimes 10, 15, 20 years... there is a really strong rationale, economic rationale, social rationale,
for getting it right. And I think the right people are listening to the right other people, but it's gonna require us, people like us, to support those people in their work. Yeah. What's really scary is that I often sort of get into this doom scrolling, and it goes into, you know, Sam Altman and blah, blah, blah.
Mm-hmm. Mm-hmm. And they talk about AI like it's something that has got its own evolution thing, right? So it's not like, well, I'm creating it, mm-hmm, I should be able to control where it goes. Instead, they talk about it like it's an external entity. Yeah, that's right. It's a bit of a furphy, in my personal opinion, because
it's decisions that [00:18:00] they've made. Like, a lot of the decisions that are made in the development of those very powerful general-purpose AI systems are ideological decisions. Yeah. And for me, as somebody who... like, I teach ethics in the computer science course up at the University of the Sunshine Coast, just near me, just one class a week,
to keep my skills sharp. Okay. Nice. And we often talk about how the decision to design and deploy a system, mm-hmm, is an ethical decision, yeah, whether you want it to be or not. Yeah. The prevailing values in geographies like Silicon Valley, San Francisco, yeah, are quite different to the values that we have in Australia, for instance.
So those cultures, especially at the higher levels, are hyper-utilitarian. They're very much venture focused, yeah, enterprise focused and profit focused and shareholder focused. Yeah. And so that's what we get. No regulation? Or... Yeah, that's what we get in the systems that are subsequently developed. And they think they're doing a good thing.
Mm. That might be a good thing for them and for [00:19:00] the people that they talk with a lot, but it's not necessarily a good thing for the people that are ultimately impacted by those systems. Yeah, I agree. Yeah, agree. Wow. Okay. What do you think about the people that are getting left behind, that just 100% refuse to use it, or dunno how?
Probably it's more like dunno how, to be honest. Yeah. But what do you think of that, in the age that we are in now? Like, I mean, my daughter does everything on, like, a notebook and a pad. Like, the stationery list is no longer a thing, you know. In schools, in primary schools and high schools, it's all tech, tech, tech, tech, and the best kind of tech.
But what about the people that don't have a choice about being left behind? So age, or a disability. A hundred percent. Like, and the problem is, I think, that... Like, primary schools. I know they're starting to teach it even in primary schools. Like, I know Lala came home and I was like, you did what?
But what do you think about the ones that are forced to be left behind? One, because they can't [00:20:00] afford to keep up, mm-hmm, or they have an intellectual disability, or they have a... Yeah, exactly, a hundred percent, a disability. How do we make sure that people don't get left behind, that don't have,
yeah, the skills to be able to do it? Again, it's a philosophical decision that we have to make as a society. To be a little bit abstract to answer your question, there's this economic idea of complementary capital versus substituted capital, as it relates to labor.
So if we deploy our capital to build assets that complement people, that's a different decision to deploying capital to build assets that substitute for people. Yeah. So, as a business community, if we can look at each other and say we are going to make pro-social, human-centered decisions, we don't want to see the negative, unintended consequences of AI systems
for our people, who are our employees and the families of our [00:21:00] employees and the communities that we work in. And that's not a fait accompli, that's not just destiny. Yeah. That's a decision that we have to make today and every day from now on, mm-hmm, if we are going to mitigate against the worst impacts for people who are already being left behind.
Mm-hmm. To comment a little bit on, like, the gap and, you know, the haves and the have-nots. I think that... well, I don't just think, I've read the research now, which is great. We're putting together a white paper. But there's Australian research which demonstrates that AI is contributing to a widening social divide.
There are people that are being disadvantaged by AI in significant ways, often in ways that are difficult to measure. And even worse than that, when we deploy AI systems that are beneficial in the aggregate, those benefits are not equitably distributed. Yeah. Yeah. And they're not equitably distributed, in ways that we can't predict as well.
So we have to be quite [00:22:00] deliberate in the design and the deployment of these systems, to make sure that we're not just doing good but we're getting the most economic benefit out of the deployed systems. Yeah. If you think, like, you know, I'm going to the extreme, but, you know, a homeless person is already disadvantaged, right?
Mm-hmm. For lots of reasons. Mm-hmm. Now they're gonna be even more disadvantaged. Yeah. Because the gap increases. And the same goes for someone that is in a different sort of... I do think, for people with a disability though, AI is a game changer, for some of the equipment that they can get. And... ah, big double-edged sword.
Oh yeah. Yes, it really is. Like, I mean, I've been in this space, the disability space, for a long time, and the technology has already come so far forward. It's incredible. It's incredible. Yeah. Yeah, yeah. No, that, absolutely. But yeah, when it comes to physical disability, yes; with intellectual disability it's very different.
Yeah. And in fact there's big risks, because... Huge risks. Yeah. Scammers, sort of even having access to it. But yeah, it's so much easier to discern. [00:23:00] Something we haven't talked about as well is the nature of the AI systems that we're talking about here, because it's easy to lump AI into one bucket.
Yeah, it's easy to say AI changed everything, but everything here is a slightly different thing. With LLMs, large language models, the better you are at language, the better the performance you're going to get out of an LLM. You're stuffed. I'm screwed. And that in itself is a structural
disadvantage, right? Yeah. So as an individual with an intellectual disability, let's say, who has trouble with language, I'm going to be disadvantaged in some ways. Like, I won't be able to get peak performance out of an LLM. Conversely, though, it may be able to elaborate on my capabilities in some ways.
Yeah, that's right. So I might be able to give it an idea that I can communicate, and it might be able to elaborate it in a way that, yes, is more successful in communicating my idea. It's almost like a reversed Easy Read. So we use Easy Read a lot to communicate [00:24:00] policies, processes, and so on at work, right?
So it's almost the opposite: the person with intellectual disability might be able, with some help, to feed the system simplified images, mm-hmm, or simplified language, mm-hmm, and extract instead a more sophisticated version that actually elevates their... Yeah. That has the effect of empowering them.
Yeah. Yeah. In that circumstance. That's interesting. And that is a system design choice, right? Yeah, it is. Yeah. You have to account for that use case, yeah, if people with intellectual disabilities are going to be impacted by that system. Yes. Oh, very interesting. Yeah. And to think I wasn't gonna learn anything. I was dreading this interview, 'cause technology's just not my thing. So, today's episodes are gonna be about 25 minutes or so each.
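(A note for readers: here is a minimal sketch of the "reversed Easy Read" pattern described above, simplified language in, an elaborated version out. It is an illustration only, not something built by Cadent or Hotel Etico; the model name, prompt wording, and sample input are all assumptions.)

```python
# A minimal sketch of "reversed Easy Read": expand simplified language into a
# more elaborate message without changing the author's meaning. Illustrative
# only; the model name and prompts are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def elaborate(simple_text: str) -> str:
    # Ask the model to expand the simplified input while preserving intent,
    # the inverse of producing an Easy Read document.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the user's simplified text as a clear, polished "
                    "paragraph. Keep their meaning exactly; add no new claims."
                ),
            },
            {"role": "user", "content": simple_text},
        ],
    )
    return response.choices[0].message.content

# Hypothetical input someone might write in simplified language.
print(elaborate("I want more breakfast shifts. I am good at it. I am always on time."))
```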
So I think we've gone a little bit longer, but I think we started a bit earlier with you, James, so that's great. Thank you. What is something that you really wanted to say or to cover before you sat here that we haven't really covered? 'Cause we've gone off on a [00:25:00] tangent. We always go off on a tangent.
No, I love the tangent. I'm a rabbit-hole kind of guy. I think the one thing that I came here with the message to say, and not just to the podcast but to Convene, is that ethical AI has a return on investment. Mm-hmm. And this isn't just... like I kind of alluded to earlier, we don't do these things just because we're good people.
Yeah. We do these things because they actually have an economic benefit. And the latest literature on this subject demonstrates something that we've already learned through practice, which is that, sure, you've got the productivity benefits, the cost savings, the greater quality of output, and so on, that AI systems can give you.
But that actually only represents about 20%, okay, of the benefits of ethically aligned AI systems. 50% of it is through [00:26:00] long-term benefits, like increased trustworthiness, mm-hmm, increased customer satisfaction. And it's this idea that, you know, the next time somebody says to you, yeah, we can give you a 10% productivity boost with our AI systems,
have a think about the five times that amount, or the two times that amount, that you're going to get if you do it in an ethical way, mm-hmm, and if you do it in a way that's consistent with your values and the values of your stakeholders. Yeah. And this is something that we've learned time and again in technology: it's this idea called shift left.
So have a look at the design phase first. If you solve a problem in design, it costs you 90% less than if you deploy the system and fix it later. Okay. And that goes for security, it goes for accessibility, and now, increasingly, it goes for ethical alignment and values alignment of AI systems.
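(A note for readers: the back-of-envelope arithmetic behind the multiples quoted above, taking the on-air split at face value, direct productivity at about 20% of total benefit and long-term benefits at about 50%. On those numbers the long-term slice alone is roughly two and a half times the pitched boost.)

```python
# Back-of-envelope arithmetic for the figures quoted above (a sketch of the
# claim as stated on air, not a citation of the underlying research).
pitched_boost = 0.10        # "a 10% productivity boost"

direct_share = 0.20         # direct productivity: ~20% of total benefit
long_term_share = 0.50      # long-term benefits: ~50% of total benefit

total_benefit = pitched_boost / direct_share          # 0.50 -> 5x the pitch
long_term_benefit = total_benefit * long_term_share   # 0.25 -> 2.5x the pitch

print(f"total benefit ~ {total_benefit:.0%} (5x the pitched boost)")
print(f"long-term slice ~ {long_term_benefit:.0%} (~2.5x the pitched boost)")

# Shift-left, in the same spirit: if a fix in production costs X, fixing the
# same problem in the design phase costs ~0.1 * X ("90% less").
```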
If we can't agree on what our values are, mm-hmm, in an explicit way, [00:27:00] it's gonna be very difficult to determine whether or not our AI systems are performing in ways that are consistent with them. So, good design. The problem, I suppose, is that nowadays the pace at which people are expected to spit new things out, or the competition, right, pushes people to put products out that are not necessarily, mm-hmm, finished or refined. And you and I could sit down in half an hour and,
Finished or, or, or, or refined you. And you and I could sit down in half an hour and have a. Uh, we could vibe code a technology platform. That might be, yeah. It's vibe, code, vibe. Coding is like when you talk to an AI and it spits out a website or it spits out an app and you can do it in like half an hour.
That's so good. Yeah, and it's great. But if you don't have the expertise to understand where the security holes might be, or where the accessibility problems might be, yeah, and so on, you might deploy something to production that ends up, in unintended ways,
yeah, for sure, having negative consequences. And so what you're saying, if I understood, is that being an ethical AI [00:28:00] company, mm-hmm, shouldn't be a disadvantage; actually, it's potentially a competitive advantage. That's the message you're trying to put out. And it's the same philosophy that social entrepreneurs have taken to their work for a very long time.
It's just a slightly different area in which we are playing. And so we are learning so much. Yeah. Like, when I started the business, I didn't realize that I'd started a social enterprise until I rocked up at the World Forum. Yeah, right. Brisbane, right. That was a good World Forum. It was. It was my first real throw into social enterprise as well.
I knew that I was in the right place when I was like, oh, I know you from another life. And I know you from another life. Yeah, yeah, yeah. And I think the concept is maturing. We say the same thing, like, you know, we don't want people to come to the hotel because we do good. Mm. We want people to come because it's a great product.
Mm. And on top of that, we've got the competitive advantage of something that nobody else does, but we also do good in terms of inclusion. Yeah, that's what I mean. That's right. [00:29:00] So, yeah, very interesting talking point too. Very interesting. Okay, well, let's wrap it up. We have to. Well, there's one more question, and it's Andrea's favorite question to ask people.
Yeah. Which you would've seen the preview of at the Social Enterprise Festival in Sydney. Mm-hmm. Yes. And you've probably prepared: if your social enterprise was a dish on a menu, what would it be? On a menu? So, yeah... I haven't prepared. Oh, right. So if my social enterprise was a dish on a menu?
On a menu. It doesn't matter what type of menu. Oh, that's really difficult. I would want it to be... I don't know. The first thing that pops into your head. For some reason, I want it to be a hamburger. Okay. Like an Australian-style hamburger, like with the lot, with an egg and beetroot and pineapple and two different sauces and stuff like that.
Like, just really rich and wholesome and nourishing. Layered. And layered, yeah. And it's just delicious. And it's just delicious, yeah. There [00:30:00] you go. It's a bit unhealthy, though? Yeah, yeah. Well, in moderation, right? Yeah, you gotta get the balance right. So was that the first thing that popped into your head?
Yeah. Yeah. I like it. Very good. See, I said we're still yet to have someone that repeats the same answer as someone else, right? Not in the 13. Yeah, we've not had one. It's been so good. I don't think so. Yeah. So, very good. Okay. Last thing: where do people find you? Where can they find you? And, yeah.
I'm an absolute pest on LinkedIn, so if you... Me too. I've got some hot takes on AI, so just follow me there. But our website is cadent.au, C-A-D-E-N-T. And yeah, we're starting to put together a lot more content around ethical AI and the impact of AI on social enterprise, and vice versa.
So if you're interested in hearing more about that, please come and subscribe. Fantastic. We'll put it in the show notes as well. Yes. Thank you, James. Enjoy the rest of the day here. Thank you both very much, and see you around. This has been a great experience. Thank you. Thank you.
[00:31:00]