GiantKelp’s monthly look into the business of AI and what it means to our clients and partners.
This month Al and Rich discuss open-source AI, enterprise budgets and project ROI, amongst other topics.
Watch or listen to the conversation below:
Full Transcript of the conversation
SPEAKER: Rich Johnson
This is the GiantKelp monthly State of AI update.
Al and I realised that there’s so much information and news happening within this space, and our clients in particular are interested in what they should be reading and what they should be aware of. So this is our way of trying to boil down what we feel is interesting and relevant to our clients and the people who follow what we’re doing. So, yeah, I think we’ll just jump straight into it, Al, and talk about some of the things that have caught our eyes.
The first article and piece of research that we came across was from Andreessen Horowitz, the VC firm in the States. It’s a really interesting bit of research, and we’ll share the link to it below, wherever you’re watching this, on YouTube or LinkedIn or wherever, so you can read it yourself. They’ve basically covered 16 points around how enterprise businesses are investing in and developing their own AI solutions, and some of it is really valuable insight. They’ve run surveys, I think it was over 70 different enterprise firms, and what they’re seeing is quite a significant rise in enterprise budgets around generative AI development. The other interesting element around the budgeting of AI development is how they’re allocating recurring budgets for software development. Conventionally these budgets would be held in different pockets or different teams, IT teams, whereas AI is now predominantly the biggest factor when it comes to software development within the enterprises they researched. The other interesting point within the article, and it’s quite a big article, as you’ll see, is around how enterprises are measuring the ROI from their generative AI products. That’s obviously really important: with any project you’re doing, you need to know what kind of ROI you’re going to get on it. A lot of the ROI they’re measuring is around productivity gains, and if you’ve spoken to us or worked with us, you’ll know that’s a key area we focus on, mostly trying to increase productivity in some respect. I think you’d agree with that, Al, right?
SPEAKER: Al Cattell
Yeah, absolutely. And I think the two areas are related to some degree, right, because the budget spend needs to be tied to some level of ROI. The interesting thing, just to touch on the budgets part to begin with, was that a lot of the budgets previously have really come from innovation budgets. People would say, okay, well, we’ve got a budget set aside for some innovation projects, and the larger the company, the more likely they are to have these specialist innovation centres to explore business improvement and innovation. What the A16Z report talked about was that a lot of the AI investment is now being shifted out of those innovation buckets and into recurring software buckets. As you said, Rich, I think that’s important because it shows this is being moved from an exploratory, ‘what could be’ kind of piece of work into an acceptance that this is a new ongoing cost centre, something that’s going to become more of an operational, repeat cost. And I think that shows a mindset shift: companies are moving from an initial exploratory phase into an ‘okay, this is being baked into the operations of the business now’. All companies will be at different stages in that, and it’s been an interesting 12, 18, 24 months in this space. So then that comes on to, okay, if we’re going to invest this money, how do we measure the ROI? And I think that was the second part: ultimately the core gains at the moment are efficiency gains, and those apply in a whole range of areas, from software development to marketing to operational work and everything in between. The one place people can be clear that AI is helping is in being more productive, more efficient; that’s really the ROI. But the other thing the article said, I think, was that it’s still a bit of art and science. It’s not a ‘one dollar in gives you two dollars out’ kind of calculation. People can feel that there’s value, but there’s still a little bit of art in the science of that ROI calculation to work out exactly where it is, and they’re still spending the money believing they’re getting those results. So yeah, it’s fascinating, I think.
SPEAKER: Rich Johnson
Yeah, it’s a really good piece. The other thing that jumps out at me is the multi-model usage. For any of you out there who aren’t embarking on an AI project at the moment, or are thinking about it, most people would consider, let’s just pick OpenAI as a model and use that. But if you do that, you’re locked into one model, and if something changes with that model, you could be in a bit of trouble. Whereas the enterprises these guys have researched are showing a pattern of using multiple models. And it makes clear sense if you can do that; obviously it increases the cost and it’s probably going to reduce your ROI level, but by doing it you might have a lot more flexibility and a lot more customisation around the apps and the tools that you’re building. The other element they talked about here was the use of open source models as well, and I know that’s something we promote quite heavily. We don’t sit on one side or the other of this fence in terms of ‘you must use open source to work with us’; we recommend that clients use the best model that is most relevant and most capable for them. But generally speaking, open source models are the kind of direction that we tend to look at first. I think it’s fair to...
SPEAKER: Al Cattell
Well, sorry, Richard, let me just jump in there on the multi-model thing, because that’s something I could probably talk for hours about, to be fair; I feel very strongly about it. If we take a step back and ask what AI is actually doing, AI is giving us a new layer on top of data. Now, that data could be numerical data, or, more likely in the case of generative AI, documents, right? It could be documents you have in house, your own support tickets that you want to use AI on top of. It could be documents you get from the web, where you want to use some web information to summarise news articles. It could be anything in between. Ultimately, there is this layer of data. So you’ve got some data in your business, and then the question is, how do you apply AI to that data to get more productivity, more efficiency, to our earlier point, right? And so what you do is you choose a model like OpenAI’s and you say, right, I’m going to use this to answer the questions I want: based on these proposals, give me a summary of the strengths and weaknesses; analyse these emails; come up with ideas for marketing campaigns; et cetera, et cetera. And the business model of people like OpenAI and others is to be that layer that helps you do all of those tasks much faster and much more efficiently. In a perfect world for Microsoft or Google or OpenAI, you’d just use their model. But from a business point of view, you’re effectively handing over this very important, and believed to be very powerful, layer, and it becomes a single point of failure for your business, right? If we put all our chips in with OpenAI, and I’ll come on to how that could go wrong, then if it doesn’t work quite well, or it’s not quite right, or something changes, the whole system falls over. So you have to build resilience into your business by being able to, a) use different models for different tasks, because that might be more cost effective, or b) swap a model in and out of the workflow that you’ve created.

Let’s say your workflow is triaging support emails. When someone emails your software business and says, hey, I’ve got a problem with blah, you might use AI to do an initial triage to work out the priority, the problem space, and the best person to route that support ticket to, so you get a bit more efficiency through your support system, right? Now, if OpenAI for some reason goes down and you can’t use their API, that whole system breaks. Or if OpenAI decides to change their model, which might happen, then the efficacy of that system might change as well. So using this multi-model approach is critical to not putting all your eggs in one basket. In the last couple of months alone, this year alone at GiantKelp, we’ve used OpenAI, Anthropic, Google, Mistral, Cohere, Perplexity, and I’m sure there are a couple of others, and within those we’ve used different models for different things. We’ve got some clients where we’re using three different models as part of a workflow, because those models provide much more cost-effective solutions: you want a cheaper summariser for a simple task, and then a more complex, GPT-4-scale model for a different task, but that’s much more expensive, so you want to use it in different ways.
Anyway, as I said, I could talk about this for hours. There’s a lot of talk about open source as well, and we’ll come on to that in a sec, I think. But against this core idea that it’s all about OpenAI, or all about Google Gemini, or all about whoever, actually the most important thing when adopting AI is to plan it out so that you’re able to swap out the large language model, the LLM component, to give yourself resilience in whatever system you’re building, I think.
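To make the swappable-model idea concrete, here is a minimal sketch of what a provider-agnostic layer might look like, using the support-ticket triage example above. The provider classes, model names and prompt are illustrative assumptions, not a description of any particular implementation we have built.

```python
# A minimal sketch of a provider-agnostic LLM layer, so a model can be swapped in
# and out of a workflow (here, support-ticket triage) without rewriting the workflow.
# Provider classes, model names and the prompt are illustrative assumptions only.
from typing import Protocol


class LLMClient(Protocol):
    def complete(self, prompt: str) -> str:
        """Send a prompt to the underlying model and return its text response."""
        ...


class OpenAIClient:
    def __init__(self, model: str = "gpt-4"):
        self.model = model

    def complete(self, prompt: str) -> str:
        # The real OpenAI API call would go here; stubbed out in this sketch.
        raise NotImplementedError


class MistralClient:
    def __init__(self, model: str = "mistral-large"):
        self.model = model

    def complete(self, prompt: str) -> str:
        # The real Mistral (or self-hosted open model) call would go here; stubbed out.
        raise NotImplementedError


def triage_ticket(ticket_text: str, llm: LLMClient) -> str:
    """Classify a support email by priority and problem area with whichever model is passed in."""
    prompt = (
        "Triage this support email. Reply with a priority (low/medium/high) "
        "and a problem area, separated by a comma.\n\n" + ticket_text
    )
    return llm.complete(prompt)


# Because the workflow only depends on the LLMClient interface, swapping the model
# behind it (a cheap summariser for simple tasks, a larger model for complex ones,
# or a replacement if a provider's API goes down) is a one-line change.
primary_model = MistralClient()
fallback_model = OpenAIClient()
```

The point isn’t the specific providers; it’s that the workflow depends only on the interface, so the resilience Al describes comes almost for free.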
SPEAKER: Rich Johnson
Yeah, I think that’s probably the key takeaway for anyone listening to this. You might have heard it before if you’ve done any research into this, but that’s the correct way to do it. And as we discussed, the cost can change there, because you obviously have to test and develop around different models. But if you are using this as a serious tool within your business, it’s worth it. We’re not working with clients that are just playing around and trying to build little agents in ChatGPT. So yeah, I think that’s a really valid point.
SPEAKER: Al Cattell
I personally am relatively bearish on the ChatGPT agents, or whatever they’re called, I should know, because basically you are putting all of your workflows into OpenAI’s ecosystem. That probably works if you’re a very small business, or maybe a solopreneur, and you want to build your own sort of system, but they’re not particularly secure, insofar as there are ways you can extract the prompts and that sort of thing. And really, if you did want to suddenly take the same workflow but use Google’s Gemini instead of OpenAI’s, well, you can’t, because it’s all baked into the ChatGPT world, right? I actually think that from a usage point of view ChatGPT is a great consumer tool, but it’s not necessarily going to be the business solution, which is why Microsoft is quite happily tied up with OpenAI, because they can offer this sort of Copilot concept. So personally, I think there are actually a lot of barriers for people: having a ChatGPT account, then having the paid account, and then going in and knowing to use the ‘Richard Johnson Accountants’ GPT. There are multiple barriers there for a business. What makes a lot more sense, I think, in the short term, is to think about building whatever is right for your software business, accounting business, recruitment business, as a tool that you can use internally and then potentially give access to through your own website or something like that. Otherwise, I’m being a bit controversial maybe, but I’m quite bearish on the idea that businesses will be built on top of this GPT ecosystem as it stands today. I think there’s a huge amount of value that will come to businesses from this technology, but it won’t be through the ChatGPT store.
SPEAKER: Rich Johnson
Yeah, that’s the hot take for today, and I agree. There was actually a piece they mentioned at the very top of that article; I won’t read it out. People can go and read the article, and I recommend you do, because it’s really interesting research. To finish off on that section, the final point that I thought was interesting, and we know this because of the number of conversations we’re having with people and the potential amount of work we’ve got on the books at the moment, is that so many companies are looking to build their own solutions rather than trying to cobble together these little startup tools. Because again, it’s like putting all your eggs in, well, you’re putting your eggs in multiple baskets, but they’re very small baskets that you’re trying to loosely tie together, and that’s not a recipe for success. So the tools are out there, and companies like us and others that can pull the right technology together, the right models, and work with companies to develop robust systems that you can actually rely on and run businesses on, that’s really key, I think.

Okay, so that’s A16Z. It leads us quite nicely on to another bit of news this month, which is Mistral AI releasing their Mistral Large model. I don’t think we should get into the specifics of the technicalities of that model; that’s something else we can do if people are interested, and we’re happy to talk about it. For me, the interesting point, from the view of a business owner or business leader interested in the AI space, is that Mistral had an open source model, Mistral 7B, which was very interesting and which we’ve used, and now Mistral have done a deal with Microsoft, so they’ve got what looks like a really good model, but it’s not open source. I think that’s quite an interesting shift. And there’s a whole load of buzz around this subject at the moment within the AI industry, around what open source actually means in AI. I think Elon Musk is trying to sue OpenAI over their lack of open source models in what they develop. So I think there’s some interesting stuff there. What’s your take on the Mistral partnership and the broader open source point?
SPEAKER: Al Cattell
So, yeah, I think it’s difficult for companies like Mistral. They’re French, very successful, they’ve only been around for six months or so, maybe a bit longer now, but very successful and open source. And open source means that they don’t just give away the model for free; they also release all of the weights, effectively the inner workings of the model, right? So someone else can run with it and not only use the software but actually understand how it was put together, so that it can then be built on top of by other people. The nature of open source there is to really distribute the software and let people use it. And you’re right, the AI world as it sits today is built on strong foundations of sharing information and making things open source. Famously, the technology that underlies ChatGPT, the large language model, was developed by Google, and Google just told everyone how it worked. It’s now turned into a multi-billion-dollar industry of which Google is only a part.

From a business perspective, I think it’s critical for the AI industry, and again, this might be going off-piste a bit, Rich, so you can tell me to rein it back in, but it’s really important that we have a variety of providers giving us this technology, so we have selection in the market, right? Otherwise we end up with a couple of big players, a Google and OpenAI and Microsoft, or however that all stitches together, Amazon eventually, and what we effectively lose is a lot of the potential for that flexibility we talked about. You can see that the language models made available to us are actually quite expensive to train and quite difficult to run, so ultimately a lot of applications will come from this small handful of suppliers. And if we don’t have the flexibility to build and train our own models for our own purposes, then we end up in a world of AI that is very much determined by a small handful of companies. So almost on an ethical level, we need a thriving ecosystem of multiple vendors of this technology, so that we can build out very capable tools that are generally more helpful. So it’s great that there are open source models; it’s critical, I think, and it is the way forward. There was a big debacle with Gemini’s image generation a few weeks ago, which we won’t go into the details of, but it really showed that specific opinions can feed into what the AI will or will not do for you. And whatever your stance on politics is, my perspective is that it’s good to have a variety of opinions available for us to choose from and use in these situations.

But companies like Mistral need to make money. It’s very expensive to have the infrastructure; Nvidia has become one of the most valuable companies in the world purely by providing the GPUs to train all these models. So I don’t think it’s surprising or bad for Mistral to be working with Microsoft. I think Microsoft took a very small share in the business, to be honest, but a share, and again it feels like a smart play by Microsoft, who have effectively bankrolled OpenAI, or at least their infrastructure, to give themselves multiple options to offer their customers. Microsoft own the relationship with businesses through their Microsoft 365 suite, et cetera.
And they, I think, are just keeping their options open by having many different providers they could possibly feed in there. But yeah, I hope we will see more open models. Elon Musk’s X, Twitter, has a model called Grok; there’s another company called Groq that does a slightly different thing, which is also awesome, but we won’t go into those details. Grok has been open sourced, so you can use the AI that’s effectively been trained on Twitter. That’s probably because the first round of it isn’t as good as he wanted it to be, so they’re giving it away to let people build on top of it. Meta, so Facebook, is one of the biggest open source providers of AI, but that’s ultimately because they’re not in the business of selling you the AI; they’re in the business of selling you ads on Instagram, right? To them, it’s not the end of the world if they give away a lot of this technology. So I hope we see a lot of open source technology. We are looking to use as much open source as we can, but we’ll ultimately always use the best solution that’s available, and that will mean a combination of quality of results and cost effectiveness. So, as we said earlier, we will probably end up building systems that use a combination of closed and open source models. But I would like to think that when there’s a good enough open source model, one that we either host ourselves or can use via cloud providers, we’ll try and do that, I think, just to keep the community alive.
SPEAKER: Rich Johnson
I think that’s all really interesting. The point that jumps out at me is around the pricing of AI solutions. We have a lot of conversations around this with our clients. If you’ve ever embarked on an AWS project, it can very quickly become complicated to figure out how much this stuff is actually going to cost, because it’s all about how much usage you’re applying to your application, uptime, all kinds of fairly complicated metrics you need to go through. Historically you’d look at open source software as, okay, there’s no cost to purchase or licence the code base, but then obviously there are running costs, and that’s where AI solutions do become complicated. Even if you use an open source LLM, there’s still cost involved, especially if you’re hosting it yourself, because you then have to have the architecture behind it, and that’s fairly meaty architecture, depending on what you’re doing, of course. So yeah, I think the pricing around all of this is a bit of a minefield, and again, that’s one of the things we can help clients with.
SPEAKER: Al Cattell
Yeah, I can’t remember where I heard this recently, but initial adoption and prototypes in businesses can be relatively cost effective. But if you’re a 10,000-person company and you try to roll out Microsoft Copilot to all of your employees, the costs get very expensive very quickly, and then the whole ROI question, is this making us more productive and efficient, actually becomes an interesting conversation. So there is a challenge, and like you said, being able to map out usage costs and ongoing costs is really critical. It’s a question we actually get asked quite a lot, especially as companies look to productionise things, and often the answer might be, let’s host our own open source model because it’s good enough, and because it gives a more capped, predictable view of pricing. The per-request model that OpenAI and everyone else is moving towards is great at small scale, and it’s great for them in the long term, but as a business you probably want a very fixed view of your costs going forward. That’s where considering hosting comes in: yes, there might be an outlay in terms of hosting your own open model, but as usage scales, the cost may work out better for you as well. The other thing is that everything is evolving very quickly, so all of these decisions probably need to be predicated a bit on an awareness that in twelve months you may want to change some of them, because things have evolved so far or so fast. But for small to medium-sized businesses, you can actually hit that efficiency ROI, you can do a lot of things, and you can probably host an open source model relatively inexpensively compared to using GPT-4 over the API for everything, where your costs will get very high very quickly. There are lots of options there, but we’ve been through it all, I think, so we can obviously provide advice as needed.
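As a rough illustration of the trade-off Al describes between pay-per-request APIs and a flat self-hosted deployment, here is a back-of-the-envelope sketch. Every number in it is an invented assumption purely to show the shape of the calculation, not real pricing from any provider.

```python
# Back-of-the-envelope comparison of per-request API pricing versus a flat
# self-hosted deployment. All figures below are invented assumptions.
API_COST_PER_1K_TOKENS = 0.03      # assumed blended API price per 1,000 tokens
TOKENS_PER_REQUEST = 2_000         # assumed prompt + response size per request
SELF_HOSTED_MONTHLY_COST = 1_500   # assumed fixed GPU hosting cost per month


def api_monthly_cost(requests_per_month: int) -> float:
    """Variable cost: scales linearly with usage."""
    return requests_per_month * (TOKENS_PER_REQUEST / 1_000) * API_COST_PER_1K_TOKENS


def breakeven_requests() -> float:
    """Monthly request volume above which the fixed self-hosted cost wins."""
    per_request = (TOKENS_PER_REQUEST / 1_000) * API_COST_PER_1K_TOKENS
    return SELF_HOSTED_MONTHLY_COST / per_request


if __name__ == "__main__":
    for volume in (5_000, 25_000, 100_000):
        print(f"{volume:>7} requests/month -> API cost ~ ${api_monthly_cost(volume):,.0f}")
    print(f"Break-even at roughly {breakeven_requests():,.0f} requests/month")
```

With these made-up figures the break-even sits at about 25,000 requests a month; below that the API is cheaper, above it the fixed hosting cost starts to win, which is the point Al makes about costs at scale.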
SPEAKER: Rich Johnson
Absolutely.
SPEAKER: Al Cattell
Okay.
SPEAKER: Rich Johnson
So another thing I found this month was an article by the Nielsen Norman Group on generative UI, and it grabbed my attention straight away. It’s a really interesting concept: using AI to change and generate the user interface depending on who you are, what your interests are, what your user behaviours are, and things like that. Again, we’ll share the link to the article so you can go off and read it, and they give case studies around how this could be effective. There’s a really interesting one about someone booking a flight: if the system knows your preferences around what type of flights you’re after, or it can look at the weather in the location you’re trying to book, say you’re a keen photographer and you want nice weather, it can know that and change the interface depending on those preferences. There’s a lot more to it, but the general concept of user interface design adapting and evolving intelligently on its own is an incredible idea, and you can see where this could go later on and what it could potentially mean for how we design interfaces. It might turn into a situation where we design elements and then group them in certain ways and allow them to be very fluid. For anyone who’s been involved in web projects, back in the day you’d just design a very static 800 by 400 pixel, table-based design type thing, and then of course, as browsers became fluid and you had to work with responsive designs for mobile and that kind of stuff, those techniques changed. But to go from that to where this is suggesting is quite a big jump, and I think that’s quite interesting. I don’t know if you had any thoughts on that one, Al.
SPEAKER: Al Cattell
Yeah, I love all of these things coming down the pipeline, the ability to do a lot more on the fly. To your point, there used to be a process in making a website: you’d have a designer who would do the design, maybe some UX people who would do user research, work out what should be on the page and make some trade-offs, and that sort of thing. You’d then build the site, push it live, and someone would visit it. And ideally, if you’re a competent digital professional, you’d monitor how people are using it and improve it over time. What’s possible now, relatively rapidly, is that we can actually redesign and build things on the fly. Some of the tools coming out let you say, design me an interface. It’s slightly different from what you’ve talked about, Rich, where you’re talking about the personalisation, but there are tools you can use now where you say, design me a landing page for a makeup company, and it will make the landing page, right? That landing page may be relatively generic, but to be fair, most landing pages are, so it does a good enough job. So then the question becomes, well, if we can do that easily, what’s the next level of abstraction? What’s the really cool stuff we could do? In that case it’s, okay, I’m really interested in eyeliner versus foundation, or whatever, from a product perspective. Historically we were limited to trying to fit slightly different blocks of content into the same lattice, but now it feels like the opportunity is there to be much more creative on the fly.

But I also think, let’s see how it all plays out, to be fair, because I wonder how much hyper-personalisation we’re going to be told AI is going to change the world with versus the actual use cases of it. We need to be a little bit sensible about how this stuff all gets rolled out. But yeah, I think the concept is super interesting, and we have tools now that enable us to do it, because we’re not so tied to the slower workflow of designer to web developer to DevOps engineer to launch a website. We can actually go from person hits website, here’s some data, build me a page. That’s the dream, right? It’s about imagining the possibilities of that. I think it’s super interesting, but I wonder in the short term exactly how it plays out. We may still be living in the same sort of world we live in now, with just little sprinklings of that kind of magic filtering its way through. But that’s a good article; we’ll definitely send out the link for that one as well.
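For a sense of how the generative UI idea could work in practice, here is a small sketch of one common pattern: asking a model to choose and order pre-designed components based on a user profile, then validating its answer against the design system before rendering. The component names, profile fields and prompt are illustrative assumptions, not taken from the Nielsen Norman article.

```python
# Sketch of generative UI as "the model picks from pre-designed components":
# the LLM returns a structured layout choice, and the application validates it
# against the design system before rendering. Names and fields are assumptions.
import json

UI_COMPONENTS = ["hero_banner", "weather_widget", "photo_gallery", "price_comparison", "reviews"]


def build_layout_prompt(user_profile: dict) -> str:
    """Ask the model to order components for this user (sent to whichever LLM is in use)."""
    return (
        "You are choosing a page layout for a flight-booking site.\n"
        f"Pick up to 4 components from {UI_COMPONENTS} and order them for this user.\n"
        f"User profile: {json.dumps(user_profile)}\n"
        'Respond with JSON only, e.g. {"components": ["hero_banner", "reviews"]}'
    )


def parse_layout(model_response: str) -> list:
    """Keep only components that exist in the design system, in the model's chosen order."""
    chosen = json.loads(model_response).get("components", [])
    return [c for c in chosen if c in UI_COMPONENTS]


# Example: a keen photographer might get weather and gallery components surfaced first.
profile = {"interests": ["photography"], "past_bookings": ["sunny destinations"]}
prompt = build_layout_prompt(profile)
simulated_response = '{"components": ["weather_widget", "photo_gallery", "hero_banner"]}'
print(parse_layout(simulated_response))
```

The key design choice is that the model only rearranges elements the design team has already built and approved, which keeps the fluid interface Rich describes inside the brand’s guardrails.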
SPEAKER: Rich Johnson
Yeah, absolutely. Okay. I’ve got a few other things, but I think this might be the last one, unless you’ve got anything interesting to share. This one is around AI-generated content, which is a big part of what people are using AI for. Specifically, what I’ve been reading a lot about is Google’s search updates. There was a Google core update, you might have seen this if you follow anything around SEO or online marketing, where some fairly big sites that put a lot of AI-generated content onto their websites have seen their traffic levels completely tank. That’s because Google has changed the algorithm they use to rank content, to weed out websites that are just churning out AI-generated content. Now, I think it’s important to stress that we’re not saying you shouldn’t use AI to supplement your content creation process, as long as you’re using it in a smart way and creating good content. Because ultimately, when we’re talking about search engines, they want good content, because on the other side of the search engine is a human looking for an answer to something, or for a product or a solution. So if you’re using AI to supplement your content production, to put out the best possible content for your business or whatever service you’re providing, then that’s great. This issue of websites losing traffic and rankings is for people trying to take a shortcut, just using AI to push out loads of content en masse. But I think it’s really interesting that Google has the ability to figure out what content is totally generated by AI and to penalise it. Now, like anything, there are going to be cases where certain sites get penalised that maybe shouldn’t have been penalised as much, and that’s down to whoever is managing those websites, on a case-by-case basis. Like in the past, before AI was a thing, if you were doing questionable SEO tactics, that’s on your head and you have to take the penalties if they come to you. But I just think it’s interesting, because there’s a lot of talk about AI-generated content and SEO, and it’s a big thing at the moment.

Loosely connected to that, YouTube have added an AI-generated content label, which is a slightly strange thing. Essentially, when someone uploads a video, if they’ve used AI to generate the content, whether it’s the video or the audio or the text within the video, they can toggle a switch that says this video contains AI-generated content, and that works on the honour system. So even if you did churn out thousands of YouTube videos all made with AI, you don’t have to tick that box. It’s interesting that they’re thinking about this and putting these mechanisms in place, but I don’t quite see the value of it just yet. I’m sure that will come, though.
SPEAKER: Al Cattell
I don’t know.
SPEAKER: Rich Johnson
Do you want to add anything to that?
SPEAKER: Al Cattell
I’m quite interested in your take on the SEO stuff, Rich, to be fair, because you’ve got a lot of experience in that space. I guess, what do you think is the path forwards for SEO in this new AI world? Is it still important for companies to be producing content? Because I think I live in an AI bubble, I’m pretty deep in that bubble, but SEO is still a thing, right? We still need to be thinking about producing content, and about the core part of SEO, which is providing value to people, as you said. So, yeah, what do you think about the sort of...
SPEAKER: Rich Johnson
Yeah, again, I could talk for hours on this topic, and unfortunately I don’t think anyone really knows the answer. In terms of right now, businesses should definitely be continuing to focus on SEO. SEO is a major part of any business’s online marketing and it’s incredibly valuable. You don’t even need to sell SEO to a business if you understand the value: when people are searching for something, they search on Google or Bing or wherever and try to find it, so for your business to come up in those results in a high position, that’s valuable. And creating content, and there’s a lot more to SEO than that, obviously, but creating good content that talks about whatever product or service you offer, that content will always exist on your website, so it’s an asset where you can track ROI. Going back to our original point at the beginning of this video, you can track the ROI of that content creation and the optimisation you do around it, and that’s really important. Where we go in terms of how AI impacts SEO, that’s a bit of an unknown, because perhaps search engines will change and we won’t be searching on search engines, we’ll be searching within AI models, because they’ve been trained to understand where this stuff is. But then again, we come back to the same point: if that model is not open source, how do we know who’s influencing its results? With search engines, you can see who ranks, you have a list, and most consumers, I think, understand that, well, maybe not, but there’s a weighting behind the search results based on what you’re searching for. When it comes to an AI model, how it’s trained and what data it’s learnt from, that’s something completely different.
SPEAKER: Al Cattell
Yeah.
SPEAKER: Rich Johnson
So my advice, and this is the professional advice I give to all clients, is stick with your SEO strategies, and continue to create content that’s relevant to the end user. Use AI when you can, and be clever about it. I’ve seen some really bad cases of companies using AI because they know they want to create content and they think, oh, AI will write it for us, that’s great. So they just type in some simple prompts, say, write me some content about this product or this service, and that’s it. But most people listening to this will have tried ChatGPT: if you put a simple prompt in, you’re going to get something that just doesn’t sound great. It doesn’t sound like you as a business; it doesn’t have the same language you would use, or the tone of voice, and that kind of thing. So you can end up making your business just sound generic; it’s almost the equivalent of a stock photo. If you use AI to just blanket-write your content, you could damage your brand, not only in how it sounds, but also in how it ranks within search engines. So yeah, to reiterate, use ChatGPT or any AI tool to help supplement your content production, and then keep listening to our State of AI updates as we go along, because we’ll share everything we learn about this stuff if we think it’s relevant.
SPEAKER: Al Cattell
Yeah, definitely. And I think that’s it, right? The thing Google is trying to weed out is the bad behaviour, which is lazy or game-the-system use of AI to produce content that’s not valuable. I’ve seen some people on X really showboating about basically stealing competitors’ content, rewriting it with AI, whacking it on their site and getting Google rankings. But Google has been building an algorithm to analyse content for two decades or more now, right? It’s a cat and mouse game, but they’re going to be able to catch that. I think there’s still a role for using AI to help with the content creation process, as you say, to help with the ideation process. Where you’ll come unstuck is if you blindly grab stuff, paste it into your blog and chuck it on the Internet; that’s going to cause you problems. Whereas if you do know what you’re talking about and you need a little bit of inspiration from AI, or you need it to write a first draft which you then improve, great, that’s a great use case. But just pressing a magic button that says ‘write me a blog post about the impact of AI’ is not going to get you the answer you want. But yeah, fascinating stuff. That was good. I think we will do this every month, Rich.
SPEAKER: Rich Johnson
Yeah, absolutely. We’ll aim to put one of these videos out every month. We’ll transcribe it as well, so if you don’t want to watch us, which you should, you can read it on the train or wherever you are, or listen to it as audio. If you’re interested in signing up so we can notify you about these, you can go to our website, giantkelp.com, and leave your email address and we’ll send it to you. We won’t spam you or send you anything you don’t want.
____
Sources & References:
Andreessen Horowitz: 16 Changes to the Way Enterprises Are Building and Buying Generative AI
Nielsen Norman Group: Generative UI and Outcome-Oriented Design
If you are interested in any of the topics we discussed, or if you have any questions, please comment below or contact us directly.