2025: The year of insatiable AI.
Hungry for data. Hungry for funding. Hungry for deregulation. Buckle up!
We're heading backward, folks, in so many ways.
I’ve really struggled to write this piece. Tech innovation has a certain speediness, understandably, but in this attempt to write about the hunger for data to feed the next generation of AI systems I find myself completely outpaced by the news cycle.
Frantic change would, in normal circumstances, be synonymous with heading forwards rather than backward. But it feels like the opposite right now. That feeling of going backward is more about the chaotic, constant screaming of the news cycle than it is about any specific technological shift. Too much is happening too fast, and everything sounds apocalyptic, making it progressively more difficult to work out what’s a real crisis and what is instead a convenient one. Backwards, because it was like this under the Johnson leadership in the UK. Backwards, because it was like this under the (first) Trump administration in the US. At what feels like 5-minute intervals we seem to get hit with another piece of news/opinion/conjecture about how the future of AI is this or that. But rather than having to hear from just the squawking proxy mouthpieces of under-qualified world leaders, now much of the noise seems to be coming from our newly appointed tech-oligarchs.
I don’t use that term lightly. 2025 will be looked back on as a keystone moment in the era of a global tech oligarchy. We have permitted business owners and CEOs to permeate the leadership of sovereign states. They have moved beyond the once relatively confined space of tech innovation. Bezos, Musk, Thiel, Zuckerberg, Cook, Iger et al. They don’t just supply us with phones, cars and light entertainment. They own a significant portion of print/social media, and as we’ve seen in recent weeks, they are also firmly, financially, embedded in the running of governments. I don’t want to get off-track with this topic; there are many great journalists out there tackling the advent of oligarchy and doing a far more eloquent job of it than me!
So, to drag us back to the actual topic at hand - AI - what on earth is going on now? Well, did you watch Tiger King?

Of course you did. There were horrors aplenty in that show, and amongst them what stood out was how completely blind all the keepers (I use that term in the loosest sense) were to three things -
How big and dangerous tigers are.
How much food and resources tigers need to survive.
How ill-equipped, in both resources and intellect, these keepers were to properly care for these dangerous creatures.
If you’ve not seen it, don’t worry - I’m sure you know enough about big cats to get the metaphor. I’m not going to encourage anyone to watch that show. Please, don’t do it.
The misadventures of Joe Exotic, Doc Antle and their tigers are in many ways a (frankly, frighteningly accurate) mirror of what’s happening right now with AI. We had cute little cubs/kittens in the form of DALL-E Mini, GPT-3.5 and even Disco Diffusion. As tools they were novel, producing outcomes and exhibiting behaviours we’d not seen before from software. Yes they had claws, but they were controllable, and we could confidently theorise about how big and lethal these AI tools could become. See, just like tigers.
Let’s take a step back. Here’s our 2025 recipe so far-
AI tools are actually bigger and more dangerous than most of us would care to acknowledge.
Tech oligarchs are stepping into positions of unchecked influence and power.
And finally - those same AI tools, well, we’re running out of stuff to feed them…
Yeah, one of the biggest challenges facing companies like OpenAI is that they’re running out of data to train their models. It would seem that ingesting what was not-too-far-off the entirety of written material from the whole species wasn’t enough. If we want the machines to get better they simply need more data. There’s a lot to digest here, for us that is, not for the AI.
If you want to get lost on a side quest take a look at the links below, they’ll get you up to speed on this data problem and feature perspectives offered by Sam Altman, Geoffrey Hinton, Elon Musk + more. Come back after the side quest and we’ll talk about our options + what might happen if we don’t act.
Side Quest Reading -
This statement from OpenAI
Elon Musk’s legal challenge to OpenAI’s plans.
This response from Geoffrey Hinton about opposition to OpenAI’s intentions.
AI lobbyists’ attempts to attain exemption from UK copyright laws.
And then this hypothesis from Musk regarding the exhaustion of resources.
Ok, you’re back! Pretty ominous, right? Companies like OpenAI, Anthropic etc. have enthusiastically taken money from investors. That means they have no choice but to grow at a pace and ferocity that ensures a healthy return for those investors. The same can be said for companies like Meta, X and Amazon - each has spent significant amounts of cash developing proprietary AI tools, in turn promising a windfall for their shareholders.
With this in mind it’s not too much of a reach to assume that this is why many of the companies above have so shamelessly positioned themselves close to global political leaders and administrations - sustaining the expansion of their AI tools will require changes in legislation. Based on the barriers that AI companies have already openly confessed frustration with, I think it’s entirely possible that we’ll see attempts at -
Changes to copyright laws to permit the ingestion of previously protected material.
Changes to privacy laws to permit the ingestion of previously protected communication data (chat logs, text messages, emails etc.)
Rapid construction of data centres in developing nations and those without comprehensive environmental legislation.
And all of this will be much easier to achieve if you’ve got governments on your side.
Ok, but what if none of this happens? What are OpenAI et al going to do if governments stand strong and don’t allow them to run roughshod over environmental and copyright protections, and privacy laws? Despite sizeable continued resistance, we need to acknowledge that the gate is open. The horse has bolted. AI isn’t going anywhere, and for better or worse the expectation is for it to continue to improve, in turn making our lives easier.
Let’s swing back to our tiger analogy for a moment. Regardless of the restrictions in place, AI companies will still need to feed their tigers, and without new data there’s a very limited field of options. One option is synthetic data - training AI using data generated by AI…
Yes, that idea puts a twisting ache in my stomach as well. Something about it feels - unnatural.
Given that efficacy and accuracy are two of the most significant public concerns about AI, this road is likely to amplify debate about how safe it really is to have AI possess any kind of control over infrastructure within sectors like healthcare, finance or, dare I say it, defence.
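For the technically curious, that "unnatural" feeling has a name in the research literature: model collapse - when models are trained on the output of earlier models, the rare, tail-end material in the data tends to disappear generation by generation. Here’s a deliberately tiny toy sketch of that dynamic (my own illustration, not any lab’s actual pipeline): each "generation" fails to reproduce the rarest token it was trained on, so the vocabulary and the entropy of the distribution both shrink.

```python
import math

def entropy_bits(dist):
    """Shannon entropy (in bits) of a {token: probability} distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def next_generation(dist):
    """Toy 'retraining' step: the new model never sees the rarest token
    in its training data, so that token's mass is lost and the remainder
    is renormalised - a crude stand-in for tail truncation."""
    rarest = min(dist, key=dist.get)
    survivors = {t: p for t, p in dist.items() if t != rarest}
    total = sum(survivors.values())
    return {t: p / total for t, p in survivors.items()}

# Zipf-like starting distribution over a hypothetical 10-token vocabulary.
dist = {f"tok{i}": 1 / i for i in range(1, 11)}
z = sum(dist.values())
dist = {t: p / z for t, p in dist.items()}

history = [entropy_bits(dist)]
for _ in range(7):  # seven generations of training-on-own-output
    dist = next_generation(dist)
    history.append(entropy_bits(dist))

print(f"{len(dist)} tokens left, entropy {history[0]:.2f} -> {history[-1]:.2f} bits")
```

Run it and the vocabulary drops from ten tokens to three, with entropy falling accordingly. The tails are exactly where the novelty lives, which is a big part of why a synthetic-data diet worries researchers.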
Time is running out. The people want their tigers, and we need to decide what to feed them on.