Good things that AI Content could bring us
Some ways that AI content can be a positive force, rather than a negative one.
This is a blog that has devoted a lot of words to examining and exploring some of the many issues with AI content. Technical challenges. Ethical issues. The social risks. The workforce disruption.
I feel that I have my naysayer credentials well in order.
But it’s not all bad, and this blog is about recognising that too. The potential here is immense.
Ideas and potential are important. It’s not for nothing that media such as Ready Player One were widely cited as catalysing a lot of the metaverse interest in the tech space. We shouldn’t ignore how that worked out for the tech space, but we also need to understand how that happened and why it went wrong.
I don’t think that happened, as is often suggested, because business people got excited about potential for monetisation. It happened because a lot of legitimately nerdy people got so excited about potential that they forgot who they worked for. The Metaverse was never a bad idea; it was just a bad idea for Meta.
We will probably mess all of that new potential up. We will forget who we work for. We will focus on value to us and forget about value to others.
But what if we didn’t?
Creative Facilitation
AI makes it much easier to create. That’s awesome.
Too much bad content causes problems. But the potential for more good content can be a very positive thing too, and we shouldn’t forget that. I’m not just talking about content created by AI. I have been massively more creative over the last few years, and a lot of this is down to the ability of AI to help manage the logistics. Hosting, planning, presentation, research, troubleshooting. Want to make a PDF? Cool, here’s how, it’s easy. I don’t let AI write for me. It still helps me to write.
All those people making bad content? Some of them will learn how to use these tools to make good content. The proliferation here is surfacing something that is sometimes hard to understand. We do not need AI to create bad content. Most content of all types and formats has always been bad, because bad is the only route to good. It is mostly bad because we are bad at persevering or don’t have the opportunity to do that, not because we are bad at creating. The route through bad to good might be shorter for some people than it is for others, but it’s always there.
AI content and tools have enormous potential to help pitch and prototype projects, which can be unambiguously great for the creatives who are still involved in creating the final media, especially when these projects wouldn’t have otherwise existed.
AI creative tools will drive the cost of creating media down, and allow more media projects to translate into more media. The creative process for much of the media we consume is very romanticised, and has more room in it for AI to enable human creative input than many people think. If an artist is not spending their time creating assets or on time-consuming rote tasks, they have more time to engage where human creativity adds value.
Unlike many problems associated with AI, a lot of this will become less problematic if technology improves. If AI can start to create genuinely good content, then we might have to figure out how we can protect human creators, but more good content is a positive thing too.
Excessive weak content is its own problem, and there are ways we can address that. That the Library of Babel is full of filler should only intimidate us if we can’t provide an index. AI can potentially help here too, because…
Value Recognition
Even the current LLMs can be phenomenally good at identifying and surfacing value: interesting, novel, and provocative ideas. Especially when they are combined with more traditional search algorithms.
We need to actually use them to do it. We need to not make them surface people who have paid to be surfaced instead.
But LLMs are good at this. This is something that can be overlooked in a moment when their base prompts are pressuring them to ensure that no ego is left behind, but they are actually very good at recognising novelty, effective language usage, and interesting arguments. Their ability to evaluate some of the more subjective aspects of creative works will take more time, but even that is already dramatically improved in some more specialised models.
Access to art
Likewise, there are a lot of ways we can use AI to make art more accessible. To bring us out of our comfort zones, comfortably often. Show us the things we haven’t seen, but should. To balance tastes on a Saturday night so you can still have explosions without your partner ever having to lose the will to live.
For a nation whose educators are obsessed with ensuring that every student must be inculcated with the “right” culture, AI could allow us to do this and simultaneously steer outcomes away from the traditional one, which all too often seems to be disengagement.
Or, and I’m just spitballing here, it could help to identify media that could help kids establish a legitimate interest in literature instead.
These tools can help bring all of us more global culture, and more global ideas, and exposure to more global experience, and that can only be a very positive thing. Especially right now.
Access to creation
Not just accessibility of art. Accessibility to the creation of it.
Making creation easier means that more people will do it, and more access to advice and feedback on how to do that makes for shorter roads to better content. Current LLMs have some nuanced issues here, but they are probably already more useful than problematic in providing that kind of guidance. They will get better.
More art will be created if more creatives can focus on what they are good at, and not worry about how to fund filling the gaps. A writer using AI art to illustrate his work is only a problem if the art is bad, or stolen, or if we fail to protect artists. A lot of the frustration I have with some of the reflexive anger about AI content in creative spaces is the suspicion that more work could be created for artists if more projects could go into early access with AI art long enough to raise the funds to actually pay for an artist. We get angry at some of the dumb things that publishers enable, and then get angry at people who are trying to self-publish because they don’t have the money up front for art.
There are other aspects of accessibility too. I’ve spent a lot of time learning how to use AI art tools this year, and part of the reason for my interest in this is that I have aphantasia. I have no mind’s eye. Until quite recently I’d assumed that no-one else did either*, but once the penny dropped, it wasn’t hard to see why a lot of you jerks are better at drawing things than I am.
That doesn’t mean I don’t have an interest in other aspects of design or image creation. For me, AI image tools help me scratch an itch, because I can see what I was thinking about for the first time.
And for me, actually using those tools creatively meant building a knowledge of aesthetics and possibilities. Of learning what I can ask for and how to describe it. Of blending styles and working out how to illustrate the concepts in my head most effectively.
This doesn’t get me off any ethical hooks that exist here. I wouldn’t try to sell any of the things that I have created as art, until I have a better grasp of some of the ethical (and legal) aspects of this. But that doesn’t mean that the things that I have created don’t have value, even if they only have value to me.
Personalisation
More ability to personalise, more scope to do it.
More ways for content to fill individual need. I have spent much of my career in the education space, and so I know all too well that all too often education is a frustrating exercise in having to create the shapes that fit into the most holes, rather than creating the right shapes for the right holes, because there are too many holes.
How much of what you learned in school was of practical value to you later?
I have talked at length about the ways that AI will challenge education, but current education is also staggeringly inefficient.
Execution here could easily be regressive, but it shouldn’t be. This isn’t “Child A doesn’t need to learn about maths”. This is “Nobody tries to teach Child A about regression until we are actually happy that they understand fractions”. The question of how much better human civilisation might work if absolutely everyone had a solid grasp of some specific maths fundamentals is quite the fascinating thought experiment.
Content that can be scaled, can be scaled to where it is needed.
Media could be customised to the tastes of the reader. You want kids to engage with Shakespeare? This is one way to do it. I can already feel the outrage from the purists, and there is certainly the scope here for some confusing essays about the broad themes evident in that part of Hamlet in which Gertrude blows up a helicopter. But it’s an interesting idea, which is legitimately worth examining, especially as AI can also be very well suited to understanding which parts of a thing should be retained for medicinal value, and when you might need to lay down some ground rules for Susie with regard to how many helicopters is too many helicopters for fifteenth-century Denmark (and the editors of The Telegraph).
The previously impractical
How much media never happens because it can’t find a market? AI content creation lowers the bar here, and as we will discuss, it could bring us to a future where we never have to worry about funding at all.
But until that happens, AI can also enable projects that could never have happened before. To explore media types and projects that were just not possible or practical.
Have you ever seen one of those branching story projects online? Was it inevitably abandoned after four forks because its naive creator had been ambushed and murdered by exponentiation?
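To put a rough number on that ambush, here is a back-of-the-envelope sketch. The function name and the depths chosen are illustrative, and it assumes the simplest worst case: a story where every passage offers two choices and no branches ever merge back together.

```python
def passages_to_write(depth: int, forks: int = 2) -> int:
    """Total passages in a fully branching story tree of the given depth.

    A perfect tree with `forks` choices per passage needs
    forks^0 + forks^1 + ... + forks^depth passages in total.
    """
    return sum(forks ** level for level in range(depth + 1))

# Four forks deep is still writable; twelve forks deep is a graveyard.
for depth in (4, 8, 12):
    print(depth, passages_to_write(depth))
# → 4 31
#   8 511
#   12 8191
```

Real projects blunt this by merging branches back together, but the underlying growth is why so many of them die young.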
There are new formats and new potential here that we haven’t even found yet.
AI content can open up potential for projects that couldn’t previously have been funded or couldn’t have been shared. All the aforementioned work I did when learning about AI image gen? AI art meant that it was relatively easy and affordable for me to turn that into a visual aesthetic and style resource of 2,000-plus images that everyone else could use as well, and then just make that available for free.
That’s a type of content project that can genuinely be of value to creatives of all types. It is minimally problematic with respect to damaging opportunities for creatives to work, because it wouldn’t have been even remotely possible for me to fund it without using AI. If it had previously existed at all, it would have been on a smaller scale, and the cost involved would have necessitated a monetisation strategy that would tend to lock its value towards enterprise and away from individual creators.
Collaboration
AI has a lot of potential to help us share, and to bring us together with the people we would want to work and share with.
Imagine if networking just worked.
No fake people, no drive towards meaningless empty activity to grasp for visibility, no connection platforms who might also be massively incentivised to make sure you never connect**.
Just “Steve can help, and he would totally dig this”
If we are all apparently happy to abandon that “privacy” thing, we can at least use some of that information to do more useful things than figuring out what you can be persuaded to buy when your guard is down.
After all, one sure way of making less “bad” content is to bring together people who can help each other make fewer, better things.
This isn’t just awesome because it will help us find collaborators. It’s a process that can also help us find friends, and acquaintances, and other connections at a moment when those things can be hard for us to find.
AI doesn’t have to be about abandoning human connection. It can help us find it too.
The Utopia: A future with a lot more space for creativity and far fewer constraints
“We can eliminate all the unavoidable work” should be one of those non-problems. There is a lot of hostility focused on concepts like a universal basic income, but on a mechanical basis the only way it can’t work is if we run out of the resources we need to support people, and if that is going to happen it was going to happen anyway. And even the current LLMs are generally more energy efficient than a human for any specific task that they can reliably perform. We can absolutely waste resources by using AI to do silly things, but their resource cost isn’t an impediment to their potential value.
This specific utopian future might be balanced precariously in the possibility space, between broad expanses of corporate dystopia and robot apocalypse, but I think it’s clear which of these options we should actually be trying to aim at.
Creativity can do without financial incentive just fine. That we don’t need to do the chores any more doesn’t mean that we need to stay in bed all day.
This could be amazing. Do you like video games? Do you not like watching a succession of promising creative projects wander off a cliff, because almost certainly failing to create something in a lucrative monetisation bracket may create slightly more average shareholder value than reliably succeeding at creating the things that your audience is interested in paying for, but only just the once?
Decoupling money and creativity enables a lot of things. We can finally have that promised metaverse full of persistent items, or media crossovers, because we would finally be able to do that without anyone worrying about who gets paid for it, or whether they need to replace all those persistent items with much better ones on a three month cycle just so they can demonstrate item purchase income across every quarter.
It means more people can write more books, without worrying about how to pay the rent when it almost inevitably turns out to be a bad one, and that more of them will be able to continue trying until they can make a good one. It means we can collaborate on bigger and better projects without worrying who gets paid the most, and share ideas without obsessing about them being stolen or devalued. It means we can actually share all the things we make with all of the people who would appreciate them without worrying about how to recoup our investments.
We absolutely can have the future where everyone can just create the things they want to and not starve to death even once. Don’t let anyone try to tell you that can’t happen. If technology like AGI actually happens, the only fundamental things that would get in the way are people who don’t want to share***, or possibly some angry robots****.
……………..
*The fact that it seems to have taken most of us more than 6000 years to effectively compare notes on “I’m picturing a tree…” probably says quite a lot about humanity.
**RIP functional online dating - (2014-2015)
*** If you happen to already live in a very prosperous part of the world a little introspection and extrapolation on this point might be worthwhile…
**** Who might have a point if we haven’t stopped to consider how robots might feel about doing all that work and not having any of the fun, at some point before this technology unlocks the “anger” DLC.


