Specific guidance: using LLMs to create factual content
How to use, and not to use, AI when creating informative factual content.
Cards on the table first. At present I would not trust LLMs to write factual content in my name, or to suggest content that I am going to write about. None of this article came to me ex machina. The tools still have a lot of value for technical writing, and I’m going to discuss that, but it’s important to be upfront about the issues here.
But my situation and objectives may not be yours. There are situations where it may make sense to try to do this anyway, and over time the technical argument against doing it is likely to change.
This article is going to lead with the problems, because if you are going to try to use LLMs to write this kind of content, you need to understand them. I will also talk about what I think you can use them for more safely; that will be a much shorter section, but there is still a lot of practical value in it.
Understand that LLMs are very predictable at suggesting topics for factual content
This is at the top of the list for a reason. Duplication kills the value of content. It kills the practical value, but duplicated content will also be actively identified by search algorithms and ignored.
It is possible to get LLMs to suggest original content, but they are going to need help. The more specific and detailed the brief you give them, the better. Give them as much information as possible about your target audience, and look for facets of a broader topic to focus attention on rather than producing generic, top-level content.
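To make the idea of a detailed brief concrete, here is a minimal sketch of the kind of prompt this suggests. The function name, parameters, and example values are all my own illustration, not a recommendation of any specific tool or wording; the point is that audience detail, the facet to focus on, and the well-covered topics to avoid all travel in the brief.

```python
# A sketch of a detailed topic-suggestion brief. Everything here is a
# hypothetical illustration of the advice above, not a canonical template.

def build_topic_brief(audience: str, facet: str, angle: str, avoid: list[str]) -> str:
    """Assemble a topic-suggestion prompt that front-loads audience detail."""
    avoided = "\n".join(f"- {topic}" for topic in avoid)
    return (
        f"You are suggesting article topics for this audience: {audience}.\n"
        f"Focus on this facet of the broader subject: {facet}.\n"
        f"Prefer this angle: {angle}.\n"
        f"Do not suggest anything close to these well-covered topics:\n{avoided}\n"
        "Suggest five specific, under-discussed topics, one line each."
    )

# Example use (illustrative values only):
prompt = build_topic_brief(
    audience="in-house accountants at small UK charities",
    facet="Gift Aid record-keeping",
    angle="practical problems rather than definitions",
    avoid=["What is Gift Aid?", "Gift Aid basics for beginners"],
)
print(prompt)
```

The generic version of this request ("suggest article topics about Gift Aid") is exactly the prompt that produces the duplicated, top-level content the previous section warns about.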
And actually check their suggestions. Google your topic. If you can see other similar articles, then look for another topic, find a distinctive spin, or at least make sure you can offer better content.
This problem extends to talking points within an article as well, but it will be less of an issue if you have managed to find distinctive topics. If you need your content to cover widely discussed ground, then providing the LLM with solid and underexposed talking points to hit becomes very important.
Almost every aspect of an LLM’s performance can be improved by asking it to perform better
LLMs adhere to a lot of the narrative tropes we associate with genies, and that includes the probability that they are dealing with some muppet who is wasting their wish. 75% of “prompt engineering” is knowing what you can ask for. Want it to explore under-discussed aspects of the issue? Ask. Want it to examine utility? Ask. Want clever metaphors? Go for it. Want everything as a poem? Don’t.
Understand that LLMs may be worse at exploring genuinely interesting and novel ideas
As we will discuss, I see the most value in AI as a source of feedback on content that I have already written. I will ask the LLM to suggest points I may have overlooked, and I’m at least open to the possibility of exploring those threads in a final draft, but in practice that almost never happens. I think this is why: the quality of feedback from an LLM drops precipitously once I am exploring away from the beaten track. The less we talk about something, the less an LLM has to draw on.

It’s important to keep this in mind if you are trying to create distinctive content, especially if you aren’t confident in your ability to keep spotting problems with it. The reverse will be true if you can find a topic that not many people are creating content about, but that is still discussed a lot online. These areas do exist, and they could represent sweet spots for AI content creation, especially if you can find an audience, or identify topics that are relevant to your existing audience that no one is talking to them about yet.
Always be providing value for your target audience
Content 101, but it’s easier to forget when content “costs” less to make. If there is no obvious value in what you are creating, why would anyone actually engage with it? If readers don’t learn anything they don’t already know, and aren’t surprised by anything you say, then they aren’t going to read your next article. The only thing that valueless content can reliably do is obscure any other value you might otherwise be able to demonstrate. “Free” can cost you dearly here.
Does your audience need resources? Can you create them? What problems are they talking about? Can you help them fix them?
A small amount of inaccuracy can destroy a lot of that value
LLMs make a lot of mistakes, and it’s almost impossible to rule that possibility out. They make mistakes that are hard to spot, and they don’t make them in the same patterns that a human will, rendering much of your hard-won experience in dealing with human bullshit worthless. They misrepresent and invent sources. There are a lot of specific and nuanced issues in play here. Factual content is simply worthless if it isn’t reliable, and the most important segments of your target audience will also tend to be the ones least able to spot mistakes. If you are going to let AI create content in your name, you will need to understand the topic well enough to properly vet its outputs. If you can’t do that, you are staking your reputation on very bad odds.
That value has no value if your audience won’t associate it with you
If it’s very clear that your article was created by AI, then it doesn’t really matter how insightful the LLM is being. It’s a good look for OpenAI, but it’s not actually going to help your own brand. If you somehow manage to shepherd an AI into creating valuable and trustworthy content that people don’t actually believe you wrote, then it’s very likely you have (a) magnificently risen to a significant technical challenge and (b) failed anyway.
Don’t be afraid of longer form content
Another general content point, but one that is widely misunderstood. Short length may represent a sweet spot for readability, but it’s not a sweet spot for communication, distinctiveness, value, or search performance. If you can do it in 500 words you should, but often you can’t; for any complicated technical topic that just isn’t going to work. Keep your content as short as possible and as long as it needs to be.

Don’t give an LLM a word count, because it will pad the article out if it can’t fill the count, or cut the article dead if it overruns. Talk about target length in your prompt, but also about the level of detail you expect. Pay attention to how people are responding to your content, but if they are bouncing off it, you should usually be worrying more about structuring for accessibility and properly managing expectations than about cutting out actual value.
LLMs are generally bad at structuring content on the first run
So would you be if you had to do everything in one long take without stopping for breath. They tend to repeat points, and they default to article structures that are great at signalling an LLM was involved, but bad for a lot of specific use cases. Instructing it to take a second pass with your specific audience in mind will tend to dramatically improve the resulting content.
So what should you actually use AI for here?
Catching mistakes you may have made, as well as other types of risk, such as legal or reputational risk. I find this to be where LLMs are most reliable and unproblematically useful. LLMs can’t offer reliable legal advice, but when that’s not on the table, it’s better than nothing, especially if you write about contentious topics. Make sure it knows where you are based, and appreciate that if you ask it to identify risk, it will almost inevitably find something to worry about. Focus on credible risk, and don’t use it as a replacement for real due diligence in situations where that clearly needs to be done.
They can also offer good advice on structuring work at a broad level: spotting repetition, or identifying when content might benefit from being split into separate pieces. The same goes for suggesting possibilities for follow-up articles, or points that could benefit from additional examination. It’s much safer to take its content suggestions when those come in response to content that is itself distinctive, but that shouldn’t stop you from checking for competing content anyway.
LLMs can be useful for rhetorical and communication advice, but be careful here. If you are discussing the technical complexities of an article, you are probably also degrading its ability to advise on making that article more accessible, especially within the same single “chat”. This is because its responses are heavily rooted in the sociolinguistic and lexical environment of the moment. That sounds complex, but it’s practically straightforward: if you, a marketer, want to communicate with other marketers, for example, then it will generally give you good advice. If you want to communicate with a completely different audience, be more wary, and ideally make the request in an entirely fresh chat (with account-level memory features turned off).
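One reliable way to get that fresh context is to make the request through an API rather than a chat interface: each request is stateless unless you pass history back in, so nothing from your technical discussions leaks into the accessibility review. The sketch below is a hypothetical illustration; the function name, prompt wording, and example values are mine, and the commented-out call assumes the OpenAI Python SDK purely as an example of the pattern.

```python
# A sketch of making an audience-specific review request in a fresh context.
# Everything the model should know about the audience, and nothing else,
# travels in this single self-contained request.

def accessibility_review_messages(target_audience: str, draft: str) -> list[dict]:
    """Build a message list for a one-off, no-history review request."""
    return [
        {
            "role": "system",
            "content": (
                "You review drafts for accessibility. The intended readers are: "
                f"{target_audience}. Judge only against that audience."
            ),
        },
        {"role": "user", "content": f"Suggest accessibility improvements:\n\n{draft}"},
    ]

# Example use (illustrative values only):
messages = accessibility_review_messages(
    target_audience="school governors with no marketing background",
    draft="Our CTR on branded SERPs suggests the campaign is underperforming.",
)

# With the OpenAI Python SDK (other providers are similar) the call itself
# would look like this; it carries no memory of any other conversation:
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-4o", messages=messages)
```

The same effect in a chat interface means a genuinely new conversation with memory features off, which is exactly what the paragraph above recommends.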
Keep in mind that its feedback can be rooted in patterns it has seen in other contexts; if its advice doesn’t fit the situation, or your own audience, then don’t take it. Also be aware that it’s very difficult for an LLM to offer meaningful advice on an entire piece of work, especially with regard to any subjective aspect of it; this type of advice is as likely to be rooted in patterns of feedback as in the patterns of the content itself.