The Big Hallucination
Would I be more effective with an LLM? Probably not, no. Maybe I just haven't seen The Emperor's New Clothes. I have seen the stochastic parrot - that don't impress me much... Sigh.

One of my biggest concerns with LLMs is hallucinations. While you would think a stochastic parrot would repeat best practices, stories abound of LLMs making things up. I am equally concerned about the biases against minorities, the underprivileged and non-US ways of thinking. I am somewhat concerned about how content is stolen and unknowingly captured (I see you, MS Teams) and about the environmental impact of all the data processing.
We already know the issues of LLMs from On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 and the book Unmasking AI. (Hey, book club.) Fix those, and then let's talk about diverse training data and how AI output makes results bland and similar - not diverse, not unique. US culture and content are no longer the guidance for the culture and behavior of the world. We are not the same. It should be needless to say, but values should not be imposed on others without consent.
Would I be more effective with an LLM? Probably not, no. The problems I usually wrangle with are not about applying best practices but figuring out the right practices for the situation based on a range of technical and organisational factors. Tossing those factors in a teapot and stirring will have unwanted side effects.
At work, I deal with the implementation of requirements. The implementations need to be traceable, or we are in potential breach of contract. In that context I cannot base the requirement analysis on an LLM, due to its potential for hallucinations - made-up content, both implausible and plausible-sounding.
I have yet to see a customer satisfied with a test report that does not rely on objective, verifiable and consistent evidence. Recently, I read about auditors needing to be aware that system screenshots could be generated/faked. Fake audit material is not acceptable to customers who pay a premium for services being implemented as specified. The security controls are there for a reason. (ffs)
Sigh
I'm currently writing a security specification to elaborate on how we implement the customer's requirements. We do have similar specifications available, even for the same customer. But. The big but is that the requirements for this contract have been updated since the previous one. Obviously. While we can lean on existing best practices, we need to elaborate according to this specific context - a novel situation where some parts are directly available, and others require analysis, collaboration and developing a shared understanding.
Last year, I fed a range of existing test strategy examples into a local LLM and asked it to generate a new strategy based on a new contractual framing. It didn't compute, and the tool asked me to ask the author of the documents. That was me.
That Don't Impress Me Much
I am also writing a reply to an RFP for a 100 million DK/EU contract. I can clearly see how the contract text has been tossed together and, in multiple places, contradicts itself. The contract deals with building Platform as a Service cloud environments but requires procedures for testing fuel availability for emergency power supply. The testing chapter regurgitates terms from the software application context, which only marginally apply in a modern containerised DevOps context. Yet we have to stay compliant with the wording of the contract in order to score the deal.
Maybe I just haven't seen The Emperor's New Clothes.
When I create written content, both here and for my books, the topics are about bringing thoughts and references together to form a new understanding. As that new understanding has not yet been formed, it cannot be generated. As English is not my native language, I do use a tool for grammar and semantics - but even that has its quirks and misleading autocorrects. Or when the tool simply doesn't accept names it hasn't seen before. I am not a typo.
Code generation? Perhaps, yes, it can get you started with a sketch. But that sketch will be buggy and insecure in many known and unknown ways. There is much more to development than producing code. Check the DORA AI report, then let's talk. It would be the first time in 9 generations of programming that we could specify intent and have it delivered as intended. There is so much more to having a running, maintained system than outputting code.
Tools are tools. They shape us, and we shape them. They emerge, expand in use, commoditize and converge. But first of all, they need to produce actual results, not hallucinations.