Discussion about this post

Basil Chamberlain

You don't post as often as some of the other Substackers I follow, and I was thinking of cancelling my subscription. But this post has swayed me; I'll stay.

I've been saying for years that one reason machines have found it easy to learn to behave like people is that people have spent years training themselves to behave like machines. It shows, for instance, in the way that job interviews have been reduced to box-ticking exercises designed to eliminate, as far as possible, personal preferences and emotional responses. Institutions and organisations are increasingly suspicious of any process that delivers a different outcome from the one a computer would have delivered. We are urged to rely on procedure rather than experience, to follow rules rather than make choices. No wonder computer programmes can imitate us so effectively.

You're right to mention Marvel movies. Sean Thomas, the British legacy media's most insistent enthusiast for AI, recently wrote a book and submitted it to an AI editor; he claimed that the editorial feedback he received from the machine was as good as any feedback he could hope to get from a real-life, flesh-and-blood editor. But the machine's advice was essentially to make the book more conventional: "Lack of clarity can be frustrating for readers who want a definitive answer," so "while ambiguity can be effective in some cases, it’s important to provide enough clues and hints to allow readers to draw their own conclusions about what happened." If that's the kind of advice a professional editor would give, perhaps it explains why so much modern fiction is so unenterprising. Well, OK, Thomas was writing genre fiction. But even modern literary fiction often seems to be written to a formula, insistent on clarity, determined to explain itself, and deeply reluctant to tolerate the kinds of ambiguity that were the mainstay of the novel in the hands of, say, Henry James, and were common in his heirs up to the 1990s. Read, say, the early Angus Wilson (singling him out somewhat arbitrarily since I have read all of his 1950s novels in the last few years), and marvel at his refusal to advance neat and tidy explanations for human behaviour. The kind of demands he makes on the ordinary intelligent reader are, in their quiet way, quite breathtaking.

Of course, as you suggest, the question these days is how many people can spot the difference.

AEIOU

AI IMO writes better than maybe 75% of people – it's just that those 75% self-evaluate and don't publish anything (I certainly don't publish long-form essays here or anywhere, let alone fiction).

Even if the output is stylistically bland and inexpressive, it matches or exceeds what the vast majority of humans produce. Three years ago, it beat maybe a third of people. Before that, it could not write coherently at all, which, frankly, is also "human-level output" for a large proportion of the world's population, if they ever functionally learned to write at all, i.e. anything beyond shopping lists.

If the proportion of people it can't best in each field keeps – let's say – roughly halving every 2.5 years, and my assessment above is correct, AI will be a 98.5th percentile writer in a decade. Still not a great novelist, but what percentage of humans is?
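The halving claim above can be checked with a quick sketch. Assuming the commenter's figures (25% of people not yet bested, one halving every 2.5 years, a ten-year horizon), the arithmetic does land near the 98.5th percentile:

```python
# Rough check of the comment's projection: the fraction of writers
# AI can't yet best halves every 2.5 years, starting from 25%.
remaining = 0.25          # commenter's estimate: AI beats ~75% of people today
halving_period = 2.5      # years per halving (commenter's assumption)
horizon = 10              # years ahead

halvings = horizon / halving_period          # 4 halvings in a decade
remaining_later = remaining * 0.5 ** halvings  # 0.25 / 16 = 0.015625
percentile = (1 - remaining_later) * 100

print(round(percentile, 1))  # 98.4 – roughly the 98.5th percentile claimed
```

The projection is of course only as good as the assumed starting point and the constant halving rate, which the comment itself flags as a "let's say" guess.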

Progress is of course uneven across fields – good writing is probably just not a big enough market to invest scarce and expensive ML talent in, plus, for obvious reasons, anyone who tries gets a lot of good, angry prose written about them. In programming, by contrast – not an artistic field, but arguably a creative one – the best current models are 90th+ percentile contributors given only natural-language instruction.

