A new OpenAI model arrived this month with a glossy livestream, group watch parties and a lingering sense of disappointment. The YouTube comment section was underwhelmed. “I think they are all starting to realize this isn’t going to change the world like they thought it would,” wrote one viewer. “I can see it on their faces.” But if the casual user was unimpressed, the AI model’s saving grace may be code.

Coding is generative AI’s newest battleground. With big bills to pay, high valuations to live up to and a market wobble to erase, the sector needs to prove its corporate productivity chops. Coding is loudly promoted as a business use case that already works. For one thing, AI-generated code holds the promise of replacing programmers — a profession of very well paid people. For another, the work can be quantified.

In April, Microsoft chief executive Satya Nadella said that up to 30 per cent of the company’s code was now being written by AI. Google chief executive Sundar Pichai has said the same thing. Salesforce has paused engineering hires, and Mark Zuckerberg told podcaster Joe Rogan that Meta would use AI as a “mid-level engineer” that writes code. Meanwhile, start-ups such as Replit and Cursor’s Anysphere are trying to persuade people that with AI, anyone can code. In theory, every employee can become a software engineer. So why aren’t we?

One possibility is that it’s all still too unfamiliar. But when I ask people who write code for a living for an alternative explanation, they say unpredictability. A programmer named Simon Willison put it best. “LLMs are like a comedian that makes things up,” he told me. “That’s funny until it bites you.” Willison has been a computer programmer for 30 years and says new models behave less like computers and more like people.

Willison is well known in the software engineering community for his AI experiments. He’s an enthusiastic vibe coder — using LLMs to generate code from natural-language prompts. OpenAI’s latest model, GPT-5, is, he says, his new favourite. Still, he predicts that a vibe-coding crash is due if it is used to produce glitchy software.

It makes sense that programmers — people who are interested in finding new ways to solve problems — would be early adopters of LLMs. Code is a language, albeit an abstract one. And generative AI is trained in nearly all of them, including older ones like Cobol. That doesn’t mean we should accept all of its suggestions.

Willison thinks the best way to see what a new model can do is to ask for something unusual. He likes to request an SVG (an image made out of lines described with code) of a pelican on a bike, and asks models to remember the chickens in his garden by name. Results can be bizarre. One model ignored his prompt in favour of composing a poem.

Still, his adventures in vibe coding sound like an advert for the sector. He used Anthropic’s Claude Code, the favoured model for developers, to make an OCR (optical character recognition) tool that extracts text from a screenshot. Another tool summarises blog comments, and he has plans for a custom GPT that will alert him when a comment he wants appears. It’s the sort of thing Bill Gates might have had in mind when he talked about language AI agents being “the biggest revolution in computing since we went from typing commands to tapping on icons”.
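The column doesn’t reproduce Willison’s code, but a minimal sketch of that kind of screenshot-to-text tool might look like the following, assuming the open-source Tesseract engine driven through the pytesseract and Pillow libraries (the library choice and the file name are illustrative assumptions, not details from the piece):

<syntaxhighlight lang="python">
# Minimal sketch of a screenshot-to-text (OCR) tool, not Willison's
# actual implementation. Assumes the Tesseract engine is installed
# locally and the pytesseract and Pillow packages are available.
from PIL import Image
import pytesseract

def extract_text(path: str) -> str:
    """Return whatever text Tesseract can recognise in the image at path."""
    return pytesseract.image_to_string(Image.open(path))

if __name__ == "__main__":
    # "screenshot.png" is a placeholder file name.
    print(extract_text("screenshot.png"))
</syntaxhighlight>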
But when it comes to software that ships, vibe coding has issues. Not only is there the risk of hallucination, but the chatbot’s desire to be agreeable means it may say an unusable idea works. That is a particular issue for those of us who don’t know how to edit the code. We risk creating software with inbuilt problems.

It may not save time either. A study published in July by the non-profit Model Evaluation and Threat Research assessed work done by 16 developers — some with AI tools, some without. Those using AI assumed it had made them faster. In fact, it took them nearly a fifth longer.

Several developers I spoke to said AI was best used as a way to talk through coding problems. It’s a version of something they call rubber ducking (after their habit of talking to the toys on their desks) — only this rubber duck can talk back. As one put it, code shouldn’t be judged by volume but by success in what you’re trying to achieve.

Progress in AI coding is tangible. But measuring productivity gains is not as neat as a simple percentage calculation.

elaine.moore@ft.com