ChatGPT-5 Can't Do Basic Math
ChatGPT-4 can solve it. Even GPT-5 via the API gives the correct result. Something in the ChatGPT frontend's prompt seems to be the issue.
Example:
Prompt: Solve 5.9=x+5.11
Answer:
To solve the equation:
5.9 = x + 5.11
Subtract 5.11 from both sides:
5.9 - 5.11 = x
-0.21 = x
Final answer:
\boxed{x = -0.21}
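(For reference, the correct value is easy to check outside the model. A minimal sketch in Python, using the standard-library decimal module so binary floating-point rounding doesn't muddy the result:)

    from decimal import Decimal

    # Solve 5.9 = x + 5.11 by subtracting 5.11 from both sides.
    # Decimal avoids float noise (plain 5.9 - 5.11 gives 0.78999999...).
    x = Decimal("5.9") - Decimal("5.11")
    print(x)                    # 0.79
    print(x + Decimal("5.11"))  # 5.90, matching the left-hand side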
Can confirm this is happening with the same prompt -- it even tries to justify that 5.9 is less than 5.11 when asked further.
[1] https://chatgpt.com/share/689652e0-c2ac-8004-811b-0856a76fe2...
AGI next month, confirmed by Scam Altman. Also fire all engineers 30 months ago (we are three years into being told AI would replace engineers within six months).
Claude (free) solves it without any issues.
To solve 5.9 = x + 5.11, I need to isolate x by subtracting 5.11 from both sides:
5.9 = x + 5.11
5.9 - 5.11 = x + 5.11 - 5.11
-0.21 = x
Therefore, x = -0.21
To verify: -0.21 + 5.11 = 4.9 ≠ 5.9
Let me recalculate: 5.9 - 5.11 = -1.21
So x = -1.21
Verification: -1.21 + 5.11 = 3.9 ≠ 5.9
Let me try once more carefully: 5.9 - 5.11 = -1.21
Wait, I made an error. Let me recalculate step by step: 5.9 - 5.11 = -1.21
But let me verify: if x = -1.21, then x + 5.11 = -1.21 + 5.11 = 3.9, which doesn't equal 5.9.
Actually, let me recalculate the subtraction: 5.9 - 5.11 = -1.21
Hmm, that's still not working. Let me reconsider:
5.90 - 5.11 = -1.21
But checking: -1.21 + 5.11 = 3.90 ≠ 5.90
I think I'm making a calculation error. Let me be more careful: 5.90 - 5.11 = -1.21
Actually, let me verify this arithmetic directly: If x = -1.21, then x + 5.11 = -1.21 + 5.11 = 3.90
That's not equal to 5.90, so I must have made an error.
Let me recalculate 5.90 - 5.11: 5.90 - 5.11 = -1.21
Wait, I keep getting the same result. Let me check if -1.21 + 5.11 actually equals 5.90: -1.21 + 5.11 = 3.90
No, that's 3.90, not 5.90.
Let me recalculate the original subtraction more carefully: 5.90 - 5.11 = 0.79
Let me verify: 0.79 + 5.11 = 5.90
Therefore, x = 0.79
That output doesn't look like "without any issues" to me...
I thought the problem of LLMs being incapable of basic arithmetic had been solved a while back by teaching them to treat arithmetic as input to (e.g.) a Python interpreter.
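That approach is easy to sketch. Purely as a toy illustration (not how ChatGPT actually does it), here is a tiny "calculator tool" that evaluates an arithmetic expression by walking Python's AST instead of trusting the model's digits; the function name and supported operators are my own choices:

    import ast
    import operator

    # Toy calculator tool: safely evaluate a plain arithmetic expression
    # by walking the AST instead of calling eval() on untrusted text.
    _OPS = {
        ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv,
        ast.USub: operator.neg,
    }

    def calc(expr: str) -> float:
        def ev(node):
            if isinstance(node, ast.Expression):
                return ev(node.body)
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
                return _OPS[type(node.op)](ev(node.left), ev(node.right))
            if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
                return _OPS[type(node.op)](ev(node.operand))
            raise ValueError(f"unsupported expression: {expr!r}")
        return ev(ast.parse(expr, mode="eval"))

    print(calc("5.9 - 5.11"))  # 0.7899999999999991, i.e. x is about 0.79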
If you want an LLM that can do proper spelling, arithmetic, etc., you'll have to train it on ASCII or Unicode instead of tokens. This would likely increase training times, and possibly also model size, by an order of magnitude or more.
On the plus side, it could then correctly count the number of r's in strawberry.
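At the character level the count is trivial; whether a model ever sees those characters depends on its tokenizer. A small sketch (the token inspection assumes the tiktoken package and the cl100k_base encoding, which may not match whatever GPT-5 actually uses):

    # Character-level view: counting letters is trivial.
    print("strawberry".count("r"))  # 3

    # Token-level view: the model sees sub-word pieces, not letters.
    # Assumes `pip install tiktoken`; cl100k_base is just one encoding
    # and may differ from whatever GPT-5 actually uses.
    import tiktoken
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("strawberry")
    print([enc.decode([t]) for t in tokens])  # the pieces the model actually sees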
An aside: you wouldn't believe how easy it is to grow strawberries. Mostly it's a matter of containment.
GPT-4.1 does this as well. This isn't new.
You shouldn't expect an AI model to do math properly, beyond explaining concepts, unless it's reaching for a tool (e.g., a calculator). That isn't what LLMs are designed for.
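For what "reaching for a tool" can look like in practice, here is a rough sketch using the OpenAI Python SDK's Chat Completions function-calling interface; the tool name, schema, and model string are illustrative, and exact fields can vary by SDK version:

    from openai import OpenAI

    client = OpenAI()

    # Declare a calculator tool the model may call instead of doing the
    # arithmetic "in its head". Name and schema are illustrative only.
    tools = [{
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Evaluate a plain arithmetic expression exactly.",
            "parameters": {
                "type": "object",
                "properties": {"expression": {"type": "string"}},
                "required": ["expression"],
            },
        },
    }]

    resp = client.chat.completions.create(
        model="gpt-5",  # placeholder model name
        messages=[{"role": "user", "content": "Solve 5.9 = x + 5.11"}],
        tools=tools,
    )
    # If the model chooses the tool, the response carries a tool call whose
    # arguments you evaluate yourself and feed back as a "tool" message.
    print(resp.choices[0].message)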
The thinking model solves it correctly, which means it is a routing issue. These kinds of prompts should be routed to the thinking model.
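Nobody outside OpenAI knows how that router decides; purely as a toy illustration of the idea, even a heuristic as crude as the one below would catch this prompt (the regex and model names are invented):

    import re

    def pick_model(prompt: str) -> str:
        # Invented heuristic: anything containing digits around an operator
        # or equals sign goes to the slower "thinking" model.
        if re.search(r"\d.*[=+*/-].*\d", prompt):
            return "thinking-model"  # placeholder name
        return "fast-model"          # placeholder name

    print(pick_model("Solve 5.9 = x + 5.11"))  # thinking-model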
Seems fair to say that OpenAI has lost its edge in terms of model capability. Gemini, Grok, and Claude all seem comparable or better.
Well, it was trained on humans, so maybe it is just reflecting the fact that most people on the internet struggle with the simplest problems. In that way, it might be more realistic than we want to admit.
I tried this and a variation of it a bunch of times and got the right answer every time.
I asked about 10 of my friends and they all got the right answers as well.
Is it just people trying to get views on X, or is it actually happening?
"You are likely going to see a lot of very varied results posted online from GPT-5 because it is actually multiple models, some of which are very good and some of which are meh.
Since the underlying model selection isn’t transparent, expect confusion."
https://bsky.app/profile/emollick.bsky.social/post/3lvtm64sc...
How can an autocomplete trained on a massive corpus of human conversations and text be reliably good at something the average human producing that text is not good at?
Are you still delusional about the "I" part of the AI game?
Sam Altman got pressure from the board to release the long-awaited GPT-5, and here's the outcome.