Offtopic group for casual talk about anything. @rules_for_python still apply (except for the on-topic rule)
My computer crashed today. I was lucky enough to have opened htop before it stopped responding completely.
To understand the repo. It's as if the complaints about diminishing returns are being addressed
I've seen significantly less token usage from the Claude CLI. The extension is useless.
Token efficiency matters for cost. Independent testing found Claude Code uses 5.5x fewer tokens than Cursor for identical tasks. Claude Code (Opus) completed a benchmark task with 33K tokens and no errors. The Cursor agent (GPT-5) used 188K tokens and hit errors along the way. Fewer tokens means lower per-task cost even at higher subscription prices.
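A quick sanity check of the numbers quoted in that benchmark. The token counts come from the message above; the per-million-token price is a made-up placeholder, not a real API rate:

```python
# Token counts taken from the quoted benchmark.
claude_tokens = 33_000    # Claude Code (Opus)
cursor_tokens = 188_000   # Cursor agent (GPT-5)

ratio = cursor_tokens / claude_tokens
print(f"Cursor used {ratio:.1f}x more tokens")  # close to the 5.5x figure cited

# Hypothetical price, just to show how the gap compounds per task.
price_per_million = 15.0  # placeholder $ per 1M input tokens
print(f"Claude Code: ${claude_tokens / 1e6 * price_per_million:.2f} per task")
print(f"Cursor:      ${cursor_tokens / 1e6 * price_per_million:.2f} per task")
```

The 5.5x headline figure presumably averages several tasks; this single task comes out at roughly 5.7x.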
— https://www.builder.io/blog/cursor-vs-claude-code
Claude models are good at coding, so they gatekeep. No wonder. Their claims
That is surely true... the worst part is if you "switch" in the middle, God, it eats up all your tokens... but 90% of the time I'm using Kimi 2.5...
I'm not sure if it's still true, but Cursor was using 5 times more tokens than the CLI for the same task.
Though, it seems to be more efficient with tokens than Cursor IDE is.
All cores were fully busy, but that was just a side effect of the memory being full; it wasn't as if it was doing something special. But somehow I managed to fill the memory up.
100%, there is something called caching. When you first talk to the LLM, it needs to know everything about your project.
The context window size doesn't change when you change the model, but most of the context for the first LLM is already cached on the server. When the next LLM is called, the whole thing has to be sent again so that it has the proper context.
This is why you see that the number of tokens used for the newer model is much higher.
Maybe a fair amount of tokens gets burned in the beginning
They should have compared Claude Code CLI with Claude Code in Cursor.
No, not really... if you switch from one model to another, you have to send the new model all the context, while if you stick with a single model, it makes heavy use of the cache.
I mean there's an overhead I'd assume, but it's not 5 times as much.
It is, but the advantage with Cursor is that I can use almost any model in the world, and switch to Opus when I really need to.
They wrote a TUI because it is lightweight and can be compiled for anything, including server-side Linux.
It is nothing special at all... even an LLM itself is simple. If you look at the code, it is not complex at all; the problem is the DATA...
Python (the snake) does not actually use venom.
It constricts its victim, crushing the lungs and breaking bones in the process...