Context, then answer… instead of having everything ride on the first character. E.g., if we make it pick “Y” or “N” first in response to a yes-or-no question, it usually picks “Y” even if it later talks itself out of it.
Yeah, I’ve been having it code short useful scripts (like converting the PDF of my work schedule to an importable ICS, or making a custom desktop timer for a work task that repeats every fifteen minutes). I find it works better if you make it sum up its goals at the beginning. Then, if I need to start fresh in a new chat (faster processing, less perseveration on erroneous earlier versions), I have it sum up the goals at the end to paste into the new one.
Remember when satnavs first came out and you could download different voices for them?
If only Waze weren’t owned by Google… I saw they had that function. Wonder if you can do that with OSMAnd…

Or even better, don’t use the racist pile of linear algebra that regurgitates misinformation and propaganda.
That’s the basis of reasoning models. Make LLMs ‘think’ through the problem for several hundred tokens before giving a final answer.
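A minimal sketch of what that looks like at the prompt level — the template strings and the `final_answer` helper here are hypothetical, just to show the idea of parsing the verdict from the *end* of the reasoning instead of the first token:

```python
# Hypothetical prompt templates: "answer first" vs. "reason, then answer".
answer_first = "Q: {question}\nAnswer yes or no: "
reason_first = (
    "Q: {question}\n"
    "Think through the problem step by step, then end with "
    "'Final answer: yes' or 'Final answer: no'.\n"
)

def final_answer(completion: str) -> str:
    # Take the verdict from after the last 'Final answer:' marker,
    # so earlier waffling in the reasoning doesn't decide the answer.
    return completion.rsplit("Final answer:", 1)[-1].strip().rstrip(".").lower()

print(final_answer("Well, Y seems likely... but no. Final answer: no."))  # -> no
```

The point is just that the tokens that commit the model to an answer come after the reasoning, not before it.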
Crank the temperature settings and have it say “Trust me, bro.”
It would definitely be funnier to train it to do that.
More wise it would sound.
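For what it’s worth, the temperature knob being cranked up there is just a divisor on the logits before sampling — higher temperature flattens the distribution so the model picks lower-probability tokens more often. A toy sketch (made-up logits, no real model involved):

```python
import math

def softmax(logits, temperature=1.0):
    # Divide logits by T before normalizing; higher T flattens the distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.0, 0.5]
low = softmax(logits, temperature=0.5)   # sharper: top token dominates
high = softmax(logits, temperature=5.0)  # flatter: "Trust me, bro" territory
```

At high enough temperature every token is nearly equally likely, which is about as reliable as it sounds.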