Your old prompts won’t work with GPT-5.5. Try these instead
OpenAI’s latest and most powerful model, GPT-5.5, has been topping benchmark charts and impressing users with its coding and reasoning abilities, not to mention the sheer quantity of facts at its fingertips.
But while ChatGPT’s latest model doesn’t require the hand-holding that older models did, it can be fussy with the long, highly detailed prompts that might have worked well in the past.
If you’re seeing worse performance with GPT-5.5 than you did with previous models, the problem might be how your prompts are constructed.
OpenAI released a prompting guide for GPT-5.5 that spells out what works—and what doesn’t—with its latest model. Here are some of the best pointers and takeaways.
Keep your prompts short
The top bullet point of the GPT-5.5 prompting guide gets right to the point: “Shorter, outcome-first prompts usually work better than process-heavy prompt stacks.”
That means the best GPT-5.5 prompts describe in detail what you want rather than going on and on about how the model should get there. Giving GPT-5.5 an older-style prompt with elaborate step-by-step instructions might actually get you worse results, since those extra instructions can “add noise, narrow the model’s search space, or lead to overly mechanical answers,” OpenAI warns.
So if you’re using legacy ChatGPT prompts that say things like “first do this, then do this, and then do that,” you’re probably stifling the very quality that makes GPT-5.5 so impressive: its own creativity when it comes to solving problems. Instead, make sure your prompts focus on the outcome. Let GPT-5.5 decide how to get there.
Here’s an example of a well-optimized, outcome-oriented prompt for GPT-5.5, according to OpenAI:
Resolve the customer’s issue end to end.
Success means:
– the eligibility decision is made from the available policy and account data
– any allowed action is completed before responding
– the final answer includes completed_actions, customer_message, and blockers
– if evidence is missing, ask for the smallest missing field
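If you’re calling the model through the API rather than typing into ChatGPT, that same outcome-first prompt drops straight into the system message. Here’s a minimal sketch using the OpenAI Python SDK; the model name and the sample user message are assumptions for illustration, not something from OpenAI’s guide:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Outcome-first instructions: describe what success looks like,
# not a step-by-step process for getting there.
OUTCOME_PROMPT = """Resolve the customer's issue end to end.
Success means:
- the eligibility decision is made from the available policy and account data
- any allowed action is completed before responding
- the final answer includes completed_actions, customer_message, and blockers
- if evidence is missing, ask for the smallest missing field"""

response = client.chat.completions.create(
    model="gpt-5.5",  # assumed model name, for illustration only
    messages=[
        {"role": "system", "content": OUTCOME_PROMPT},
        {"role": "user", "content": "My refund hasn't arrived and my account shows 'pending review'."},
    ],
)
print(response.choices[0].message.content)

Notice that the system message never says “first check X, then do Y”; it only defines what a finished, successful answer contains.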
Beware of B.S.
GPT-5.5’s creativity when it comes to solving problems can be something of a double-edged sword: the model has a documented tendency to confidently get things wrong.
That’s what makes OpenAI’s guidance for “creative drafting guardrails” so interesting… and important. Specifically, the GPT-5.5 prompting guide suggests detailing what the model can make up and what it can’t.
Here’s an example from the guide:
For creative or generative requests such as slides, leadership blurbs, outbound copy, summaries for sharing, talk tracks, or narrative framing, distinguish source-backed facts from creative wording.
– Use retrieved or provided facts for concrete product, customer, metric, roadmap, date, capability, and competitive claims, and cite those claims.
– Do not invent specific names, first-party data claims, metrics, roadmap status, customer outcomes, or product capabilities to make the draft sound stronger.
– If there is little or no citable support, write a useful generic draft with placeholders or clearly labeled assumptions rather than unsupported specifics.
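In practice, this guardrail works best when the prompt physically separates the facts the model is allowed to cite from the drafting instruction. Here’s a rough sketch of that pattern in Python; the facts, citation tags, and placeholder convention are all invented for illustration:

# Hypothetical retrieved facts, each with a citation tag the model can reuse.
FACTS = [
    ("[S1]", "Q3 churn fell from 4.1% to 3.2% after the onboarding revamp."),
    ("[S2]", "The enterprise tier launched in March 2024."),
]

fact_block = "\n".join(f"{tag} {text}" for tag, text in FACTS)

prompt = f"""Draft a two-paragraph leadership blurb.

Source-backed facts (cite the tag for any concrete claim):
{fact_block}

Rules:
- Use only the facts above for metrics, dates, and capability claims, citing their tags.
- Do not invent names, metrics, roadmap status, or customer outcomes.
- If a point has no citable support, use a clearly labeled placeholder like [ASSUMPTION].
"""
print(prompt)

The model can still write creatively around the facts; it just can’t manufacture new ones to make the draft sound stronger.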
Dial back the “nevers” and “onlys”
Another interesting tidbit from the official GPT-5.5 prompting guide explicitly states what to avoid: “unnecessary” absolutes like “always,” “never,” “must,” and “only.”
While older AI prompts may liberally sprinkle in phrases like “ALWAYS search the web before giving an answer” or “NEVER ask clarifying questions,” GPT-5.5 will perform better if you give it “decision rules,” like:
Ask a clarifying question only when missing information would materially change the answer or cause a high-risk mistake.
Then again, “always,” “never,” and other “absolute” words still have their uses in GPT-5.5 prompts, namely when you want the model to never, ever do something. It’s the “unnecessary” nevers that should be avoided.
Tell it when to stop
Sometimes the most important step in a complex task is knowing when you’re done, and that’s a key element to bake into a well-designed GPT-5.5 prompt, OpenAI says. Otherwise, you run the risk of the model falling into a lengthy and token-wasting loop (“let me dive into this further”).
Here’s an example of an “explicit stopping condition” straight from the GPT-5.5 prompting guide:
Resolve the user query in the fewest useful tool loops, but do not let loop minimization outrank correctness, accessible fallback evidence, calculations, or required citation tags for factual claims.
After each result, ask: “Can I answer the user’s core request now with useful evidence and citations for the factual claims?” If yes, answer.
The key is to prompt GPT-5.5 to ensure it’s giving a complete, correct, and well-grounded answer before deciding that it’s done.
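To see why the stopping condition matters, here’s a bare-bones version of the kind of tool loop agents run under the hood. The tool, the stop check, and the loop cap are all invented for illustration; the point is that the loop asks “can I answer now?” after every result instead of diving in again by default:

def search_tool(query: str) -> str:
    """Hypothetical tool call; a real agent would hit an API here."""
    return f"result for: {query}"

def can_answer_now(evidence: list[str]) -> bool:
    """Hypothetical stop check; in a real agent, the model itself judges
    whether the evidence supports a complete, cited answer."""
    return len(evidence) >= 2

evidence: list[str] = []
for step in range(5):  # hard cap so the loop can never run away
    evidence.append(search_tool(f"query {step}"))
    # Explicit stopping condition: answer as soon as the evidence suffices,
    # rather than looping for marginal extra context.
    if can_answer_now(evidence):
        break

print(f"Answered after {len(evidence)} tool loop(s) with evidence: {evidence}")

Without that check after each result, the only thing ending the loop is the hard cap, which is exactly the lengthy, token-wasting behavior OpenAI’s guidance is designed to prevent.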