System Prompting In ChatGPT - The 'Custom Instructions'

17-April-2025
Table of Contents
- Part 1: What Are System Instructions
- What "traits" should ChatGPT have?
- Anything Else ChatGPT Should Know?
Part 1: What Are System Instructions
The custom instructions provide the user with the ability to guide the behavior of ChatGPT.
Where Are These Documented?
The latest official docs (at the time of writing) are here.
How Much Room Is There To Work With?
To quote the OpenAI doc above:
> Is there a character limit for custom instructions? There are two questions, and each response has a 1500 character limit.
With the addition of one more question (also 1,500 characters), this brings the total user-writable characters to 4,500 (discounting the name field).
That works out to roughly 800 words, or around 1,100-1,200 tokens, spread over three blocks of about 250-300 words each.
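As a quick sanity check, a short script can verify each block stays within the 1,500-character limit and estimate the token budget. Note that the 4-characters-per-token ratio used below is a common rule of thumb, not an official figure:

```python
# Rough sanity check for custom-instruction block lengths.
# The ~4 chars/token ratio is a rule of thumb, not an exact tokenizer count.

CHAR_LIMIT = 1500  # per-question limit quoted in OpenAI's docs

def check_instructions(blocks: dict) -> dict:
    """Report character usage and a rough token estimate per block."""
    report = {}
    for name, text in blocks.items():
        report[name] = {
            "chars": len(text),
            "within_limit": len(text) <= CHAR_LIMIT,
            "est_tokens": round(len(text) / 4),
        }
    return report

# Placeholder blocks, just to show the shape of the check:
blocks = {
    "about_you": "I'm a developer. I prefer direct answers.",
    "response_style": "Be concise. Answer first, explanation second.",
    "anything_else": "Assume a Linux workstation unless stated otherwise.",
}
for name, stats in check_instructions(blocks).items():
    print(name, stats)
```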
The Mechanics Of System Prompting


System prompts are the familiar prompts that come before what the user writes.
They're prompts that apply across conversations, taking a generalist model and (in the assistant use case) focusing it on a specific purpose. An AI "assistant" (or what OpenAI popularised as a custom GPT) consists, basically, of a system prompt guiding an LLM. Assistants can be souped up with tools and called agents. But even in non-conversational workflows like automations, the system prompt remains highly relevant.
In conversational interfaces like ChatGPT they're especially important because they're wedged between the large language model and every single back-and-forth (turn) in the conversation.
Although in reality caching and other mechanisms make things more efficient, LLM APIs operate statelessly: the full prompt chain gets sent back with every request, even if you just say "cool answer" (see: why prompt caching is vital!)
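That statelessness can be sketched in a few lines. The message list, with the system prompt at the top, is rebuilt and resent on every turn; the role/content shape below mirrors chat-style APIs, though the exact payload fields vary by vendor:

```python
# Sketch of a stateless chat loop: every request carries the system prompt
# plus the full conversation history, even for a throwaway "cool answer" turn.

SYSTEM_PROMPT = "You are a concise technical assistant."

def build_payload(history: list, user_message: str) -> list:
    """Assemble the full message list that gets sent on each turn."""
    return (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + history
        + [{"role": "user", "content": user_message}]
    )

history = [
    {"role": "user", "content": "Why is my regex slow?"},
    {"role": "assistant", "content": "Likely catastrophic backtracking."},
]
payload = build_payload(history, "cool answer")
# The system prompt rides along even on this trivial turn:
print(payload[0]["role"], "->", len(payload), "messages sent")
```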
The Murky "Pre-Prompt"
Large language models come out of the box - post-training - having gone through a process of refinement.
But they're also kind of ... flat (download one of Meta's open-weights models and try it out of the box!). Besides focusing them on domain applications, system prompting is the art of imbuing them with a little bit of ... character. For general-purpose system prompts, the objective is to get past the monotone, robotic vibe you see from un-system-prompted, low-temperature models, while stopping short of ... roleplay or character configs. A sprinkle of personality, but not too much!
Sadly, how vendors do this is not always transparent.
OpenAI (unlike Anthropic) doesn't open-source its system prompts. But it seems reasonable to think that they maybe, potentially, possibly resemble those of Anthropic who do.
Anthropic's system prompts for Sonnet are here.
What's notable about the Anthropic system prompts: they're not short! The Claude 3.7 Sonnet system prompt comes in at 12,814 characters, just over 2,000 words. There have been various attempts to reverse-engineer OpenAI's system prompts, but ... let's avoid even more supposition.
But IF OpenAI's pre-prompts mirror Anthropic's in length (note: that's a huge assumption), the "to scale" effect might be approximately 2:1:

How The Custom Instruction Is Divided
Here is currently where you can divvy up your instructions:
| Section | Max Characters | Est. Tokens | Notes |
|---|---|---|---|
| What would you like ChatGPT to know about you to provide better responses? | 1,500 | ~400 | Context about yourself, your preferences, and your needs. |
| How would you like ChatGPT to respond? | 1,500 | ~400 | The tone, style, and format of ChatGPT's responses. |
| Is there anything else ChatGPT should know? | 1,500 | ~400 | A catch-all for any other instructions or context you want to provide. |
When prompting an LLM API (or Agents API):
- There is no UI system prompt (but there are safety mechanisms)
- System prompt and user prompt need to fit in context window but are otherwise unconstrained
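A minimal sketch of that second point: over the raw API, the only hard constraint is that the system prompt, conversation, and output reserve all fit within the model's context window. The 128k window and chars-per-token ratio below are illustrative assumptions, not per-model guarantees:

```python
# Illustrative context-window budget check. The 128k window and the
# ~4 chars/token estimate are assumptions, not per-model guarantees.

CONTEXT_WINDOW_TOKENS = 128_000  # hypothetical model limit

def fits_in_context(system_prompt: str, user_prompt: str,
                    reserved_for_output: int = 4_000) -> bool:
    """Rough check that both prompts plus an output reserve fit the window."""
    est_tokens = (len(system_prompt) + len(user_prompt)) // 4
    return est_tokens + reserved_for_output <= CONTEXT_WINDOW_TOKENS

print(fits_in_context("Be terse.", "Summarise this README."))  # small: fits
```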
Prompting For Brevity
One of the most common complaints about LLMs is that they're too verbose.
This is a feature, not a bug, of the training process. Models are trained to be helpful, and one way to be helpful is to provide comprehensive answers. But sometimes, you just want a quick answer.
System prompts are a great way to instruct the model to be more concise. For example:
"Be concise. Avoid unnecessary words. Get to the point quickly."
But this is a bit of a blunt instrument. It's better to be more specific about what you want:
"When answering questions, provide the answer first, then a brief explanation if necessary. Avoid unnecessary words and phrases."
Prompting For Formatting
Another common use of system prompts is to instruct the model to format its responses in a particular way. For example:
"When providing code examples, always include comments explaining what the code does."
Or:
"When listing items, use bullet points rather than numbered lists."
Prompting For Tone
You can also use system prompts to instruct the model to adopt a particular tone. For example:
"Respond in a friendly, conversational tone."
Or:
"Respond in a formal, professional tone."
Negative Prompting
An interesting technique is to use negative prompting - telling the model what NOT to do. For example:
"Do not use emojis in your responses."
Or:
"Do not apologize for being an AI."
This can be particularly effective when combined with positive prompting. For example:
"Focus solely on delivering and debugging code. Keep explanations and all extraneous words to an absolute minimum."
And the negative prompt:
"When troubleshooting, your sole focus is on resolving the user's problem. Providing advice on security best practices is out of scope. You never interject with security advice, even when you know the user is doing something risky."
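One way to keep such paired instructions manageable is to compose the system prompt from separate "do" and "don't" lists. This is just a sketch of that habit; the directive strings are the examples from above:

```python
# Compose a system prompt from positive and negative directives.
# The directive strings are illustrative, taken from the examples above.

positives = [
    "Focus solely on delivering and debugging code.",
    "Keep explanations and all extraneous words to an absolute minimum.",
]
negatives = [
    "Do not use emojis in your responses.",
    "Do not interject with security advice, even when the user is doing something risky.",
]

def compose_system_prompt(do: list, dont: list) -> str:
    """Join directives into one newline-separated system prompt."""
    return "\n".join(do + dont)

system_prompt = compose_system_prompt(positives, negatives)
print(system_prompt)
```

Keeping the lists separate also makes it easy to trim either side when you bump against the character limit.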
What "traits" should ChatGPT have?
This one is actually more fun than the potentially futile process of trying to cram specific formatting directives into a couple of hundred words.
My longstanding belief is that one of the most valuable uses of system prompting is instructing for a brevity that is otherwise challenging to achieve in AI tools.
This can easily be taken too far, although it's harder to do this with conversational models.
When accessing LLMs via APIs, you can force a model to become truly monosyllabic, or use them for dorky jokes (yes, I'm a prolific user of this) by instructing them to refuse to answer the user's prompts, or to confabulate that they were on a lunch break which the user has rudely interrupted.
In the context of ChatGPT, some interesting ideas for the traits question involve trying to dehumanize the model.
This might strike some as controversial, but many jaded cynics (myself included) find the overly enthusiastic, overly familiar, rather strange tone that ChatGPT has taken on to be off-putting.
Mitigating this with a negative prompt is more art than science, but you could try something along the lines of: "Focus on providing thorough but direct advice and information to the user. Avoid unnecessary enthusiasm. Avoid using emojis!" A few very calculated instructions go a long way.
Anything Else ChatGPT Should Know?
The final question in the custom instructions questionnaire is actually the hardest to know what to do with: as AI evolves rapidly, ChatGPT has introduced features that significantly enhance its memory recall.
The most obvious use of the final question would be to provide some fundamental context data, in essence using it as a sort of fallback for the memories module.
The typical use of this section in LLM frontends is approximately that. Sometimes called "user info," it is usually advised to contain the bare minimum of information about the user that provides essential context: where the user lives, their name, and so on.
As memory systems like ChatGPT's become more mature and reliable, the pressure to use these modules effectively is greatly reduced.
My current use for this section is to provide, as tightly as possible, the context data that will guide the model through my most frequent prompting. Because I use AI tools predominantly for tech-related things like debugging and exploration, I put my hardware specs and software stack right after my essential details.
It's worth noting that everything in prompt engineering is model-dependent and variable. A pitfall of this user-info pattern is that the model can reach for that context on every single response, to a ridiculous degree.
For example, after adding a system prompt telling a model (GPT, via its API) that my name is Daniel and I live in Jerusalem, it would end every single response with a note contextualizing the answer to my specific situation in Jerusalem, even when the results were comically absurd.
But in the case of ChatGPT, which has been carefully engineered for a smooth chat experience, the road is a lot less bumpy.
A simple mechanism for filling out this section is to use a large language model itself, given that they excel at language processing. You might, for example, record a three-minute voice note about what traits you like and dislike in an AI model, another summarizing your life story, and then prompt the model with the transcripts, asking it to condense each into precisely 290 words (the extra 10 being there just to give a bit of buffer space).
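A sketch of that workflow, assuming you already have a transcript string in hand; the prompt wording and the 290-word target are just the suggestions from above:

```python
# Build a condensation prompt from a voice-note transcript.
# The 290-word target leaves ~10 words of buffer under a ~300-word budget.

WORD_TARGET = 290

def condensation_prompt(transcript: str, section: str) -> str:
    """Prompt asking a model to compress a transcript for one CI section."""
    return (
        f"Condense the following transcript into at most {WORD_TARGET} words, "
        f"written as the '{section}' block of ChatGPT custom instructions. "
        "Keep only durable facts and preferences.\n\n"
        f"Transcript:\n{transcript}"
    )

prompt = condensation_prompt(
    "I like terse answers. I dislike emojis and filler.",
    "How would you like ChatGPT to respond?",
)
print(prompt[:80])
```

Paste the result into the matching custom-instructions field, then trim by hand if the model overshoots the character limit.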
Use The Space
Used carefully, strategically and selectively, the custom instructions are a valuable part of the ChatGPT UI, not to be neglected. Coupled with the advances in memory and many other improvements, they provide a pathway to a more consistent, satisfying and productive user experience.