System Prompt Factory: a modular system prompt generator
A Streamlit app for building custom LLM system prompts by mixing and matching configurable identity, style, and output parameters.
I spend an unreasonable amount of time crafting system prompts for LLMs. Whether I'm setting up a new assistant in Open Web UI, configuring a chatbot, or tuning the behaviour of an API-connected model, the system prompt is where the personality and capabilities get defined — and getting it right involves juggling a dozen parameters that interact in non-obvious ways. How formal should the tone be? How deep should it go on technical topics? Should it ask clarifying questions or make reasonable assumptions? Should it adapt to the user's geolocation? Each of these is a separate decision, and when you're writing them as monolithic blocks of text, iterating on any single parameter means rewriting the whole thing. System Prompt Factory is my attempt to make that process modular and repeatable.
A system prompt generation UI that combines model and user characteristics to generate more targeted (but still general) system prompts for LLMs
A prompt construction kit, not a template library
System Prompt Factory is a Streamlit web application that lets you build system prompts by mixing and matching configurable parameters across three categories. First, you configure the AI assistant's core identity: its name, personality type (from brusque to empathetic to flamboyant), backstory, formality level, expertise depth, and response style. Then you set user preferences: your name, occupation, cultural context, worldview, and how you like to learn and receive information. Finally, you choose output format preferences for documentation style and data formatting. The app combines all your selections into a coherent system prompt — and this is the key part — using an LLM (Anthropic's Claude Sonnet under the hood) to weave the parameters into natural, flowing prompt text rather than just concatenating bullet points. The result reads like a hand-crafted system prompt, but one you assembled from components in about thirty seconds.
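The assembly step can be sketched as follows — a minimal illustration, not the app's actual code. The field names and default values here are invented for the example; the real point is that the selected parameters become an instruction to the generator model, which is asked to produce flowing prose rather than a bullet list:

```python
from dataclasses import dataclass, asdict

@dataclass
class PromptConfig:
    """One field per UI control; names and defaults are illustrative,
    not the app's actual schema."""
    assistant_name: str = "Ada"
    personality: str = "empathetic"      # e.g. brusque / empathetic / flamboyant
    formality: str = "semi-formal"
    expertise_depth: str = "deep"
    user_name: str = "Sam"
    occupation: str = "data engineer"
    cultural_context: str = "general English"
    doc_style: str = "markdown"

def build_meta_prompt(cfg: PromptConfig) -> str:
    """Turn the selected parameters into an instruction for the generator
    LLM, which weaves them into a single coherent system prompt."""
    lines = [f"- {field.replace('_', ' ')}: {value}"
             for field, value in asdict(cfg).items()]
    return (
        "Write a coherent, natural-sounding system prompt for an LLM assistant.\n"
        "Incorporate every parameter below as flowing prose, not a bullet list:\n"
        + "\n".join(lines)
    )

print(build_meta_prompt(PromptConfig()))
```

The meta-prompt string would then be sent to the generator model (e.g. via the Anthropic API), whose response is the finished system prompt shown to the user.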
Why modularity changes everything
The practical difference between this approach and writing prompts from scratch is iteration speed. Want the same helpful personality but with more formal language? Flip one parameter and regenerate. Need to adjust for a different cultural context — say, switching from general English to Israel-aware responses? Change that setting without touching anything else. Experimenting with whether "brusque" or "empathetic" works better for technical support? Toggle it and compare the outputs. Each parameter change produces a complete, coherent prompt that reads naturally, not a Frankenstein assembly of disjointed instructions. Best of all, you don't need to install anything to try it: I've deployed it as a Hugging Face Space, so you can build prompts right in your browser. Source code is on GitHub.
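The flip-and-regenerate workflow amounts to deriving a new parameter set that differs in exactly one field, then rendering both versions for comparison. This is a toy sketch of that idea — the `render` function here is a deterministic stand-in for the LLM generation step (so the diff is easy to see), and all parameter names are illustrative:

```python
import difflib

def with_overrides(base: dict, **overrides) -> dict:
    """Derive a new parameter set, changing only the named fields."""
    return {**base, **overrides}

def render(params: dict) -> list[str]:
    """Toy stand-in for the LLM generation step: one line per parameter,
    so the effect of flipping a single setting shows up as a one-line diff."""
    return [f"{k.replace('_', ' ')}: {v}" for k, v in sorted(params.items())]

base = {
    "personality": "brusque",
    "formality": "casual",
    "cultural_context": "general English",
}
variant = with_overrides(base, personality="empathetic")

# Only the flipped parameter differs between the two rendered prompts.
for line in difflib.unified_diff(render(base), render(variant), lineterm=""):
    print(line)
```

In the real app the render step is an LLM call, so the two outputs differ in phrasing throughout — but each is still a complete, coherent prompt generated from one self-contained change.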