AI image generation has a consistency problem nobody talks about. Midjourney can produce a stunning visual in 30 seconds. The problem is the next 30 seconds. Run the same prompt twice and you get two completely different images. Run it across a five-image campaign and your "brand visuals" look like five different brands had a meeting. There is a fix. It is called Style Reference; it is built into Midjourney v7, and most marketers using Midjourney for branded content have either never used it or are using it wrong.
Style Reference (the --sref parameter) lets you lock a visual style across every image you generate. It is the single biggest lever for turning Midjourney from a creative random number generator into a reliable brand visual machine. The catch is that v7 changed how it works. Old --sref codes that produced one look in v6 produce something different in v7, and the documentation buried the fix.
This guide is for marketers, content creators, and brand designers who use Midjourney for actual work, not just exploration. You will leave with a step-by-step workflow for locking your brand style, the four parameter combinations that matter, and a copy-paste prompt structure that produces consistent visuals on the first try.
What is Midjourney Style Reference (--sref)?
Midjourney Style Reference is a parameter that tells Midjourney to apply the visual style of an existing image or a numerical style code to whatever subject you describe in your prompt. You add it to a prompt as --sref [URL] or --sref [number]. The model preserves your subject but borrows the colour palette, lighting, texture, and aesthetic direction from the reference. It is the closest thing Midjourney has to a brand template.
Style Reference comes in two flavours that matter for marketers. The first is image-based: you upload your existing brand visual to Midjourney, copy its URL, and use --sref [URL] in subsequent prompts. The model studies the style of that image and applies it to new subjects. This is how you lock your brand look using assets you already own.
The second is code-based: Midjourney has an internal library of pre-built styles, each represented by a long number. Adding --sref 1234567890 forces the model to render in that specific aesthetic. Code-based styles are reproducible across team members because anyone using the same code gets the same look. Image-based styles depend on the URL remaining accessible, and the style transfer can be more interpretive than with a code.
The critical thing nobody mentions in the basic tutorials: Style Reference does not make every image identical. It makes them feel like they belong to the same family. Your subjects can change. Your composition can change. Your lighting and palette and aesthetic stay consistent. That is what brand consistency actually means in image generation.
How does Style Reference work in Midjourney v7?
In Midjourney v7, Style Reference uses a new model (--sv 6 by default) that interprets style differently from v6. Old style codes that produced one look in v6 may produce something different in v7. To use legacy v6 codes, append --sv 4 to the prompt. To use the v7 style system, omit --sv or set it explicitly to 6. The default v7 behaviour gives broader, richer style transfer than v6.
The v7 update introduced two new things worth knowing. First, --sref random generates a unique style code for you; you can then lock that code across a batch of prompts. This is a fast way to discover a brand-friendly aesthetic without browsing endless code libraries. Second, the style weight parameter (--sw) now ranges from 0 to 1000, with a default of 100. Higher weights pull harder toward the reference style at the cost of subject fidelity.
Marketing-relevant practical note: V7 also rebuilt how text prompts and style references interact. Whatever you write in the prompt now describes intent rather than acting as a list of keywords. So "luxury watch on minimalist surface, soft daylight" lands more reliably than "luxury, watch, minimalist, surface, soft, daylight, photo." Style Reference 2.0, included in v7, is what allows you to lock a style across an entire campaign.
The compatibility caveat: Style Reference codes from before June 17, 2025 may not produce the same outputs in v7 with the default --sv 6 setting. If you have a brand library of pre-2025 codes, add --sv 4 to keep them working until you migrate. Treat the migration as a planned exercise: re-render two or three reference images per code and decide which to keep.
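If you manage a registry of saved style codes, the migration rule above is easy to encode. A minimal sketch, assuming you record a creation date alongside each code; the helper name `sref_suffix` and the registry shape are hypothetical, but the June 17, 2025 cutoff and the --sv 4 fallback come from the caveat above:

```python
from datetime import date

# Cutoff discussed above: codes saved before this date need --sv 4
# to reproduce their original v6 look under v7.
V7_CUTOFF = date(2025, 6, 17)

def sref_suffix(code: int, created: date) -> str:
    """Return the --sref fragment for a saved style code, appending
    the legacy --sv 4 flag when the code predates the v7 cutoff."""
    fragment = f"--sref {code}"
    if created < V7_CUTOFF:
        fragment += " --sv 4"
    return fragment
```

Running your whole code library through a check like this tells you at a glance which codes need the legacy flag and which are safe to use as-is.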
How do you use Style Reference for brand consistency?
To use Style Reference for brand consistency, upload three to five strong examples of your brand visuals to Midjourney, copy the URLs, and use them as --sref [URL1] [URL2] [URL3] in your prompts. This shows the model your brand aesthetic from multiple angles. For team-wide consistency, run --sref random until you find a code you like, then standardise that code across everyone working on the campaign.
The workflow that actually works in practice has four steps. First, gather your brand reference set: three to five existing images that capture the look you want. Hero photography, product shots, or social posts that already feel "on brand." If you do not have these yet, create them first by iterating on a single image until you love the look.
Second, upload the references to Midjourney and copy each URL. The order does not matter much, but the variety does. Three nearly identical images are weaker than three images of different subjects in the same visual style. Variety teaches the model what is style versus what is subject.
Third, build a base prompt template. The recommended structure: [subject and action] in [setting], [lighting], [mood] --sref [URL1] [URL2] [URL3] --sw 200 --ar 16:9. The --sw 200 pulls a bit harder toward your style than the default 100. The --ar 16:9 sets the aspect ratio. Keep this template fixed and only change the subject part.
Fourth, validate consistency. Generate five different prompts using the same --sref set. Lay the outputs side by side. They should feel like a campaign, not five random images. If they feel disconnected, increase --sw to 300 or 400. If your subjects look distorted, decrease --sw to 100. The right number is workflow-specific.
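The four-step workflow boils down to a fixed template with one variable slot. A minimal sketch of that idea; the function name `build_prompt` and the field split are my own convention, not Midjourney syntax, but the output format matches the template described above:

```python
def build_prompt(subject: str, setting: str, lighting: str, mood: str,
                 refs: list[str], sw: int = 200, ar: str = "16:9") -> str:
    """Assemble a Midjourney prompt from a fixed brand template.
    Only the subject-related fields change between renders; the
    --sref set, --sw, and --ar stay locked for the whole campaign."""
    ref_block = " ".join(refs)
    return (f"{subject} in {setting}, {lighting}, {mood} "
            f"--sref {ref_block} --sw {sw} --ar {ar}")
```

Keeping the parameters as defaults makes the locked values visible in one place, so nobody on the team quietly drifts to a different --sw.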
What are the four most useful parameter combinations?
The four most useful Midjourney v7 Style Reference parameter combinations for brand work are: --sref [URL] --sw 100 (default style transfer), --sref [URL] --sw 300 (heavy style lock), --sref [URL1] [URL2] [URL3] --sw 200 (multi-reference average), and --sref random --sw 200 (style discovery). Each serves a different stage of brand development, from initial style discovery to final production.
Combination 1: Default style transfer. Use --sref [URL] --sw 100 when you want Midjourney to clearly take inspiration from your reference but still have creative latitude. This is the right setting for exploration phases when you are figuring out the style direction.
Combination 2: Heavy style lock. Use --sref [URL] --sw 300 when you have nailed your style and need every output to look like it came from the same shoot. This is for production phases when consistency matters more than variety. Subject quality occasionally suffers at this weight; have a fallback.
Combination 3: Multi-reference average. Use --sref [URL1] [URL2] [URL3] --sw 200 when you want Midjourney to interpolate between several reference images and create a unified aesthetic that captures all of them. This is the gold standard for campaign work because it prevents over-fitting to any single reference.
Combination 4: Style discovery. Use --sref random --sw 200 when you do not have a brand style yet and need to find one. Run it 10 times across the same subject. Note the codes that produce visuals you love. Use those codes as your locked style going forward.
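If your team standardises on these four combinations, it helps to give each stage a name and look the recipe up rather than retype it. A sketch under that assumption; the stage names and the `recipe` helper are hypothetical, while the parameter values are the four combinations listed above:

```python
# Stage names are our own labels; the --sw values and reference counts
# are the four combinations described above.
RECIPES = {
    "exploration": "--sw 100",               # default style transfer, one URL
    "production":  "--sw 300",               # heavy style lock, one URL
    "campaign":    "--sw 200",               # multi-reference average, three URLs
    "discovery":   "--sref random --sw 200", # no brand refs yet
}

def recipe(stage: str) -> str:
    """Look up the validated parameter recipe for a campaign stage."""
    return RECIPES[stage]
```

A lookup like this belongs in the shared brand document, so "use the production recipe" means the same thing to everyone.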
What goes wrong, and how do you fix it?
The four most common Style Reference failures are: subjects look right but style drifts (fix: increase --sw), style is locked but subjects look distorted (fix: decrease --sw), the reference style overpowers subject detail (fix: simplify the subject description or reduce --sw), and v7 with old v6 codes produces nothing recognisable (fix: add --sv 4 to use the legacy interpretation). Each failure has a single-parameter fix once you know what to look for.
Failure 1: Style drift across batches. You ran the same --sref across 10 prompts and the first few are on-brand but later ones drift. Cause: the model is interpreting your style description loosely. Fix: increase --sw from 100 to 200 or 300. Trade-off: heavier weights produce slightly less varied outputs.
Failure 2: Distorted subjects. Style is locked perfectly but the people, products, or objects look wrong: extra fingers, melted faces, weird proportions. Cause: --sw is too high and the model is sacrificing subject fidelity. Fix: drop --sw to 100 or even 50. Generate a few more variations to average out the distortion.
Failure 3: Style swallows the subject. The image is beautiful but the subject is barely visible or feels secondary. Cause: the reference style has strong texture or composition that competes with the subject. Fix: simplify the subject description, increase the subject prominence in the prompt, or use a less aggressive reference image.
Failure 4: Old codes produce nothing recognisable. Codes that worked in v6 produce wildly different output in v7. Cause: the v7 default --sv 6 interprets style codes differently. Fix: add --sv 4 to your prompt for one-off use of legacy codes, or plan a migration to v7-native codes for any code you use heavily.
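Three of the four failures are fixed by moving --sw in one direction or the other, which is easy to capture as a rule of thumb. A minimal sketch; the step size of 100 and the failure labels are illustrative assumptions, not official guidance, while the 0 to 1000 --sw range comes from the v7 notes above:

```python
def adjust_sw(current_sw: int, failure: str) -> int:
    """Suggest a new --sw value for the common failure modes above.
    Step size of 100 is an illustrative convention, clamped to the
    documented 0-1000 range."""
    if failure == "style_drift":        # style not holding across a batch
        return min(current_sw + 100, 1000)
    if failure == "distorted_subject":  # subject fidelity suffering
        return max(current_sw - 100, 50)
    return current_sw  # other failures need prompt changes, not --sw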
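Three of the four failures are fixed by moving --sw in one direction or the other, which is easy to capture as a rule of thumb. A minimal sketch; the step size of 100 and the failure labels are illustrative assumptions, not official guidance, while the 0 to 1000 --sw range comes from the v7 notes above:

```python
def adjust_sw(current_sw: int, failure: str) -> int:
    """Suggest a new --sw value for the common failure modes above.
    The step size of 100 is an illustrative convention, clamped to
    the documented 0-1000 range with a practical floor of 50."""
    if failure == "style_drift":        # style not holding across a batch
        return min(current_sw + 100, 1000)
    if failure == "distorted_subject":  # subject fidelity suffering
        return max(current_sw - 100, 50)
    return current_sw  # other failures need prompt changes, not --sw
```

One adjustment per batch is enough; change --sw and the prompt wording at the same time and you will not know which fix worked.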
What does a real brand workflow prompt look like?
A real Midjourney v7 brand workflow prompt looks like a fixed template where only the subject changes between renders. Below is a copy-paste-ready prompt structure for a luxury watch brand that wants consistent product hero shots across 20 SKUs. The structure works for any brand once you swap in your own reference URLs and subject descriptions. Test it once. Then reuse it forever.
Try this prompt template:
[Subject and action], [setting], [lighting and mood] --sref [Brand Reference URL 1] [Brand Reference URL 2] [Brand Reference URL 3] --sw 200 --ar 16:9 --v 7
Concrete example for a luxury watch brand:
"Steel chronograph watch on dark slate surface with single beam of overhead light, deep shadows, editorial product photography mood --sref https://your-brand-image-1.png https://your-brand-image-2.png https://your-brand-image-3.png --sw 200 --ar 16:9 --v 7"
For your second SKU, change only the subject phrase: "Gold dress watch on cream linen surface with soft side light, warm shadows, editorial product photography mood --sref [same URLs] --sw 200 --ar 16:9 --v 7"
Why this works: the --sref set teaches Midjourney your brand aesthetic. The --sw 200 keeps consistency strong without distorting subjects. The --ar 16:9 gives you horizontal hero crops. The fixed structure means anyone on your team produces the same look, even if their writing style differs. You have built a brand template inside Midjourney that anyone can reuse.
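Scaling this to 20 SKUs is a loop over subject phrases with everything else frozen. A sketch using the luxury-watch example above; the placeholder URLs are the hypothetical ones from the example, and only two SKUs are shown:

```python
# Placeholder reference URLs, as in the example above.
REFS = " ".join([
    "https://your-brand-image-1.png",
    "https://your-brand-image-2.png",
    "https://your-brand-image-3.png",
])

# Everything after the subject is frozen for the whole campaign.
TEMPLATE = "{subject} --sref {refs} --sw 200 --ar 16:9 --v 7"

SKUS = [
    "Steel chronograph watch on dark slate surface with single beam of "
    "overhead light, deep shadows, editorial product photography mood",
    "Gold dress watch on cream linen surface with soft side light, warm "
    "shadows, editorial product photography mood",
]

prompts = [TEMPLATE.format(subject=s, refs=REFS) for s in SKUS]
```

Paste each generated line into Midjourney as-is; the only human decision left per SKU is the subject phrase.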
How do you build a long-term brand style library?
Build a long-term Midjourney brand style library by saving your three to five core reference URLs in a shared document, recording the exact --sw and --ar values that work for your brand, and exporting any --sref random codes you discover into a named registry. Anyone joining the team can adopt the brand style in five minutes by pasting these values into their prompt template. The library is the documentation, not Midjourney itself.
The structure that works: a one-page brand visual document with three sections. Section one lists your locked --sref URLs, with thumbnails so people can see what they are referencing. Section two documents the parameter recipe (sw, ar, v) you have validated for the brand. Section three lists any --sref random codes you have discovered, with a sample image of each.
The reason to maintain this externally is that Midjourney does not give you good library tools. The asset library exists but is not built for team use. A shared Notion page, Google Doc, or simple Confluence page outperforms it for brand consistency. Treat this document as the source of truth and update it whenever you validate a new style approach.
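The three-section document maps cleanly onto a small structured file, which has the advantage of living in version control alongside your other brand assets. A minimal sketch, assuming a JSON layout of my own design with placeholder URLs and a made-up code name:

```python
import json

# Hypothetical registry mirroring the three-section document above:
# locked reference URLs, the validated parameter recipe, and any
# codes discovered via --sref random.
registry = {
    "refs": [
        "https://your-brand-image-1.png",
        "https://your-brand-image-2.png",
        "https://your-brand-image-3.png",
    ],
    "recipe": {"sw": 200, "ar": "16:9", "v": 7},
    "discovered_codes": {"warm-editorial": 1234567890},  # placeholder
}

# Serialise so the registry can live in git or be pasted into Notion.
registry_json = json.dumps(registry, indent=2)
```

A new team member loads this file, plugs the values into the prompt template, and is on-brand from their first render.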
The compounding payoff: in month one, building the document feels slow. By month three, every new team member produces on-brand visuals on day one because they have a recipe to follow. By month six, you stop generating "off-brand surprises." That is the moment Midjourney transitions from a creative tool to a production tool.
Conclusion: Style Reference is what makes Midjourney usable for brands
Without Style Reference, Midjourney is a creative tool. With Style Reference, it becomes a brand production tool. The difference between a marketer who fights inconsistency every week and a marketer who ships on-brand visuals at scale comes down to how seriously they treat the --sref workflow. The first time you set it up takes an afternoon. After that, every new prompt you write inherits the work.
The mistake most teams make is treating Style Reference as an optional flourish. It is not. It is the single feature that turns AI image generation from "playful experiment" into "reliable brand asset machine." If you are paying for Midjourney and producing visuals for clients or for your own brand, the --sref setup is not a nice-to-have. It is the only way to get predictable output.
AI can feel cold; we understand your challenges even better. UD has walked alongside you for 28 years, making technology a companion with warmth. In our 28 years helping HK businesses adopt new technology, the same pattern repeats: tools become valuable only when paired with the workflow that makes them reliable. Midjourney plus Style Reference is exactly that.
Make Midjourney Work for Your Brand
You have the technique. The next step is mapping it to your actual brand assets, building the right reference library, and integrating Midjourney into your team's content production workflow. We will walk you through every step, from auditing your existing visuals to a fully documented brand style recipe your whole team can use.