Selective AI Restyle
Restyle a photo’s background while keeping the subject untouched. The subject stays photorealistic while the surroundings transform into watercolor, cyberpunk neon, fantasy landscape, or any style you can describe in a prompt.
This workflow demonstrates the core “AI is just another node” philosophy: two AI nodes cooperate through traditional masking and compositing to achieve something impossible in standalone AI tools. Where other apps force you to accept whatever the AI does to the entire image, ArcBrush lets you surgically control which parts get the AI treatment and which stay real.
Target audience: Photographers, social media creators, and concept artists who want selective stylization without losing subject fidelity.
What You’ll Need
- A portrait or any photo with a clear subject (person, pet, product, vehicle)
- A style description in mind for the background (e.g., “watercolor painting”, “cyberpunk city at night”)
- AI credits for two AI node executions per image — one for background removal, one for the restyle (see Credits and Pricing)
The Pipeline
Image In ─→ AI Remove Background ─→ Mask Refine ─→ (mask) ──────┐
    │                │                                          ▼
    │                └───────→ (overlay) ─────────────────→ Composite ─→ Glow (optional) ─→ Export Image
    │                                                           ▲
    └──────→ AI Image Edit ───→ (background) ───────────────────┘

Image In feeds two branches. Branch A produces a clean alpha cutout via AI Remove Background, then runs that cutout’s image directly through Mask Refine — no Mask From Image step needed, because Mask Refine accepts an image input and derives the mask from its alpha channel automatically. Branch B restyles the entire image. Composite then reassembles them: the original subject sits over the restyled background, gated by the refined mask.
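The routing above can be sketched as plain array math. The following is an illustrative numpy mock-up, not ArcBrush code: both AI nodes are stubbed with placeholder functions (the "subject" is faked as a centered circle, the "restyle" is a color inversion), and all function names are assumptions for the sketch.

```python
import numpy as np

def ai_remove_background(img_rgb):
    """Stand-in for AI Remove Background: returns an RGBA cutout.
    The 'subject' here is faked as a centered circle of alpha = 1."""
    h, w, _ = img_rgb.shape
    yy, xx = np.mgrid[:h, :w]
    alpha = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) < (min(h, w) / 3) ** 2
    return np.dstack([img_rgb, alpha.astype(np.float32)])

def ai_image_edit(img_rgb):
    """Stand-in for AI Image Edit: 'restyle' = invert the colors."""
    return 1.0 - img_rgb

def composite(background, overlay_rgba, mask):
    """Composite node with Normal blend and Opacity 1.0."""
    m = mask[..., None]
    return background * (1.0 - m) + overlay_rgba[..., :3] * m

photo = np.random.default_rng(42).random((64, 64, 3)).astype(np.float32)

cutout = ai_remove_background(photo)   # Branch A: subject cutout
mask = cutout[..., 3]                  # Mask Refine derives this from alpha
restyled = ai_image_edit(photo)        # Branch B: whole-image restyle
result = composite(restyled, cutout, mask)
# Inside the mask, pixels match the original photo; outside, the restyle.
```

The key point the sketch shows: the same source image feeds both branches, and only the mask decides which branch wins at each pixel.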
Step-by-Step Walkthrough
1. Image In — Load Your Photo
Add an Image In node and load a portrait or any photo with a clear subject against a distinguishable background. Higher-resolution photos give the AI more detail to work with in both the masking and restyling steps.
For best results, choose a photo where the subject has well-defined edges. Soft, out-of-focus backgrounds are fine — the AI handles those well — but avoid photos where the subject blends into the background at similar colors and textures.
2. AI Remove Background — Extract the Subject
Add an AI Remove Background node and connect Image In’s output to it.
Set Model to Bria RMBG 2.0 for the best edge quality, especially on hair, fur, and fine details. The standard rembg model works for clean silhouettes but struggles with wispy edges.
This produces an RGBA cutout: the subject on a transparent background. The alpha channel of this output is what drives the mask in the next step.
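In array terms, the RGBA cutout is a height × width × 4 array whose fourth channel is exactly the mask the next step consumes. A minimal numpy illustration (the cutout here is random stand-in data, not a real node output):

```python
import numpy as np

# Stand-in RGBA cutout: random RGB plus a fake hard alpha channel.
rng = np.random.default_rng(0)
cutout = rng.random((8, 8, 4)).astype(np.float32)
cutout[..., 3] = cutout[..., 3] > 0.5      # booleans cast to 0.0 / 1.0

# The subject mask is simply the fourth (alpha) channel.
mask = cutout[..., 3]                      # 1.0 = subject, 0.0 = background
```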
3. Mask Refine — Derive and Clean the Subject Mask
Add a Mask Refine node and connect AI Remove Background’s output to its image input pin (not the mask input).
Mask Refine accepts either a Mask or an Image. When you wire an Image, it derives the mask internally from the image’s alpha channel. That means you don’t need a separate Mask From Image step here — one node does both jobs.
This step prevents the restyled background from bleeding into the subject’s edges. Without it, you’ll often see a thin halo of stylized pixels around the subject.
Key parameters:
- Erode / Dilate: -2 (negative values dilate / expand the mask). This pushes the mask boundary 2 pixels outward, creating a safety buffer that catches any edge pixels the AI might have missed.
- Feather: 2.0 for a natural, gradual transition at the mask edge. This prevents a hard cutout look where the photorealistic subject meets the stylized background.
- Operation: None for most subjects. Switch to Close if you see small holes inside the subject’s mask (rare with Bria RMBG 2.0, more common with the standard rembg model).
- Invert: leave off. The mask should be white where the subject is and black where the background is.
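For intuition about what the dilate and feather settings do, here is a rough numpy sketch of comparable operations: a repeated 4-neighbour max filter for dilation and a separable box blur for feathering. This is an approximation for illustration only, not ArcBrush’s implementation.

```python
import numpy as np

def dilate(mask, px):
    """Expand white regions by px pixels (repeated 4-neighbour max filter)."""
    out = mask.copy()
    for _ in range(px):
        p = np.pad(out, 1, mode="edge")
        out = np.max(np.stack([p[1:-1, 1:-1], p[:-2, 1:-1], p[2:, 1:-1],
                               p[1:-1, :-2], p[1:-1, 2:]]), axis=0)
    return out

def feather(mask, radius):
    """Soften the mask edge with a separable box blur of width 2*radius + 1."""
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    for axis in (0, 1):
        mask = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"),
                                   axis, mask)
    return mask

mask = np.zeros((32, 32), dtype=np.float32)
mask[8:24, 8:24] = 1.0                     # hard-edged subject mask

refined = feather(dilate(mask, 2), 2)      # Erode/Dilate: -2, Feather: 2.0
```

After the two passes, the mask is slightly larger than the subject and its edge ramps smoothly from 1.0 to 0.0, which is exactly the buffer-plus-soft-transition the parameters above describe.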
4. AI Image Edit — Restyle the Entire Image
Add an AI Image Edit node and connect the original Image In output directly to it — not the Remove BG output. You want the AI to restyle the complete image including background context, since that context helps the AI understand the scene.
Set Edit Instruction to describe your target style. Be specific about texture and medium:
- "watercolor painting with visible brushstrokes and paper texture, soft washes of color"
- "cyberpunk neon city at night, rain-soaked streets, holographic advertisements"
- "Studio Ghibli-style fantasy landscape, lush painted meadows and dramatic clouds"
- "oil painting with thick impasto brushwork and warm gallery lighting"
Set Model Tier to Standard (FLUX.2 Pro) for a good balance of quality and credit cost. Use Fast (FLUX.2 Turbo) when iterating on prompts, then switch to Standard or Quality for the final render.
Set Seed to a specific number (e.g., 42) if you want reproducible results while tweaking other parameters. Use -1 for random variation each time.
The AI restyles the entire image, including the subject. That’s fine — the next step replaces the subject area with the original.
5. Composite — Reassemble Subject Over Restyled Background
Add a Composite node with these connections:
- Background: Connect the AI Image Edit output (the fully restyled image).
- Overlay: Connect the AI Remove Background output (the original subject cutout).
- Mask: Connect the Mask Refine output (the expanded, feathered mask).
Key parameters:
- Blend Mode: Normal. The subject simply replaces the corresponding area of the restyled background.
- Opacity: 1.0 for a clean swap. Lower values blend the original subject with the restyled version, which can create an interesting partially stylized look if that’s what you want.
- Resize Mode: None, since both inputs come from the same source image and share the same dimensions.
- Mask Fit: None; the mask was derived from the same source, so it already matches.
The subject’s own alpha channel masks it, but the refined mask provides the dilation and feathering buffer that ensures no restyled pixels leak through around the edges.
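The per-pixel math of a Normal-mode composite reduces to a mask-weighted linear blend. A small numpy sketch (illustrative; the function name and opacity handling are assumptions, not ArcBrush internals):

```python
import numpy as np

def composite_normal(background, overlay, mask, opacity=1.0):
    """Normal blend: mask-weighted mix of overlay over background."""
    m = np.clip(mask * opacity, 0.0, 1.0)[..., None]
    return background * (1.0 - m) + overlay * m

bg = np.full((4, 4, 3), 0.2, dtype=np.float32)       # restyled background
fg = np.full((4, 4, 3), 0.9, dtype=np.float32)       # original subject
mask = np.zeros((4, 4), dtype=np.float32)
mask[1:3, 1:3] = 1.0                                 # subject region

out = composite_normal(bg, fg, mask)                 # clean swap (Opacity 1.0)
half = composite_normal(bg, fg, mask, opacity=0.5)   # partially stylized look
```

At opacity 0.5 every masked pixel lands halfway between the original subject and the restyled version, which is the "partially stylized" effect mentioned above.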
6. Glow (Optional) — Blend the Subject into the Scene
Add a Glow node between Composite and Export Image for a subtle edge glow that helps the photorealistic subject sit naturally in the stylized environment.
Key parameters:
- Threshold: 0.7 so that only the bright edges near the subject boundary produce glow, not the entire image.
- Radius: 8-12, depending on image resolution. Larger images need a larger radius.
- Intensity: 0.3-0.5 for subtlety. You want a hint of color bleed, not a neon outline.
- Tint: Set to match the dominant color of your restyled background. For a warm watercolor scene, try a warm peach. For cyberpunk neon, try a saturated violet. Matching the tint to the background sells the illusion that the subject is lit by the stylized environment.
- Blend Mode: Add for a luminous glow, or Soft Light for a more subtle color shift.
Skip this step entirely if you want a hard-edged collage look where the contrast between real and stylized is the point.
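Conceptually, the glow pass amounts to: threshold the bright pixels, blur them into a halo, tint and scale that halo, then Add-blend it back over the composite. A rough numpy sketch, with a box blur standing in for whatever blur the Glow node actually uses (the function and its defaults, including the warm-peach tint, are illustrative assumptions):

```python
import numpy as np

def box_blur(img, radius):
    """Separable box blur; a crude stand-in for the node's real blur."""
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    for axis in (0, 1):
        img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"),
                                  axis, img)
    return img

def glow(img, threshold=0.7, radius=8, intensity=0.4, tint=(1.0, 0.8, 0.6)):
    luma = img.mean(axis=-1)
    bright = np.where(luma > threshold, luma, 0.0)    # Threshold
    halo = box_blur(bright, radius)[..., None]        # Radius
    tinted = halo * np.asarray(tint) * intensity      # Tint and Intensity
    return np.clip(img + tinted, 0.0, 1.0)            # Add blend mode

comp = np.zeros((32, 32, 3), dtype=np.float32)
comp[16, 16] = 1.0                                    # one bright subject pixel
result = glow(comp)                                   # halo spreads around it
```

Raising the threshold shrinks what glows; raising the radius spreads the halo further into the stylized background.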
7. Export Image — Save the Final Result
Add an Export Image node and connect the final output (Glow if you used it, Composite if you didn’t).
Export as PNG for maximum quality, or JPEG at quality 90+ for smaller file sizes suitable for social media.
Result
A composite image where the subject remains perfectly photorealistic while the background has been transformed into any artistic style you described. The mask-based approach means you get pixel-precise control over the boundary — no AI hallucination on the subject’s face, no style bleeding into skin tones, no loss of fine detail on hair or clothing.
This is fundamentally different from what standalone AI image tools can do. They operate on the entire image with no spatial control. ArcBrush’s node graph lets you route different parts of the image through different processing paths and recombine them with traditional compositing precision.
Variations
- Restyle the subject instead of the background by enabling Invert on the Mask Refine node. The background stays photorealistic while the subject becomes a painting. Great for editorial illustrations or album art.
- Try different AI models for different style qualities. Use Fast (FLUX.2 Turbo) for quick iteration while dialing in your prompt, then Quality (FLUX.2 Max) or Advanced (Nano Banana 2) for the final high-fidelity render. All four tiers accept the optional ref_1/ref_2/ref_3 reference image pins, so you can also feed in style references alongside the prompt.
- Stack multiple restyle passes by feeding the Composite output into another AI Image Edit node for layered stylization. Each pass refines or transforms the result further, letting you build up complex mixed-media looks.
- Use Mask Paint for manual touch-ups by inserting a Mask Paint node after Mask Refine. Paint white to include more of the subject, or black to expose more of the restyled background. Useful when the AI mask misses accessories like hats or held objects.