Theoretically Media
Today we’re taking a look at Midjourney’s long-awaited Inpainting feature. This is a comprehensive deep dive into the tool, featuring tips, tricks, and a few secrets I haven’t seen anyone else talking about!
We’ll start by defining what Inpainting is, before showing you the importance of having the “Remix” mode turned on in Midjourney. Then we’ll run through the Inpainting (Vary Region as Midjourney likes to call it) module.
I also have some helpful tips, and I’ll show you some current limitations of Inpainting. But most importantly, I’ll show you what to do when Inpainting simply won’t respond to your prompt! Plus, we’ll go over which dash-dash commands actually work in Inpainting!
Per usual with my Midjourney videos, I have a FREE PDF for you detailing all the aspects of this video here: https://theoreticallymedia.gumroad.com/l/inpainting
Video on Panning in Midjourney: https://youtu.be/xpDjg_6yqBU
Video on Weighting in Midjourney (A little outdated, but still useful info): https://youtu.be/MWVELgLlGgk
If you’d like, please join the Patreon Here: patreon.com/TheoreticallyMedia
Join the Discord: https://discord.gg/nj294sEWqD
Follow Me on Twitter: https://twitter.com/TheoMediaAI
————————————————-
Thanks for watching Theoretically Media! I cover a large range of topics here in the Creative AI space: technology, tutorials, and reviews! Please enjoy your time here, and subscribe!
Your comments mean a LOT to me, and I read and try to respond to every one of them, so please do drop any thoughts, suggestions, questions, or topic requests!
00:00 – Intro
00:23 – What is Inpainting?
01:08 – Vary Region in Midjourney
01:44 – Importance of Remix Mode
02:37 – Inpainting Interface
03:10 – PDF Available
03:24 – Tips and Limitations of Inpainting
03:55 – Inpainting Use Case
06:06 – Using the Tools in Inpainting
08:24 – Inpainting in Niji
09:06 – Changing Ethnicity, Emotion, and Props in Midjourney
10:36 – Tips when Inpainting will not respond
11:07 – The Slider Method in Inpainting
12:22 – Commands in Inpainting
Great tutorial! I have a question, however, that's not related, but you're gifted on these subjects: did GitHub remove ControlNet from their repository? I finally installed Automatic1111 and about 12 extensions, but ControlNet was not available. I'm a newb, so maybe I'm losing my mind and not seeing it. Can someone tell me whether the ControlNet extension for Auto1111 is there? Anyone can comment on this question. Thanks!
Thanks for the vid. I turned on Remix in /settings but don't see an option to open up a separate inpainting section in Discord. I must be missing something. The fact that the Remix setting exists in Discord makes me think the option should show up alongside the other options at the bottom of an uploaded or generated image.
Thanks for your video! Question: can we import an image from our PC and edit it with inpainting?
Thanks for the :: slider tip.
Thank you. I was trying to figure this out.
Your thumbnail looks so much like some images I generated that I thought it was actually one of mine.
I’ve also experimented a lot and realized you can try literally writing just the thing you’re trying to add as your prompt, especially in the case of the white wolf. MJ is pretty good at understanding how to blend it with the rest.
I have noticed some differences when trying different versions, too. For instance, I changed a little girl's ear to an elf ear by using Niji, which doesn't usually happen in normal v5. So I think it might actually work.
Dude…. Been going back and forth about grabbing the Midjourney subscription. Your channel totally sold me! Great content!! Subbed.
Really good work of yours, accurate and clear, thank you very much!
Even though your video is about inpainting, you did address using text (prompt) weights. I've seen the format of text prompts done in three different ways:
red car::3 blue car::1 (no spaces)
red car ::3 blue car ::1 (space after car)
red car:: 3 blue car:: 1 (space after colon)
The above are examples I got on Discord for doing text weights. The last one seems to be the most effective, but I think you did the weight format differently. The Midjourney documentation is not helpful in giving us the straight scoop.
Sorry to bug you. This is an inpainting question. I was able to re-create your BACKROOMS image, balloon and all. I decided to also inpaint a dog sitting by the girl. Worked like a charm. Next, I also wanted to add a bird sitting on the desk. It would not create the bird. Tried a cat; it did not create that either. So my question is: after adding a balloon and a dog, did I reach a limit on how many objects inpainting can add? Not sure.
Great video. Thanks for the info this was super informative. Subbed!
The smile looks off, because when the mouth smiles, the whole face changes; the eyes move too. So if you want to make it realistic, select the whole face, or just the eyes and mouth.
At the moment, when I try to write something for the area I want to change, the image gets closed… but I can still create an area I want to change…
How'd you zoom out like that??
I’ve found the inpainting to be pretty bad; it just makes whatever selection blurry and worse.
Just purchased 3 of your awesome PDFs brah! 🎉
Thanks so much Tim really great stuff
I hope it is OK to ask you another Midjourney question. Please know that I do try to find the answer, be it via research or Discord, before bugging you. By far, your knowledge level is way higher than others I work with. OK… here is my question.
****BELOW IS MIDJOURNEY DOCUMENTATION FOR QUALITY****
Quality
The --quality or --q parameter changes how much time is spent generating an image. Higher-quality settings take longer to process and produce more details. Higher values also mean more GPU minutes are used per job.
The default --quality value is 1.
--quality only accepts the values .25, .5, and 1 for the current model. Larger values are rounded down to 1.
--quality only influences the initial image generation.
****END OF DOCUMENTATION****
No matter how many times I read it, I find it confusing. Why? Let me explain.
It says a higher quality setting produces more details (makes sense). It also says the values accepted are .25, .5, and 1, and that larger values are rounded down to 1. The default value is already set at the highest possible value, which is 1. So what's the purpose? If the default value is already set at the highest acceptable value, why even bother with this parameter? I guess I could set it at .25 or .5, but that gives less detail, so no thank you.
It also says in the documentation that quality only influences the initial image generation. I have no idea what that means.
Am I missing something or not understanding how quality works?
I swear I've seen prompts with it set at --q 2, but according to the documentation this would be rounded down to 1.
Any thoughts or help would be appreciated.
I really like the image on the cover (the orange-haired girl with blue eyes). Was it made with some Midjourney prompt?
I might have tried to generate a white wolf in a similar background and then blended the two. Maybe I'll try that later to see if there is any effect. Thanks for all the tips. Love the guitars.
Can I add in a custom image, though? Like, say, on a dog artwork, can I paste my dog's face?
Is it possible to use this new functionality with images uploaded to MJ, or only with images generated in MJ?
Is it possible to use Vary Region with uploaded images, or only with generated images?
Can anyone tell me what cinematic “still” means? What is the word “still” supposed to mean? Does it mean a snapshot of a video clip, basically? Thanks, and sorry, I'm not a native English speaker.