How to Create Consistent Characters in Midjourney V6!
TLDR: This tutorial demonstrates how to use Midjourney's consistent character feature to create a character that remains recognizable across different scenarios. The process involves generating a reference photo with specific details about the character's appearance, then using the `--cref` parameter with the image link to keep subsequent generations consistent. The video covers changing camera angles, backgrounds, and facial expressions, and experimenting with character weight to allow for creative variation. It also explores combining multiple reference images for more consistent results and using the /prefer_option_set command to streamline the process. The tutorial concludes with tips for capturing full-body images and a note on the limitations of adding accessories that were not present in the original references.
Takeaways
- 🎨 Use Midjourney's consistent character feature to generate images of the same person with different backgrounds, activities, and expressions.
- 📸 Create a reference photo with specifics about the character's appearance for more realistic results.
- 🔗 Utilize the `--cref` parameter with the image link to maintain consistency in generated images (see the example prompt after this list).
- 🌆 Change camera angles and environments by adjusting the prompt while keeping the character consistent.
- 😲 Experiment with different facial expressions, though extreme emotions can be challenging to achieve.
- 🖼️ Combine camera angles, backgrounds, and expressions for creative and flexible character images.
- 🎨 Adjust color grading in images by including descriptors like 'desaturated and dark colors' or 'bright colorful and saturated colors'.
- ⚖️ Use the `--cw` parameter to control the level of creativity in the generated images, with 100 being the default for close adherence to the reference.
- 🔗 Attach multiple reference images for more consistent results by pasting their links after the `--cref` parameter.
- 🛠️ Utilize the `/prefer_option_set` command to save multiple image links to a custom name for easier access.
- 🎭 The consistent character feature also works for illustration-based characters, allowing for anime-style character creation.
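As a rough sketch of how these pieces fit together, a full prompt reusing a reference image might look like the one below (the scene description and URL are placeholders, not taken from the video; the text is typed after the `/imagine` command):

```
photo of a woman with long auburn hair and green eyes, shocked expression, standing in a neon-lit futuristic city at night, low angle shot from below, desaturated and dark colors, Kodak Portra --cref <reference image URL> --cw 100 --v 6.0
```

Everything before `--cref` describes the new scene, while the reference link and the `--cw` value control how strictly the character is preserved.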
Q & A
What is the main topic of the tutorial?
-The tutorial focuses on how to create a consistent character in Midjourney V6 using various features and techniques.
How can one ensure that the generated character has the same face, hair, clothing, and body type across different images?
-By using Midjourney's consistent character feature with the --cref parameter followed by the URL of the reference image.
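A minimal example of such a prompt, assuming the reference link has already been copied (the URL below is a placeholder):

```
/imagine prompt: photo of the woman exploring a mushroom forest, Kodak Portra --cref <reference image URL> --v 6.0
```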
What is the significance of using a film type like Kodak Portra in the prompt?
-Using a film type like Kodak Portra tends to generate more realistic looking photos in Midjourney.
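An illustrative reference-photo prompt along these lines (the specific character details are invented for the example):

```
/imagine prompt: photo of a woman in her 20s with long auburn hair, green eyes, and light freckles, wearing a denim jacket, neutral expression, plain studio background, Kodak Portra --v 6.0
```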
How can one change the camera angle of the generated character in Midjourney?
-By adding descriptive terms to the prompt such as 'high angle shot from above', 'low angle shot from below', or 'side angle view'.
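For example, the same scene can be regenerated with different angle descriptors while the `--cref` link keeps the character unchanged (the URL is a placeholder):

```
photo of the woman in a mushroom forest, high angle shot from above, Kodak Portra --cref <reference image URL>
photo of the woman in a mushroom forest, low angle shot from below, Kodak Portra --cref <reference image URL>
photo of the woman in a mushroom forest, side angle view, Kodak Portra --cref <reference image URL>
```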
How does one copy the image link for the reference photo in the Discord desktop app?
-Right-click on the photo and select 'Copy Image Address' to get the reference image link.
What is the character weight parameter and how does it affect the generated images?
-The character weight parameter, added by typing `--cw` after the prompt, is a number between 0 and 100 that controls how much creativity is injected into the images. A lower value allows more changes to clothing, hairstyle, and visual style, while a higher value makes the generated image adhere more closely to the reference photo.
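A sketch of the two extremes, using an illustrative scene and a placeholder URL:

```
photo of the woman at a rooftop party, happy expression, Kodak Portra --cref <reference image URL> --cw 100
photo of the woman at a rooftop party, happy expression, Kodak Portra --cref <reference image URL> --cw 0
```

The first prompt stays close to the reference, including the outfit, while the second keeps mainly the face and lets Midjourney vary the clothing and hairstyle.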
How can one add multiple reference images for a consistent character?
-After typing --cref, one can paste in image links for different reference images to get more consistent results.
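For instance, two reference links can be placed one after the other, separated by a space (the URLs are placeholders):

```
photo of the woman reading in a cozy cafe, Kodak Portra --cref <image URL 1> <image URL 2> --v 6.0
```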
What is the Niji mode in Midjourney and how does it help in generating anime style images?
-Niji mode is a Midjourney model specifically tailored towards generating anime style images. It can be accessed by typing /settings and selecting the AI model named Niji 6.
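Besides switching the model in /settings, the Niji model can also be requested per prompt with the `--niji 6` parameter; an illustrative anime-style prompt (the character details and URL are placeholders) might be:

```
anime illustration of a girl with short blue hair in a school uniform, cherry blossom park in the background --niji 6 --cref <reference image URL>
```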
How can one create a full body reference image for an anime character?
-By using a prompt that includes 'full body turnaround' along with the character's features and clothing, and then using Remix Mode and the Vary (Subtle) button to adjust and combine the images.
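A sketch of such a turnaround prompt, with the character details invented for illustration:

```
full body character turnaround of an anime girl with short blue hair, wearing a sailor school uniform and red sneakers, plain white background --niji 6
```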
What is the /prefer_option_set command in Midjourney used for?
-The /prefer_option_set command is used to save multiple image links to a custom name, making it easier to recall and use those references in future prompts.
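In the Discord client this appears as the /prefer option set slash command, which takes an option name and a value. A sketch of how it could be used here (the option name and URLs are placeholders), followed by a prompt that recalls the saved references:

```
/prefer option set option: mychar value: --cref <image URL 1> <image URL 2>
/imagine prompt: anime illustration of the girl ice skating on a frozen lake --niji 6 --mychar
```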
Why might the consistent character feature struggle with adding accessories not present in the original reference images?
-The consistent character feature is designed to closely match the reference photo, and adding new elements like accessories that weren't in the original images can confuse the AI, leading to mixed or partially generated results.
Can one use personal reference images that weren't generated in Midjourney for the consistent characters feature?
-While it might be possible to use personal reference images, the consistent characters feature is designed to work best with images generated within Midjourney, so results may vary and might not be as consistent.
Outlines
🎨 Creating Consistent Characters in Midjourney
This tutorial introduces how to create a consistent character in Midjourney, an AI image generation platform. The process involves generating a character and placing them in various environments and activities while maintaining the same facial features, hair, clothing, and body type. The key to this feature is the use of the '--cref' parameter followed by the URL of a reference image, ensuring that the generated images match the reference. The tutorial demonstrates changing camera angles, environments, and facial expressions while keeping the character consistent. It also covers the character weight parameter (--cw), which adjusts the level of creativity in the generated images, allowing for more variation in clothing, hairstyle, and visual style.
🌌 Customizing Environments and Expressions
The video script explains how to modify the character's environment and expressions using specific prompts. It details how to use different camera angles, such as high angle shots, low angle shots, and side views, to create varied perspectives. The script also illustrates changing the character's surroundings, from a mushroom forest to a futuristic city, and altering facial expressions to convey emotions like shock or happiness. Additionally, it discusses the use of color grading in images and the character weight parameter's role in allowing creative freedom. The script introduces the concept of using multiple reference images for more consistent results and demonstrates how to save these references using the '/prefer_option_set' command for easier recall.
🧢 Accessorizing and Inpainting Characters
The paragraph discusses the challenges and solutions related to adding accessories that were not present in the original reference images. It explains that the character consistency tool may struggle with generating new accessories like a baseball cap. To address this, the video demonstrates creating a reference image without the accessory and then using the 'Vary Region' button to inpaint the accessory onto the character. The script also highlights the character consistency feature's effectiveness with illustration-based characters and provides a step-by-step guide to creating a consistent cartoon character, including turning on Niji mode for anime-style images and adjusting prompts for different expressions, angles, and backgrounds.
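When Vary Region is used this way (with Remix Mode enabled so the prompt can be edited), the text entered for the selected area only needs to describe the new element. An illustrative edit, not taken verbatim from the video, could be as short as:

```
wearing a red baseball cap
```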
🚀 Advanced Character Interactions and Limitations
This section of the script explores advanced uses of the character consistency tool, such as generating images of the character interacting with other subjects or environments, like riding a reindeer or standing on a pirate ship. It also mentions the limitations when the character is expected to interact with complex subjects, as the interactions may not always be coherent. The paragraph concludes with a note on the possibility of using external reference images with the consistent characters feature, cautioning that it may not be as effective since the feature is designed for images generated within Midjourney. It also references a beginner's guide for those new to using the platform.
Keywords
Midjourney
Consistent Character Feature
Anime Style
Camera Angles
Facial Expressions
Backgrounds
Character Weight Parameter
Niji Mode
Remix Mode
Vary Region
Illustration-Based Characters
Highlights
This tutorial demonstrates how to create a consistent character in Midjourney using the consistent character feature.
The process involves generating a reference photo and using it to match the character in subsequent images.
Specifics about the character's appearance such as hair, eye, and skin color are important for generating realistic photos.
Using a film type like Kodak Portra can enhance the realism of the generated images.
The character's face, hair, clothing, and body type can be kept consistent across different images.
Different camera angles, environments, and facial expressions can be applied to the consistent character.
The character weight parameter (--cw) allows for adjusting the level of creativity in the generated images.
Multiple reference images can be used for more consistent results in character generation.
The /prefer_option_set command can save multiple image links to a custom name for easier access.
The consistent character feature also works for illustration-based characters and anime styles.
Niji mode in Midjourney is tailored for generating anime style images.
Full body reference images are necessary for generating characters with consistent clothing and accessories.
Remix Mode can be used to combine different elements from various images into one consistent character.
The character consistency tool struggles with adding accessories not present in the original reference images.
Using the Vary Region button can help inpaint elements like hats onto a character's head for a cleaner result.
The tutorial provides tips on how to include the entire figure in generated images by giving Midjourney specific prompts.
The consistent character feature can create a wide range of scenarios and activities for the character.
The feature may not work as well when the character needs to interact with other subjects in the image.
The tutorial suggests that using personal reference images outside of Midjourney may not be as effective with the consistent characters feature.