Multi-Character Scene with Midjourney’s Huge Character Consistency Update (--cref)
TLDR
Midjourney has released a highly anticipated character consistency feature, `--cref`, which lets users generate a character with consistent details from a character reference image. The feature focuses on character traits and works best with characters created in Midjourney, though it can also be applied to real people or photos with some limitations. The video demonstrates how to use `--cref` to generate images on Discord and on the Midjourney Alpha website, and how to adjust the character weight to control how much detail is taken from the reference image. The creator shares a workaround for placing multiple characters in a single scene: write a more descriptive text prompt and switch the character reference to match each character being added. The video also covers refining generated images with the 'Vary Region' feature, suggests Photoshop's generative tool for further editing, and concludes by presenting the generated images and inviting feedback on the consistency achieved before Midjourney perfects its character consistency function.
Takeaways
- 🎉 Midjourney has released a new feature for character consistency that allows generating characters with consistent details using a character reference image.
- 🔍 The new `--cref` function focuses on character traits and is most precise with characters originally created by Midjourney, rather than real people or photos.
- 👧 The speaker still finds the real-people results useful, since `--cref` is more effective at stabilizing facial features than at matching hair and outfit details.
- 📈 The results are particularly useful for creating AI influencers or fashion models, but the focus of the video is on animation style illustrations.
- 📝 It's recommended to note down key character features once decided to maintain consistency throughout the image generation process.
- 🌟 The character Lily is introduced as an example with specific traits like big round brown eyes, long wavy hair, and a particular style of clothing.
- 🔗 The `--cref` parameter can be used with an image URL to generate images with consistent character details.
- ✅ The `--cw` parameter allows adjusting the character weight from 100 (all details) to 0 (just the face), influencing how closely the generated image resembles the reference.
- 🧐 Generated images may include elements from the reference image, like animals and butterflies, but clothing details might not match perfectly.
- 🛠️ The 'Vary Region' feature can be used to edit and upscale images for closer matching to the original character's details.
- 👥 To generate multiple characters in a scene, a more descriptive prompt is necessary, specifying details about each character.
- 💡 Photoshop's generative tool can be used for further detail editing if time is a concern.
- 📚 The video concludes with a teaser for more advanced character consistency hacks in a future video.
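The basic workflow above can be sketched as a single prompt. The image URL is a placeholder for your own character reference, and the exact wording is illustrative, based on the example character Lily described in the video:

```
/imagine prompt: Pixar animation style, a little girl named Lily with big round brown eyes and long wavy hair, reading a book under a tree --cref https://cdn.example.com/lily-reference.png --cw 100
```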
Q & A
What is the main feature of Midjourney's new update?
-Midjourney's new update introduces a character consistency feature that allows users to generate characters with consistent details using a character reference image.
What is the limitation of the new cref function?
-The cref function has limited precision and will not copy exact details such as dimples, freckles, or t-shirt logos. It works best with characters created in Midjourney and is not designed for real people or photos, which may become distorted.
Why might the real people results be preferred in some cases?
-The real people results might be preferred because the cref feature is more useful in stabilizing facial features, which can be particularly beneficial for applications like AI influencers or fashion models.
What is the recommended approach to using the cref function?
-It is recommended to use characters created in Midjourney with the cref function. Users should also note down important character features to maintain consistency as they generate more images.
How does one generate images using the cref function on Discord?
-To generate images on Discord, one types the `/imagine` command, enters a text prompt describing the style and scene, then adds `--cref` followed by the image URL of the character reference. The URL can be obtained by dragging the image into the prompt box, right-clicking the image and copying its address, or opening the image in a browser and copying the link.
What is the purpose of the --cw parameter?
-The --cw parameter is used to modify character references by adjusting the strength of character details from 100 (all details) to 0 (focusing only on the face).
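As an illustration of the weight scale, the same reference can be run at different `--cw` values; the URL is a placeholder:

```
--cref https://cdn.example.com/lily-reference.png --cw 100   (copy face, hair, and outfit)
--cref https://cdn.example.com/lily-reference.png --cw 50    (loosen hair and outfit matching)
--cref https://cdn.example.com/lily-reference.png --cw 0     (keep only the face)
```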
How can one edit the generated images to perfection?
-One can refine the generated images by using the 'Vary Region' feature to select the area to edit and then providing a short prompt description to guide the adjustment.
What is the strategy for adding a second character to the scene?
-To add a second character to the scene, one must be more descriptive in the text prompt, specifying the details of each character, including their clothing and actions. The character reference must also be switched to match the new character being added.
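A hypothetical two-character prompt following this strategy might look like the one below. The second character's name and details, and the reference URL, are illustrative; note that `--cref` now points at the reference for the character being added:

```
/imagine prompt: Pixar animation style, Lily, a girl with big round brown eyes and long wavy hair in a white dress, holding hands with Tom, a boy with short black hair in a blue t-shirt, walking through a flower garden --cref https://cdn.example.com/tom-reference.png --cw 100
```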
How can the generated images be further refined?
-The generated images can be further refined using the 'Vary Region' feature to change clothing details, eye gaze, or any other areas that need improvement. Alternatively, one can use Photoshop's generative tool for more detailed editing.
What is the current state of character consistency with Midjourney's tools?
-While the generated images are not perfect, the consistency controlled by the cref parameter is significantly better than using just a reference image. The tools are expected to improve and provide better consistency over time.
What additional resource is available for creating the most consistent characters using AI?
-For those serious about creating the most consistent characters using AI, there is a video available that provides ultimate Character Consistency Hacks.
Outlines
🎨 Introducing Character Consistency with Midjourney's cref Feature
The video introduces Midjourney's new feature for character consistency, which allows users to generate characters with consistent details using a character reference image. The narrator shares their experience with the feature, noting that while it won't replicate every detail like dimples or t-shirt logos, it's particularly useful for stabilizing facial features. They find it more effective with real people than with characters not made by Midjourney, which may get distorted. The feature is best for creating AI influencers or fashion models. The video focuses on animation-style illustrations, showing how to use the cref function in Midjourney, including tips on noting down character features to maintain consistency and how to use the --cref parameter to control the character details used as a reference. The narrator also demonstrates how to generate images on Discord and fine-tune them using the vary region feature.
🖌️ Editing and Refining Generated Images for Perfection
The second paragraph delves into the process of refining generated images to better match the desired character details. The narrator explains how to use the 'Vary Region' feature to edit specific parts of the image, such as changing the color of the suspenders on a dress to align with the original character design. They then upscale the image and add a second character to the scene, emphasizing the need for a more descriptive prompt to ensure both characters appear. The process involves switching the character reference to match the new character being added. Further details are edited using the 'Vary Region' feature, and the narrator suggests using Photoshop's generative tool for more complex edits. The video concludes with the narrator presenting the generated images and inviting feedback on their consistency, also promoting another video with additional character consistency hacks.
Keywords
Character Consistency
Midjourney
Image Prompt
Cref Function
AI Influencers
Animation Style Illustration
Character Reference Image
Discord
Character Weight
Vary Region
Pixar Animation Style
Highlights
Midjourney has released a new feature for character consistency, allowing users to generate characters with consistent details using a reference image.
The new `--cref` function focuses on character traits and is best used with characters created by Midjourney, although it can also be applied to real people or photos.
The precision of the `--cref` feature is limited and will not replicate exact details like dimples or t-shirt logos.
The feature is particularly useful for stabilizing facial features, but may not meet the standards for hair and outfit details in animation or storybooks.
The results from using `--cref` are suitable for creating AI influencers or models for fashion brands.
The video demonstrates how to use the `--cref` function in Midjourney, including how to generate the same character doing different things.
It's recommended to note down important character features to maintain consistency as you generate images.
The video provides a step-by-step guide on how to generate images using Discord for those without access to the Midjourney Alpha website.
The `--cw` parameter can be used to modify character references, allowing control over the strength of character details from 100 (all details) to 0 (just face).
Lowering the character weight makes the image adhere more to the text prompt and less to the reference character's hair and outfit.
The generated images include animals and butterflies from the original image, showing that the `--cref` parameter improves consistency.
The `vary region` feature can be used to edit images and perfect details, such as changing the color of clothing items.
To add a second character to the scene, the text prompt needs to be more descriptive, specifying the appearance and actions of both characters.
Switching the character reference to the second character allows for fine-tuning details and generating a consistent scene with both characters.
The video demonstrates how to refine clothing details, eye gaze, and other aspects of the generated images using the `vary region` feature.
Photoshop's generative tool can also be used for editing details, offering another option for those concerned about time efficiency.
The video concludes with a series of images generated entirely in Midjourney, showcasing the current state of character consistency before further improvements.
The presenter invites feedback on the consistency of the generated characters and encourages viewers to explore advanced character consistency techniques in a follow-up video.