Consistent Character Generation in Midjourney: 3 New Prompt Techniques

小薇 Official Channel
30 Aug 2023 · 06:03

TLDR: This video introduces methods for generating consistent characters in Midjourney, along with demonstrations of the prompt techniques involved. By adding specific prompt terms, an image can be split into regions with a shared style, producing high-quality, consistent image sets. The video also covers the drawing platforms SegMind and tensor.art, and shows how Midjourney's inpainting tool can replace a character's clothing. It offers a simple, easy-to-follow guide to help viewers generate personalized images.

Takeaways

  • 🎨 Midjourney can generate high-quality images from text, but generating consistent character images remains a challenge.
  • 🔍 User Julie W Design discovered a prompt technique for generating images of the same character from different angles and against different backgrounds.
  • 📈 Adding the term 'split into 2' divides an image into two regions in which the character's face and the background stay consistent.
  • 🧩 The 'split' term can divide an image into multiple regions, enabling consistent image sequences.
  • 🐕 Corgi or character images can be designed performing different actions while remaining highly consistent.
  • 🌟 Adding terms that describe the character's expression, clothing, and background to the prompt produces even better results.
  • 🖼️ Increasing the split to six or nine regions and adding matching expression descriptions yields a continuous sequence of frames.
  • 🎭 The method also works for anime characters: switching the mode to niji5 produces a well-matched image grid.
  • 🆓 The new drawing platforms SegMind and tensor.art are recommended; both offer free credits and a variety of models.
  • 🔄 The AnimateDiff tool can merge images of different poses into an animation.
  • 📈 Midjourney can also be used to design vector illustrations for publishing, media, advertising, and other fields.
  • 🔧 Midjourney's newly added inpainting tool can replace a character's clothing.

Q & A

  • How do Midjourney and Stable Diffusion generate images?

    -Both Midjourney and Stable Diffusion use diffusion models to generate images.

  • Why does Midjourney struggle to generate consistent character images?

    -Midjourney struggles with consistent character images because it lacks a ControlNet plugin and, until recently, inpainting, which makes continuity across images hard to control.

  • What prompt technique did Julie W Design discover for generating images from different angles and backgrounds?

    -Julie W Design found that adding the term 'image split into 2' to a prompt divides a single image into two regions in which the character's face and the background remain consistent, yielding the same character from different angles and against different backgrounds.
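A sketch of what such a prompt might look like (the subject, clothing, and setting here are illustrative examples, not taken from the video; --ar and --v are standard Midjourney parameters):

```
two different images of the same woman, image split into 2, side by side,
front view and side view, red coat, city street background --ar 16:9 --v 5.2
```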

  • How is the 'split' term used to generate consistent image sequences?

    -The 'split' term divides a single image into multiple regions; by also specifying the camera angle and the proportions of the split regions, you can generate a consistent sequence of images.

  • When generating a consistent sequence, which descriptions improve the results?

    -Adding terms that describe the character's expression, clothing, and background produces noticeably better results.

  • Which parameters can be adjusted to optimize anime character generation?

    -Anime character generation can be optimized by adjusting the mode (e.g. niji5) and the number of split regions (e.g. nine).
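A hypothetical anime-style variant combining the split technique with the niji mode (the character, expressions, and setting are illustrative; --niji 5 is Midjourney's anime model flag):

```
image split into 9, same anime girl with different facial expressions,
smiling, crying, angry, surprised, school uniform, classroom background
--niji 5 --ar 1:1
```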

  • Which new drawing platform is recommended for users without a paid Midjourney account?

    -SegMind is recommended as a new drawing platform for users without a paid Midjourney account.

  • How do you generate consistent character images on the SegMind platform?

    -On SegMind, log in with a Google account, select the SDXL 1.0 model, enter a prompt prefixed with 'different images of the same person, shot from multiple angles', choose a style, set the parameters and the seed, and click the generate button to obtain a set of consistent character images.
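The prefix phrase above comes from the video; everything after it in this sketch is an illustrative subject description, not the video's exact prompt:

```
different images of the same person, shot from multiple angles,
young man with short black hair, denim jacket, studio lighting
```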

  • What features does the Tensor.art platform provide for generating images?

    -Tensor.art provides generous credits and a rich catalog of models; pick a model, add positive and negative prompts following a template, set the parameters and a LoRA, and the image is generated.

  • What can the AnimateDiff tool be used for?

    -The AnimateDiff tool can merge images of different poses into an animation.

  • What can Midjourney's inpainting tool be used for?

    -Midjourney's inpainting tool can replace a character's clothing: select the character's body region, swap out the original clothing-style terms, and give the character clothing in any style.

  • How do you design vector illustrations with Midjourney?

    -To design vector illustrations with Midjourney, prepare a simple prompt formula in which the black part is fixed wording, the red part describes the image's main subject, and the blue part describes the background. By changing these prompt variables, you can add different objects or alter the design style, colors, and proportions to generate varied images.
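The video's exact formula is not reproduced in this summary, so the fixed wording below is an assumption; the bracketed placeholders stand in for the variable (red/blue) parts:

```
flat vector illustration of [main subject], [background description],
simple shapes, minimal design, pastel colors --ar 16:9
```

For example, replacing [main subject] with "a corgi puppy" and [background description] with "a park with trees" would yield one variant; swapping the style terms changes the look while the formula stays the same.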

Outlines

00:00

🖼️ Exploring Midjourney's Image Generation Techniques

The first paragraph introduces the channel and discusses the limitations of Midjourney's image generation capabilities, particularly concerning the creation of continuous character images. It highlights a prompt technique discovered by a user named Julie, which allows for the generation of images of the same character from different angles and backgrounds. The paragraph also covers the use of the 'split' term to divide an image into multiple regions, maintaining consistency in appearance and background. The method is demonstrated through examples, such as creating a series of images of a corgi with different actions or generating a set of character images with high consistency. Additionally, the paragraph provides information on how to enhance the generated images by adding descriptive terms for character expressions, clothing, and background. It concludes with a recommendation for a new drawing platform, Segmind, and mentions other platforms like Tensor.art for generating images using stable diffusion models.

05:01

🎨 Utilizing AI for Vector Illustration and Image Editing

The second paragraph focuses on the application of AI in creating vector illustrations and the potential for selling these designs on platforms like WireStock and AdobeStock. It provides a formula for creating images using Midjourney, with placeholders for the main subject and environment. The paragraph explains how to customize the image by changing the subject and environment variables and mentions the possibility of altering the design style, colors, and proportions. It also touches on the use of Midjourney's inpainting tool for changing a character's clothing style. The video concludes with a call to action for viewers to like, subscribe, and share the channel, and to reach out with any questions or comments.


Keywords

💡Midjourney

Midjourney is a term used in the context of this video to refer to a specific AI model that generates high-quality images from text prompts. It is mentioned as utilizing a diffusion model, which is a type of deep learning model used for generating images. In the video, it is highlighted for its ability to create a series of images of the same character from different angles, which is a key focus of the tutorial.

💡Diffusion Model

A diffusion model is a type of machine learning model that is capable of generating images from textual descriptions. It works by gradually adding noise to a known image and then learning to reverse the process to create new images. In the video, diffusion models are central to the image generation process of Midjourney and are discussed in relation to their limitations and capabilities.

💡Inpainting

Inpainting is a technique used in image processing to fill in missing or damaged parts of an image. The video mentions that Midjourney lacks an inpainting plugin, which implies that it has limitations in editing or modifying existing parts of an image to create continuity in a series of generated images.

💡ControlNet

ControlNet is a term that refers to a type of neural network used for controlling or directing the output of generative models. The video script indicates that the absence of a ControlNet plugin in Midjourney makes it challenging to generate consecutive character images with consistency.

💡Prompt Techniques

Prompt techniques are methods or strategies used to guide the AI in generating specific types of images. The video introduces a new prompt technique discovered by a user named Julie, which allows for the generation of images of the same character from different angles and backgrounds by using specific wording in the prompt.

💡Image Split

Image split refers to the process of dividing an image into multiple sections or regions. In the context of the video, the term is used to describe a prompt technique that allows the AI to generate a series of images where each part of the image maintains consistency in the character's features and background.

💡Stable Diffusion

Stable Diffusion is another AI model mentioned in the video that is capable of generating images. It is highlighted as an alternative to Midjourney for users who may not have access to a paid Midjourney account. The video suggests that Stable Diffusion can be used with platforms like Segmind to generate a variety of images.

💡Segmind

Segmind is a new drawing platform recommended in the video for users without a Midjourney paid account. It offers a login feature with Google accounts and provides new users with a free credit to try out various Stable Diffusion models, making it an accessible option for image generation.

💡Tensor.art

Tensor.art is mentioned as a free platform for generating images, offering users sufficient credits and a variety of models to work with. It is presented as a user-friendly option for creating images by adding positive and negative prompts and setting parameters and Lora.

💡LORA

LORA is an acronym that stands for 'Low-Rank Adaptation', which is a technique used in machine learning to adapt a pre-trained model to new tasks with minimal changes. In the context of the video, LORA is mentioned as a parameter that can be set when using Tensor.art to influence the style and outcome of the generated images.

💡AnimateDiff Tool

The AnimateDiff tool is a utility that can merge different poses of images into an animation. The video suggests using this tool to create animations from a series of images generated by the AI models discussed, adding another layer of creativity to the image generation process.

💡Vector Illustrations

Vector illustrations are a type of digital art that use geometrical primitives such as points, lines, curves, and shapes to represent images in computer graphics. The video discusses how Midjourney can be used to create vector-style illustrations suitable for various professional uses like publishing, media, and advertising.

💡Adobe Illustrator

Adobe Illustrator is a vector graphics editing software used for creating vector illustrations, designs, and artwork. The video mentions it as a tool that designers use to create creative vector graphics, comparing it to the capabilities of Midjourney in generating similar types of artwork.

💡Adobe Stock

Adobe Stock is a stock image platform where photographers and artists can sell their images, and where companies can purchase licensed images for various uses. The video refers to Adobe Stock as a place where advertising agencies can buy authorized images, contrasting it with the use of AI-generated images.

Highlights

Midjourney generates high-quality images from text but struggles with consistent character generation.

A prompt technique discovered by user Julie can generate images of the same character from different angles and against different backgrounds.

Adding 'image split into 2' to the prompt splits the image into two regions while keeping the character's face and the background consistent.

The 'split' term divides an image into multiple regions, enabling consistent image sequences.

Demonstrated designing corgi and character images that remain highly consistent.

Adding terms describing the character's expression, clothing, and background produces better results.

Adjusting the split regions and expressions yields a continuous sequence of frames.

The method also works for anime characters, using the niji5 mode.

Recommended the new drawing platform Segmind, which offers 100 free credits and a range of Stable Diffusion models.

Showed how to generate consistent character images with Segmind and Tensor.art.

Demonstrated designing vector illustrations with Midjourney, including a simple prompt formula.

Changing the prompt variables can add different objects to the image, such as a puppy.

Showed how to vary the design style, colors, and proportions to generate diverse images.

Covered designing vector-style icons and selling them on stock-asset sites.

Demonstrated replacing a character's clothing with Midjourney's inpainting tool.

The prompts are provided below the video so viewers can modify them to generate personalized images.

Introduced using the AnimateDiff tool to merge images of different poses into an animation.

Recommended designing vector illustrations with Midjourney and selling them on stock-image platforms.