Midjourney Consistent Character Generation: 3 New Prompt Techniques

小薇 Official Channel
30 Aug 2023 · 06:03



  • 🎨 Midjourney can generate high-quality images from text, but generating a consistent character across images remains a challenge.
  • 🔍 User Julie W Design discovered a prompt technique that generates images of the same character from different angles and with different backgrounds.
  • 📈 Adding the term 'split into 2' divides the image into two regions, with the character's face and background kept consistent in each region.
  • 🧩 The split term can divide the image into multiple regions, enabling a consistent image series.
  • 🐕 You can design corgi images or character images in different poses while maintaining high consistency.
  • 🌟 Adding terms that describe the character's expression, clothing, and background to the prompt produces noticeably better results.
  • 🖼️ Setting the split to six or nine regions and adding matching expression descriptions generates a consistent sequence of frames.
  • 🎭 The method also works for anime characters: switch the mode to niji5 for good grid results.
  • 🆓 Two new drawing platforms, SegMind and tensor.art, are recommended; both offer free credits and a variety of models.
  • 🔄 The AnimateDiff tool can merge images of different poses into an animation.
  • 📈 Midjourney can also be used to design vector illustrations for publishing, media, advertising, and similar fields.
  • 🔧 Midjourney's newly added inpainting tool can replace a character's clothing.
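The bullets above describe prompt wording rather than an API, but the 'split' technique can be composed programmatically. The helper below is a minimal sketch under stated assumptions: the function name and the exact phrasing of the terms are illustrative, not an official Midjourney syntax.

```python
def build_split_prompt(character, count, details=()):
    """Compose a Midjourney-style prompt asking for one image
    split into `count` regions that show the same character."""
    if count not in (2, 4, 6, 9):  # small grid sizes, as used in the video
        raise ValueError("use a small grid size such as 2, 6, or 9")
    parts = [f"image split into {count}", character]
    parts.extend(details)  # e.g. expressions, clothing, background terms
    return ", ".join(parts)

prompt = build_split_prompt(
    "a corgi dog",
    6,
    details=["different actions", "same background", "consistent appearance"],
)
```

The resulting string is what you would paste into Midjourney's `/imagine` prompt box; keeping the character description and background terms identical across generations is what preserves consistency.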

Q & A

  • How do Midjourney and Stable Diffusion generate images?

    -Both Midjourney and Stable Diffusion use diffusion models to generate images.

  • Why does Midjourney struggle to generate consistent character images?

    -Midjourney has lacked plugins such as ControlNet for directing the model's output, so there is no built-in way to constrain consecutive generations to the same character.

  • Which prompt technique did Julie W Design discover for generating images from different angles and backgrounds?

    -Julie W Design found that adding the term 'image split into 2' to the prompt splits a complete image into two regions, with the character's face and background kept consistent in each region, enabling images of the same character from different angles and with different backgrounds.

  • How is the split term used to generate a consistent image series?

    -Setting the split count to values such as 2, 6, or 9 divides the image into that many regions; the character stays consistent across regions, producing a continuous series in a single generation.

  • Which descriptions improve the results when generating a consistent sequence?

    -Adding terms that describe the character's expression, clothing, and background improves the generated results.

  • Which parameters can be adjusted to optimize anime character generation?

    -Switching the mode to niji5 produces good anime-style grid results.

  • Which new drawing platforms are recommended for users without a paid Midjourney account?

    -SegMind is recommended, offering 100 free credits and a variety of Stable Diffusion models; Tensor.art is another free option.

  • How do you generate consistent character images on the SegMind platform?

    -On SegMind, sign in with a Google account, choose the SDXL 1.0 model, enter a prompt prefixed with 'different images of the same person, shot from multiple angles', pick a style, set the parameters and SEED, and click generate to obtain a set of consistent character images.

  • What features does Tensor.art offer to make image generation easier?

    -Tensor.art gives users generous free credits and a variety of models; images are created by adding positive and negative prompts and setting parameters and LoRA.

  • What can the AnimateDiff tool be used for?

    -AnimateDiff can merge images of different poses into an animation.

  • What can Midjourney's inpainting tool be used for?

    -The inpainting tool can replace a character's clothing.

  • How do you design vector illustrations with Midjourney?

    -Use a prompt formula with placeholders for the main subject and environment, then adjust the design style, colors, and proportions as needed; the results suit publishing, media, and advertising.

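For programmatic use, the SegMind workflow from the Q&A above can be sketched as a request-payload builder. This is a hedged illustration: the payload keys and default values below are assumptions for demonstration, not SegMind's official request schema, which should be checked against its current API documentation.

```python
import json

def build_sdxl_request(prompt, seed=42):
    """Build a text-to-image request payload in the style described
    in the Q&A above. The keys are illustrative, not an official schema."""
    prefix = "different images of the same person, shot from multiple angles"
    return {
        "prompt": f"{prefix}, {prompt}",
        "seed": seed,              # a fixed seed helps reproduce the same grid
        "num_inference_steps": 25, # hypothetical default
    }

payload = build_sdxl_request("a young woman in a red coat, city street")
body = json.dumps(payload)  # serialized request body
```

Prefixing every prompt with the same 'different images of the same person…' term, and pinning the seed, is what keeps the generated set consistent across requests.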


🖼️ Exploring Midjourney's Image Generation Techniques

The first paragraph introduces the channel and discusses the limitations of Midjourney's image generation capabilities, particularly the creation of consistent character images. It highlights a prompt technique discovered by the user Julie W Design, which allows the same character to be generated from different angles and against different backgrounds. The paragraph also covers the use of the 'split' term to divide an image into multiple regions while maintaining consistency in appearance and background. The method is demonstrated through examples, such as a series of corgi images with different actions or a set of character images with high consistency. It also explains how to enhance the results by adding descriptive terms for character expressions, clothing, and background, and concludes by recommending the new drawing platform Segmind, along with Tensor.art, for generating images with Stable Diffusion models.


🎨 Utilizing AI for Vector Illustration and Image Editing

The second paragraph focuses on the application of AI in creating vector illustrations and the potential for selling these designs on platforms like WireStock and AdobeStock. It provides a formula for creating images using Midjourney, with placeholders for the main subject and environment. The paragraph explains how to customize the image by changing the subject and environment variables and mentions the possibility of altering the design style, colors, and proportions. It also touches on the use of Midjourney's inpainting tool for changing a character's clothing style. The video concludes with a call to action for viewers to like, subscribe, and share the channel, and to reach out with any questions or comments.
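The subject-and-environment formula described above can be expressed as a simple template. The exact wording below is an illustrative guess at the kind of formula the video describes, not a transcript of it; only the two placeholders come from the source.

```python
# Hypothetical vector-illustration prompt formula with two placeholders.
VECTOR_TEMPLATE = (
    "flat vector illustration of {subject} in {environment}, "
    "simple shapes, bold outlines, limited color palette"
)

def vector_prompt(subject, environment):
    """Fill the two placeholders of the vector-illustration formula."""
    return VECTOR_TEMPLATE.format(subject=subject, environment=environment)

p = vector_prompt("a barista", "a cozy coffee shop")
```

Swapping only the `subject` and `environment` variables, as the paragraph suggests, keeps the style terms fixed so the whole set of illustrations shares one visual language.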




💡Midjourney

Midjourney is a term used in the context of this video to refer to a specific AI model that generates high-quality images from text prompts. It is mentioned as utilizing a diffusion model, which is a type of deep learning model used for generating images. In the video, it is highlighted for its ability to create a series of images of the same character from different angles, which is a key focus of the tutorial.

💡Diffusion Model

A diffusion model is a type of machine learning model that is capable of generating images from textual descriptions. It works by gradually adding noise to a known image and then learning to reverse the process to create new images. In the video, diffusion models are central to the image generation process of Midjourney and are discussed in relation to their limitations and capabilities.


💡Inpainting

Inpainting is a technique used in image processing to fill in missing or damaged parts of an image. The video notes that Midjourney long lacked an inpainting plugin, which limited its ability to edit existing parts of an image for continuity across a series; its newly added inpainting tool now allows edits such as replacing a character's clothing.


💡ControlNet

ControlNet is a term that refers to a type of neural network used for controlling or directing the output of generative models. The video script indicates that the absence of a ControlNet plugin in Midjourney makes it challenging to generate consecutive character images with consistency.

💡Prompt Techniques

Prompt techniques are methods or strategies used to guide the AI in generating specific types of images. The video introduces a new prompt technique discovered by the user Julie W Design, which allows for the generation of images of the same character from different angles and backgrounds by using specific wording in the prompt.

💡Image Split

Image split refers to the process of dividing an image into multiple sections or regions. In the context of the video, the term is used to describe a prompt technique that allows the AI to generate a series of images where each part of the image maintains consistency in the character's features and background.

💡Stable Diffusion

Stable Diffusion is another AI model mentioned in the video that is capable of generating images. It is highlighted as an alternative to Midjourney for users who may not have access to a paid Midjourney account. The video suggests that Stable Diffusion can be used with platforms like Segmind to generate a variety of images.


💡Segmind

Segmind is a new drawing platform recommended in the video for users without a Midjourney paid account. It offers a login feature with Google accounts and provides new users with a free credit to try out various Stable Diffusion models, making it an accessible option for image generation.


💡Tensor.art

Tensor.art is mentioned as a free platform for generating images, offering users sufficient credits and a variety of models to work with. It is presented as a user-friendly option for creating images by adding positive and negative prompts and setting parameters and LoRA.


💡LoRA

LoRA is an acronym for 'Low-Rank Adaptation', a technique used in machine learning to adapt a pre-trained model to new tasks with minimal changes. In the context of the video, LoRA is mentioned as a parameter that can be set when using Tensor.art to influence the style and outcome of the generated images.

💡AnimateDiff Tool

The AnimateDiff tool is a utility that can merge different poses of images into an animation. The video suggests using this tool to create animations from a series of images generated by the AI models discussed, adding another layer of creativity to the image generation process.

💡Vector Illustrations

Vector illustrations are a type of digital art that use geometrical primitives such as points, lines, curves, and shapes to represent images in computer graphics. The video discusses how Midjourney can be used to create vector-style illustrations suitable for various professional uses like publishing, media, and advertising.

💡Adobe Illustrator

Adobe Illustrator is a vector graphics editing software used for creating vector illustrations, designs, and artwork. The video mentions it as a tool that designers use to create creative vector graphics, comparing it to the capabilities of Midjourney in generating similar types of artwork.

💡Adobe Stock

Adobe Stock is a stock image platform where photographers and artists can sell their images, and where companies can purchase licensed images for various uses. The video refers to Adobe Stock as a place where advertising agencies can buy authorized images, contrasting it with the use of AI-generated images.




Adding 'image split into 2' to the prompt splits the image into two regions while keeping the character's face and background consistent.






The new drawing platform Segmind is recommended, offering 100 free credits and a variety of Stable Diffusion models.