The Best New Midjourney Feature + GPT: Make a Dynamic Children's Picture Book with AI in Half an Hour (Ultimate Tutorial)

靜電的AI設計教室 (Static Electricity's AI Design Classroom)
8 Nov 2023 · 25:09

TLDR: This tutorial shows how to use AI to create a dynamic children's picture book video by combining two new features: Midjourney's Style Tuner for consistent character design and Runway Gen-2's improved video generation. The presenter, Static Electricity, walks through the full workflow with ChatGPT, Midjourney, and Runway Gen-2 to craft a heartwarming story about Mia and her wishing tree, a format that is engaging for children and also a potential source of extra income. The step-by-step guide covers writing the story with ChatGPT, generating style-consistent images with Midjourney, animating those images with Runway Gen-2, and compiling everything into a video with subtitles and soothing background music. The result is a delightful children's book video that can be shared on various platforms, giving creators a fast and efficient way to produce content that is both beautiful and inspiring.

Takeaways

  • 🎨 The new style tuner function in Midjourney has greatly improved picture consistency, making it ideal for children's picture books that require character consistency.
  • 📈 Runway Gen-2's update has significantly enhanced the fidelity and consistency of video results, allowing for more control over the movement in generated videos.
  • 🚀 AI can simplify the creation of children's picture book videos; the tutorial walks through the full process, which produced the video 'Mia's Wishing Tree' in about an hour.
  • 📚 The style tuner function allows for the creation of a consistent style across multiple images by adding a specific suffix to the style command.
  • 💡 Using AI to generate content can be a lucrative way to create short videos for additional income.
  • 📝 ChatGPT (GPT-4) can write a warm, soft bedtime story for children, and can also generate a storyboard and split the story into paragraphs.
  • 🌌 Midjourney's imagine command, combined with the style generated by the style tuner, can produce images that fit the desired narrative and style.
  • 🔍 The process of generating images with Midjourney involves selecting preferred styles from generated groups and using a code suffix to replicate the style.
  • 🎥 Runway Gen-2's text2video tool can transform still images into moving videos with simple commands, adding motion to the picture book.
  • ✂️ A video editor such as CapCut (剪映) can compile the generated clips into a cohesive narrative, with transitions and background music enhancing the storytelling.
  • 🎶 Adding subtitles and dubbing with a suitable voice can bring the story to life, creating an immersive experience for young readers.

Q & A

  • What are the two new functions mentioned in the title that make creating children's picture book videos easier?

    -The two new functions are Midjourney's 'Style Tuner', which improves picture consistency, and Runway Gen-2's updated video generation, which allows more control over movement in the generated videos.

  • How does the 'style tuner' function help with creating children's picture books?

    -The 'style tuner' function helps with creating children's picture books by ensuring character consistency throughout the book, which is crucial for maintaining a coherent narrative and visual appeal.

  • What is the significance of the Runway gen-2 update in the context of generating videos for children's picture books?

    -The Runway Gen-2 update allows for the adjustment of the amplitude of movement in the generated videos, which can add a dynamic element to the children's picture book videos, making them more engaging.

  • How long did it take to create a children's picture book video using these new functions?

    -It took about an hour, using ChatGPT together with these new functions, to create a children's picture book video called 'Mia's Wishing Tree'.

  • What is the process of using the 'style tuner' function in Midjourney?

    -The process involves going to Midjourney, creating a channel, entering a command to start the style tuning, and then adding a suffix to the desired content to generate images with a consistent style.
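    As a rough sketch of that command flow (the style code below is a made-up placeholder, and exact option names may differ slightly between Midjourney versions), it looks something like this:

    /tune children's picture book illustration, soft watercolor style
    (open the tuner page Midjourney returns, pick the preferred image pairs, and copy the generated style code)
    /imagine prompt: a little girl waters a small tree at sunset, children's picture book illustration --style a1b2c3d4

    Any later prompt that ends with the same --style code should come back in the same visual style.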

  • How does the 'puppet method' mentioned in the script contribute to the creation of the video?

    -The 'puppet method' involves first deciding on a character and their attributes, then using these to guide the generation of images. This method helps in maintaining a consistent character design throughout the video.
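    A minimal sketch of the idea, using an invented character description: fix one reusable character line, then prepend it to every scene prompt so the 'puppet' stays the same while only the action changes.

    Character line (reused verbatim): Mia, a cute little girl with short brown hair in a yellow dress
    Scene 1: Mia, a cute little girl with short brown hair in a yellow dress, planting a small seed in her garden --style a1b2c3d4
    Scene 2: Mia, a cute little girl with short brown hair in a yellow dress, hugging a tall glowing tree at night --style a1b2c3d4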

  • What role does ChatGPT play in the creation of the children's picture book video?

    -ChatGPT is used to write a short bedtime story for children, generate a name for the story, and create a storyboard that serves as a guide for the subsequent image and video generation.
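    A prompt along these lines (paraphrased, not the speaker's exact wording) is enough to get a usable draft:

    "Please write a short, warm bedtime story for children aged 3-6, in soft, storytelling language, about a little girl and a wishing tree. Give the story a name, then split it into 8 short paragraphs I can use as a storyboard, one scene per paragraph."

    The paragraph count simply matches however many pictures the creator wants; 8 is only an example.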

  • How does the screen ratio affect the generation of images and videos?

    -The screen ratio determines the aspect of the generated images and videos, ensuring that they are suitable for the intended format, whether it's for a short video, a long video, or a picture book.
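    In Midjourney the ratio is set with the --ar parameter at the end of the prompt, for example --ar 9:16 for a vertical short-video frame or --ar 16:9 for a horizontal one; these two values are just common presets, not the only options.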

  • What is the purpose of using Runway Gen-2 in the final stages of the video creation process?

    -Runway Gen-2 is used to add movement to the static images, creating a more dynamic and engaging video. It also allows for the fine-tuning of the movement to match the desired effect.
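    An illustrative setup (UI labels may differ slightly between Runway versions): upload the Midjourney image, optionally add a short motion prompt such as 'leaves sway gently in the wind, soft light', keep the overall motion amount low so faces and outlines stay stable, and add only a slight camera move, such as a slow zoom-in, so the scene feels alive without distorting the character.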

  • How does the video editing tool contribute to the final product?

    -The video editing tool is used to compile the generated videos into a cohesive narrative, add transitions between scenes, include background music, and add dubbing for the story, resulting in a finished children's picture book video.

  • What is the final step in creating the children's picture book video?

    -The final step is to export the edited video, which can then be published on a video platform or used in other media formats.

Outlines

00:00

🎨 Introduction to New AI Functions for Children's Picture Books

The speaker, Static Electricity, introduces two new AI functions that have significantly simplified the creation of children's picture book videos. The first is a style tuner function from Midjourney, which enhances picture consistency and is ideal for character consistency in children's books. The second is an update from Runway Gen-2, which allows for video generation with improved fidelity and consistency. These updates include the ability to adjust camera movement within the generated video. The speaker shares their experience using ChatGPT to create a children's picture book video called 'Mia's Wishing Tree' in about an hour, suggesting that this method is a viable way to earn extra income. They also provide a brief overview of how the style tuner function works and its impact on the design process.

05:01

📚 Using Style Tuner and Selecting Desired Styles

The paragraph explains the process of using the style tuner function to select and generate consistent styles for images. The speaker guides through the steps of using the style tuner, starting from creating a channel on Midjourney to entering commands and keywords for the desired style. They discuss how to choose preferred styles from the generated groups of images and how to use the generated suffix to create images with a consistent style. The paragraph also covers the process of using the imagine command to generate images with a specific style and how to adjust the style by adding different elements or changing the subject of the image.
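As a quick illustration of how the suffix travels between prompts (the code below is a placeholder), once the tuner has produced a style code, completely different subjects generated with the same suffix share one look:

/imagine prompt: street fashion boy --style a1b2c3d4
/imagine prompt: street fashion girl --style a1b2c3d4
/imagine prompt: a quiet village under a starry sky --style a1b2c3d4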

10:02

📖 Generating a Story and Visuals with AI Tools

The speaker details the process of using AI tools to generate both the story and visuals for a children's picture book. They start with ChatGPT to write a warm bedtime story suitable for children aged 3-6, requesting soft, storytelling language. After receiving a satisfactory story about a girl named Mia and a wishing tree, they use the style tuner to generate a consistent visual style. The speaker then outlines how to use the generated story to create a storyboard and prompts for Midjourney to produce images corresponding to each part of the story. They also discuss the importance of screen ratio when generating images for video.
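A follow-up request in the same ChatGPT conversation might read, roughly: 'Split the story into 8 scenes, and for each scene write one English Midjourney prompt describing the picture, always beginning with the same description of Mia.' (The wording and the scene count here are illustrative, not the speaker's exact phrasing.) Each returned prompt is then pasted into /imagine with the style code and the chosen --ar ratio appended.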

15:03

🌟 Creating Animated Videos with Runway Gen-2

The paragraph focuses on using Runway Gen-2 to animate the generated images and create a dynamic picture book video. It describes the process of uploading images into Runway Gen-2 and adding commands to guide the animation. The speaker explains the importance of adjusting the motion-amount value and the camera motion settings to achieve a subtle and natural movement in the video. They also mention the option to regenerate if the output does not meet expectations and the strategy of re-registering to gain more free credits for video generation. The paragraph concludes with a demonstration of the final animated video generated using the described process.
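Concretely, the loop for each scene runs roughly like this (setting names are approximate): drop the still image into Gen-2, type a one-line motion description such as 'the girl slowly looks up at the tree', set the motion amount to a small value, optionally add a gentle camera pan or zoom, generate a short clip of a few seconds, and regenerate with the same inputs if the movement warps the character; since each attempt costs credits, the speaker suggests registering a fresh account when the free credits run out.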

20:04

🎵 Finalizing the Video with Music and Dubbing

The final paragraph covers the steps to complete the children's picture book video. It involves adding soft and soothing music to set the tone and using subtitles to generate dubbing for the story. The speaker demonstrates how to use the read-aloud feature to dub the text with a voice suitable for children's stories. They also suggest adding ambient sounds like bird calls or crickets for a more immersive experience. The paragraph concludes with the steps to export the final video, which can then be published on various platforms. The speaker also shares the full version of 'Mia and the Wishing Tree' as an example of the final product and encourages viewers to follow the tutorial carefully for high-quality production.

Keywords

Midjourney

Midjourney is an AI image-generation tool widely used in the AI painting community. It is significant in the video because it has released the 'Style Tuner' function, which greatly improves the consistency of generated images, something that is crucial for children's picture books where character consistency is essential. The Style Tuner also makes it easy to apply a comic or painted style consistently across images.

Style Tuner

The Style Tuner is a new function recently updated by Midjourney that solves the issue of style inconsistency in AI-generated images. It is central to the video's theme as it enables the creation of images with a fixed style, which is demonstrated through the creation of various themed images like 'street fashion boy' or 'street fashion girl', maintaining a consistent style throughout.

Runway Gen-2

Runway Gen-2 is video-generation software mentioned in the video that has released a new version with significant updates to its generation capabilities. It is important because it allows for the creation of videos with improved fidelity and consistency, and it introduces camera-motion controls and the ability to adjust the amplitude of movement in the generated video, which adds a dynamic element to the static images created for the children's book.

AI

AI, or Artificial Intelligence, is the overarching technology that enables the creation of children's picture book videos as described in the video. It is the driving force behind the automation and consistency in style and movement that the tutorial aims to achieve. The video demonstrates how AI can simplify the process of creating dynamic and stylistically consistent content for children's books.

ChatGPT

ChatGPT is an AI language model used in the video to generate a short bedtime story for children. It is significant as it provides the narrative content for the children's picture book video. The video demonstrates how ChatGPT can be instructed to create a warm, age-appropriate story that helps children feel the beauty of the world.

Storyboard

A storyboard is a series of short paragraphs or descriptions that outline the sequence of events in a story or video. In the context of the video, ChatGPT is used to generate a storyboard for the story 'Mia's Wishing Tree', which then serves as a guide for creating the individual images and scenes for the children's book video.

Text-to-Video

Text-to-video is a functionality within Runway Gen-2 that allows users to generate videos from text or image inputs. It is showcased in the video as a method to bring movement to the static images created for the children's book, thus enhancing the storytelling and engagement of the final product.

Puppet Method

The Puppet Method is a technique mentioned in the video for generating images with specific characters and attributes. It involves defining a character, such as 'Mia', and describing her appearance and actions in detail to guide the AI in creating a consistent and relevant image for the story.

Screen Ratio

Screen ratio, also known as aspect ratio, is the proportional relationship between the width and the height of an image or video. In the video, it is important for ensuring that the generated images and videos are compatible for the intended format, whether it's for a short video with a 9:16 ratio or a longer format with a different ratio.

Dubbing

Dubbing refers to the process of adding a voiceover to a video. In the context of the video, it is used to add a narrated voice to the children's book video, enhancing the storytelling and making it more engaging for young audiences. The video demonstrates how AI can be used to generate a dubbed voiceover from a text script.

Video Editing

Video editing is the process of assembling video shots and adding effects, transitions, and audio to create a finished video product. In the video script, video editing is the final step where the generated videos are compiled, transitions are added, and music and voiceovers are included to complete the children's picture book video.

Highlights

Midjourney has released a new style-tuning function, the 'Style Tuner', which greatly improves picture consistency, making it ideal for children's picture books.

Runway Gen-2 has been updated with significant improvements in video generation, including better fidelity and consistency.

The ability to adjust the amplitude of movement in the generated video is a key feature of the new Runway Gen-2 update.

AI can now simplify the creation of children's picture book videos with enhanced effects.

Static Electricity, the presenter, used ChatGPT together with these new tools to create a children's picture book video called 'Mia's Wishing Tree' in about an hour.

The style tuner function allows for the creation of images with a fixed style, which is adjustable and consistent.

The style tuner can generate a series of images with a consistent style by using a specific suffix in the command.

Midjourney's style tuner function is particularly useful for creating content with character consistency, like in children's books.

ChatGPT (GPT-4) is used to write a warm bedtime story for children, with a focus on soft, storytelling language.

The story 'Mia’s Wishing Tree' was generated using ChatGPT, providing a coherent narrative for the picture book.

A storyboard was created for the story, dividing it into a specified number of images to align with the narrative.

The 'puppet method' in Midjourney is used to generate images with specific characteristics, such as a cute girl named Mia.

Runway Gen-2's text2video tool can transform images and text descriptions into animated videos.

The generated videos can be edited using a tool such as CapCut (剪映), with transitions and music added for a polished final product.

Subtle movements and camera motions can be added to the videos for a more dynamic and engaging viewing experience.

The final step involves exporting the video, which can then be published on various platforms for children to enjoy.

The full version of 'Mia and the Wishing Tree' is showcased, demonstrating the end-to-end process of creating an AI-generated children's picture book video.

The tutorial emphasizes the importance of a beautiful final production, encouraging viewers to follow along carefully for the best results.

Support for the video and channel is encouraged, highlighting the effort put into the AI design and production process.