The Best New Midjourney Feature + GPT: Create a Dynamic Children's Picture Book with AI in Half an Hour (Ultimate Tutorial)
TLDR: This tutorial shows how to use AI to create a dynamic children's picture book video with two new features: Midjourney's Style Tuner for consistent character design and Runway Gen-2's improved video generation. The presenter, Static Electricity, demonstrates how to combine these tools with ChatGPT to craft a heartwarming story about Mia and her Wishing Tree, one that is engaging for children and also a potential source of extra income. The step-by-step guide covers writing the story with ChatGPT, generating style-consistent images with Midjourney, adding movement to the images with Runway Gen-2, and finally compiling everything into a video with subtitles and soothing background music. The result is a delightful children's book video that can be shared on various platforms, offering creators a new and efficient way to produce content that is both beautiful and inspiring.
Takeaways
- 🎨 The new style tuner function in Midjourney has greatly improved picture consistency, making it ideal for children's picture books that require character consistency.
- 📈 Runway Gen-2's update has significantly enhanced the fidelity and consistency of video results, allowing for more control over the movement in generated videos.
- 🚀 AI can greatly simplify the creation of children's picture book videos; the tutorial demonstrates producing a video called 'Mia's Wishing Tree' in about an hour.
- 📚 The style tuner function allows for the creation of a consistent style across multiple images by adding a specific suffix to the style command.
- 💡 Using AI to generate content can be a lucrative way to create short videos for additional income.
- 📝 ChatGPT (GPT-4) can be used to write a warm, soft bedtime story for children, generate a storyboard, and break the story into paragraphs.
- 🌌 Midjourney's imagine command, combined with the style generated by the style tuner, can produce images that fit the desired narrative and style.
- 🔍 The process of generating images with Midjourney involves selecting preferred styles from generated groups and using a code suffix to replicate the style.
- 🎥 Runway Gen-2's text2video tool can transform still images into moving videos with simple commands, adding motion to the picture book.
- ✂️ A video editing tool such as CapCut can compile the generated clips into a cohesive narrative, with transitions and background music enhancing the storytelling.
- 🎶 Adding subtitles and dubbing with a suitable voice can bring the story to life, creating an immersive experience for young readers.
Q & A
What are the two new functions mentioned in the title that make creating children's picture book videos easier?
- The two new functions mentioned are the 'Style Tuner' from Midjourney, which improves picture consistency, and the updated video generation capabilities of Runway Gen-2, which allow more control over the movement in generated videos.
How does the 'style tuner' function help with creating children's picture books?
- The 'Style Tuner' function helps with creating children's picture books by ensuring character consistency throughout the book, which is crucial for maintaining a coherent narrative and visual appeal.
What is the significance of the Runway Gen-2 update in the context of generating videos for children's picture books?
- The Runway Gen-2 update allows the amplitude of movement in the generated videos to be adjusted, which can add a dynamic element to the children's picture book videos and make them more engaging.
How long did it take to create a children's picture book video using these new functions?
- It took about an hour, using ChatGPT together with these new functions, to create the children's picture book video 'Mia's Wishing Tree'.
What is the process of using the 'style tuner' function in Midjourney?
- The process involves going to Midjourney, creating a channel, entering the tuner command to start style tuning, choosing preferred styles from the samples it generates, and then adding the resulting style code as a suffix to prompts so the generated images share a consistent style.
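A rough illustration of that flow (the prompt below is hypothetical, not quoted from the video): a tuner is started with a base prompt such as
/tune children's picture book illustration, watercolor, soft warm colors
Midjourney then returns a page of paired style samples; after the preferred ones are selected, it produces a short style code that is appended as a suffix to later prompts.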
How does the 'puppet method' mentioned in the script contribute to the creation of the video?
- The 'puppet method' involves first deciding on a character and their attributes, then using these to guide the generation of images. This method helps in maintaining a consistent character design throughout the video.
What role does ChatGPT play in the creation of the children's picture book video?
- ChatGPT is used to write a short bedtime story for children, generate a name for the story, and create a storyboard that serves as a guide for the subsequent image and video generation.
How does the screen ratio affect the generation of images and videos?
- The screen ratio determines the aspect ratio of the generated images and videos, ensuring that they are suitable for the intended format, whether it's a short video, a long video, or a picture book.
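For example (hypothetical values, not quoted from the video), a horizontal video would add --ar 16:9 to the Midjourney prompt, a vertical short video would use --ar 9:16, and a square picture-book page would use --ar 1:1.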
What is the purpose of using Runway Gen-2 in the final stages of the video creation process?
- Runway Gen-2 is used to add movement to the static images, creating a more dynamic and engaging video. It also allows for the fine-tuning of the movement to match the desired effect.
How does the video editing tool contribute to the final product?
- The video editing tool is used to compile the generated videos into a cohesive narrative, add transitions between scenes, include background music, and add dubbing for the story, resulting in a finished children's picture book video.
What is the final step in creating the children's picture book video?
- The final step is to export the edited video, which can then be published on a video platform or used in other media formats.
Outlines
🎨 Introduction to New AI Functions for Children's Picture Books
The speaker, Static Electricity, introduces two new AI functions that have greatly simplified the creation of children's picture book videos. The first is Midjourney's Style Tuner, which improves picture consistency and is ideal for keeping characters consistent across a children's book. The second is the Runway Gen-2 update, which generates video with improved fidelity and consistency and allows the camera movement within the generated video to be adjusted. Using ChatGPT together with these tools, the speaker created a children's picture book video called 'Mia's Wishing Tree' in about an hour, and suggests this method as a viable way to earn extra income. They also give a brief overview of how the Style Tuner works and its impact on the design process.
📚 Using Style Tuner and Selecting Desired Styles
The paragraph explains how to use the Style Tuner to select and generate consistent styles for images. The speaker walks through the steps, from creating a channel on Midjourney to entering commands and keywords for the desired style. They discuss how to choose preferred styles from the generated groups of images and how to use the generated suffix to create new images in the same style. The paragraph also covers using the imagine command to generate images with a specific style and adjusting the result by adding different elements or changing the subject of the image.
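A hypothetical prompt of this kind (the actual prompts are not reproduced in this summary) might look like:
/imagine prompt: a little girl standing under a glowing wishing tree at night, children's picture book illustration --style aBc123Xy --ar 16:9
where --style carries the code produced by the Style Tuner (the code shown here is made up) and the subject can be changed from scene to scene while the visual style stays consistent.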
📖 Generating a Story and Visuals with AI Tools
The speaker details the process of using AI tools to generate both the story and the visuals for a children's picture book. They start with ChatGPT, asking it to write a warm bedtime story suitable for children aged 3-6 in soft, storytelling language. After receiving a satisfactory story about a girl named Mia and a wishing tree, they use the Style Tuner to establish a consistent visual style. The speaker then outlines how to turn the story into a storyboard and into prompts for Midjourney that produce an image for each part of the story. They also discuss the importance of the screen ratio when generating images for video.
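A sketch of the kind of request involved (the wording here is illustrative, not quoted from the video): 'Please write a short, warm bedtime story for children aged 3-6 about a little girl and a wishing tree. Use soft, simple, storytelling language, give the story a title, and then divide it into a set number of scenes (for example, ten) with a one-sentence picture description for each, so it can be used as a storyboard.'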
🌟 Creating Animated Videos with Runway Gen-2
The paragraph focuses on using Runway Gen-2 to animate the generated images and create a dynamic picture book video. It describes uploading images into Runway Gen-2 and adding text commands to guide the animation. The speaker explains the importance of adjusting the motion value and the camera motion settings to achieve subtle, natural movement in the video. They also mention the option to regenerate if the output does not meet expectations and the strategy of re-registering to gain more credits for video generation. The paragraph concludes with a demonstration of the final animated video generated using the described process.
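As a rough illustration of the kind of settings involved (the exact values are not given in this summary), a clip might use a low general motion value, for instance 2 or 3 out of 10, so the scene only drifts gently, plus a slight camera zoom, with a short text hint such as 'leaves swaying softly in the breeze, warm gentle light' to guide the movement; if the result looks too jittery, the clip is simply regenerated.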
🎵 Finalizing the Video with Music and Dubbing
The final paragraph covers the steps to complete the children's picture book video. It involves adding soft, soothing music to set the tone and generating narration for the story from the subtitles. The speaker demonstrates how to use the read-aloud (text-to-speech) feature to dub the text with a voice suitable for children's stories. They also suggest adding ambient sounds like bird calls or crickets for a more immersive experience. The paragraph concludes with the steps to export the final video, which can then be published on various platforms. The speaker also shares the full version of 'Mia and the Wishing Tree' as an example of the final product and encourages viewers to follow the tutorial carefully for a high-quality production.
Keywords
Midjourney
Style Tuner
Runway Gen-2
AI
ChatGPT
Storyboard
Text-to-Video
Puppet Method
Screen Ratio
Dubbing
Video Editing
Highlights
Midjourney has released a new style-tuning function, the 'Style Tuner', which greatly improves picture consistency, making it ideal for children's picture books.
Runway Gen-2 has been updated with significant improvements in video generation, including better fidelity and consistency.
The ability to adjust the amplitude of movement in the generated video is a key feature of the new Runway gen-2 update.
AI can now simplify the creation of children's picture book videos with enhanced effects.
Static Electricity used ChatGPT to create a children's picture book video called 'Mia's Wishing Tree' in about an hour.
The style tuner function allows for the creation of images with a fixed style, which is adjustable and consistent.
The style tuner can generate a series of images with a consistent style by using a specific suffix in the command.
Midjourney's style tuner function is particularly useful for creating content with character consistency, like in children's books.
ChatGPT (GPT-4) is utilized to write a warm bedtime story for children, with a focus on soft, storytelling language.
The story 'Mia’s Wishing Tree' was generated using ChatGPT, providing a coherent narrative for the picture book.
A storyboard was created for the story, dividing it into a specified number of images to align with the narrative.
The 'puppet method' in Midjourney is used to generate images with specific characteristics, such as a cute girl named Mia.
Runway Gen-2's text2video tool can transform images and text descriptions into animated videos.
The generated videos can be edited using a tool like CapCut, with transitions and music added for a polished final product.
Subtle movements and camera motions can be added to the videos for a more dynamic and engaging viewing experience.
The final step involves exporting the video, which can then be published on various platforms for children to enjoy.
The full version of 'Mia and the Wishing Tree' is showcased, demonstrating the end-to-end process of creating an AI-generated children's picture book video.
The tutorial emphasizes the importance of a beautiful production, encouraging viewers to follow along carefully for the best results.
Support for the video and channel is encouraged, highlighting the effort put into the AI design and production process.