In 2022, a game designer named Jason Allen won first prize in the digital art category at the Colorado State Fair with his Midjourney-generated artwork "Théâtre D'opéra Spatial," causing a massive stir in the art world. Back then, the debate was whether "AI-generated images count as art." Four years later, Midjourney has evolved from a novelty toy on Discord into one of the world's most mainstream AI image generation platforms—though the journey hasn't been smooth.
In 2025, Midjourney released the V7 model, making "Model Personalization" the default, introducing Omni Reference for cross-image consistency, and adding native video generation. In March 2026, a brand-new architecture codenamed V8 arrived in Alpha, boasting a ground-up rebuild with 5x faster generation and native 2K high-resolution rendering, but it also sparked a storm of user backlash.
This review is based on the latest public information and multi-party test data as of 2026, revealing Midjourney's true face: what it can do, what it can't, how it fundamentally differs from other AI image tools, and whether it's worth your subscription.
1. What Is Midjourney? Clarifying Its Positioning
Midjourney is not a Photoshop plugin, nor a simple toy where you describe and it draws. In plain terms, it is an AI image generation engine whose core competitive advantage is "aesthetic taste."
Its operating logic is completely different from tools like Stable Diffusion and DALL·E. Stable Diffusion pursues "precise control"—every line, every compositional element must obey you. Midjourney pursues "taste output"—you give it a vague concept, and it returns an image ready for your portfolio. Adobe Firefly sits in between, deeply integrated into the Adobe ecosystem with a focus on commercial safety and workflow. Midjourney is better suited for "visual exploration"—its output carries an indescribable cinematic and art-direction feel, as if thoughtfully composed by a professional art director.
The company behind Midjourney is also called Midjourney, led by founder David Holz and headquartered in San Francisco. Unlike OpenAI and Stability AI, Midjourney has never raised large-scale funding and remains a highly independent, lean team. In its early days, Midjourney could only be used via Discord—you had to join its Discord server and type the /imagine command. Starting in 2025, Midjourney launched an independent web application (midjourney.com), allowing users to operate directly in the browser without being tied to Discord.
2. Version Evolution: From V6 to V8, Every Step at a Critical Point
To understand Midjourney's current state, you must first grasp its version evolution path. Midjourney's version numbers are not just a number game—each major version represents a significant shift in underlying architecture and output style.
V6 (Main model in 2024)
Midjourney's workhorse model of 2024, V6 greatly improved image coherence and detail. It could already generate extremely high-quality images, though prompt accuracy and style stability still had shortcomings. V6 cemented Midjourney's status as the industry benchmark: at the time, its image quality was clearly ahead of the competition.
V7 (Released 2025, still the default model in 2026)
V7 was released on April 3, 2025, and became the default model on June 17, 2025. Official documentation positions V7 around four core upgrades: higher text and image prompt accuracy, richer textures with more coherent details, Draft Mode for quick sketching, and Omni Reference with enhanced personalization workflow. V7 is especially suited for "stylized, taste-driven image generation"—the more aesthetic the task, the better V7 performs. It is also the first Midjourney version with Model Personalization enabled by default—meaning even if you don't adjust any parameters, V7 is quietly adapting its output style based on your usage habits.
Niji 7 (Released January 9, 2026)
An anime-specialized model co-developed by Midjourney and Spellbrush, tailored for 2D and Japanese/Korean illustration styles. Niji 7's core breakthrough is a leap in image coherence: facial features, especially the subtle details of eye highlights, reflections, and pupil structure, exhibit unprecedented clarity and consistency. For game illustrators, light novel artists, and VTuber character designers, this was arguably the single most worthwhile upgrade of early 2026.
V8 Alpha (Released March 17, 2026)
Midjourney describes it as a "brand-new model," not an improvement of V7—a ground-up rebuilt architecture. V8 brought 5x generation speed, native 2K high resolution, greatly improved text rendering, and better prompt understanding. But the launch of V8 Alpha wasn't met with universal applause—many long-time users complained that V8 broke old parameters, flattened image texture, and drastically reduced the sense of artistry and surprise. To sum up V8 in one sentence: Faster and more stable, but lost its soul.
V8.1 Alpha (Released April 15, 2026)
Just one month later, Midjourney launched V8.1 Alpha, focusing on "aesthetic return"—officially stating that V8.1 returns to V7's signature classic aesthetics while retaining V8's advanced features. More importantly, V8.1's native 2K high-res rendering is 3x faster and 3x cheaper than V8, with standard resolution output speed increased by 50%. This can be seen as Midjourney's emergency fix after V8's "flop."
3. Core Features Deep Dive
1. Draft Mode — Reduce Exploration Costs by 60-70%
Draft Mode is V7's biggest workflow upgrade. According to official documentation, Draft Mode generates about 10x faster and uses about half the GPU time of standard mode. Running 5 to 10 iterations in Draft Mode during the exploration phase to lock in concept and composition, then clicking "Enhance" to re-render at full quality, can save roughly 60-70% of your GPU quota in exploration-heavy projects.
On the web version, enabling Draft Mode switches the prompt bar to "conversation mode"—you can describe the desired image in natural language, see results in seconds, then directly say "make the lighting warmer" or "change the background to a rooftop at night," and the system will automatically adjust the prompt and regenerate.
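As a rough sketch of why a Draft-first workflow is cheaper, here is a toy cost model in Python. The 0.5 ratio comes from the "half the GPU time" claim above; the function and constants are invented for illustration, and actual savings depend on how many standard-mode re-rolls Draft Mode lets you skip.

```python
# Toy cost model for Draft-first exploration. All numbers are relative
# units, not real GPU minutes; Midjourney's actual accounting is opaque.

STANDARD_COST = 1.0   # relative GPU cost of one standard-mode render
DRAFT_COST = 0.5      # Draft Mode: roughly half the GPU time, per the docs

def workflow_cost(iterations: int, use_draft: bool) -> float:
    """Cost of `iterations` exploratory renders plus one final full-quality render."""
    per_iteration = DRAFT_COST if use_draft else STANDARD_COST
    return iterations * per_iteration + STANDARD_COST

standard_only = workflow_cost(10, use_draft=False)  # 11.0 units
draft_first = workflow_cost(10, use_draft=True)     # 6.0 units
print(standard_only, draft_first)
```

Under these toy numbers, ten draft iterations plus one Enhance cost a little over half of ten standard renders plus a final; heavier exploration widens the gap further.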
2. Model Personalization — Spend 15 Minutes Setting It Up, and Every Subsequent Image Will Suit Your Taste Better
Model Personalization is V7's most underrated feature. It allows V7 to learn your aesthetic preferences and automatically adjust default outputs to match your taste. Setup: Go to the Midjourney website, click the "Personalize" button in the sidebar, and choose preferences from pairs of images. At least 200 pairs must be rated to unlock personalization; heavy users often go beyond 500 pairs for more meaningful calibration. Once set, your personalization profile is automatically applied to every V7 generation. The value: the more you use it, the better Midjourney understands you.
3. Omni Reference — Keep Multiple Images Stylistically Consistent
Omni Reference solves the long-standing pain point of "making multiple images look like they belong to the same series." You can specify one or more reference images to maintain consistent character appearance, color schemes, and artistic style in subsequent generations—revolutionary for those needing series output, such as brand visual design, game character design, or picture book illustration.
4. Style Reference (--sref)
The --sref parameter allows users to upload or reference existing images as a style blueprint, achieving highly consistent style transfer. On Niji 7, sref performance far exceeds previous generations, with significantly reduced style drift, regarded by the community as a powerful tool for repeatable character design and IP creation.
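For illustration, a typical style-reference invocation might look like the following. The prompt text and image URL are placeholders, and --sw (style weight, controlling how strongly the reference applies) is assumed here alongside --sref:

```
/imagine prompt: a girl walking through a neon night market --sref https://example.com/style-ref.png --sw 300
```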
5. Moodboards
The Moodboards feature launched with Niji 7 allows users to create a visual reference collection, helping the model more accurately capture the desired tone and style direction. Extremely valuable for early-stage design inspiration and client communication—you can build a Moodboard first, let Midjourney understand your aesthetic direction, then start generating with purpose.
6. Describe — Reverse-Engineer Prompts from Images
Midjourney's Describe function can analyze any uploaded image and reverse-generate multiple possible text prompts. Particularly useful when you see a style you like but don't know how to describe it, want to analyze the components of a visual style, or want to learn better prompt writing. V8.1 further strengthens this feature for more precise reverse engineering.
4. Use Cases & Target Audience — Who Should and Shouldn't Use It?
✅ Who Should Use Midjourney
- Brand Visual Designers & Art Directors: Midjourney is currently the best tool for generating portfolio-ready images in one click. Its lighting, compositional taste, and material representation remain industry benchmarks.
- Game Illustrators & Light Novel Artists: Niji 7 is tailor-made for anime styles; no general-purpose AI can match its performance in Japanese-style animation and Eastern illustration. Eye highlights, line refinement, flat compositional treatment—these details are Niji's absolute strengths.
- Explorers in Early Creative Phases: Draft Mode makes rapid experimentation extremely economical. Running dozens of directions with Midjourney to spark inspiration and lock in visual style is currently the most cost-effective workflow.
- Vtubers & IP Creators: The combination of sref style reference and Omni Reference cross-image consistency makes Midjourney the ideal tool for building character visual systems.
❌ Who Should Not Use Midjourney
- Users Needing Precise Layouts & Design Systems: Midjourney is unsuitable for precise layout design, strict dimension output, or pixel-level fine editing.
- Design Workflows Requiring 100% Reproducibility: Midjourney's outputs are inherently uncertain. For deterministic "input A always yields output B," Stable Diffusion or Adobe Firefly is more suitable.
- Students or Hobbyists on Very Tight Budgets: Midjourney has no free plan; the cheapest Basic plan starts at $10/month.
- Users Pursuing Photorealistic Precision: While Midjourney handles realistic styles and lighting well, if you need photorealism convincing enough to pass for a photograph (e.g., architectural visualization, product rendering), FLUX Pro or DALL·E may be better in some scenarios.
5. Pricing & Cost — Can You Afford It?
As of April 2026, Midjourney offers four subscription plans:
| Plan | Monthly | Annual (Monthly Avg.) | Fast GPU Time | Relax Mode | Stealth Mode |
|---|---|---|---|---|---|
| Basic | $10/mo | $96/yr ($8/mo) | 3.3 hrs/mo (200 min) | None | None |
| Standard | $30/mo | $288/yr ($24/mo) | 15 hrs/mo | Unlimited images | None |
| Pro | $60/mo | $576/yr ($48/mo) | 30 hrs/mo | Unlimited images + SD videos | Yes |
| Mega | $120/mo | $1,152/yr ($96/mo) | 60 hrs/mo | Unlimited images + SD videos | Yes |
Extra GPU time is uniformly $4/hr across all plans. Annual billing gets a 20% discount.
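One way to compare the plans is cost per Fast GPU hour. The sketch below computes it from the table above; figures are as quoted in this review, so verify current pricing on midjourney.com before relying on them.

```python
# Effective price per Fast GPU hour for each plan, from the pricing table.

plans = {  # plan name: (monthly price in USD, Fast GPU hours per month)
    "Basic":    (10.0,  3.3),
    "Standard": (30.0, 15.0),
    "Pro":      (60.0, 30.0),
    "Mega":     (120.0, 60.0),
}
OVERAGE_PER_HOUR = 4.0  # extra Fast GPU time, same rate on all plans

for name, (price, hours) in plans.items():
    print(f"{name:8s} ${price / hours:5.2f} per Fast hour")
```

By this metric, Standard, Pro, and Mega all work out to $2.00 per Fast hour, half the $4 overage rate, so if you consistently run out of Fast time, upgrading a tier is cheaper than buying extra hours.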
Plan Selection Advice:
- Basic for low-frequency personal users wanting a low barrier to entry. Note: once the 3.3 hours of Fast time are used up, you can't generate anymore (Basic has no Relax Mode).
- Standard is the best default for most individual users. Relax Mode's "unlimited generations" is the core selling point.
- Pro for heavy creators needing Stealth Mode privacy and higher concurrency.
- Mega for extremely high-frequency creative teams.
Beyond official subscriptions, there are also pay-per-use API services (e.g., EvoLink) with Draft Mode as low as $0.0375 per request, no monthly commitment, ideal for users wanting flexible cost control.
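Whether pay-per-use beats a subscription comes down to volume. A quick break-even check under the rates quoted above (assuming the alternative would be the Standard plan; purely illustrative arithmetic):

```python
# Break-even between pay-per-use Draft requests and a Standard subscription.

PER_REQUEST = 0.0375       # USD per Draft Mode request (quoted third-party rate)
STANDARD_MONTHLY = 30.0    # USD per month for the Standard plan

break_even_requests = STANDARD_MONTHLY / PER_REQUEST
print(break_even_requests)  # 800.0
```

Below roughly 800 Draft requests a month, pay-per-use is cheaper under these rates; above that, the subscription wins even before counting Relax Mode's unlimited generations.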
6. Pros & Cons — An Honest, No-Hype Summary
✅ Pros
- Irreplaceable Aesthetic Taste: From V6 to V8.1, Midjourney's core competitive edge remains its "aesthetic"—the cinematic lighting, artistic composition, and rich textures are still king.
- Draft Mode's Cost Revolution: Halves GPU cost during exploration, combined with conversational adjustments, significantly boosts creative workflow efficiency.
- Niji 7's Anime Dominance: In the 2D style generation arena, Niji has built an unshakeable moat.
- Personalization Makes the Tool Better Over Time: After rating 500+ pairs, Midjourney gradually aligns with your aesthetic habits, truly adapting the tool to the person.
- Moodboards & sref Lock Style: Help you nail down visual direction early on, keeping the entire project's style consistent.
❌ Cons
- No Free Plan: The minimum $10/mo barrier deters many potential trial users.
- V8 Alpha's Aesthetic Disconnect: Old parameters broke, image texture flattened, artistry faded—exposing the tension between technical upgrades and aesthetic legacy.
- Less Precise Controllability Than Competitors: Not ideal when you need exact control over every element's position, size, and color.
- High Cost for High-Quality Modes: Though V8.1 improved significantly, it remains relatively pricey compared to rivals.
- Cloud-Dependent, No Offline Capability: Entirely reliant on cloud computing, with no local deployment option.
- Non-Deterministic Output: The same prompt run twice can yield vastly different results, making commercial reproducibility challenging.
7. Midjourney vs. DALL·E vs. Stable Diffusion vs. Adobe Firefly — Positioning Dictates Choice
Many first-time AI image generation users ask, "With so many tools, which one should I pick?" The answer: it depends on what you want.
- Midjourney: Positioned as an "aesthetic engine," it excels at producing tasteful images. The first choice for designers, art directors, and brand creatives.
- Adobe Firefly: Positioned as a "commercial-grade production tool," deeply integrated into the Adobe ecosystem with unmatched advantages in commercial copyright and compliance.
- Stable Diffusion: Positioned as an "open-source controllable platform"—deploy locally, fine-tune every parameter, use custom models. Best for those wanting full control.
- DALL·E (OpenAI): Positioned as a "general-purpose image generator," its biggest strength is integration within the ChatGPT ecosystem with an extremely low usage barrier.
In terms of image quality, for realistic photography styles, Midjourney V7 and FLUX Pro each have their strengths. Midjourney particularly excels at lighting and atmosphere, while FLUX Pro has a slight edge in anatomical accuracy (especially hands). As of 2026, the "hand problem" in AI image generation has been largely solved, though not perfectly.
8. The V8 Controversy: When the "Artistic Toy" Became a "Commercial Tool"
The launch of Midjourney V8 Alpha in March 2026 could be considered the biggest reputation crisis in the product's history. With no warm-up, no announcement, a beta was simply dropped into the community, touting faster, clearer, more controllable—and users responded with a barrage of complaints.
V8 was indeed a productivity-level upgrade: nearly 5x faster generation, text rendering finally usable, better understanding of complex prompts, native 2K high-res support, and solid details. But the cost of these advances: old parameters broke, old workflows were ruined, image texture flattened, lighting weakened, and the sense of artistry and surprise plummeted. One user on X said bluntly, "Is V8 really an improvement? So far, absolutely not."
Fortunately, the Midjourney team responded quickly. Just one month later, V8.1 Alpha made "aesthetic return" its core mission. This episode reflects the shared destiny of all AI products: the more you iterate technically, the harder it is to satisfy all users; the more you try to be everything, the easier it is to lose the very trait that made you attractive in the first place.
9. Usage Guide for Chinese-Speaking Users
Midjourney supports multilingual operation, including Simplified Chinese, and the accuracy of Chinese prompt recognition is already quite high. However, users should distinguish between the official version and third-party localized versions. The official website (midjourney.com) provides the original English interface; some Chinese-speaking users access the tool via third-party localization plugins or independent Chinese-version websites. These channels are not official, so exercise caution regarding account and payment security.
The friendliest options for Chinese-speaking users:
- Enter Chinese prompts directly on the official website; current recognition is accurate enough that translating into English first is unnecessary.
- Join the official Midjourney server on Discord and use the /imagine command in its channels.
- Subscribe to an official plan and work entirely through the web version.
Niji 7 can be selected by appending --niji 7 to the prompt on Discord, or simply chosen from the Version dropdown menu on the web interface.
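For example, a Discord invocation selecting the Niji model and a portrait aspect ratio might look like this (the prompt itself is an invented placeholder, written in Chinese since Chinese prompts are supported; --ar sets aspect ratio):

```
/imagine prompt: 银发骑士，柔和轮廓光，精细的眼部高光 --niji 7 --ar 2:3
```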
10. Conclusion: Is Midjourney Still Worth Using in 2026?
If I had to sum it up in one sentence: Midjourney remains the "aesthetic benchmark" for AI image generation in 2026—its taste, composition, and lighting treatment still have no real rival in the industry. But it is undergoing a difficult transformation: from "artist's toy" to "business user's tool," from "winning by surprise" to "competing on stability and speed." V8's reputation flop exposed the growing pains of this transition; V8.1's quick fix demonstrated the team's responsiveness.
Final advice:
- If you pursue "beautiful images ready to use without much tweaking"—Midjourney V7 and V8.1 are both excellent choices; the annual Standard plan ($24/mo) offers the best value.
- If you're an anime-style creator—Niji 7 is an essential tool with no substitute.
- If you need a precise, controllable, reproducible commercial design pipeline—consider Adobe Firefly or Stable Diffusion.
- If you just want to try AI image generation on a budget—DALL·E or free Stable Diffusion options are better for starters.
- If you're an existing user unsure about upgrading—test V8.1 at alpha.midjourney.com first to see if it matches your aesthetic expectations before deciding.
Midjourney's story tells us a simple truth: in the AI era, technology can be caught up with, parameters can be stacked, but "taste" is the hardest competitive moat to replicate. That's why, in 2026 with so many AI image tools blooming, Midjourney still deserves serious attention.