Gemini Omni
  • Create
  • Agent
  • AI Image
  • AI Video
  • Pricing
Now officially launched and available to all public community members · March 2026

Wan 2.7 AI Video Generator | Professional Frame Controls

Alibaba’s Wan 2.7 is a cutting-edge AI video generator with first/last frame control, multi-reference input, and instruction-based editing. Create polished 5–15 second videos at 720P or 1080P resolution.

Detailed descriptions of scenes, actions, and styles will yield better generation quality.



Join the Discord community
Content policy notice
Any generation that does not comply with the guidelines will fail. Real people's faces, NSFW content, violence, and potentially infringing inputs may be rejected by model-level safety checks. Stylized art, fictional characters, products, and AI-generated subjects tend to work best.
Key Wan 2.7 Capabilities

What Sets Wan 2.7 Apart From Other AI Video Tools

Wan 2.7 brings first/last frame control, multi-reference input, intuitive instruction-based editing, and up to 15-second output to Alibaba’s portfolio of video generation models.

First & Last Frame Control

Define your desired opening and closing frames. Wan 2.7 will generate the seamless motion sequence between those points, giving you precise cinematic control without needing complex prompt work.

Lock in both your opening and final compositions before generation begins.
Perfect for product reveals, character transitions, and clean scene cuts.
Eliminates the guesswork involved in hitting a specific final visual state.
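To make the first/last frame workflow concrete, here is a minimal sketch of how such a generation request might be assembled. The function and field names below are illustrative assumptions for this page, not a documented Wan 2.7 API.

```python
def build_flf2v_request(first_frame_url, last_frame_url, prompt,
                        duration_s=5, resolution="1080P"):
    """Assemble a hypothetical first/last-frame (FLF2V) generation request.

    All field names here are assumptions, not a published Wan 2.7 schema.
    """
    if duration_s not in (5, 10, 15):
        raise ValueError("preset clip lengths are 5, 10, or 15 seconds")
    return {
        "mode": "flf2v",                  # first/last frame video generation
        "first_frame": first_frame_url,   # opening composition to lock in
        "last_frame": last_frame_url,     # final visual state to hit
        "prompt": prompt,                 # optional motion/style guidance
        "duration": duration_s,
        "resolution": resolution,
    }
```

The key point the sketch illustrates: you supply only the two endpoint compositions plus optional guidance, and the model is responsible for interpolating the motion between them.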

Simultaneous Multi-Reference Input

Upload up to 5 reference videos at once, and Wan 2.7 will use them to guide character appearance, scene environment, and overall motion style for your output.

Add up to 5 reference videos to guide your final video’s look and feel.
Preserves consistent visual styling for characters and scenes throughout your clip.
Ideal for brand marketing content, fashion showcases, and maintaining product visual continuity.

Prompt-Driven Video Editing

Modify existing videos using plain natural language. Swap backgrounds, adjust lighting, update clothing, or alter style without having to regenerate the entire clip from scratch.

Outline your desired changes in simple text, no advanced timeline editing needed.
Swap backgrounds, tweak clothing, or refine lighting with a single prompt.
Make quick revisions without losing the original clip’s core motion and timing.
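An instruction-based edit can be sketched the same way: a source clip plus a plain-text instruction, with the original motion and timing left untouched. Again, the field names are hypothetical illustrations, not a documented API.

```python
def build_edit_request(source_video_url, instruction):
    """Assemble a hypothetical instruction-based edit request.

    Only the aspects named in the instruction (background, lighting,
    clothing, style) are regenerated; the source clip's core motion
    and timing are preserved. Field names are assumptions.
    """
    if not instruction.strip():
        raise ValueError("an edit needs a natural-language instruction")
    return {
        "mode": "edit",
        "source_video": source_video_url,
        "instruction": instruction,  # e.g. "swap the background to a beach at dusk"
    }
```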

Extended 15-Second Clip Length

Create clips up to 15 seconds long, which is 3 times longer than prior Wan video models. This length is perfect for a full product showcase or short standalone scene.

Choose output lengths of 5, 10, or 15 seconds tailored to your project’s needs.
Offers 480P, 720P, and 1080P resolution options.
Compatible with both 16:9 landscape and 9:16 portrait aspect ratios.
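The resolution and aspect-ratio options above map to pixel dimensions roughly as follows. The exact values Wan 2.7 renders at are not published on this page, so the dimensions below are the conventional ones for each preset, stated as an assumption:

```python
# Conventional pixel dimensions per preset; the exact values Wan 2.7
# renders at are an assumption here, not published specs.
PRESETS = {"480P": (854, 480), "720P": (1280, 720), "1080P": (1920, 1080)}

def output_dimensions(resolution, aspect="16:9"):
    """Return (width, height) for a resolution preset and aspect ratio."""
    width, height = PRESETS[resolution]
    if aspect == "9:16":   # portrait: swap the landscape width/height
        return (height, width)
    if aspect != "16:9":
        raise ValueError("supported aspect ratios are 16:9 and 9:16")
    return (width, height)
```

For example, a 1080P portrait clip would come out at 1080x1920 under this convention.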
Explore Additional AI Models

More AI Video Generators to Explore

Compare Wan 2.7 to other leading video generation models available on our platform.

Kling v3.0

Native audio-integrated video generation powered by Kling’s 3.x motion language.

Discover our curated selection of partner AI models

Kling v3.0 Pro

Pro-level Kling 3.x generation with enhanced fidelity and finer detail.

Discover our curated selection of partner AI models

Hailuo 02

MiniMax’s latest video generation model featuring sharp, dynamic motion.

Discover our curated selection of partner AI models

PixVerse V6: Professional AI Video Generator

PixVerse V6 stands as PixVerse’s most advanced video generation model yet, boasting 20+ cinema-grade camera controls, native audio synchronization, a multi-shot sequencing engine, and support for 15-second 1080p clips. Turn text prompts or reference images into polished cinematic footage.

Discover our curated selection of partner AI models
FAQs

Frequently Asked Questions

About Gemini Omni, Google Omni, and current-generation support

What is Wan 2.7?

Wan 2.7 is Alibaba Tongyi Lab’s newest video generation model, launched in March 2026. Building on Wan 2.6, it adds first/last frame control, support for up to 5 concurrent reference video inputs, 9-grid image input, instruction-based editing, and more accurate motion physics.

What is first/last frame control in Wan 2.7?

First/last frame control (shortened to FLF2V) lets you define both the opening and closing frames of your target video. Wan 2.7 automatically generates the smooth motion sequence between those two points, giving you exact cinematic control. Pick a starting composition and an end state, and the model handles the interpolation between them.

How long can videos be with Wan 2.7?

Each clip generated with Wan 2.7 can run 2 to 15 seconds long — a major leap from prior Wan models that maxed out at roughly 5 seconds. On this platform, you can select preset clip lengths of 5, 10, or 15 seconds for your projects.

What modes does Wan 2.7 support?

Wan 2.7 supports text-to-video, image-to-video, first/last frame video generation (FLF2V), and instruction-based video editing. On this page, you can access text-to-video and image-to-video workflows.

What resolutions does Wan 2.7 support?

Wan 2.7 offers output resolutions of 480P, 720P, and 1080P. Both 16:9 landscape and 9:16 portrait aspect ratios are fully compatible with this model.

Is Wan 2.7 open source?

The prior Wan 2.1 model was fully open-sourced under the Apache-2.0 license. At launch, Wan 2.7’s official open-source release status hadn’t been confirmed — visit the Alibaba Wan GitHub repository at github.com/Wan-Video for the most up-to-date details.

How does Wan 2.7 compare to Wan 2.6?

Wan 2.7 introduces several key upgrades over Wan 2.6: first/last frame control, 9-grid multi-image input, support for up to 5 reference video inputs, and instruction-based editing, all of which were absent in the earlier model. The maximum clip length was also extended to 15 seconds, and motion physics accuracy plus character consistency saw notable improvements.

Still have questions? We're here to help.

Join Discord
Resources
  • Blog
  • Create
  • Scenes
  • Works
  • Prompts
  • Image to Prompt
  • Batch Image to Prompt
Company & Legal
  • About
  • Contact
  • Privacy Policy
  • Terms of Service
  • Refund Policy
Image Models
  • Z-Image
  • GPT-4o
  • Flux 2
  • Flux 2 Pro
  • Flux 2 Klein
  • Qwen Image 2
  • Seedream 4.0
  • Seedream 4.5
  • Seedream 5.0
  • Grok Imagine
  • Gemini 3 Pro Image
  • Nano Banana Flash
  • Nano Banana 2
Video Models
  • Google Veo 3.1
  • Google Veo 3.1 Lite
  • Google Veo 3.1 Pro
  • Seedance 1.5 Pro
  • Seedance Fast
  • Seedance Quality
  • Seedance 2.0
  • Hailuo 02
  • Kling v2.6
  • Kling v2.5 Turbo
  • Kling v2.1
  • Kling v2.1 Master
  • Kling O1
  • Kling v3.0
  • Kling v3.0 Pro
Friends
  • Gemini Omni
  • Seedream AI
  • Kling AI
Gemini Omni

Gemini Omni prompts · Current model generation · Google Omni watchlist

X (Twitter) · Discord · Email

Gemini Omni is an independent third-party AI video workspace and Google Omni watchlist. We are not affiliated with Google, Gemini, Veo, OpenAI, ByteDance, or any model provider. Model availability, names, pricing, and capabilities may change.

© 2026 Gemini Omni. All rights reserved. DREAMEGA INFORMATION TECHNOLOGY LLC

[email protected]