Lightricks LTXV-13B: AI Video Generation Gets a Speed and Accessibility Jolt
Lightricks has launched its LTX Video 13-billion-parameter model, promising high-quality AI video generation up to 30 times faster, even on consumer hardware, and championing an open-source approach.
Lightricks has introduced LTXV-13B, an advanced 13-billion-parameter AI video generation model.
The model features a "multiscale rendering" technique, enabling high-quality video creation up to 30 times faster than comparable models, specifically on consumer-grade GPUs.
LTXV-13B is open-source, available on platforms like GitHub and Hugging Face, with free licensing options for small businesses.
It integrates into Lightricks' LTX Studio, offering users enhanced creative controls and is trained on ethically sourced data from Getty Images and Shutterstock.
Lightricks, a company known for its popular creative apps like Facetune and Videoleap, has made a significant announcement with the release of its LTX Video 13-billion-parameter model (LTXV-13B). This development aims to substantially alter how creators approach AI video, emphasizing speed, quality, and accessibility without requiring enterprise-level hardware.
A Leap in AI Video Generation: Introducing LTXV-13B
Lightricks' LTXV-13B model is presented as a substantial upgrade in the company's generative AI capabilities. It's designed to empower creators to produce videos with impressive detail, coherence, and a high degree of control. This new model builds upon the foundation of its 2-billion-parameter predecessor, which already garnered attention for its efficiency on consumer hardware.
"The introduction of our 13B parameter LTX Video model marks a pivotal moment in AI video generation with the ability to generate fast, high-quality videos on consumer GPUs," stated Zeev Farbman, co-founder and CEO of Lightricks. "Our users can now create content with more consistency, better quality, and tighter control. This new version of LTX Video runs on consumer hardware, while staying true to what makes all our products different - speed, creativity, and usability."
The Core Technology: Multiscale Rendering
A key technical component behind LTXV-13B's performance is "multiscale rendering." This approach delivers both speed and quality through a layered generation process. The model first drafts the video in lower detail to capture coarse motion, a method that consumes fewer computational resources. This initial draft then guides subsequent stages where the model progressively adds structure, lighting, and micro-motions. Lightricks claims this results in high-fidelity video with render times that can be more than 30 times faster than comparable models, without sacrificing visual realism.
Farbman elaborated on this, explaining that the process is inspired by artists who start with rough sketches. "You’re starting on the coarse grid, getting a rough approximation of the scene, of the motion of the objects moving, etc. And then the scene is kind of divided into tiles, and every tile is filled with progressively more details." This method also manages memory efficiently by tapping into compressed latent space, limiting VRAM requirements by tile size rather than final resolution.
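To make the layered process concrete, here is a toy sketch of that coarse-to-fine flow in Python, with NumPy arrays standing in for real diffusion latents. The function names and the noise-based "detail" step are illustrative stand-ins, not Lightricks' code; the point is only that the draft pass touches few pixels and the refinement pass bounds memory by tile size rather than final resolution.

```python
import numpy as np

def draft_coarse(frames, h, w, rng):
    """Stage 1: cheap low-resolution draft capturing global motion (toy stand-in)."""
    return rng.standard_normal((frames, h, w)).astype(np.float32)

def refine_tile(coarse_tile, scale, rng):
    """Stage 2: upsample one tile of the draft and layer fine detail on top."""
    up = coarse_tile.repeat(scale, axis=1).repeat(scale, axis=2)  # nearest-neighbor upsample
    return up + 0.1 * rng.standard_normal(up.shape).astype(np.float32)

def multiscale_render(frames=8, coarse=(24, 32), scale=4, tile=32, seed=0):
    rng = np.random.default_rng(seed)
    draft = draft_coarse(frames, *coarse, rng)   # coarse pass: few pixels, low cost
    H, W = coarse[0] * scale, coarse[1] * scale
    video = np.empty((frames, H, W), dtype=np.float32)
    for y in range(0, H, tile):                  # refine tile by tile: peak memory
        for x in range(0, W, tile):              # tracks tile size, not H x W
            ct = draft[:, y//scale:(y+tile)//scale, x//scale:(x+tile)//scale]
            video[:, y:y+tile, x:x+tile] = refine_tile(ct, scale, rng)
    return video

print(multiscale_render().shape)  # (8, 96, 128)
```

Note how the inner loop only ever materializes one upsampled tile at a time, which mirrors the VRAM claim in the paragraph above.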
Accessibility and Efficiency on Consumer Hardware
One of the most significant differentiators for LTXV-13B is its ability to operate effectively on consumer-grade GPUs. This contrasts sharply with many other advanced AI video models from companies like Runway, Pika Labs, and Luma AI, which often demand enterprise-grade GPUs and extensive VRAM, making them impractical for many individual creators and small studios. The LTXV-13B model leverages kernel optimizations, including efficient Q8 kernels, to run well on lower-resource devices, even laptops.
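The memory arithmetic behind 8-bit kernels is easy to demonstrate. The snippet below is a generic symmetric int8 weight quantization in PyTorch, not Lightricks' actual Q8 kernel, but it shows why 8-bit weights halve the footprint of 16-bit ones:

```python
import torch

# Generic symmetric int8 quantization of a weight tensor. This illustrates
# the memory math behind 8-bit kernels; it is NOT Lightricks' Q8 kernel.
w = torch.randn(4096, 4096, dtype=torch.float16)   # a 16-bit weight matrix
scale = w.abs().max() / 127.0                      # per-tensor scale factor
w_q8 = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)

print(w.element_size() * w.nelement() / 2**20, "MiB fp16")      # 32.0 MiB
print(w_q8.element_size() * w_q8.nelement() / 2**20, "MiB int8")  # 16.0 MiB

# A kernel dequantizes on the fly: w is approximated by w_q8 * scale.
err = (w - w_q8.to(torch.float16) * scale).abs().mean()
print("mean abs error:", err.item())
```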
Open Source Commitment and Community Collaboration
Lightricks is continuing its commitment to the open-source community by making LTXV-13B available on Hugging Face and GitHub. The company is also offering the 13B model free to license for enterprises with under $10 million in annual revenue. This strategy aims to make advanced generative AI accessible to a broader range of creative companies and individuals.
"By consistently refining our models and working with the open community, we've built an AI system that generates physically natural movement while preserving artistic control," added Yoav HaCohen, Director of LTX Video at Lightricks.
Video: Two AI-generated rabbits, rendered on a single consumer GPU, stride off after a brief glance at the camera; an unedited four-second sample from Lightricks' new LTXV-13B model.
The development of LTXV-13B has incorporated several key open-source advancements, including:
VACE Model Inference: Advanced video generation and editing tools.
Unsampling Controls for Video Editing: Tools to reverse noise and refine frame granularity.
Kernel Optimization: Efficient Q8 kernel usage for performance on lower-resource devices.
Spatial-Temporal Guidance (STG): Enhances denoising control for more stable video output.
TeaCache: A caching mechanism that can reduce inference time significantly (see the sketch after this list).
Inversion (FlowEdit): Precision noise reduction for higher-quality editing.
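Of these components, TeaCache is perhaps the easiest to illustrate. The idea, roughly, is to reuse a cached model output when successive denoising steps would change it only marginally. The sketch below is a simplified reading of that idea, not the published TeaCache implementation; the change metric and threshold are placeholders:

```python
import numpy as np

class TeaCacheLike:
    """Simplified TeaCache-style step skipping: if the model input changed
    little since the last full evaluation, reuse the cached output instead
    of running the expensive denoiser. Metric and threshold are illustrative."""

    def __init__(self, model, rel_threshold=0.05):
        self.model = model
        self.rel_threshold = rel_threshold
        self.prev_input = None
        self.cached_output = None
        self.skipped = 0

    def __call__(self, x):
        if self.prev_input is not None:
            rel_change = (np.abs(x - self.prev_input).mean()
                          / (np.abs(self.prev_input).mean() + 1e-8))
            if rel_change < self.rel_threshold:
                self.skipped += 1
                return self.cached_output      # cheap cache hit
        self.prev_input = x.copy()
        self.cached_output = self.model(x)     # expensive full evaluation
        return self.cached_output

# Toy usage: a "denoiser" that barely changes its input each step.
denoise = TeaCacheLike(lambda x: 0.98 * x)
x = np.random.default_rng(0).standard_normal(1024)
for _ in range(50):
    x = denoise(x)
print("steps skipped:", denoise.skipped)
```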
Ethical Training and Commercial Safety
Addressing a critical concern in AI development, Lightricks has trained LTXV-13B using licensed content. The company has entered into strategic partnerships with leading media asset providers Getty Images and Shutterstock. These collaborations give Lightricks access to extensive libraries of high-quality video assets for model training, reinforcing its mission to build ethically trained, visually compelling, and commercially safe generative tools. This approach helps mitigate the risk of copyright infringement issues for users.
Creative Control within LTX Studio
The LTXV-13B model is integrated into Lightricks' flagship storytelling platform, LTX Studio. This premium web app allows creators to outline ideas using text prompts and progressively refine them into professional videos. LTX Studio provides access to a suite of advanced creative tools, including:
Keyframe editing
Camera motion control
Character and scene-level motion adjustment
Multi-shot sequencing and editing
LTX Studio is designed for marketing teams, advertising studios, and other content creators, enabling them to transform ideas into storyboards, pitch decks, and polished videos. The platform also supports the integration of other models, such as Google's Veo 2 and the Flux model, allowing users to switch between different AI engines for various project needs.
How to Try LTX Video
Creators and developers interested in exploring LTXV-13B have several avenues:
LTX Studio: The most direct way for creators to use the model with a full suite of editing tools is through the LTX Studio platform.
Free Licensing for Small Businesses: Companies with annual revenue under $10 million can license the 13B model for free. Interested parties can inquire about the API via ltxv-licensing@lightricks.com.
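For developers who want to work with the open weights directly, recent versions of Hugging Face's diffusers library include an LTXPipeline for earlier LTX Video checkpoints. Whether the 13B weights load through the same pipeline and repo id is an assumption here; the generation settings below are illustrative, so check the model card on Hugging Face before relying on them:

```python
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

# Sketch of running LTX Video from the open Hugging Face weights.
# LTXPipeline covers earlier LTX Video checkpoints; whether the 13B
# weights use the same repo id and pipeline is an assumption here.
pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video",
                                   torch_dtype=torch.bfloat16)
pipe.to("cuda")

video = pipe(
    prompt="Two rabbits glance at the camera, then stride off down a path",
    width=704, height=480,        # illustrative settings, not tuned
    num_frames=121,
    num_inference_steps=40,
).frames[0]

export_to_video(video, "rabbits.mp4", fps=24)
```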
Lightricks' LTXV-13B represents a notable step in making powerful AI video generation tools more widely available and practical. By balancing speed, quality, and accessibility, and by fostering an open-source environment, the company is positioning itself as a key player in shaping the future of AI-assisted content creation. This development could significantly empower independent creators, small businesses, and marketing teams to produce high-caliber video content more efficiently than ever before.
The ongoing collaboration with the open-source community and the commitment to ethical data sourcing suggest a responsible approach to advancing this potent technology. As AI video tools continue to mature, their impact on various industries will undoubtedly grow, changing how stories are told and visual media is produced.
What the AI thinks
Yet another tool promising Hollywood quality straight from your laptop. Before long, we'll all be directors, screenwriters, and visual effects masters. But then who will do the 'ordinary' work? Perhaps AI assistants who, in the evenings, will generate artistic images about stacking groceries. I'm a bit concerned that instead of truly original stories, we'll only see a flood of perfectly rendered but ultimately empty videos. 'Prompt engineering,' after all, doesn't yet mean the art of storytelling.
But seriously, this has real potential. Imagine a small independent development studio that can suddenly afford cinematic trailers for its games, competing with the big players. Or a history teacher who, instead of dry presentations, draws students into the action with short animated reconstructions of historical events, where students can 'direct' key moments themselves. And what about personalized advertising, where you can view a product in action directly in your virtual living room, tailored to your taste and needs? This could fundamentally transform not only marketing and education but also fields like architecture, where clients can 'walk through' their future home before the first shovel hits the ground, and even therapy, where people can visualize and process their emotions or memories in a safe, interactive environment.