Technology

Lightricks Launches 13B-Parameter LTX Video Model; Breakthrough Rendering Approach Generates High-Quality, Efficient AI Video 30X Faster Than Comparable Models


JERUSALEM and NEW YORK, May 6, 2025 /PRNewswire/ — Lightricks, a leader in AI-driven content creation technology, today announced the release of its LTX Video 13-billion-parameter model (LTXV-13B), which may be the most advanced and efficient AI video generation model to date. This substantial upgrade dramatically increases quality while maintaining LTXV's speed in generating AI videos. The 13B model is available within the company's flagship storytelling platform, LTX Studio, has been shared with the open community, and is being integrated across the Lightricks portfolio.

LTXV-13B introduces “multiscale rendering,” a major technical breakthrough that delivers both speed and quality through a layered process. The model drafts in lower detail first to capture coarse motion using fewer resources. This draft then guides the next stages, where the model progressively adds structure, lighting, and micro-motion, spending compute where it matters most. The result is high-fidelity video built through deliberate, multi-scale generation, with render times that can be more than 30X faster than comparable models – without compromising visual realism.
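The layered draft-then-refine process can be sketched in a few lines. Everything below – the function name, stage resolutions, and the noise-based stand-in for refinement – is illustrative only, not the actual LTXV-13B pipeline or API; it simply shows the coarse-to-fine control flow the release describes.

```python
import numpy as np

def render_multiscale(seed_latent, scales=((16, 16), (64, 64), (256, 256))):
    """Illustrative coarse-to-fine rendering loop (not the real LTXV pipeline).

    A low-resolution draft captures coarse structure cheaply; each later stage
    upsamples the previous result and layers finer detail on top of it.
    """
    rng = np.random.default_rng(0)
    frame = seed_latent
    for h, w in scales:
        # Upsample the previous stage's output to the new resolution
        # (nearest-neighbour via np.kron keeps the sketch dependency-free).
        ry, rx = h // frame.shape[0], w // frame.shape[1]
        frame = np.kron(frame, np.ones((ry, rx)))
        # "Refine": add progressively smaller amounts of high-frequency detail,
        # standing in for the denoising passes that spend compute where it matters.
        detail_strength = 1.0 / (h * w) ** 0.5
        frame = frame + detail_strength * rng.standard_normal((h, w))
    return frame

draft = np.zeros((4, 4))               # coarse initial draft
video_frame = render_multiscale(draft) # final high-resolution frame
print(video_frame.shape)
```

The design point is that early stages are cheap (tiny arrays, coarse motion) and only the final stage runs at full resolution, which is where the claimed speedup over single-scale generation comes from.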

The new 13B model represents a significant leap forward in Lightricks’ generative AI capabilities, offering creators the ability to produce videos with stunning detail, coherence, and control. It incorporates the latest advancements from academia and the open-source community, including unsampling controls and spatiotemporal guidance for video editing, and kernel optimizations for faster runtimes.

Unlike other models that demand enterprise-grade GPUs and long rendering times, LTXV-13B delivers studio-level video at unmatched speed, even on devices that creators already own – a key differentiator for LTX Video in the marketplace.

“The introduction of our 13B parameter LTX Video model marks a pivotal moment in AI video generation with the ability to generate fast, high-quality videos on consumer GPUs,” said Zeev Farbman, co-founder and CEO of Lightricks. “Our users can now create content with more consistency, better quality, and tighter control. This new version of LTX Video runs on consumer hardware, while staying true to what makes all our products different – speed, creativity, and usability.”

While developing and refining the 13B model, Lightricks entered into a strategic partnership with leading media asset provider Getty Images. In December 2024, Lightricks entered an agreement with Shutterstock to leverage their licensed content. These kinds of collaborations have given Lightricks access to an extensive library of high-quality video assets for model training, reinforcing its mission to build ethically trained, visually compelling, and commercially safe generative tools.

The LTXV-13B model now empowers creators with even more control and flexibility, seamlessly supporting all of the platform’s advanced creative tools, including:

Keyframe editing
Camera motion control
Character and scene-level motion adjustment
Multi-shot sequencing and editing

In support of startups and small businesses, Lightricks is offering the 13B model free to license for enterprises with under $10 million in annual revenue. This initiative and the release of all LTXV models in open source reflect Lightricks’ commitment to making cutting-edge generative AI accessible to the next generation of creative companies and innovators. Open source versions of LTXV are available on Hugging Face (LTX-Video) and GitHub (LTX-Video).

“By consistently refining our models and working with the open community, we’ve built an AI system that generates physically natural movement while preserving artistic control,” added Yoav HaCohen, Director of LTX Video at Lightricks.

Since launching LTX Video in November 2024, Lightricks has collaborated with researchers and open-source contributors to improve motion consistency, scene coherence, and creative adaptability. Key open-source advancements in LTXV-13B include:

VACE Model Inference – advanced video generation and editing tools, including reference-to-video (R2V). Details on GitHub.
Unsampling Controls for Video Editing – tools that reverse noise and refine frame granularity. Details on GitHub.
Kernel Optimization – efficient Q8 kernel usage allows performance scaling on lower-resource devices. Details on GitHub and Hugging Face.

With a growing library of models designed for diverse creative needs and a commitment to open development, Lightricks is shaping the future of generative AI video, bridging research-driven breakthroughs with real-world application. For more information about Lightricks, its products, technology, and open-source initiatives, visit www.lightricks.com.

View original content to download multimedia: https://www.prnewswire.com/news-releases/lightricks-launches-13b-parameters-ltx-video-model-breakthrough-rendering-approach-generates-high-quality-efficient-ai-video-30x-faster-than-comparable-models-302447660.html

SOURCE Lightricks



X Square Robot Unveils New Embodied AI Model, Says Robots Will Arrive in Homes in 35 Days


Backed by Alibaba, ByteDance, Xiaomi and Meituan, X Square Robot unveiled a next-generation embodied AI foundation model for home robots and said its first deployments in everyday households will begin within 35 days.

BEIJING, April 23, 2026 /PRNewswire/ — X Square Robot on Tuesday unveiled Wall-B, a new embodied AI foundation model designed for deployment in real-world homes, marking what the company described as a major step toward bringing general-purpose robots into daily family life.

At a launch event themed “Born to Bot, Bot to Family,” the company also introduced its World Unified Model (WUM) architecture, a training framework that combines vision, language, action and physical prediction within a single system from the outset. X Square said the model is intended to help robots operate in the far more unpredictable setting of a home, where tasks, layouts and interactions vary from moment to moment.

“Robots in factories and robots in homes are fundamentally different,” said Qian Wang, founder and CEO of X Square Robot. “In factories, they repeat the same action 10,000 times. In a home, they may need to perform 10,000 different actions, each in a different context. The real challenge is not repetition, but whether a robot can execute new, untrained actions in an unstructured environment.”

Wall-B is the company’s first full implementation of its World Unified Model architecture. Unlike modular systems that train perception, language and control separately, X Square Robot said World Unified Model optimizes those capabilities jointly from the very beginning. The company said that allows physical prediction — including force, friction and collision dynamics — to emerge as part of the model itself, rather than being layered on afterward.

“We train vision, language, action and prediction in the same network from day one,” said Wang Hao, chief technology officer of X Square. “Human infants do not learn to see, move and communicate in isolated stages. They learn by integrating perception and action at the same time, with constant feedback from the physical world. That is the principle behind our architecture.”

X Square Robot said the model was built on two core foundations. The first is a data strategy centered on real, non-staged home environments, aimed at exposing the system to the long tail of household scenarios — misplaced objects, temporary occlusion, unexpected obstacles and spontaneous human activity. The second is a physics-aware predictive mechanism that enables the robot to anticipate physical outcomes before taking action, rather than merely reacting after contact occurs.

Together, those elements are meant to narrow one of robotics’ hardest gaps: moving from controlled demos to reliable performance in live environments. The company said its work on physical robotic platforms has helped it accumulate practical experience in bridging simulation and reality across diverse operating conditions.

At the event, X Square demonstrated a series of live tasks. In one experience zone, a robot arranged flowers while adjusting its grip and motion in real time as stems shifted position under visual occlusion. The task was completed without pre-set trajectories, according to the company, and drew attention from both domestic and international media attending the event.

Even so, X Square acknowledged that the technology remains early. Wang said current systems can make mistakes that require remote intervention — such as placing slippers in the kitchen or pausing mid-task to process the next action. But he said the robots’ ability to operate continuously and generate new real-world data around the clock gives the system a path to rapid improvement.

That learning loop is central to the company’s next milestone: within 35 days, X Square plans to place its robots into everyday homes, underscoring the company’s long-term commitment to the home robotics sector.

Photo – https://mma.prnewswire.com/media/2963913/X_Square_Robot.jpg

 

View original content: https://www.prnewswire.co.uk/news-releases/x-square-robot-unveils-new-embodied-ai-model-says-robots-will-arrive-in-homes-in-35-days-302751058.html



Manhattan Associates Announces Latest Enhancements for Retailers


SYDNEY, April 23, 2026 /PRNewswire/ — Manhattan Associates (NASDAQ: MANH), the global leader in supply chain commerce with unmatched AI capabilities, today announced major enhancements to Manhattan Active® Omni. These innovations are designed to help retailers maximise in-store and online sales while delivering best-in-class customer experiences across all touchpoints. New capabilities include embedded agentic AI for store associates and customer service teams, real-time sales and fulfilment insights delivered natively within the user experience, and brand-new capabilities focused on maximising both revenue and profit when shipping from stores.

Manhattan announced commercial availability of three new AI agents, a Store Associate Agent, a Contact Centre Agent, and an OMS Configuration Agent, all available within the Manhattan Active Omni user interface, to support retailers’ selling and service teams. Using a natural language interface, these agents deliver immediate, actionable insights into store activity, sales trends, inventory, returns, and customer behaviour, helping associates and customer service teams resolve issues faster and provide more personalised support.

“Retailers are under constant pressure to move faster, operate smarter, and deliver seamless experiences across every touchpoint,” said Brian Kinsella, SVP of Product Management at Manhattan Associates. “Our latest updates reflect Manhattan’s ongoing commitment to delivering cutting-edge artificial intelligence within our applications. Whether it’s the myriad machine learning algorithms present for years or our new Agentic AI and Fulfilment Simulation capabilities, we’ve long believed true AI needs to live within rather than alongside our applications. We’re proud to partner with a number of world-class retailers on the design and development of these breakthrough technologies.”

Along with the newly announced agentic AI innovations, Manhattan Active® Point of Sale continues to advance with Customer Facing Display, a powerful new enhancement that brings shoppers into the checkout experience. Customers can view their cart in real time, attach their loyalty information to a transaction, enter shipping details, and choose how they’d like to receive their receipt, all from a dedicated display. Retailers can also capture additional customer input, ensuring greater accuracy and faster transactions at the point of sale, bridging the gap between associates and shoppers, and delivering a smoother, more engaging checkout experience.

Additionally, the Fulfilment Optimisation Simulation engine enables enterprises to model and compare alternative fulfilment strategies by balancing cost, speed, service level, and margin. It provides data-driven insights into split shipments, total fulfilment costs, location-level distribution, and key KPIs using a consistent set of orders for each strategy. Users can easily adjust optimisation rules, rerun simulations, and compare results side-by-side to understand the true impact of each change. The engine also supports “what if” scenario planning – allowing teams to anticipate constraints, evaluate operational shifts, and analyse trade-offs in a fully self-serve manner. By replaying historical or selected orders, businesses can continuously optimise fulfilment, uncover new savings, and drive meaningful performance improvements.
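As a rough illustration of that kind of side-by-side strategy comparison, the toy simulation below replays the same orders under two sourcing priorities and reports total shipping cost and split shipments. The inventory, costs, and function names are invented for this sketch; it is not Manhattan Active Omni's engine or API.

```python
# Toy "what if" fulfilment simulation: replay identical orders under two
# sourcing strategies and compare cost and split-shipment counts.
ORDERS = [
    {"id": 1, "items": ["shirt", "shoes"]},
    {"id": 2, "items": ["hat"]},
]

INVENTORY = {
    "warehouse": {"shirt": 10, "shoes": 10, "hat": 10},
    "store_a":   {"shirt": 1, "hat": 5},
}

SHIP_COST = {"warehouse": 8.0, "store_a": 3.0}  # flat cost per shipment

def simulate(orders, location_priority):
    """Source each item from the first location (in priority order) that
    stocks it, then report total shipping cost and split-shipment count."""
    total_cost, splits = 0.0, 0
    for order in orders:
        used = set()
        for item in order["items"]:
            for loc in location_priority:
                if INVENTORY[loc].get(item, 0) > 0:
                    used.add(loc)
                    break
        total_cost += sum(SHIP_COST[loc] for loc in used)
        splits += len(used) > 1  # order shipped from more than one location
    return {"cost": total_cost, "split_orders": splits}

# Compare two strategies side by side on the same consistent set of orders.
warehouse_first = simulate(ORDERS, ["warehouse", "store_a"])
store_first = simulate(ORDERS, ["store_a", "warehouse"])
print(warehouse_first, store_first)
```

Here ship-from-store is cheaper per shipment but forces a split on the first order, the kind of cost-versus-split trade-off such a simulation surfaces before rules are changed in production.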

Together, these innovations reflect Manhattan’s continued focus on delivering practical, enterprise-ready advancements that help retailers move faster and operate with greater confidence.

Receive up-to-date product, customer and partner news directly from Manhattan on LinkedIn.

ENDS

ABOUT MANHATTAN ASSOCIATES:

Manhattan Associates is a global technology leader, providing supply chain and omnichannel commerce solutions with unmatched AI capabilities. We design, build and offer best-in-class, AI-powered, cloud-based solutions that drive resilience and efficiency for businesses. We enable enterprises to uniquely unify front-end sales with back-end supply chain execution.

Our commitment to innovation, cloud-native platform and API-first architecture create simpler experiences and faster paths to value for our customers. We empower them to preempt and react to emerging trends and global disruptions with technical expertise and operational confidence, transforming challenges into competitive advantage. For more information, please visit www.manh.com.

View original content: https://www.prnewswire.com/apac/news-releases/manhattan-associates-announces-latest-enhancements-for-retailers-302751061.html

SOURCE Manhattan Associates



Global Telecom Leaders to Convene in Singapore for Definitive Summit on AI-Native Transformation and Industry Reinvention


SINGAPORE, April 23, 2026 /PRNewswire/ — Twimbit, the global research and advisory firm, has finalized the strategic agenda for the Twimbit Telecom Summit & Awards 2026, scheduled for 21 May 2026 at the Capitol Theatre, Singapore. This high-level forum serves as a catalyst for addressing the shift toward AI-native architectures and digital sovereignty.

As the telecommunications sector moves beyond traditional connectivity toward a ‘Techco’ model, the 2026 summit will provide a framework for navigating margin pressure through structural innovation, with insights on ROIC growth, EBITDA optimization, and the integration of generative technologies into core business functions.

Architects of the Industry: Featured Perspectives

The 2026 summit features a curated lineup of visionaries redefining the telecom blueprint:

Soma Velayutham, VP Telecoms & AI, Nvidia
Wong Soon Nam, Chief Planning and Transformation Officer, Telekomsel
Rajesh Chandiramani, CEO, Comviva
Vikram Sinha, CEO, Indosat Ooredoo Hutchison
Aayush Bhatnagar, Chief Technology Development Officer, Jio
Ulf Ewaldsson, Advisor, Indosat (Former President of Technology, T-Mobile)
Juhi McClelland, Managing Partner, IBM Consulting APAC
Manoj Menon, Founder & CEO, Twimbit

Strategic Forum: The Telecom Summit (08:00 – 14:35)

Designed as a high-impact leadership forum, the morning sessions will address three critical levers for telco success in 2026:

Accelerating the AI-Native Core: Leveraging generative AI to rebuild network operations and customer service models
Digital Sovereignty & Infrastructure: Navigating data residency and localized AI infrastructure for competitive advantage
Growth Engineering & Customer Experience: Implementing high-touch service philosophies to drive customer lifetime value

The Recognition Gala: Twimbit Telecom Awards (17:00 – Late)

The day concludes with a prestigious black-tie awards ceremony, celebrating organisations and leaders demonstrating innovation and strategic transformation, using Twimbit’s proprietary research frameworks across Asia-Pacific.

Strategic Partnerships and Support

The event is supported by industry leaders. F5 joins as Strategic Partner, while Nokia and Comviva serve as Gold Sponsors, highlighting the role of secure infrastructure, customer experience, and digital financial solutions.

“We are at a point where incremental change is no longer sufficient,” said Manoj Menon, Founder & CEO of Twimbit. “This summit is about the reinvention of the telecom business model and providing a roadmap for leaders to architect the next era of digital intelligence.”

About Twimbit

A global tech and advisory firm powering customer success through research, innovation and community, Twimbit provides actionable insights that fuel innovation and growth through its proprietary research platform.

Media Contacts:
Vansh Sehgal
vansh@twimbit.com 

Photo: https://mma.prnewswire.com/media/2960475/Twimbit_Awards.jpg
Logo: https://mma.prnewswire.com/media/2960480/Twimbit_Logo.jpg

 

View original content: https://www.prnewswire.co.uk/news-releases/global-telecom-leaders-to-convene-in-singapore-for-definitive-summit-on-ai-native-transformation-and-industry-reinvention-302750208.html
