Technology

Groq® Acquires Definitive Intelligence to Launch GroqCloud

Definitive Intelligence Co-founder and CEO Sunny Madra to Lead New GroqCloud Business Unit and Launch New Developer Playground

MOUNTAIN VIEW, Calif., March 1, 2024 /PRNewswire/ — Groq®, a generative AI solutions company, has acquired Definitive Intelligence, a company redefining how businesses utilize data and empowering organizations to unlock actionable insights – all powered by AI. Definitive Intelligence Co-founder and CEO Sunny Madra will lead the new GroqCloud business unit, leveraging his expertise in serving enterprise customers to significantly expand access to the Groq LPU™ Inference Engine via GroqCloud. The Definitive Intelligence team has already been working closely with Groq to build GroqCloud, a new developer playground with fully integrated documentation, code samples, and self-serve access, which is available today at https://console.groq.com/.

“At Groq, we’re committed to creating an AI economy that’s accessible and affordable for anyone with a brilliant idea,” said Groq Founder and CEO Jonathan Ross. “We’re excited to welcome Sunny and his team from Definitive Intelligence to help us achieve this mission. As a serial entrepreneur and go-to person for the latest in AI, Sunny’s podcast reaches hundreds of thousands of listeners weekly while he showcases the latest developments in the field. The Definitive team has expertise in AI solutions and go-to-market strategies, as well as a proven dedication to sharing knowledge with the community. They are the perfect fit to lead our GroqCloud business unit. Together, we will empower developers to leverage GroqCloud services to help accelerate the development of AI.”

GroqCloud makes it easy for customers to access the Groq LPU Inference Engine via the self-serve playground and helps customers deploy new generative AI applications that can take advantage of the incredible speed that only Groq offers. Since its soft launch on February 19, GroqCloud has been flooded with developers, with thousands actively using the Groq API. Madra and the GroqCloud team will initially focus on expanding capacity, improving efficiency, forming partnerships, and building out the developer platform.
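For orientation, here is a hypothetical sketch of what self-serve access to the Groq API might look like from a developer's side. The endpoint URL and model name below are assumptions for illustration, not details taken from this release; the playground documentation at https://console.groq.com/ is the authoritative reference.

```python
import json
import os
import urllib.request

# Hypothetical sketch of a GroqCloud API call. The endpoint URL and
# model name are assumptions; consult the console.groq.com docs.
GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"


def build_request(prompt: str, model: str = "mixtral-8x7b-32768") -> dict:
    """Assemble a single-turn chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def complete(prompt: str, api_key: str) -> str:
    """POST the payload and return the first choice's text."""
    req = urllib.request.Request(
        GROQ_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # With a real key: print(complete("Hello", os.environ["GROQ_API_KEY"]))
    print(build_request("Hello"))
```

The payload is assembled separately from the network call so it can be inspected (or logged) before any tokens are spent.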

“The world is just now realizing how important high-speed inference is to generative AI,” said Madra. “At Groq, we’re giving developers the speed, low latency, and efficiency they need to deliver on the generative AI promise. I have been a big fan of Groq since I first met Jonathan in 2016 and I am thrilled to join him and the Groq team in their quest to bring the fastest inference engine to the world.”

In addition to the GroqCloud business unit, the infusion of engineering resources from Definitive Intelligence has enabled Groq to formalize a Groq Systems business unit, which will focus on innovation and serve the public sector and customers that require Groq hardware for AI compute centers.

“Separating GroqCloud and Groq Systems into two business units will enable Groq to continue to innovate at a rapid clip, accelerate inference, and lead the AI chip race, while the legacy providers and other big names in AI are still trying to build a chip that can compete with our LPU,” added Ross.

About Groq
Groq® is a generative AI solutions company and the creator of the LPU™ Inference Engine, the fastest language processing accelerator on the market. It is architected from the ground up to achieve low latency, energy-efficient, and repeatable inference performance at scale. Customers rely on the LPU Inference Engine as an end-to-end solution for running Large Language Models (LLMs) and other generative AI applications at 10x the speed. Groq Systems powered by the LPU Inference Engine are available for purchase. Customers can also leverage the LPU Inference Engine for experimentation and production-ready applications via an API in the GroqCloud by purchasing Tokens-as-a-Service. Jonathan Ross, inventor of the Google Tensor Processing Unit (TPU), founded Groq to preserve human agency while building the AI economy. Experience Groq speed for yourself at https://groq.com/.

Media Contact for Groq
Allyson Scott
PR-media@Groq.com

View original content to download multimedia: https://www.prnewswire.com/news-releases/groq-acquires-definitive-intelligence-to-launch-groqcloud-302077413.html

SOURCE Groq


EPG Publishes Inaugural ESG Report, Establishing Baseline for Sustainable Global Expansion

SINGAPORE, April 19, 2026 /PRNewswire/ — EPG today released its 2025 ESG Report, outlining its sustainability approach and performance across global operations as it scales internationally.

Environmental
EPG achieved full compliance with applicable environmental regulations, with 100% of its waste treated and disposed of in compliance with those regulations. The company completed its inaugural greenhouse gas (GHG) inventory, encompassing Scope 1, Scope 2, and key Scope 3 categories, establishing the foundation for its emissions management strategy and long-term decarbonization roadmap.

Social
Women represented 31% of total employees, and 85% of employees recruited locally in Malaysia hold managerial positions. EPG maintained a diversified supply chain, with approximately 47% of suppliers based outside of mainland China.

Governance
As of the date of this press release, the EPG Board of Directors includes two female directors, representing 22% of board members. The Board convened two meetings with 100% attendance.

As EPG matures its ESG framework, the company is forming a dedicated ESG Committee to oversee this progress. ESG management systems will be embedded into existing and planned facilities, starting with its Malaysia manufacturing plant currently under construction. EPG will also extend these standards across its supply chain, beginning at its upcoming Shanghai partner conference.

“Scaling globally only means something if we scale responsibly,” said Alick Wan, EPG Founder and Chairman. “We see an opportunity to redefine what sustainable infrastructure looks like for the AI era, proving that high-performing infrastructure can also carry a light footprint. We believe modular is how the industry gets there.”

EPG is proud to have contributed to the book Greener Data, Volume III, launching on Earth Day 2026. The chapter shares EPG’s philosophy on how modular construction reduces on-site waste, lowers embodied carbon, and enables full lifecycle sustainability, making the case that responsible scaling and commercial ambition are not in conflict.

Following approximately $200 million in Series B and B+ financing, EPG will continue to strengthen company-wide ESG governance and scale its modular approach across an expanding international footprint.

Read the full report: https://www.epg-module.com/list-27-1.html

Contact: communications@epg-module.com

About EPG

EPG is a Singapore-headquartered provider of modular and prefabricated data center infrastructure, powered by dual R&D centers in Singapore and Shanghai and advanced manufacturing hubs in Malaysia and China. With over 20 years of engineering expertise, EPG delivers innovative and sustainable solutions for hyperscale, cloud, and enterprise deployments across APAC, EMEA, and other global markets.

View original content to download multimedia: https://www.prnewswire.com/news-releases/epg-publishes-inaugural-esg-report-establishing-baseline-for-sustainable-global-expansion-302746582.html

SOURCE EPG Singapore Pte Ltd

Simpli5 Announces Platform Expansion Designed to Close the Gap Between Self-Awareness and Team Action

Behavioral intelligence leader addresses the knowing-doing problem that leaves most assessment investments unrealized

AUSTIN, Texas, April 19, 2026 /PRNewswire/ — Simpli5, the behavioral intelligence platform that powers team effectiveness at organizations including LinkedIn, Kaiser Permanente, and Notion, today announced a significant expansion of its platform aimed at solving one of the most persistent challenges in enterprise learning and development: the knowing-doing gap.

While behavioral assessments have proliferated across the Fortune 500, the vast majority of users never return to their insights after initial onboarding — leaving significant organizational investment unrealized. The upcoming Simpli5 release is engineered specifically to close that gap, translating one-time self-awareness into an ongoing team practice embedded in the flow of daily work.

“Self-awareness that lives in a report is just data. Self-awareness that lives in your daily relationships is transformation,” said Karen Wright Gordon, Founder and CEO of Simpli5. “We built this because we knew the highest-value moments in our platform were sitting unused for too many users. These features are about closing that gap without adding friction.”

The expansion introduces a suite of interconnected capabilities designed to keep behavioral insights present in the flow of daily work — accessible at the moments that matter most, and creating reinforcing loops that grow in value as organizational adoption scales.

Unlike point-in-time assessments, Simpli5 is engineered to compound in value over time. Each connection made, each insight applied, and each colleague activated increases the network intelligence available to every user on the platform. The upcoming release is designed to accelerate that compounding effect.

Full feature details and availability will be announced in the coming weeks.

About Simpli5

Simpli5 powered by 5 Dynamics is a behavioral intelligence platform built on the science of five natural work energy phases: Explore, Excite, Examine, Execute, and Evaluate. Unlike static assessment tools, Simpli5 is a living team intelligence platform that deepens in value as adoption scales across an organization. Its AI coaching product, SenSai, delivers personalized behavioral insights at the moment of need.

For more information, visit simpli5.com.

View original content to download multimedia: https://www.prnewswire.com/news-releases/simpli5-announces-platform-expansion-designed-to-close-the-gap-between-self-awareness-and-team-action-302746293.html

SOURCE Simpli5

SK hynix Begins Mass Production of 192GB SOCAMM2 ‘Setting a New Standard for AI Server Memory Performance’

– Mass production of 192GB high-capacity products designed for the NVIDIA Vera Rubin platform
– Maximizes power efficiency by featuring high-density DRAM based on the latest 1cnm process
– Company to closely collaborate with NVIDIA to solve bottlenecks in AI infrastructure and provide optimal performance

SEOUL, South Korea, April 19, 2026 /PRNewswire/ — SK hynix Inc. (or “the company”, www.skhynix.com) announced today that it has begun mass production of the 192GB SOCAMM2, a next-generation memory module standard based on 1cnm-process (the sixth generation of 10-nanometer process technology) LPDDR5X low-power DRAM.

SOCAMM2[1] is a module that adapts low-power memory – which was previously used mainly in mobile products like smartphones – for server environments. It is designed to be a primary memory solution for next-generation AI servers.

[1] SOCAMM2 (Small Outline Compression Attached Memory Module 2): An AI server–optimized memory module based on LPDDR. It offers a slim form factor and high scalability, while its compression connector enhances signal integrity and allows for easy module replacement.

SK hynix emphasized that the 1cnm-based SOCAMM2 product now in mass production delivers more than double the bandwidth with over 75% improved power efficiency compared to conventional RDIMM[2], providing an optimized solution for high-performance AI operations.

[2] RDIMM (Registered Dual In-Line Memory Module): A DRAM module for servers/workstations that includes a register or buffer chip to relay address and command signals between the memory controller and the DRAM chips on the module.

In particular, the company noted that its SOCAMM2 products are designed for the NVIDIA Vera Rubin platform.

SK hynix expects the new SOCAMM2 product to fundamentally resolve the memory bottlenecks encountered during the training and inference of large language models (LLMs) with hundreds of billions of parameters, thereby playing a pivotal role in dramatically accelerating the processing speed of the overall system.

The company stated that with the AI market shifting focus from training to inference, SOCAMM2 is gaining significant attention as a next-generation memory solution capable of operating LLMs with low power consumption. To meet the demands of its global Cloud Service Provider (CSP) customers, SK hynix has not only built out a supply portfolio but also stabilized its mass production system early on.

“By supplying the 192GB SOCAMM2, SK hynix has established a new standard for AI memory performance,” said Justin Kim, President & Head of AI Infra (CMO, Chief Marketing Officer) at SK hynix. “We will solidify our position as the most trusted AI memory solution provider through close collaboration with our global AI customers.”

About SK hynix Inc.

SK hynix Inc., headquartered in Korea, is the world’s top-tier semiconductor supplier offering Dynamic Random Access Memory chips (“DRAM”) and flash memory chips (“NAND flash”) for a wide range of distinguished customers globally. The Company’s shares are traded on the Korea Exchange, and the Global Depository shares are listed on the Luxembourg Stock Exchange. Further information about SK hynix is available at www.skhynix.com, news.skhynix.com.

View original content: https://www.prnewswire.com/news-releases/sk-hynix-begins-mass-production-of-192gb-socamm2–setting-a-new-standard-for-ai-server-memory-performance-302746711.html

SOURCE SK hynix Inc.
