Technology

Open Compute Project Foundation and UALink™ Consortium Announce a New Collaboration

Establishing a Framework to Optimize Scale-up Interconnect for AI and HPC Clusters

DUBLIN, April 29, 2025 /CNW/ — Today, the Open Compute Project Foundation (OCP), the nonprofit organization bringing hyperscale innovations to all, and the Ultra Accelerator Link™ (UALink™) Consortium announced a new collaboration to enhance scale-up interconnect performance in AI clusters and High-Performance Computing (HPC). The UALink Consortium is developing an open industry standard for high-performance accelerated compute scale-up interconnects tailored for AI and HPC workloads, while the OCP Community is actively designing sustainable, large-scale data center infrastructure with a focus on Open Systems for AI. Together, OCP and UALink aim to integrate UALink’s scale-up AI interconnect technology into OCP Community-delivered AI clusters, providing the high-bandwidth, low-latency, low-power connectivity required for high-performance AI training and inference.

“The rapid adoption of AI across industries, from autonomous systems to enterprise analytics, is driving unprecedented demand for scalable, high-performance AI infrastructure. This has created a pivotal moment for data center investments, with hyperscale operators deploying large-scale AI clusters to meet these needs. By collaborating, the UALink Consortium and the OCP Community can shape system specifications to address critical challenges in interconnect bandwidth and scalability posed by advanced AI models,” said George Tchaparian, CEO at the OCP Foundation.

Key aspects of the collaboration will focus on aligning OCP’s community-led infrastructure development with UALink’s interconnect innovations, ensuring seamless integration and shared objectives. The alliance will leverage the expertise of both organizations to advance scale-up AI interconnect performance. Following the release of the UALink 1.0 Specification earlier this month, both organizations and their communities are preparing to collaborate across OCP’s Open Systems for AI Strategic Initiative and the Short-Reach Optical Interconnect workstream of OCP’s Future Technologies Initiative.

“AI and HPC workloads require ultra-low latency and massive bandwidth to handle the scale and complexity of accelerated compute data processing to meet LLM requirements. The UALink Consortium was formed to create an open standard for accelerated compute interconnects that meets these demands, enabling faster and more efficient data exchange. Partnering with the OCP Community will accelerate the adoption of UALink’s innovations into complete systems, delivering transformative performance for AI markets,” said Peter Onufryk, UALink Consortium President.

“The surge in generative AI and HPC applications is placing immense pressure on data center interconnects to deliver the bandwidth and responsiveness needed for training and inference. The alliance between OCP and UALink creates a powerful collaborative framework to develop and integrate advanced interconnect solutions, enhancing the performance of large-scale AI clusters. This alliance has the potential to redefine industry solutions for AI infrastructure,” said Sameh Boujelbene, VP at Dell’Oro Group.

About the Open Compute Project Foundation
The Open Compute Project (OCP) brings at-scale innovations and hyperscaler best practices to all, spanning technology domains from the data center to the edge, and the technology stack from silicon, to systems, to site facilities and services. The international OCP Community is made up of organizations and people from hyperscale and tier-2 cloud data center operators, communications providers, colocation providers, diverse enterprises, and technology vendors. With the tenets of openness, impact, efficiency, scale and sustainability, the OCP engages and educates thousands of engineers every year. Across many projects and initiatives the OCP Foundation and Community are meeting the market today and shaping the future.

Learn more at: www.opencompute.org.

About Ultra Accelerator Link Consortium
The Ultra Accelerator Link (UALink) Consortium, incorporated in October 2024, is the open industry standard group dedicated to developing the UALink specifications: a high-speed, scale-up accelerator interconnect technology that advances next-generation AI and HPC cluster performance. The consortium is led by a board made up of industry stalwarts: Alibaba, AMD, Apple, Astera Labs, AWS, Cisco, Google, HPE, Intel, Meta, Microsoft, and Synopsys. The Consortium develops technical specifications that facilitate breakthrough performance for emerging AI usage models while supporting an open ecosystem for data center accelerators. For more information on the UALink Consortium, please visit www.UALinkConsortium.org.

Ultra Accelerator Link and UALink are trademarks of the UALink Consortium. All other trademarks are the property of their respective owners.

Contacts

Dirk Van Slyke
Open Compute Project Foundation
dirkv@opencompute.org
+1 303-999-7398

Nolan Morgan
UALink Consortium
+1 971-271-2657
press@members.ualinkconsortium.org

View original content to download multimedia: https://www.prnewswire.com/news-releases/open-compute-project-foundation-and-ualink-consortium-announce-a-new-collaboration-302440947.html

SOURCE Open Compute Project Foundation


EPG Publishes Inaugural ESG Report, Establishing Baseline for Sustainable Global Expansion

SINGAPORE, April 19, 2026 /PRNewswire/ — EPG today released its 2025 ESG Report, outlining its sustainability approach and performance across global operations as it scales internationally.

Environmental

EPG achieved full compliance with applicable environmental regulations, with 100% of waste treated and disposed of. The company completed its inaugural greenhouse gas (GHG) inventory, encompassing Scope 1, Scope 2, and key Scope 3 categories, establishing the foundation for its emissions management strategy and long-term decarbonization roadmap.

Social

Women represented 31% of total employees, and 85% of employees recruited locally in Malaysia hold managerial positions. EPG maintained a diversified supply chain, with approximately 47% of suppliers based outside of mainland China.

Governance

As of the date of this press release, the EPG Board of Directors includes two female directors, representing 22% of board members. The Board convened two meetings with 100% attendance.

As EPG matures its ESG framework, the company is forming a dedicated ESG Committee to oversee this progress. ESG management systems will be embedded into existing and planned facilities, starting with its Malaysia manufacturing plant currently under construction. EPG will also extend these standards across its supply chain, starting at its upcoming Shanghai partner conference.

“Scaling globally only means something if we scale responsibly,” said Alick Wan, EPG Founder and Chairman. “We see an opportunity to redefine what sustainable infrastructure looks like for the AI era, proving that high-performing infrastructure can also carry a light footprint. We believe modular is how the industry gets there.”

EPG is proud to have contributed to the book Greener Data, Volume III, launching on Earth Day 2026. The chapter shared EPG’s philosophy on how modular construction reduces on-site waste, lowers embodied carbon, and enables full lifecycle sustainability, making the case that responsible scaling and commercial ambition are not in conflict.

Following approximately $200 million in Series B and B+ financing, EPG will continue strengthening company-wide ESG governance and scaling its modular approach across an expanding international footprint.

Read the full report: https://www.epg-module.com/list-27-1.html

Contact: communications@epg-module.com

About EPG

EPG is a Singapore-headquartered provider of modular and prefabricated data center infrastructure, powered by dual R&D centers in Singapore and Shanghai and advanced manufacturing hubs in Malaysia and China. With over 20 years of engineering expertise, EPG delivers innovative and sustainable solutions for hyperscale, cloud, and enterprise deployments across APAC, EMEA, and other global markets.

View original content to download multimedia: https://www.prnewswire.com/news-releases/epg-publishes-inaugural-esg-report-establishing-baseline-for-sustainable-global-expansion-302746582.html

SOURCE EPG Singapore Pte Ltd

Simpli5 Announces Platform Expansion Designed to Close the Gap Between Self-Awareness and Team Action

Behavioral intelligence leader addresses the knowing-doing problem that leaves most assessment investments unrealized

AUSTIN, Texas, April 19, 2026 /PRNewswire/ — Simpli5, the behavioral intelligence platform that powers team effectiveness at organizations including LinkedIn, Kaiser Permanente, and Notion, today announced a significant expansion of its platform aimed at solving one of the most persistent challenges in enterprise learning and development: the knowing-doing gap.

While behavioral assessments have proliferated across the Fortune 500, the vast majority of users never return to their insights after initial onboarding — leaving significant organizational investment unrealized. The upcoming Simpli5 release is engineered specifically to close that gap, translating one-time self-awareness into an ongoing team practice embedded in the flow of daily work.

“Self-awareness that lives in a report is just data. Self-awareness that lives in your daily relationships is transformation,” said Karen Wright Gordon, Founder and CEO of Simpli5. “We built this because we knew the highest-value moments in our platform were sitting unused for too many users. These features are about closing that gap without adding friction.”

The expansion introduces a suite of interconnected capabilities designed to keep behavioral insights present in the flow of daily work — accessible at the moments that matter most, and creating reinforcing loops that grow in value as organizational adoption scales.

Unlike point-in-time assessments, Simpli5 is engineered to compound in value over time. Each connection made, each insight applied, and each colleague activated increases the network intelligence available to every user on the platform. The upcoming release is designed to accelerate that compounding effect.

Full feature details and availability will be announced in the coming weeks.

About Simpli5

Simpli5, powered by 5 Dynamics, is a behavioral intelligence platform built on the science of five natural work energy phases: Explore, Excite, Examine, Execute, and Evaluate. Unlike static assessment tools, Simpli5 is a living team intelligence platform that deepens in value as adoption scales across an organization. Its AI coaching product, SenSai, delivers personalized behavioral insights at the moment of need.

For more information, visit simpli5.com.

View original content to download multimedia: https://www.prnewswire.com/news-releases/simpli5-announces-platform-expansion-designed-to-close-the-gap-between-self-awareness-and-team-action-302746293.html

SOURCE Simpli5

SK hynix Begins Mass Production of 192GB SOCAMM2 ‘Setting a New Standard for AI Server Memory Performance’

- Mass production of 192GB high-capacity products designed for the NVIDIA Vera Rubin platform
- Maximizes power efficiency by featuring high-density DRAM based on the latest 1cnm process
- Company to closely collaborate with NVIDIA to solve bottlenecks in AI infrastructure and provide optimal performance

SEOUL, South Korea, April 19, 2026 /PRNewswire/ — SK hynix Inc. (or “the company”, www.skhynix.com) announced today that it has begun mass production of the 192GB SOCAMM2, a next-generation memory module standard based on LPDDR5X low-power DRAM built on the 1cnm process (the sixth generation of 10-nanometer technology).

SOCAMM2[1] is a module that adapts low-power memory – which was previously used mainly in mobile products like smartphones – for server environments. It is designed to be a primary memory solution for next-generation AI servers.

[1] SOCAMM2 (Small Outline Compression Attached Memory Module 2): An AI server–optimized memory module based on LPDDR. It offers a slim form factor and high scalability, while its compression connector enhances signal integrity and allows for easy module replacement.

SK hynix emphasized that the 1cnm-based SOCAMM2 product now in mass production delivers more than double the bandwidth and over 75% better power efficiency compared with conventional RDIMM[2], providing an optimized solution for high-performance AI operations.

[2] RDIMM (Registered Dual In-Line Memory Module): A DRAM module for servers and workstations that includes a register or buffer chip to relay address and command signals between the memory controller and the DRAM chips in the module.
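The relative figures above (more than double the bandwidth, over 75% better power efficiency than RDIMM) can be turned into a rough back-of-the-envelope comparison. The sketch below is illustrative only: the press release gives no absolute numbers, so the RDIMM baseline values (`rdimm_bw_gbps`, `rdimm_power_w`) are hypothetical assumptions, not SK hynix figures.

```python
# Back-of-the-envelope comparison using the stated relative improvements.
# Baseline numbers are hypothetical, chosen only to make the ratios concrete.

def bandwidth_per_watt(bandwidth_gbps: float, power_w: float) -> float:
    """Power efficiency expressed as bandwidth delivered per watt."""
    return bandwidth_gbps / power_w

# Hypothetical RDIMM baseline (not from the press release).
rdimm_bw_gbps = 100.0
rdimm_power_w = 10.0
rdimm_eff = bandwidth_per_watt(rdimm_bw_gbps, rdimm_power_w)

# Applying the press release's relative claims.
socamm2_bw_gbps = rdimm_bw_gbps * 2.0   # "more than double the bandwidth"
socamm2_eff = rdimm_eff * 1.75          # "over 75% improved power efficiency"
socamm2_power_w = socamm2_bw_gbps / socamm2_eff

print(f"RDIMM:   {rdimm_eff:.1f} GB/s per W at {rdimm_power_w:.1f} W")
print(f"SOCAMM2: {socamm2_eff:.1f} GB/s per W at {socamm2_power_w:.1f} W")
```

Under these assumptions the module roughly doubles throughput while drawing only modestly more power, which is the shape of trade-off the release is claiming for AI servers.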

In particular, the company noted that its SOCAMM2 products are designed for the NVIDIA Vera Rubin platform.

SK hynix expects the new SOCAMM2 product to fundamentally resolve the memory bottlenecks encountered during the training and inference of large language models (LLMs) with hundreds of billions of parameters, thereby playing a pivotal role in dramatically accelerating overall system processing speed.

The company stated that with the AI market shifting its focus from training to inference, SOCAMM2 is gaining significant attention as a next-generation memory solution capable of operating LLMs with low power consumption. To meet the demands of its global Cloud Service Provider (CSP) customers, SK hynix has not only built a broad supply portfolio but also stabilized its mass production system early on.

“By supplying the 192GB SOCAMM2, SK hynix has established a new standard for AI memory performance,” said Justin Kim, President and Head of AI Infra (CMO, Chief Marketing Officer) at SK hynix. “We will solidify our position as the most trusted AI memory solution provider through close collaboration with our global AI customers.”

About SK hynix Inc.

SK hynix Inc., headquartered in Korea, is the world’s top-tier semiconductor supplier offering Dynamic Random Access Memory chips (“DRAM”) and flash memory chips (“NAND flash”) for a wide range of distinguished customers globally. The Company’s shares are traded on the Korea Exchange, and the Global Depository shares are listed on the Luxembourg Stock Exchange. Further information about SK hynix is available at www.skhynix.com, news.skhynix.com.

View original content: https://www.prnewswire.com/news-releases/sk-hynix-begins-mass-production-of-192gb-socamm2–setting-a-new-standard-for-ai-server-memory-performance-302746711.html

SOURCE SK hynix Inc.
