
Technology

Sony Semiconductor Solutions to Release the Industry’s First CMOS Image Sensor for Automotive Cameras That Can Simultaneously Process and Output RAW and YUV Images


Contributing to Simplified Systems by Expanding Single-Camera Applications

ATSUGI, Japan, Oct. 3, 2024 /PRNewswire/ — Sony Semiconductor Solutions Corporation (SSS) today announced the upcoming release of the ISX038 CMOS image sensor for automotive cameras, the industry’s first*1 product that can simultaneously process and output RAW*2 and YUV*3 images.

The new sensor features a proprietary ISP*4 and can process and output RAW and YUV images simultaneously. RAW images are required for external environment detection and recognition in advanced driver-assistance systems (ADAS) and autonomous driving (AD) systems, while YUV images serve infotainment applications such as drive recorders and augmented reality (AR).

By expanding the applications a single camera can offer, the new product helps simplify automotive camera systems and saves space, cost, and power.

*1   Among CMOS sensors for automotive cameras. According to SSS research (as of announcement on October 4, 2024).
*2   Image format used for recognition processing on a computer.
*3   Image format for driver viewing, such as recordings or display on a monitor.
*4   Image signal processor – a circuit for image processing.
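To make the RAW/YUV distinction concrete, here is a minimal Python sketch of a standard RGB-to-YUV conversion using BT.601 weights. This is an illustrative textbook formula, not the ISX038's on-chip ISP pipeline, whose processing is proprietary:

```python
def rgb_to_yuv_bt601(r, g, b):
    """Convert one 8-bit RGB pixel to full-range YUV using BT.601 weights."""
    y = 0.299 * r + 0.587 * g + 0.114 * b           # luma
    u = -0.14713 * r - 0.28886 * g + 0.436 * b + 128  # blue-difference chroma
    v = 0.615 * r - 0.51499 * g - 0.10001 * b + 128   # red-difference chroma
    return round(y), round(u), round(v)

# A pure gray pixel carries no chroma: U and V sit at the 128 midpoint.
print(rgb_to_yuv_bt601(128, 128, 128))  # (128, 128, 128)
```

A RAW output, by contrast, delivers the sensor's unprocessed (typically Bayer-pattern) samples so that a downstream computer-vision stack can apply its own processing.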

Model name: ISX038, 1/1.7-type (9.30 mm diagonal), 8.39-effective-megapixel*5 CMOS image sensor
Sample shipment date (planned): October 2024
Sample price (including tax): ¥15,000*6

*5   Based on the image sensor effective pixel specification method.
*6   May vary depending on the volume shipped and other conditions.

The roles of automotive cameras continue to diversify in line with advances in ADAS and AD and increasing needs and requirements pertaining to the driver experience. However, the space available for installing such cameras is limited, making it impractical to keep adding more, which in turn has created demand to do more with a single camera.

The ISX038 is the industry’s first*1 CMOS image sensor for automotive cameras that can simultaneously process and output RAW and YUV images. It uses a stacked structure consisting of a pixel chip and a logic chip with a signal processing circuit, with SSS’s proprietary ISP on the logic chip. This design allows a single camera to provide both high-precision detection and recognition of the environment outside the vehicle and visual information to assist the driver in infotainment applications. Compared with conventional approaches, such as a multi-camera system or a system that outputs RAW and YUV images via an external ISP, the new product helps simplify automotive camera systems, saving space, cost, and power.

The ISX038 will be compatible with the EyeQ™6 System-on-a-Chip (SoC) currently offered by Mobileye, for use in ADAS and AD technology.

Main Features

Industry’s first*1 sensor capable of processing and outputting RAW and YUV images simultaneously
The new sensor is equipped with dedicated ISPs for RAW and YUV images and is capable of outputting two types of images simultaneously with image quality optimized for each application on two independent interfaces. Expanding the applications a single camera can offer helps build systems that save space, costs, and power compared to multi-camera systems or systems with an external ISP.

Wide dynamic range even during simultaneous use of HDR and LED flicker mitigation
In automobile driving, objects must be precisely detected and recognized even in road environments with significant differences in brightness, such as tunnel entrances and exits. Automotive cameras are also required to suppress LED flicker, even in HDR mode, to deal with the increasing prevalence of LED traffic signals and other devices. The proprietary pixel structure and unique exposure method of this product improve saturation illuminance, yielding a wide dynamic range of 106 dB even when HDR and LED flicker mitigation are used simultaneously (in dynamic range priority mode, the range is even wider, at 130 dB). This design also helps reduce the motion artifacts*7 generated when capturing moving subjects.

*7   Noise generated when capturing moving subjects with HDR.
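For context, dynamic-range figures for image sensors conventionally use the 20·log10 convention, so the dB values above can be read as brightest-to-darkest contrast ratios. A quick Python sketch (the conversion is standard; the ratios are derived here, not quoted from the release):

```python
def db_to_contrast_ratio(db):
    """Convert a dynamic-range figure in dB to a brightest:darkest signal ratio,
    using the 20*log10 convention common for image sensors."""
    return 10 ** (db / 20)

for db in (106, 130):
    print(f"{db} dB ~ {db_to_contrast_ratio(db):,.0f}:1")
# 106 dB ~ 199,526:1
# 130 dB ~ 3,162,278:1
```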

Compatibility with conventional products*8
This product is compatible with SSS’s conventional products,*8 which have already built a proven track record in ADAS and AD applications with multiple automobile manufacturers. The new product makes it possible to reuse data assets collected with previous products, such as driving data from automotive cameras, helping to streamline ADAS and AD development for automobile manufacturers and partners.

*8 SSS’s IMX728, a 1/1.7-type, 8.39-effective-megapixel CMOS image sensor.

Compliant with standards required for automotive applications
The product will be qualified under AEC-Q100 Grade 2 automotive electronic component reliability tests by mass production. SSS has also introduced a development process compliant with the ISO 26262 functional safety standard for road vehicles, at automotive safety integrity level ASIL-B(D). This contributes to improved automotive camera system reliability.

Key Specifications

Model name: ISX038
Effective pixels: 3,857 × 2,177 (H × V), approx. 8.39 megapixels
Image size: diagonal 9.30 mm (1/1.72-type)
Unit cell size: 2.1 μm × 2.1 μm (H × V)
Frame rate (all pixels): 30 fps (RAW & YUV dual output)
Sensitivity (standard value, F5.6, 1/30-second accumulation): 880 mV (green pixel)
Dynamic range (EMVA 1288 standard): 106 dB (with LED flicker mitigation) / 130 dB (dynamic range priority)
Interface: MIPI CSI-2 serial output (single port with 4 lanes / dual port with 2 lanes per port)
Package: 192-pin BGA
Package size: 11.85 mm × 8.60 mm (H × V)
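As a rough sanity check on the figures above, the pixel count and frame rate imply the following uncompressed RAW data rate. The 12-bit sample depth is a hypothetical assumption for illustration; the release does not state a bit depth:

```python
# Back-of-envelope data rate implied by the spec table.
h, v, fps = 3857, 2177, 30       # effective pixels and frame rate from the table
bits_per_pixel = 12              # assumed RAW bit depth (not stated in the release)

pixel_rate = h * v * fps                          # pixels per second
data_rate_gbps = pixel_rate * bits_per_pixel / 1e9

print(f"{pixel_rate/1e6:.1f} Mpixel/s -> {data_rate_gbps:.2f} Gbit/s before CSI-2 protocol overhead")
```

Under that assumption the RAW stream alone approaches ~3 Gbit/s, which is why a multi-lane MIPI CSI-2 interface is specified.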

 

View original content to download multimedia:https://www.prnewswire.com/news-releases/sony-semiconductor-solutions-to-release-the-industrys-first-cmos-image-sensor-for-automotive-cameras-that-can-simultaneously-process-and-output-raw-and-yuv-images-302264904.html

SOURCE Sony Semiconductor Solutions Corporation



Simply Announces Compatibility with AI Glasses from Meta


NEW YORK, April 29, 2026 /PRNewswire/ — Simply, the creative hobbies leader behind the market leading apps Simply Piano, Simply Guitar, Simply Sing, and Simply Draw, today announced compatibility with AI glasses from Meta.

 

The launch signals Simply’s next leap, from mobile and augmented reality into AI glasses, as part of its long-term vision to build a fully multimodal AI platform that connects physical creativity, digital experiences, and wearable interfaces.

After pioneering music learning through augmented reality with Simply Piano for Apple Vision Pro and Simply Piano for Android XR, Simply is now expanding its creative hobbies ecosystem into AI-powered wearables. The new integration between Simply Draw and AI glasses from Meta lets learners capture their drawing process in real time, generating AI-enhanced timelapses and shareable creative assets that showcase their creations.

“This is an exciting step toward a new era for creativity,” said Yuval Kaminka, CEO and Co-Founder of Simply. “We believe that the way we experience the arts, learning, playing, and creative expression at home will become fully contextual. AI glasses allow us to move closer to a true AI creative companion: a multimodal AI that understands what you’re doing and supports you in the moment.”

“AI glasses are becoming a natural extension of how we learn and create,” added Eliran Douenias, Head of Product Innovation at Simply. “Our products already enable immersive and virtual experiences with XR and spatial computing; now we’re adding AI glasses from Meta as the next interface, and it’s just the first step on an exciting roadmap ahead.”

“Simply’s early move into the AI glasses space puts us ahead of the curve and positions us to lead in how wearables, specifically AI glasses, become part of everyday creative life,” said Douenias.

With this launch, Simply is expanding its platform for the AI era. The new compatibility with AI glasses from Meta enhances how learners see, capture, and share their creative process, with many more experiences to follow.

About Simply

Simply is the world’s leading AI creativity platform, redefining how people learn and express themselves through music, arts, crafts, and more. Its award-winning apps, Simply Piano, Simply Guitar, Simply Sing, and Simply Draw, have empowered millions globally to pick up and develop fulfilling creative hobbies that last.

Contact info: eliran@hellosimply.com

Video – https://www.youtube.com/watch?v=VquEDFtY-40
Photo – https://mma.prnewswire.com/media/2940139/Simply.jpg

View original content to download multimedia:https://www.prnewswire.com/news-releases/simply-announces-compatibility-with-ai-glasses-from-meta-302755903.html

SOURCE Simply


Levine Leichtman Capital Partners Hires James Smith as Managing Director


LONDON, April 29, 2026 /PRNewswire/ — Levine Leichtman Capital Partners (“LLCP”) announced today that James Smith has joined the Firm as a Managing Director in the Investment Management group. James will be based in LLCP’s London office.

Josh Kaufman, Head of Europe at LLCP, said, “We are thrilled to welcome James to LLCP. James adds valuable experience to the team within our core Business Services sector vertical. We look forward to the impact he will have as our European business and team continues to grow.”

James joins LLCP from Advent International, where he was a senior member of the European Business & Financial Services team and participated in numerous successful transactions over his 12-year tenure. Before joining Advent, James worked at Bain & Company. James’ full biography can be found at https://www.llcp.com/team.

About Levine Leichtman Capital Partners

Levine Leichtman Capital Partners, LLC is a middle-market private equity firm with a 42-year track record of investing across various targeted sectors, including Business Services, Franchising & Multi-unit, Education & Training and Engineered Products & Manufacturing. LLCP utilizes a differentiated Structured Private Equity investment strategy, combining debt and equity capital investments in portfolio companies. LLCP believes that by investing in a combination of debt and equity securities, it offers management teams growth capital in a highly tailored, flexible investment structure that can be a more attractive alternative than traditional private equity.

LLCP’s global team of dedicated investment professionals is led by 9 partners who have worked at LLCP for an average of 20 years. Since inception, LLCP and its affiliates have managed approximately $18.5 billion of capital across nearly 20 investment funds and have invested in approximately 120 portfolio companies. LLCP currently manages $12.6 billion of assets and has offices in Los Angeles, New York, Chicago, Miami, London, Stockholm, Amsterdam and Frankfurt.

Media Contact: Isabel Moon, imoon@llcp.com

Logo – https://mma.prnewswire.com/media/2349427/5942845/LLCP_Logo.jpg

View original content:https://www.prnewswire.co.uk/news-releases/levine-leichtman-capital-partners-hires-james-smith-as-managing-director-302756349.html


Appian Advances AI in Process to Deliver Enterprise Outcomes at Scale


New capabilities in agentic automation and AI-assisted spec-driven development transform complex work.

ORLANDO, Fla., April 29, 2026 /PRNewswire/ — Appian [Nasdaq: APPN] today announced enhancements to the Appian Platform, including AI-assisted spec-driven development and Model Context Protocol (MCP) integration for agents. By anchoring AI within processes, Appian eliminates the primary hurdles to AI value: fragmented data and a lack of reliability and control. Process models provide the structure needed to deliver results safely and at scale.

Advancements in AI agents enable more intelligent, coordinated work

AI agents in Appian are smarter, safer, and more effective because they have better structure, context, and guardrails. Appian is enhancing interoperability across its AI ecosystem: by adopting powerful standards like the Model Context Protocol (MCP), Appian agents will be able to interface securely with external enterprise systems. Third-party AI agents will have access to powerful Appian tools like data fabric, which uniquely provides unified read-write access to enterprise data.
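For readers unfamiliar with MCP: the protocol standardizes a JSON-RPC 2.0 wire format through which agents discover and invoke tools exposed by a server. A minimal Python sketch of a `tools/call` request is below; the tool name and arguments are hypothetical and are not part of any Appian API:

```python
import json

# Illustrative MCP "tools/call" request in the JSON-RPC 2.0 envelope
# the Model Context Protocol standardizes.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_records",  # hypothetical tool a server might expose
        "arguments": {"entity": "Claim", "status": "open"},
    },
}

wire = json.dumps(request)  # serialized message sent to the MCP server
print(wire)
```

The server replies with a matching JSON-RPC response carrying the tool's result, which is what lets heterogeneous agents and systems interoperate over one protocol.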

Appian is also advancing agent learning by letting users track agent performance and then apply an agent’s memory across processes to improve decision making. Users will soon be able to build on this by giving the AI guidance on which objectives to optimize against and having it recommend improvements that can be applied safely.

Customer value

Global Excel Management, a worldwide healthcare risk management provider, uses Appian to transform claims processes with AI.

“As part of our digital transformation we are evolving our claims processes by transitioning from fragmented workflows to an enhanced level of operations using technological advancements enabled with AI features,” said Pascal Tanguay, SVP, Global Technology Services, Global Excel Management. “With Appian, our processes will be unified. From initial intake to adjudication, our advanced technology will reduce redundant tasks and lessen complexity for our team members. This ensures that our claims processes are consistent and completed more efficiently and accurately.”

Context gives agents a common vocabulary for business data

To support advanced agent capabilities, Appian is augmenting its industry-leading data fabric. Appian’s data fabric has been enhanced to provide a unified metadata model that gives agents clearer context about how information is structured and connected across systems.

Furthering its commitment to supporting industry-leading data platforms, Appian is launching a technology partnership with Snowflake. This unites Appian as the AI orchestration layer with Snowflake’s AI Data Cloud, combining data aggregation, model training, and process orchestration to enable immediate business value. Direct MCP-enabled integration between Appian data fabric and Snowflake equips agents with deep enterprise context, and allows them to interact directly with Snowflake Cortex AI to drive intelligent, data-backed decisions.

“Enterprises don’t need more AI experiments, they need AI that delivers real business outcomes on governed data,” said Baris Gultekin, Vice President of AI, Snowflake. “By combining Appian’s process orchestration and data fabric with the Snowflake AI Data Cloud, we’re bringing intelligence directly into the flow of work. Together, we enable secure, enterprise-grade AI where agents can access trusted data through Cortex AI, act with context, and drive measurable impact across the business.”

AI-assisted spec-driven development

AI-assisted development has revolutionized coding, but mission-critical work needs more than fast, cheap code. Appian puts structure around AI-assisted development. Without that structure, AI-generated code can introduce compliance issues and technical debt instead of business value.

Appian is introducing AI-assisted spec-driven development. AI extracts rich specifications from legacy applications to create a clear visual plan. This plan helps visualize the UI, data models and process flows for rapid and iterative operational improvements. AI developer agents, operating under human supervision, complete tasks according to specifications, accelerating delivery and reducing rework.

New developer MCP servers will allow organizations to use their choice of AI development tools, such as Claude Code or Kiro, to build and update Appian applications. Appian will support a wide range of AI models, enabling teams to work in the environments they prefer.

Together, these enhancements will deliver the speed and developer productivity of AI-assisted development, with enterprise-grade control.

“Appian Composer, Agents and Appian MCP servers enable trusted agentic process orchestration and application modernization,” said Mike Beckley, Chief Technology Officer and Founder of Appian. “Composer complements Appian’s agentic orchestration and data fabric with new spec-driven development tools that are both conversational and iterative. Beneath the covers, Appian Composer is built on Appian’s new open MCP – a model-driven representation of your complete application estate—requirements, apps, data entities, logic, workflows, security/governance rules, integrations, and multi-object dependencies—now exposed as context for developers and agents to safely evolve and optimize.”

The advancements announced today were unveiled at Appian World 2026 and will be available in coming releases. Learn more at www.appian.com.

About Appian

Appian provides process automation technology. We automate complex processes in large enterprises and governments. Our platform is known for its unique reliability and scale. We’ve been automating processes for 25 years and understand enterprise operations like no one else. For more information, visit appian.com. [Nasdaq: APPN]

Follow Appian: LinkedIn, Youtube, Instagram, Facebook, and X.

Logo – https://mma.prnewswire.com/media/1488235/5943345/Appian_Caption_2700px_Logo.jpg

 

View original content:https://www.prnewswire.co.uk/news-releases/appian-advances-ai-in-process-to-deliver-enterprise-outcomes-at-scale-302756511.html
