
Technology

Real or Fake? Finding the best ways to detect digital deception


Deepfake technology has people wondering — is what I’m seeing real or fake? University researchers are building deepfake detection tools that can help journalists, intelligence analysts, and other trusted decision makers.

ROCHESTER, N.Y., Nov. 20, 2024 /PRNewswire-PRWeb/ — Seeing is believing. Well, it used to be, anyway.

How do deepfakes work? The process uses AI deep learning algorithms to analyze thousands of images and videos of the person being replicated. The neural network learns patterns, such as facial features and expressions, so it can generate new, synthetic images of that person.
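
The face-swap recipe behind many deepfakes can be sketched in a few lines: one shared encoder learns features common to both faces, and a separate decoder per identity reconstructs that identity. The toy below uses random weights and made-up sizes purely to illustrate the structure; it is not the DeFake Project’s code or any production model.

```python
# Toy sketch of the classic face-swap autoencoder idea (hypothetical sizes,
# random weights): a shared encoder plus one decoder per identity.
# Swapping happens by feeding person A's encoding into person B's decoder.
import numpy as np

rng = np.random.default_rng(0)

def encoder(face, W_enc):
    # Compress a flattened face image into a small latent vector.
    return np.tanh(face @ W_enc)

def decoder(latent, W_dec):
    # Reconstruct a face image from the latent vector.
    return latent @ W_dec

D, K = 64, 8                       # pixels per (tiny) face, latent size
W_enc = rng.normal(size=(D, K))    # shared across identities
W_dec_a = rng.normal(size=(K, D))  # decoder trained on person A
W_dec_b = rng.normal(size=(K, D))  # decoder trained on person B

face_a = rng.normal(size=D)
# The "deepfake" step: encode A, then decode with B's decoder, producing
# B's appearance driven by A's expression and pose.
swapped = decoder(encoder(face_a, W_enc), W_dec_b)
print(swapped.shape)  # (64,)
```

Real systems replace these matrices with deep convolutional networks trained on thousands of frames, but the swap itself works the same way: encode person A, decode with person B’s decoder.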

Today, artificial intelligence (AI) is being used to manipulate media.

It can face-swap celebrities. It allowed a de-aged Luke Skywalker to guest star in The Mandalorian. It also falsely showed Ukrainian President Volodymyr Zelensky surrendering to the Russian invasion.

Deepfakes are videos, audio, or images that have been altered using AI. In a deepfake, people can be shown saying and doing things that they have never said or done.

This capability has profound implications for entertainment, politics, journalism, and national security. As deepfakes become more convincing, the challenge of distinguishing fact from fiction grows, threatening the credibility of news sources and the stability of democratic institutions.

At RIT, a team of student and faculty researchers is leading the charge to help journalists and intelligence analysts figure out what is real and what is fake. Their work, called the DeFake Project, has more than $2 million in funding from the National Science Foundation and Knight Foundation.

The RIT team aims to mobilize the best deepfake detectors around—observant humans armed with the right tools. “There is real danger in shiny new deepfake detectors that confidently offer often inaccurate results,” said Saniat (John) Sohrawardi, a computing and information sciences Ph.D. student leading the DeFake Project. “We need to provide journalists—and other experts who vet reality—with forensic tools that help them make decisions, not make the decisions for them.”

Journalists agree and they are working with RIT.

Scott Morgan, a reporter and producer with South Carolina Public Radio, said that it’s increasingly hard to spot a fake and that a good detection tool would be invaluable. He said he often relies on a “would that person really have said that?” kind of approach.

“And ultimately, that’s what DeFake is trying to be—a tool that supplements the journalist’s gut feeling and complements old-fashioned legwork, but doesn’t replace them,” said Morgan. “Because even an AI-driven program that analyzes videos for the teeny-tiniest of clues that it might have been doctored shouldn’t be left to make decisions about what to do with that information or disinformation.”

Spotting the Fake

Matthew Wright, endowed professor and chair of the Department of Cybersecurity, first saw a high-quality deepfake lip sync of President Obama in 2017. He called it a real “OMG moment.”

“It was really disconcerting,” said Wright. “The potential to use this to make misinformation and disinformation is tremendous.”

As an expert in adversarial machine learning, Wright was studying how AI can impact cybersecurity for good and bad. Deepfakes seemed like a natural offshoot of this work.

In 2019, Wright and the newly formed DeFake Project team answered a call from the Ethics and Governance of Artificial Intelligence Initiative to build a deepfake detector. After developing some specialized techniques, their detector worked perfectly on curated deepfake datasets—it had 100-percent accuracy. Then they pulled up some YouTube videos to run through their detector.

“It would make mistakes,” said Wright. “But this wasn’t just our design. There is a cottage industry around developing deepfake detectors, and none of them are foolproof, despite the companies’ claims.”

Detectors can become confused when video is even slightly altered, clipped out of context, or compressed. For example, in 2021, a Myanmar news outlet used a publicly available deepfake detector to analyze a video of a chief minister confessing to a bribe. The tool was 90-percent confident that the video was fake, yet expert analysis later determined it was in fact real.

“Users tend to trust the output of decision-making tools too much,” said Sohrawardi. “You shouldn’t make a judgment based on percentage alone.”
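
One reason a raw percentage misleads is base rates. A quick Bayes check, with illustrative numbers that are assumptions rather than measurements from any real detector, shows how a 90-percent-confident flag can still point to a probably-real video:

```python
# Why "90 percent confident" shouldn't settle it: a Bayes check with made-up
# numbers. Suppose only 1 in 100 videos a newsroom vets is actually fake,
# and the detector flags fakes 90% of the time but also wrongly flags 10%
# of real videos.
p_fake = 0.01            # prior: share of vetted videos that are fake (assumed)
p_flag_given_fake = 0.90
p_flag_given_real = 0.10

p_flag = p_flag_given_fake * p_fake + p_flag_given_real * (1 - p_fake)
p_fake_given_flag = p_flag_given_fake * p_fake / p_flag
print(round(p_fake_given_flag, 3))  # 0.083: a flagged video is still
                                    # probably real under these assumptions
```

The point is not the exact numbers, which depend on the newsroom and the detector, but that a confidence score only becomes meaningful once it is combined with context the tool cannot see.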

That’s why the DeFake Project is so important, said Andrea Hickerson, dean and professor of the School of Journalism and New Media at The University of Mississippi and a member of the project. The goal is to make a tool that journalists can actually use.

“If a trusted journalist accidentally shares a deepfake, it would reach a wide audience and undermine trust in the individual and the profession as a whole,” said Hickerson, the former director of RIT’s School of Communication.

“Journalists have important contextual expertise that can be paired with a deepfake detection tool to make informed judgments on the authenticity of a video and its newsworthiness.”

To better understand the journalistic process, the DeFake researchers interviewed 24 reporters, ranging from national broadcast networks to local print media. Taking inspiration from a popular tabletop game, the team created a role-playing exercise called Dungeons & Deepfakes. The journalists were placed in a high-stakes newsroom scenario and asked to verify videos using traditional methods and deep-learning-based detection tools.

The team observed that journalists diligently verify information, but that they, too, can over-rely on detection tools, just as in the Myanmar incident.

Above all, journalists viewed the overall fakeness score with healthy skepticism; they wanted insight into how it was calculated. Unfortunately, AI is not inherently good at explaining the rationale behind its decisions.

Unboxing the Black Box

When Pamposh Raina is asked to investigate a potential deepfake, she checks with multiple sources and often reaches out to RIT’s experts.

She is an experienced reporter who has worked with The New York Times, written for international publications, and currently heads the Deepfakes Analysis Unit at the Misinformation Combat Alliance, which is helping fight AI-generated misinformation in India.

One clip she questioned was circulating on social media in 2024. It was a video in Hindi that appeared to show Yogi Adityanath, chief minister of India’s most populous state, promoting a pilot gaming platform as a quick way to make money.

After running the video through detection tools from Hive AI, TrueMedia, and escalating to ElevenLabs for audio analysis, the investigators wanted an expert view on possible AI tampering around Adityanath’s mouth area in the video.

The DeFake team noted that the chief minister’s mouth animation looked disjointed and could be a result of the algorithm failing to extract proper facial landmarks. Ultimately, the Deepfakes Analysis Unit concluded that the video was fake and Adityanath did not utter the words attributed to him.

Creating meaningful tools like this is why Kelly Wu, a computing and information sciences Ph.D. student, came to RIT. After completing her undergraduate degrees in mathematics and economics at Georgetown University, Wu jumped at the chance to research deepfakes with the RIT team.

“Right now, there is a huge gap between the user and detection tools, and we need to collaborate to bring that together,” said Wu. “We care about how it will transition into people’s hands.”

Just like human brains, AI systems identify trends and make predictions. And just like in humans, it’s not always clear how a model comes to any particular conclusion.

Wu is figuring out how to unbox that AI black box. She aims to produce explanations that are both faithful to the AI model and interpretable by humans.

A lot of today’s detection tools use heatmaps to present explanations of results. A blob of dark red highlighting the eye region signifies that this area is more important for the model’s decision-making process.

“But, even to me, it just looks like a normal eye,” said Wu. “I need to know why the model thinks this is important.”
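
Heatmaps of this kind are often produced by occlusion analysis, a common explanation technique (not necessarily the one DeFake uses): mask each region of the image, re-run the detector, and record how much the score drops. A minimal sketch with a stand-in detector:

```python
# Occlusion-based heatmap sketch: regions whose masking causes a big score
# drop are the regions the model relied on.
import numpy as np

def toy_detector(img):
    # Stand-in "detector": score driven only by the top-left patch's mean.
    return img[:4, :4].mean()

def occlusion_map(img, score_fn, patch=4):
    base = score_fn(img)
    h, w = img.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = img.copy()
            masked[i:i+patch, j:j+patch] = 0.0   # occlude one patch
            heat[i // patch, j // patch] = base - score_fn(masked)
    return heat

img = np.ones((8, 8))
heat = occlusion_map(img, toy_detector)
print(heat)  # only the top-left cell matters for this toy detector
```

The resulting map says where the model looked, which is exactly Wu’s complaint: it still does not say why that region mattered.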

The DeFake tool will highlight areas and provide detailed text explanations. The detector displays information on the processed content, including metadata, overall fakeness, top fake faces, and an estimation of the deepfake manipulation method used. It also incorporates provenance technology, extracting Content Credentials—a new kind of tamper-evident metadata. Due to the resource-intensive nature of AI, the tool allows people to assess specific snippets of a video.
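
Content Credentials make metadata tamper-evident by cryptographically binding it to the media bytes. The sketch below illustrates only the underlying idea with a bare hash; real C2PA manifests are digitally signed, and the author identifier here is hypothetical.

```python
# Toy illustration of "tamper-evident" metadata: bind a hash of the media
# bytes into the metadata, so any edit to the content breaks the binding.
import hashlib

def attach_credentials(media: bytes, author: str) -> dict:
    return {"author": author, "sha256": hashlib.sha256(media).hexdigest()}

def verify(media: bytes, creds: dict) -> bool:
    return hashlib.sha256(media).hexdigest() == creds["sha256"]

video = b"original frames"
creds = attach_credentials(video, "newsroom-camera-01")  # hypothetical id
print(verify(video, creds))             # True: untouched media checks out
print(verify(b"edited frames", creds))  # False: the alteration is evident
```

A bare hash only detects changes; the signatures in real Content Credentials additionally prove who attached the metadata, which a hash alone cannot.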

Most recently, the DeFake Project, which now has nine members from three universities, is expanding to meet the needs of intelligence analysts.

In 2023, RIT earned a grant to work with the Department of Defense on bolstering national security and improving intelligence analysis.

RIT’s team is interviewing analysts and using their insights to help create a Digital Media Forensic Ontology that makes the terminology of manipulated media detection methods clearer and more consistent. Analysts can use the DeFake all-in-one platform along with the ontology to narrow down why content needs to be analyzed, where in the media analysts should focus their attention, and what artifacts they should look for.

Candice Gerstner, an applied research mathematician with the Department of Defense, is collaborating on the project. She said that when analysts write a report that will be passed up the chain, they need to be sure that information has integrity.

“I’m not satisfied with a single detector that says 99 percent—I want more,” said Gerstner. “Having tools that are easily adaptable to new techniques and that continue to strive for explainability and low error rates is extremely important.”

In the future, the DeFake Project plans to expand to law enforcement, who are worried about fake evidence getting into the court system. RIT students are also researching reinforcement learning to limit bias and make sure AI models are fair.

Akib Shahriyar, a computing and information sciences Ph.D. student, is taking it one step further. He’s attacking the underlying model that powers the DeFake tool to uncover its weaknesses.

“In the end, we’re not just creating a detector and throwing it out there, where it could be exploited by adversaries,” said Shahriyar. “We’re building trust with the users by taking a responsible approach to deepfake detection.”

How to Identify a Deepfake

Although RIT’s DeFake tool is not publicly available, here are some common ways to identify fake content.

Artifacts in the face: Look for inconsistencies in eye reflections and gaze patterns. Anomalies may occur in the face—unnatural smoothness, absence of outlines of individual teeth, and irregular facial hair.

Body posture: Deepfakes prioritize altering facial features, so body movements could appear odd or jerky.

Audio discrepancies: Does the audio sync seamlessly with the speaker’s mouth movements?

Contextual analysis: Consider the broader context, including the source, timestamps, and post history.

External verification: Do a reverse image search and try contacting the original sources.

Check the news: Look for reports about the content in reputable news sites.
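
The audio check above can be partly automated by correlating mouth movement with audio loudness; in a genuine clip the two rise and fall together. The sketch below runs on synthetic signals; a real tool would extract mouth openness from face tracking and loudness from the soundtrack.

```python
# Toy audio-sync check: compare a mouth-openness signal with audio energy.
# Synced signals correlate strongly; a lip-synced fake often drifts.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 300)
audio_energy = np.abs(np.sin(2 * t)) + 0.05 * rng.normal(size=t.size)

mouth_genuine = np.abs(np.sin(2 * t)) + 0.05 * rng.normal(size=t.size)
mouth_faked = np.abs(np.sin(2 * t + 1.5)) + 0.05 * rng.normal(size=t.size)

def sync_score(a, b):
    # Pearson correlation: near 1.0 means tightly synced signals.
    return float(np.corrcoef(a, b)[0, 1])

print(sync_score(audio_energy, mouth_genuine) > 0.9)  # True
print(sync_score(audio_energy, mouth_faked) < 0.5)    # True
```

A check like this is a hint, not a verdict: compression, dubbing, and noisy footage can all throw the correlation off, which is why the human checks above still matter.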

Media Contact

Scott Bureau, Rochester Institute of Technology, 585-475-2481, sbbcom@rit.edu, rit.edu

View original content to download multimedia:https://www.prweb.com/releases/real-or-fake-finding-the-best-ways-to-detect-digital-deception-302311795.html

SOURCE Rochester Institute of Technology



Neusoft Showcases Full-Stack & Global Innovations at Auto China 2026


BEIJING, April 26, 2026 /PRNewswire/ — At Auto China 2026, Neusoft Corporation hosted a press conference on April 25th and announced three key strategic moves: the iteration of Neusoft OneCoreGo® Global In-Vehicle Intelligent Mobility Solution 7.0, the launch of Neusoft NAGIC.AI Cockpit Software Platform, and the strategic upgrade of its subsidiary, Neusoft Smart Go. By leveraging full-stack technology and a global ecosystem to drive innovation and empowerment, Neusoft is transforming vehicles into proactive, connected and collaborative mobile intelligent spaces.

OneCoreGo® Global In-Vehicle Intelligent Mobility Solution 7.0: An Evolved AI Companion for Global Intelligent Mobility

Intelligent mobility requires proactive perception, scenario integration, and global connectivity to meet personalized user needs and complex driving scenarios. Neusoft, whose products cover over 130 countries and regions worldwide, addresses these challenges with its OneCoreGo® Global In-Vehicle Intelligent Mobility Solution 7.0 through AI-driven innovation and global ecosystem collaboration. Powered by One Mate’s cross-agent collaboration and a sub-product matrix including One Map, One Sight, One Cloud, One Pay, One Store, One Link, and One Guard, the solution delivers full-link global mobility services spanning navigation, in-cabin AR, payment, app ecosystem services, connectivity and security. By breaking down functional silos, it streamlines multi-step operations into a single “depart” command, leveraging full-stack AI technology across perception, decision-making, interaction, and execution processes.

Guan Xin, Vice President of Neusoft and General Manager of Neusoft Automotive Innovative Solutions Division, said, “Adhering to the core principles of AI and globalization, OneCoreGo® 7.0 keeps innovating, evolving into a globally intelligent mobility companion that truly understands user needs.”

To enhance driving safety and mobility efficiency, OneCoreGo® 7.0 has also comprehensively upgraded its sub-products: One Map Global Navigation newly introduces 3D city effects, 3D lane-level maps, and traffic light guidance, offering dedicated solutions for two-wheelers and commercial vehicles as well. One Sight AR For Car improves navigation display effects, reducing instances of taking wrong routes. One Pay In-Vehicle Payment achieves over 90% payment coverage for parking services across core European cities. Combined with One Cloud’s global compliance cloud monitoring platform and One Guard’s full-stack vehicle networking security services, it creates a truly comprehensive OneCoreGo® Global In-Vehicle Intelligent Mobility Solution.

Neusoft NAGIC.AI Cockpit Software Platform: Dual-track Architecture for AI Integration in Every Vehicle

Amid the AI-driven transformation of the automotive industry, the market faces two challenges: limited computing power in legacy vehicles and high adaptation difficulties for next-gen models. Neusoft’s NAGIC.AI Cockpit Software Platform adopts a flexible “distributed + centralized” dual-track architecture approach. For existing vehicle models, it introduces the AI BOX solution, rapidly boosting computing power via external AI computing units, significantly reducing upgrade costs and timelines. For new vehicle models built on next-gen central computing platforms, Neusoft provides a full-stack AI cockpit software product suite, meeting automakers’ stringent requirements for system stability, reliability, and full-domain control.

Pang Hongyan, Vice President of Neusoft and General Manager of the Automotive Intelligent Software Division, said, “Our dual-track architecture enables every vehicle to embrace AI and enjoy an intelligent future. Both existing models and new-generation vehicles can find the most suitable path to intelligentization.”

Moreover, Neusoft’s NAGIC.AI Cockpit Software Platform features scenario-based, human-centric AI Agents seamlessly integrating driving safety, occupant care services, intelligent assisted driving and in-cabin entertainment. Neusoft also collaborates with global ecosystem partners to drive intelligent upgrades of in-cabin interaction products, fostering a more open and dynamic intelligent cockpit ecosystem.

Strategic Upgrade of Neusoft Smart Go: A World-leading Provider of Full-Domain Upper-Body Electronics Solutions for Intelligent Vehicles

Aligning with the trend of E/E architecture evolution from distributed control to “central computing + zonal control”, Neusoft Smart Go, a subsidiary of Neusoft in the field of intelligent vehicle connectivity, has completed a strategic upgrade, aiming to become a global leader in full-domain upper-body electronics solutions for intelligent vehicles.

This strategic upgrade positions Neusoft Smart Go to focus on full-domain scenarios in upper-body electronics, building a product matrix covering full-category in-vehicle electronics solutions, including central computing platforms, cockpit-driving-parking integration, intelligent cockpits, intelligent communications, intelligent audio systems, and zonal control units, and pioneering the integration of large model algorithms.

Jian Guodong, Senior Vice President of Neusoft and CEO of Neusoft Smart Go, said, “This strategic upgrade represents a significant leap from partial focus to comprehensive layout. Through our dual-track strategy of high-end cutting-edge solutions and mature standardized products, we can flexibly meet the mass production needs of vehicle models across different regions and price segments worldwide.” Neusoft Smart Go will provide mass-producible, adaptable hardware-software integrated solutions, empowering global automakers in achieving intelligent transformation.

Neusoft’s President, Mr. Gai Longjia, stated, “In the future, Neusoft Smart Go will create stronger synergy with Neusoft Corporation by sharing internal technologies and capabilities while responding jointly to external demands. This specialized yet collaborative model will preserve business units’ agility and expertise while enhancing Neusoft’s full-stack technological advantages.”

As a trusted partner in a smarter world, Neusoft is committed to collaborating with global automakers and ecosystem partners to build an open and inclusive intelligent automotive community together for the future of global mobility.

For more information about Neusoft, please visit www.neusoft.com.

 

View original content:https://www.prnewswire.com/apac/news-releases/neusoft-showcases-full-stack–global-innovations-at-auto-china-2026-302753701.html

SOURCE Neusoft Corporation



Lianlian DigiTech Returns to Money20/20 Asia to Expand Partnerships, Share Industry Trends, and Explore AI-Enabled Global Financial Infrastructure


BANGKOK, April 26, 2026 /PRNewswire/ — Lianlian DigiTech, a leading global provider of digital payment services, was once again invited to participate in Money20/20 Asia, one of the world’s most influential fintech gatherings, held in Bangkok, Thailand from April 21 to 23. At the event, the company presented its latest developments in cross-border payment infrastructure, technology innovation, and ecosystem collaboration, offering a comprehensive view of its work enhancing global cross-border payment capabilities.

During the conference, Lianlian DigiTech announced a strategic partnership with UK-based fintech company USI Money to further strengthen its global cross-border payment network, delivering more efficient and reliable fund flows for merchants worldwide. Shen Enguang, Co-President of Lianlian DigiTech; Mark Ma, Head of Global Banking Partnership at LianLian Global; and Bryan Jiang, General Manager Hong Kong of LianLian Global, attended the event and engaged with representatives from international financial institutions. They shared perspectives on fintech trends and global payment innovation, offering industry insight into the continued evolution of a more integrated and interoperable cross-border payments ecosystem.

Building a Borderless Payment Network with Global Partners Including USI Money

At the event, Lianlian DigiTech formalized a strategic collaboration with London-headquartered USI Money to further develop its global payment infrastructure.

The partnership will focus on cross-border remittance and foreign exchange services, combining both companies’ technological capabilities and resources to deliver a one-stop payment and collection solution for global businesses. The offering is built to be efficient, secure, and cost-effective, improving overall fund flow efficiency and streamlining foreign exchange execution.

Syed Bukhari, Group Chief Business and Operating Officer at USI Money, said: “Our partnership with Lianlian will strengthen our remittance capabilities, creating greater value for our customers through broader network coverage and improved transaction performance.”

Bryan Jiang, General Manager Hong Kong of LianLian Global, said: “By leveraging the complementary strengths of our ecosystem partners in technology and compliance, Lianlian will continue to scale its global payment network and improve transaction efficiency. We remain committed to enhancing financial connectivity across global financial markets and delivering more efficient and reliable cross-border payment solutions for our customers.”

Founded in 2009 and listed on the Main Board of the Hong Kong Stock Exchange in 2024 (2598.HK), Lianlian DigiTech is a China-based, globally focused digital payment company with increasingly integrated AI capabilities across its platform. Guided by its mission of “Connecting the world, Empowering global commerce,” the company focuses on developing a trusted and scalable financial infrastructure. As of the end of 2025, Lianlian DigiTech has built a cross-border payment network covering more than 100 countries and regions, serving over 10.4 million customers worldwide.

USI Money is a foreign exchange and international remittance service provider offering tailored cross-border financial solutions for businesses and individuals. With competitive real-time exchange rates and efficient execution as its core strengths, the company delivers fast, secure, and reliable global fund transfers.

In addition, Lianlian DigiTech co-hosted a networking session with Unlimit during the event, providing a forum for industry dialogue. The session brought together a broad group of fintech partners to explore collaborative models and help foster a more connected ecosystem.

Industry Roundtables: Unlocking Layered Collaboration in AI-Driven Cross-Border Payments and Advancing Financial Inclusion in Emerging Markets

At the same time, Mark Ma and Bryan Jiang were invited to the themed roundtable discussions, where they shared insights drawn from industry practice and outlined new approaches to aligning fintech innovation with the global financial system.

At the roundtable on “Fintech and Banks,” Mark Ma noted that the global payment system is rapidly shifting from isolated capabilities to a layered, collaborative model. Banks continue to serve as the foundational infrastructure, responsible for clearing networks and liquidity management. Fintech firms like Lianlian, meanwhile, build on top of this foundation to deliver application-layer services for businesses, transforming complex cross-border payment channels into more accessible solutions that support a wider range of practical business scenarios. He also emphasized fintech’s growing role in compliance and value creation. By embedding risk controls and verification processes into technology workflows, fintech companies can act as compliance intermediaries, improving efficiency while filtering risk and enabling banks to operate more effectively at scale. Meanwhile, insights derived from transaction data and business flows allow for more precise evaluation of small and medium-sized businesses, shifting capital allocation from experience-based decisions to data-driven approaches and improving access to financial services.

At the roundtable titled “Different Worlds, Shared Challenges: Bridging Emerging Markets,” Bryan Jiang pointed out that the core of financial inclusion is shifting from scale of coverage to practical usability in everyday financial activity. The ability to serve underserved segments such as small and micro merchants and overseas workers in a sustained and reliable manner ultimately depends on continuous improvements in product design and operational capabilities. Using emerging markets as an example, Jiang explained that small and medium-sized businesses in these regions often face challenges such as difficult account setup, complex cross-border collections, high foreign exchange costs, and multi-layered tax requirements. Many existing solutions still follow traditional business-focused models, resulting in cumbersome KYB processes and lengthy review cycles that are misaligned with the asset-light, high-frequency, fast-turnover nature of these businesses. In response, Lianlian has lowered barriers to fund flows by offering local collection accounts, optimizing foreign exchange mechanisms, and improving settlement efficiency. The company has also restructured account architecture, streamlined review processes, and enhanced fund visibility, creating a more seamless and intuitive user experience that better aligns financial services with its clients’ business operations and day-to-day activities.

As digital technologies increasingly integrate with the real economy, innovations in AI and blockchain are reshaping the foundations of global financial services. Lianlian DigiTech has long invested in AI capabilities, global compliance, and the growth of its international service network. Its broad licensing coverage, regulatory track record, localized service capabilities, and technical reliability have earned the trust of regulators, customers, and partners worldwide.

Looking ahead, Lianlian DigiTech will continue to build on its cross-border expertise and compliance experience to further develop its AI capabilities and deepen collaboration with global partners. The company aims to extend its role beyond payment network services into more integrated financial infrastructure solutions. Lianlian DigiTech remains committed to serving as a trusted platform for global financial transactions in an increasingly digital environment, enabling businesses and individuals worldwide to access faster, more efficient, and more seamless cross-border financial services.

View original content:https://www.prnewswire.com/apac/news-releases/lianlian-digitech-returns-to-money2020-asia-to-expand-partnerships-share-industry-trends-and-explore-ai-enabled-global-financial-infrastructure-302753667.html

SOURCE LianLian Global



The Building & Furniture Category Highlights Sustainable and Human‑Centric Design at the 139th Canton Fair


GUANGZHOU, China, April 26, 2026 /PRNewswire/ — Phase 2 of the 139th Canton Fair has seen the Building & Furniture category emphasize green infrastructure and human-centric design.

A major highlight of the building and decorative materials section is the introduction of photovoltaic marble-textured cladding. This innovative surfacing material bridges the gap between high-end aesthetics and renewable energy. Unlike traditional solar panels that rely on glass, this non-opaque cladding uses precise microscopic structures to guide light to internal PV cells.

This technology offers 60% higher efficiency than traditional transparent solar systems while reducing carbon emissions by over 50%. Its ability to reproduce stone, wood, or brick‑like 3D textures allows architects to integrate power generation into a wide range of building styles without the industrial appearance of traditional solar panels.

Indoor environments are also becoming smarter and safer. Manufacturers are showcasing high-efficiency antibacterial surfacing, utilizing visible light catalysis to provide 24-hour protection against mold and bacteria. These advanced decorative papers and panels are becoming the new standard for high-end interior decoration, prioritizing long-term hygiene in residential and commercial spaces.

The sanitary ware sector is increasingly focused on the aging global population and those with limited mobility. A standout innovation is the electric lift-and-rotate shower chair. Designed for the dry-wet separation bathroom layout, it allows users to sit in a dry area and be safely rotated and lifted into the shower via remote control. This waterproof, low-voltage system provides dignity and independence for the elderly while reducing the physical strain on caregivers.

Hygiene and ease of maintenance have also seen a breakthrough with wall-mounted toilets. By moving the lid connection to the tank wall and adopting a mortise‑and‑tenon structure, the design eliminates the hard‑to‑clean areas where bacteria typically accumulate. Many of these units also incorporate ergonomic grab bars directly into the frame, blending safety with a minimalist aesthetic.

In the sports and leisure industry, the shift toward sustainability is seen in non-infill synthetic turf. This next-generation football grass eliminates the need for rubber granules or sand, providing a natural touch and superior shock absorption while significantly reducing maintenance costs and microplastic pollution.

All these innovations demonstrate how the Building & Furniture sector is advancing toward greener materials, smarter functionality, and more human‑centered design, setting new benchmarks for the future of living spaces.

For pre-registration, please click: https://buyer.cantonfair.org.cn/register/buyer/email?source_type=16

Photo – https://mma.prnewswire.com/media/2965701/Image1.jpg

View original content:https://www.prnewswire.co.uk/news-releases/the-building–furniture-category-highlights-sustainable-and-humancentric-design-at-the-139th-canton-fair-302753654.html

