Technology

Real or Fake? Finding the best ways to detect digital deception

Deepfake technology has people wondering — is what I’m seeing real or fake? University researchers are building deepfake detection tools that can help journalists, intelligence analysts, and other trusted decision makers.

ROCHESTER, N.Y., Nov. 20, 2024 /PRNewswire-PRWeb/ — Seeing is believing. Well, it used to be, anyway.

How do deepfakes work? The process uses AI deep learning algorithms to analyze thousands of images and videos of the person being replicated. The neural network learns patterns, like facial features, and uses them to generate new, synthetic images of that person.
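The replication step described here is often built as a shared encoder with one decoder per identity (the classic face-swap autoencoder layout). The sketch below is a toy illustration of that idea in NumPy; every array size, variable name, and training detail is invented for the example, not taken from any real deepfake system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "faces": flat 64-dim vectors standing in for cropped face images.
faces_a = rng.normal(size=(200, 64))   # many images of person A
faces_b = rng.normal(size=(200, 64))   # many images of person B

# One shared encoder plus a person-specific decoder per identity
# (linear layers only, to keep the sketch short).
W_enc = rng.normal(scale=0.1, size=(64, 16))
W_dec_a = rng.normal(scale=0.1, size=(16, 64))
W_dec_b = rng.normal(scale=0.1, size=(16, 64))

def train_step(faces, W_enc, W_dec, lr=1e-3):
    """One gradient step of reconstruction training for one identity."""
    z = faces @ W_enc          # encode into the shared latent space
    recon = z @ W_dec          # decode back with this identity's decoder
    err = recon - faces
    # In-place mean-squared-error gradient updates.
    W_dec -= lr * (z.T @ err) / len(faces)
    W_enc -= lr * (faces.T @ (err @ W_dec.T)) / len(faces)
    return float((err ** 2).mean())

# Alternate identities each step so the encoder stays shared.
first_loss = None
for _ in range(500):
    loss_a = train_step(faces_a, W_enc, W_dec_a)
    loss_b = train_step(faces_b, W_enc, W_dec_b)
    if first_loss is None:
        first_loss = loss_a

# The "swap": encode a face of person A, decode it as person B.
fake = (faces_a[:1] @ W_enc) @ W_dec_b
```

Because the encoder is shared, it learns identity-independent structure, while each decoder learns one person's appearance; decoding A's latent code with B's decoder is what produces the swapped face.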

Today, artificial intelligence (AI) is being used to manipulate media.

It can face-swap celebrities. It allowed a de-aged Luke Skywalker to guest star in The Mandalorian. It also falsely showed Ukrainian President Volodymyr Zelensky surrendering to the Russian invasion.

Deepfakes are videos, audio, or images that have been altered using AI. In a deepfake, people can be shown saying and doing things that they have never said or done.

This capability has profound implications for entertainment, politics, journalism, and national security. As deepfakes become more convincing, the challenge of distinguishing fact from fiction grows, threatening the credibility of news sources and the stability of democratic institutions.

At RIT, a team of student and faculty researchers is leading the charge to help journalists and intelligence analysts figure out what is real and what is fake. Their work, called the DeFake Project, has more than $2 million in funding from the National Science Foundation and Knight Foundation.

The RIT team aims to mobilize the best deepfake detectors around—observant humans armed with the right tools. “There is real danger in shiny new deepfake detectors that confidently offer often inaccurate results,” said Saniat (John) Sohrawardi, a computing and information sciences Ph.D. student leading the DeFake Project. “We need to provide journalists—and other experts who vet reality—with forensic tools that help them make decisions, not make the decisions for them.”

Journalists agree, and they are working with RIT.

Scott Morgan, a reporter and producer with South Carolina Public Radio, said that it’s increasingly hard to spot a fake and that a good detection tool would be invaluable. He said he often relies on a “would that person really have said that?” kind of approach.

“And ultimately, that’s what DeFake is trying to be—a tool that supplements the journalist’s gut feeling and complements old-fashioned legwork, but doesn’t replace them,” said Morgan. “Because even an AI-driven program that analyzes videos for the teeny-tiniest of clues that it might have been doctored shouldn’t be left to make decisions about what to do with that information or disinformation.”

Spotting the Fake

Matthew Wright, endowed professor and chair of the Department of Cybersecurity, first saw a high-quality deepfake lip sync of President Obama in 2017. He called it a real “OMG moment.”

“It was really disconcerting,” said Wright. “The potential to use this to make misinformation and disinformation is tremendous.”

As an expert in adversarial machine learning, Wright was studying how AI can impact cybersecurity for good and bad. Deepfakes seemed like a natural extension of that work.

In 2019, Wright and the newly formed DeFake Project team answered a call from the Ethics and Governance of Artificial Intelligence Initiative to build a deepfake detector. After developing some specialized techniques, their detector worked perfectly on curated deepfake datasets—it had 100-percent accuracy. Then they pulled up some YouTube videos to run through their detector.

“It would make mistakes,” said Wright. “But this wasn’t just our design. There is a cottage industry around developing deepfake detectors, and none of them is foolproof, despite the companies’ claims.”

Detectors can become confused when video is even slightly altered, clipped out of context, or compressed. For example, in 2021, a Myanmar news outlet used a publicly available deepfake detector to analyze a video of a chief minister confessing to a bribe. The tool was 90-percent confident that the video was fake, yet expert analysis later determined it was in fact real.

“Users tend to trust the output of decision-making tools too much,” said Sohrawardi. “You shouldn’t make a judgment based on percentage alone.”

That’s why the DeFake Project is so important, said Andrea Hickerson, dean and professor of the School of Journalism and New Media at The University of Mississippi and a member of the project. The goal is to make a tool that journalists can actually use.

“If a trusted journalist accidentally shares a deepfake, it would reach a wide audience and undermine trust in the individual and the profession as a whole,” said Hickerson, the former director of RIT’s School of Communication.

“Journalists have important contextual expertise that can be paired with a deepfake detection tool to make informed judgments on the authenticity of a video and its newsworthiness.”

To better understand the journalistic process, the DeFake researchers interviewed 24 reporters, ranging from national broadcast networks to local print media. Taking inspiration from a popular tabletop game, the team created a role-playing exercise called Dungeons & Deepfakes. The journalists were placed in a high-stakes newsroom scenario and asked to verify videos using traditional methods and deep-learning-based detection tools.

The team observed that journalists diligently verify information, but they too can over-rely on detection tools, just like in the Myanmar incident.

Above all, journalists viewed the overall fakeness score with healthy skepticism; they needed insight into how it was calculated. Unfortunately, AI is not inherently good at explaining the rationale behind its decisions.

Unboxing the Black Box

When Pamposh Raina is asked to investigate a potential deepfake, she checks with multiple sources and often reaches out to RIT’s experts.

She is an experienced reporter who has worked with The New York Times, written for international publications, and currently heads the Deepfakes Analysis Unit at the Misinformation Combat Alliance, which is helping fight AI-generated misinformation in India.

One clip she questioned was being passed around social media in 2024. It was a video in Hindi that appeared to feature Yogi Adityanath, chief minister of the most populated state in India, promoting a pilot gaming platform as a quick way to make money.

After running the video through detection tools from Hive AI and TrueMedia, and escalating to ElevenLabs for audio analysis, the investigators wanted an expert view on possible AI tampering around Adityanath’s mouth in the video.

The DeFake team noted that the chief minister’s mouth animation looked disjointed and could be a result of the algorithm failing to extract proper facial landmarks. Ultimately, the Deepfakes Analysis Unit concluded that the video was fake and Adityanath did not utter the words attributed to him.

Creating meaningful tools like this is why Kelly Wu, a computing and information sciences Ph.D. student, came to RIT. After completing her undergraduate degrees in mathematics and economics at Georgetown University, Wu jumped at the chance to research deepfakes with the RIT team.

“Right now, there is a huge gap between the user and detection tools, and we need to collaborate to bring that together,” said Wu. “We care about how it will transition into people’s hands.”

Just like human brains, AI systems identify trends and make predictions. And just like in humans, it’s not always clear how a model comes to any particular conclusion.

Wu is figuring out how to unbox that AI black box. She aims to produce explanations that are both faithful to the AI model and interpretable by humans.

A lot of today’s detection tools use heatmaps to present explanations of results. A blob of dark red highlighting the eye region signifies that this area is more important for the model’s decision-making process.

“But, even to me, it just looks like a normal eye,” said Wu. “I need to know why the model thinks this is important.”
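One common way to generate a heatmap like the ones Wu describes, and to see why it can be unsatisfying on its own, is occlusion analysis: mask one patch of the image at a time, re-run the detector, and record how much the score moves. A minimal sketch, using an invented toy scoring function as a stand-in for a real detector (nothing here is the DeFake tool's actual method):

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.normal(size=(32, 32))          # stand-in for a face crop

def fake_score(img):
    """Toy stand-in for a detector: weights the 'eye region' heavily."""
    return float(img[8:12, 8:24].sum())

def occlusion_heatmap(img, patch=4):
    """Mask each patch-sized square, re-score, record the score shift."""
    base = fake_score(img)
    heat = np.zeros((img.shape[0] // patch, img.shape[1] // patch))
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            masked = img.copy()
            masked[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = 0.0  # occlude
            heat[i, j] = abs(base - fake_score(masked))  # importance = shift
    return heat

heat = occlusion_heatmap(image)
```

The resulting map lights up the patches the model depends on, which is exactly Wu's complaint: it says *where* the model looked, not *why* that region counted as fake.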

The DeFake tool will highlight areas and provide detailed text explanations. The detector displays information on the processed content, including metadata, overall fakeness, top fake faces, and an estimation of the deepfake manipulation method used. It also incorporates provenance technology, extracting Content Credentials—a new kind of tamper-evident metadata. Due to the resource-intensive nature of AI, the tool allows people to assess specific snippets of a video.
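As described, the tool reports several signals rather than a single number. One way to picture that kind of output is a structured report object; all field names below are invented for illustration, not taken from the DeFake tool:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DetectionReport:
    """Hypothetical container mirroring the outputs the article lists."""
    fakeness: float                      # overall fakeness score in [0, 1]
    top_fake_faces: list                 # face crops ranked by per-face score
    method_guess: str                    # estimated manipulation method
    metadata: dict                       # container/codec metadata from the file
    content_credentials: Optional[dict] = None  # C2PA-style provenance, if any

    def needs_human_review(self, lo=0.2, hi=0.8):
        """Mid-range scores are where humans, not tools, should decide."""
        return lo <= self.fakeness <= hi

report = DetectionReport(0.55, [], "face-swap (guess)", {"codec": "h264"})
```

Bundling the score with its supporting evidence, instead of emitting a bare percentage, is what lets a journalist treat the output as one input among many rather than a verdict.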

Most recently, the DeFake Project, which now has nine members from three universities, is expanding to meet the needs of intelligence analysts.

In 2023, RIT earned a grant to work with the Department of Defense on bolstering national security and improving intelligence analysis.

RIT’s team is interviewing analysts and using their insights to help create a Digital Media Forensic Ontology that makes the terminology of manipulated media detection methods clearer and more consistent. Analysts can use the DeFake all-in-one platform along with the ontology to narrow down why content needs to be analyzed, where in the media analysts should focus their attention, and what artifacts they should look for.

Candice Gerstner, an applied research mathematician with the Department of Defense, is collaborating on the project. She said that when analysts write a report that will be passed up the chain, they need to be sure that information has integrity.

“I’m not satisfied with a single detector that says 99 percent—I want more,” said Gerstner. “Having tools that are easily adaptable to new techniques and that continue to strive for explainability and low error rates is extremely important.”

In the future, the DeFake Project plans to expand to law enforcement, who are worried about fake evidence getting into the court system. RIT students are also researching reinforcement learning to limit bias and make sure AI models are fair.

Akib Shahriyar, a computing and information sciences Ph.D. student, is taking it one step further. He’s attacking the underlying model that powers the DeFake tool to uncover its weaknesses.

“In the end, we’re not just creating a detector and throwing it out there, where it could be exploited by adversaries,” said Shahriyar. “We’re building trust with the users by taking a responsible approach to deepfake detection.”

How to Identify a Deepfake

Although RIT’s DeFake tool is not publicly available, here are some common ways to identify fake content.

Artifacts in the face: Look for inconsistencies in eye reflections and gaze patterns. Anomalies may occur in the face—unnatural smoothness, absence of outlines of individual teeth, and irregular facial hair.
Body posture: Deepfakes prioritize altering facial features, so body movements could appear odd or jerky.
Audio discrepancies: Does the audio sync seamlessly with the speaker’s mouth movements?
Contextual analysis: Consider the broader context, including the source, timestamps, and post history.
External verification: Do a reverse image search and try contacting the original sources.
Check the news: Look for reports about the content in reputable news sites.
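The reverse-image-search step above often rests on perceptual hashing: shrink the image, threshold each cell against the mean, and compare bit patterns, so near-duplicates still match after recompression. A minimal average-hash sketch on synthetic data (an assumed approach for illustration; real services use more robust variants):

```python
import numpy as np

def average_hash(img, size=8):
    """Block-average down to size x size, then threshold at the mean."""
    h, w = img.shape
    crop = img[:h - h % size, :w - w % size]
    blocks = crop.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return blocks > blocks.mean()          # 64 bits for size=8

def hamming(a, b):
    """Count differing bits; small distances suggest the same source image."""
    return int(np.count_nonzero(a ^ b))

rng = np.random.default_rng(2)
original = rng.normal(size=(64, 64))
recompressed = original + rng.normal(scale=0.01, size=(64, 64))  # mild noise
unrelated = rng.normal(size=(64, 64))

d_same = hamming(average_hash(original), average_hash(recompressed))
d_diff = hamming(average_hash(original), average_hash(unrelated))
```

The lightly perturbed copy lands a short Hamming distance from the original, while an unrelated image lands far away, which is why such hashes can link a suspect clip back to its untouched source.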

Media Contact

Scott Bureau, Rochester Institute of Technology, 585-475-2481, sbbcom@rit.edu, rit.edu

View original content to download multimedia:https://www.prweb.com/releases/real-or-fake-finding-the-best-ways-to-detect-digital-deception-302311795.html

SOURCE Rochester Institute of Technology



Ningbo’s “Eco+” Integration Practice showcased at SCO forum


NINGBO, China, April 28, 2026 /PRNewswire/ — The Shanghai Cooperation Organization’s (SCO) Green and Sustainable Development Forum was held in Ningbo, Zhejiang province, April 28–30. At this international event focused on green, low-carbon transition, Ningbo showcased concrete examples of harmonious coexistence between people and nature.

An abandoned quarry has been converted into an international racetrack, whose engine roars have stimulated a growing cultural and tourism sector. A once-barren “firewood trail” has been upgraded into a national mountaineering route, spawning distinctive local industries now known as the “Hometown of China’s Sports Walking Sticks” and the “Capital of China’s Flashlights.” Ningbo continues to advance its “Eco+” integration model, turning ecological assets into industrial momentum, development potential and measurable gains in shared prosperity. The city has developed a practical path for realizing the value of lucid waters and lush mountains, offering a replicable “Ningbo model” for green, low-carbon urban transformation worldwide.

In Beilun, more than 500 species have been recorded, and the district has been designated a UN “Biodiversity Charming City.” From the revitalized Meishan Bay in Beilun to the misty expanse of Dongqian Lake in Yinzhou and the historic allure of Moon Lake in Haishu, local authorities are using the “golden key” of ecological governance to revive dormant green mountains and clear waters. Notably, Ningbo’s ecological governance goes beyond mountain repair and water treatment: it integrates “Eco+” development from the source through unified planning and coordinated implementation—protecting ecological foundations while preserving space for industry. This forward-looking approach has produced a win-win outcome for ecology and development; along Meishan Bay’s shore, cultural tourism and leisure industries have rapidly clustered, receiving more than 2 million visitors annually.

In Fenghua, the “Common Prosperity Studio” initiative has built a full-chain platform integrating “5G + IoT + Agriculture,” and introduced a model that combines village-collective fixed-rent leasing, professional enterprise operation and flexible farmer participation. In Yuyao, Hemudu pioneered China’s first integrated ecological farming of soft-shelled turtles in water-oat fields, balancing ecological protection, food security and farmers’ income growth. Zhenhai Refining & Chemical has established China’s first “Zero-Waste Petrochemical Base,” recognized as a national model case of a “Zero-Waste Industrial Park.”

Leveraging its mountain and sea resources, Ningbo has deepened chain-based integration of “Ecology + Cultural Tourism + Sports + Manufacturing,” continuously converting ecological value into tangible benefits for residents. Ninghai has transformed abandoned ancient paths into a 500-km national mountaineering trail and established a national sports-industry demonstration base. Xiangshan has used the Asian Games to invigorate coastal tourism and open channels for converting marine ecological value. Yinzhou and Haishu have developed biodiversity-friendly districts and townships, fostering new sectors such as educational tourism and cultural-creative industries.

To ensure green development proceeds steadily and sustainably, Ningbo is building a multi-stakeholder governance system that includes government, enterprises and the public. Ninghai has pioneered a “Soil and Forestland Bank,” using financial instruments to unlock the value of forestry resources. Yinzhou, Cixi and other areas have mobilized broad public participation in ecological protection, creating a co-construction and shared-benefits model that supports ongoing ecological value realization.

Photo – https://mma.prnewswire.com/media/2968174/Ningbo_Meishan_Port.jpg
Logo – https://mma.prnewswire.com/media/2968173/Ningbo_Logo.jpg

 

View original content to download multimedia:https://www.prnewswire.com/news-releases/ningbos-eco-integration-practice-showcased-at-sco-forum-302756146.html

SOURCE Ningbo International Communication Center



Hyde Park Capital Advises DevRefactory on its Sale to Capacity


TAMPA, Fla., April 28, 2026 /PRNewswire/ — Hyde Park Capital announced today that its client, DevRefactory, a leading customer experience software platform specializing in omnichannel journey orchestration, embedded middleware, and integrated managed services, has been acquired by Capacity. Hyde Park Capital served as the exclusive investment banker to DevRefactory for this transaction. Shumaker, Loop & Kendrick served as legal counsel to DevRefactory.

DevRefactory’s platform orchestrates customer interactions across voice, chat, web, and social channels, enabling seamless transitions while preserving context. It centralizes knowledge, powers self-service and chatbot experiences, and streamlines engagement across the customer lifecycle. These capabilities are complemented by a specialized managed services team supporting implementation, optimization, and ongoing performance improvement.

Marcus Alexander, CFO and Head of Corporate Development at Capacity, stated, “The DevRefactory team have built an incredible business in the telecom space and this acquisition allows us to scale that innovation across the entire Capacity platform. Together, we’re accelerating a future where contact centers can unify their customer interactions, reduce costs and deliver consistently better experiences, without the complexity of fragmented tools.”

James Ramey, Co-Founder and Managing Partner of DevRefactory, commented, “After rapidly establishing Refactory as a leader in AI enablement—delivering enterprise-grade solutions to Fortune 50 organizations—we are excited to announce that Refactory has been acquired by Capacity AI. This partnership will expand our ability to deliver true omnichannel AI experiences at scale, leveraging Capacity AI’s platform and reach across more than 20,000 customers worldwide.”

Ramey continued, “Following strong inbound acquisition interest, we partnered with Hyde Park Capital as our exclusive financial advisor to evaluate strategic opportunities. Their team brought exceptional focus, deep alignment with our vision, and a disciplined process that prioritized both enterprise impact and employee value. Hyde Park Capital curated a highly complementary group of potential partners and guided us through a transaction that positions our team and technology for long-term success. Together, we selected Capacity AI as the ideal partner to accelerate our mission and extend the reach of Refactory’s platform globally. We are incredibly grateful to the Hyde Park Capital team and the Capacity AI team for their partnership throughout this process, and we are excited for the next chapter as part of Capacity AI.”

Michael Johnson, Managing Director at Hyde Park Capital, reflected on the transaction, “It has been a privilege to advise JC, Brian, and Dustin, the founders of DevRefactory, throughout this process. From day one, it was clear that they built something truly differentiated, a platform rooted in deep technical expertise and a genuine passion for reimagining how enterprises engage with their customers. DevRefactory’s capabilities are a natural fit within Capacity’s platform, and we are excited to see the impact this combination will have for their clients and the broader customer experience market.”

Trevor Mumford, Vice President at Hyde Park Capital, added, “It was genuinely refreshing to work with a founding team that has been close friends since high school and has spent years building technology together. Working alongside entrepreneurs who combine that kind of personal conviction with serious technical innovation makes for a truly rewarding engagement. We’re proud of the outcome and confident Capacity is the right home to take DevRefactory’s mission to the next level.”

About DevRefactory

Founded in 2020, DevRefactory is a customer experience software platform that enables enterprises to deliver seamlessly connected, omnichannel customer journeys at scale. Through its OCX (Omnichannel Customer Experience) suite, DevRefactory provides embedded middleware, managed services, and practical AI frameworks that orchestrate engagement across voice, web chat, SMS, mobile apps, email, and social media. The Company’s solutions empower customers to interact on their own terms while eliminating the complexity of managing disparate channel technologies independently. Partnering with leading platforms, DevRefactory serves as an innovation accelerator, helping enterprises prove value rapidly and integrate modern omnichannel capabilities into their existing ecosystems. For additional information, please visit www.refactory.dev.

About Capacity

Founded in 2017, Capacity is an AI-powered support automation platform that gives organizations the capacity to do more with less. Its unified platform combines intelligent virtual agents, conversational AI, agent assist and live support tools, campaigns and workflow automation, and advanced analytics, enabling businesses to automate customer inquiries, reduce handle times, and drive consistent, high-quality experiences across every channel, including voice, chat, email, SMS, and web. Trusted by more than 20,000 organizations and powering over 36 billion automated interactions, Capacity serves leading brands across financial services, healthcare, retail, education, insurance, and more. With over 250 pre-built integrations and enterprise-grade security, Capacity delivers seamless deployment into existing technology ecosystems. Proudly headquartered in St. Louis, Missouri, Capacity is part of the Equity.com incubator. For additional information, please visit https://capacity.com/main.

About Hyde Park Capital

Hyde Park Capital is a boutique investment banking firm specializing in mergers and acquisitions of successful founder and family-owned companies. Hyde Park Capital’s principals have extensive investment banking experience, including managing sell-side and buy-side transactions, recapitalizations, financial advisory assignments, fairness opinions, raising growth and acquisition capital for companies, including equity, mezzanine, senior debt, and project finance. Hyde Park Capital has bankers who specialize in numerous industry sectors, including healthcare, industrials, business services, technology, consumer, and cleantech/power finance particularly in connection with data centers. This transaction represents another successful engagement closed by Hyde Park Capital within the technology sector. Hyde Park Capital is headquartered in Tampa, Florida, with additional offices in Nashville, Tennessee, and San Francisco, California, and is a member of FINRA and SIPC. For additional information, please visit www.hydeparkcapital.com.

Media Contacts:

Michael Johnson

Managing Director

johnson@hydeparkcapital.com

813-769-3284

Trevor Mumford

Vice President

mumford@hydeparkcapital.com

813-209-9071

 

View original content to download multimedia:https://www.prnewswire.com/news-releases/hyde-park-capital-advises-devrefactory-on-its-sale-to-capacity-302756148.html

SOURCE Hyde Park Capital



Micro Center Launches Retail Media Offering to Reach Tech Enthusiasts and Builders

Published

on

By

Powered by Epsilon, Micro Center Retail Media combines AI, person-level identity, and closed-loop measurement to deliver provable, measurable impact for advertisers.

HILLIARD, Ohio, April 28, 2026 /PRNewswire-PRWeb/ — Micro Center, a leading national retailer of computers and electronic devices, today announced the launch of its retail media offering, Micro Center Retail Media. The offering gives brands access to more than 20 million highly engaged Micro Center customers—tech enthusiasts including PC builders, small businesses, IT professionals and creators—audiences that are difficult to reach at scale through mass-market retail media networks. Using first-party data and deep category insights, advertisers can influence purchase decisions where technical guidance, compatibility, and performance matter most.

“For more than 40 years, Micro Center has earned the trust of tech enthusiasts and builders by pairing deep expertise with an unmatched in-store experience,” said Steve Rado, Chief Marketing Officer of Micro Center. “Our retail media offering builds on that foundation, giving advertisers a powerful way to engage our customers with relevance, credibility and clear measurement.”

Developed in partnership with global technology, data, and services company Epsilon, Micro Center Retail Media couples AI with person-level identity in the ad server to help advertisers engage high-intent shoppers with greater precision. The offering intelligently determines when, where, and how often to engage customers, optimizing media investment to maximize performance.

Brands can activate campaigns across Micro Center’s digital and physical touchpoints to influence both online and in-store purchases. Available channels include onsite display and sponsored product placements as well as offsite display, video and CTV—all supported by closed-loop measurement that quantifies incremental sales across digital and physical environments.

“Micro Center Retail Media connects advertisers with some of the most knowledgeable, high-intent tech buyers in retail,” said Rado. “It’s a best-in-class offering designed to help advertisers engage serious tech buyers with confidence.”

Named PCMag’s Best Tech Retailer for three consecutive years, Micro Center is widely recognized for its expert-led customer experience and technical credibility. In-store offerings such as the Knowledge Bar, Insider credit card benefits, and exclusive in-store-only “loss leader” bundles create high-impact moments for brands at the point of decision. Customers rely on Micro Center associates to identify compatibility issues and recommend better builds—trust that translates directly to advertiser performance.

“Micro Center boasts one of the most informed and intentional audiences in retail,” said Chris Wissing, Chief Product Officer at Epsilon. “Our data and technology help identify more of these niche shoppers, follow their journey from online to in-store, and connect media directly to purchase outcomes for Micro Center and its brand partners.”

About Micro Center

Micro Center operates thirty large computer and electronics stores in major markets nationwide. Founded in 1979 in Columbus, Micro Center is designed to satisfy the dedicated computer and electronics user. Uniquely focused on computers and related products, Micro Center offers more computers and related items (more than 20,000 items in stock) than any other retailer. Micro Center is passionate about offering a high level of customer service, with a knowledgeable and tenured sales team. Customers can visit Micro Center’s 30 stores (with more locations coming soon) from coast-to-coast or microcenter.com for thousands of computer-related items, electronics, and other technology products.

Micro Center stores are located in:

Atlanta (2), Baltimore, Boston, Chicago (2), Charlotte, Cincinnati, Cleveland, Columbus, Dallas, Denver, Detroit, Houston, Indianapolis, Kansas City, Los Angeles, Miami, Minneapolis, New York (5), Philadelphia, Phoenix, St. Louis, Washington, D.C. (2), Santa Clara, and coming soon, Austin.

About Epsilon

Epsilon is a global technology, data, and services company that the world’s leading brands use to harmonize consumer engagement across their paid, owned, and earned channels.

The Epsilon PeopleCloud platform includes capabilities such as data, identity resolution, customer data platforms, clean rooms, digital media, retail media, site personalization, direct mail, loyalty, email marketing, and measurement. By applying artificial intelligence against privacy-centric identity resolution—embedded in data-enriched analytic, marketing, and media solutions and services—Epsilon allows marketers to bridge the divide between marketing and advertising technology, engaging consumers with 1 View, 1 Vision, and 1 Voice. For more information, visit www.epsilon.com.

Media Contact

Meg Adrion, Micro Center, 1 614-850-3000, madrion@microcenter.com, microcenter.com
Dan Ackerman, Micro Center, dackerman@microcenter.com, microcenter.com

View original content:https://www.prweb.com/releases/micro-center-launches-retail-media-offering-to-reach-tech-enthusiasts-and-builders-302755943.html

SOURCE Micro Center

