
Technology

Minnesota Schools Awarded Funding to Implement Evidence-Based BARR System


BARR Center (Building Assets, Reducing Risks) announced today that 18 additional Minnesota schools have been selected to implement its evidence-based system. The 2023 Minnesota legislature approved $5 million for the expansion of the BARR system, which focuses on improving school culture and student outcomes. The BARR system is used in 40 schools in Minnesota and more than 300 schools nationwide.

MINNEAPOLIS, Oct. 12, 2023 /PRNewswire/ — Funded through Minnesota’s 2023 K-12 Education Finance bill (HF 2497) and selected in partnership with the Minnesota Department of Education, the following schools were awarded three years of funding to roll out the BARR system:


Elementary Schools: Brookview Elementary School, Stillwater School District; Ellen Hopkins Elementary School, Moorhead School District; North Metro Flex Academy Charter School, North St. Paul; Northport Elementary School, Robbinsdale School District; Roosevelt Elementary School, Detroit Lakes School District; Southview Elementary School, Marshall School District; Wilshire Park Elementary School, St. Anthony New Brighton School District.

Middle Schools: Detroit Lakes Middle School, Detroit Lakes School District; Marshall Middle School, Marshall School District; North Junior High School, St. Cloud School District; Oak-Land Middle School, Stillwater School District; South Junior High School, St. Cloud School District.

High Schools: Apple Valley High School, Rosemount– Apple Valley-Eagan Public Schools; Crookston High School, Crookston School District; Eden Prairie High School, Eden Prairie School District; Jordan High School, Jordan School District; Mankato East High School, Mankato School District; Two Rivers High School, ISD 197 West St. Paul and Mendota Heights.

The selection process was highly competitive, with more than 50 schools applying. The selected schools are geographically distributed, and priority consideration was given to schools with a concentration of Black, Indigenous, and other students of color, and to those experiencing poverty.

Educator training is underway as schools begin implementing the BARR system during the 2023-24 school year. These schools will form a statewide network and also join the national network of BARR schools.

“BARR’s mission is to create equitable schools where every student, regardless of race, ethnicity, or economic status, has access to high-quality education where adults know them, recognize their strengths, and help them succeed,” explains Angela Jerabek, founder and executive director of the BARR Center. “I am so grateful to the Minnesota legislature for allocating this funding so more educators and students can experience the BARR system, a true evidence-based school success system. I am very excited to have additional schools entering BARR’s system.”

Now more than ever, in this challenging post-pandemic period, educators need support addressing school climate and student mental health issues. Using existing resources, the BARR system focuses on building meaningful relationships – capitalizing on the strengths of every student – and leveraging student data to truly transform a school’s culture. In every school where it has been implemented, the system has proven to be a successful way to meet the social and emotional needs of all students while simultaneously increasing student achievement and teacher satisfaction and effectiveness.

The BARR system stands alone as the most consistently proven school improvement model in the country. Through rigorous studies conducted by the American Institutes for Research (AIR), the BARR system has demonstrated statistically significant results in 20 areas, including increasing math and English achievement scores, improving student credit attainment, reducing course failure, closing the achievement gap, and reducing chronic absenteeism, all while improving the school environment for both students and staff.

About BARR Center
BARR Center (Building Assets, Reducing Risks) delivers the expertise and resources required for a school to implement the BARR system, an evidence-backed system designed to nurture a collaborative and strengths-based culture of support and success for every student through intentionally deepening relationships and improving the use of data. For more information, visit https://barrcenter.org/.

The contents of this press release do not necessarily represent the policy of the federal Department of Education or the state Department of Education, and you should not assume endorsement by the federal or state government.

View original content to download multimedia: https://www.prnewswire.com/news-releases/minnesota-schools-awarded-funding-to-implement-evidence-based-barr-system-301955641.html

SOURCE BARR Center


Fastino Launches Pioneer, the First Agent for Fine-tuning and Inference of LLMs


Pioneer turns language model development and fine-tuning from a months-long, expert-driven workflow into a single prompt and introduces adaptive inference, a new category in model serving where deployed models continuously improve on live production data, without human intervention.

PALO ALTO, Calif., Apr. 21, 2026 /PRNewswire/ — Fastino Labs, the applied AI research lab behind the widely adopted open source GLiNER model family, today announced the launch of Pioneer, a state-of-the-art language model fine-tuning agent and adaptive inference platform for open source small language models. With Pioneer, any developer can fine-tune and deploy production-ready models like Qwen, Gemma, Llama, Nemotron, and GLiNER with a single prompt.

Pioneer is the first platform to bring adaptive inference to production: a new approach to model serving in which deployed models are continuously and autonomously retrained on their own live inference data, with improved checkpoints automatically validated and promoted over time. The era of “deploy and forget” for language models is over.

“We believe that the future will not just be a few large models, but billions of small models working together. However, in reality, building language models today is extremely difficult,” said Ash Lewis, CEO and co-founder of Fastino. “Pioneer collapses that into a prompt. And once your model is deployed, it keeps getting better on its own. For the first time, the model you ship on day one is the worst model you’ll ever use.”

“Frontier model token costs haven’t dropped as expected, making accurate, task-specific open source models the most important tools in the AI stack.”

Small models, built agentically

Frontier large language models have pushed the boundaries of AI, but most production workloads only need a fraction of their parameters and compute. Fine-tuned small language models consistently match or exceed frontier model accuracy on specific tasks at a fraction of the latency and cost. Fastino believes specialized small models will be the primary building blocks of agentic AI, and that the tooling to build them should be accessible to every developer.

Pioneer delivers this through two agentic modes:

Agent Mode lets users fine-tune and deploy a model in minutes through a simple chat interface. The agent handles synthetic data generation, hyperparameter selection, evaluation, and deployment with no code required.

Deep Research Mode is a fully autonomous fine-tuning agent with web browsing access. Given only a natural-language task description, it discovers training data, runs multiple experiments in parallel, recovers from failed runs, and iteratively improves the model until it reaches an optimal configuration.

Introducing adaptive inference

Pioneer is also the first platform to offer adaptive inference for all deployed models. Pioneer’s agent continuously monitors deployed models through their inference traces, identifies failure patterns, and automatically trains and deploys improved checkpoints.
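The monitor-retrain-validate-promote cycle described above can be sketched in a few lines. This is a hypothetical illustration of the general adaptive-inference pattern, not Fastino's actual API; every function name and the `min_gain` threshold are invented for the example.

```python
def should_promote(current_score, candidate_score, min_gain=0.01):
    """Promote a retrained checkpoint only if it beats the deployed one
    by at least min_gain on the validation metric."""
    return candidate_score >= current_score + min_gain

def run_adaptation_cycle(deployed, traces, retrain, evaluate):
    """One adaptive-inference cycle: retrain on live inference traces,
    validate the candidate checkpoint, and promote it only if improved."""
    candidate = retrain(deployed, traces)
    if should_promote(evaluate(deployed), evaluate(candidate)):
        return candidate  # promote the improved checkpoint
    return deployed       # otherwise keep serving the current model
```

The promotion gate is what distinguishes this from the naive retraining the benchmark results contrast against: a candidate that regresses on validation is simply discarded, so serving quality cannot degrade between cycles.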

Across seven benchmark scenarios designed to simulate real-world deployment drift, Pioneer maintained monotonic improvement while naive retraining approaches degraded, with final performance gaps of up to 43 percentage points. To support this research, Fastino is also introducing AdaptFT-Bench, a new benchmark for evaluating autonomous model improvement under realistic production conditions.

“Adaptive inference will soon be a standard feature in model serving,” said George Hurn-Maloney, COO and co-founder of Fastino. “Your production data is the most valuable training signal you have. Pioneer is the first tool actually using it to make your models better, automatically.”

Breakthrough Benchmark Results & Technical Report

Alongside the Pioneer launch, Fastino is publishing a detailed technical report documenting the performance gains achieved by its fine-tuning agent. Across a wide range of academic benchmarks, Pioneer’s Research Mode improved accuracy by as much as 83.8 percentage points over base models, with end-to-end runs completing in a matter of hours at a cost measured in tens of dollars. The report also covers the methodology behind the platform’s fine-tuning and adaptive inference systems and details the AdaptFT-Bench benchmark.

On a wide variety of academic benchmarks, Pioneer delivered substantial accuracy gains across every base model tested, including a 19.0 percentage point improvement on IFEval with NVIDIA-Nemotron-3B, a 21.4 percentage point improvement on HumanEval with Qwen 8B, a 67.3 percentage point improvement on ARC-Challenge with Llama 3.2-3B, and an 83.8 percentage point improvement on SMS Spam with GLiNER2. End-to-end agent runs completed in an average of 6 hours at a cost of around $35, a fraction of what a senior machine learning engineer would spend on the same workflow.

Backed by leading investors

Pioneer is the latest product from Fastino Labs, which has raised $25 million in total funding across its pre-seed and seed rounds, led by Khosla Ventures and Insight Partners, with participation from M12 (Microsoft’s venture fund), NEA, Valor Equity Partners, and angels including GitHub CEO Thomas Dohmke, former Docker CEO Scott Johnston, and Weights & Biases CEO Lukas Biewald.

Fastino’s open source GLiNER model family has been downloaded more than 6 million times and is used in production by teams at leading Fortune 500 companies.

Availability

Pioneer is available to developers starting today. To learn more or start fine-tuning a model from a single prompt, visit pioneer.ai.

Fastino Labs is an applied AI research lab building small open source models and the infrastructure to make them continuously better in production. Founded in 2024 and based in Palo Alto, California, Fastino is the creator of the GLiNER open source model family and Pioneer, the first agentic fine-tuning and adaptive inference platform. The company is backed by Khosla Ventures, Insight Partners, M12, NEA, and others. Learn more at fastino.ai.

View original content to download multimedia: https://www.prnewswire.com/news-releases/fastino-launches-pioneer-the-first-agent-for-fine-tuning-and-inference-of-llms-302748105.html

SOURCE Fastino Labs


XPPen Engages Italy’s Creative Community at Romics April 2026, Exploring the Role of Digital Tools in Comic Creation


ROME, April 21, 2026 /PRNewswire/ — XPPen, a leading global brand in digital art innovation, made a significant appearance at Romics April 2026 — the International Festival of Comics, Animation, Cinema, and Games — held in Italy from April 9 to 12. Drawing more than 150,000 visitors, the event offered XPPen a premier platform to engage Italy’s creative community and reinforce its growing presence across the European market.

At its exhibition booth, XPPen showcased a comprehensive lineup of professional-grade pen displays and drawing tablets for artists. Visitors were invited to experience the devices firsthand, exploring the precision, pressure sensitivity, and responsiveness that have made XPPen a trusted name among digital creators worldwide.

The Magic Drawing Pad is the industry’s first professional mobile standalone drawing tablet, built for creators who need a full drawing experience anywhere. The Magic Note Pad, the world’s first 3-in-1 color note pad, redefines digital note-taking for professionals, students, and creatives alike. The compact Artist 12 3rd proved a crowd favorite, turning heads with its innovative industrial design while delivering a lightweight 719g body paired with the X4 Smart Chip Stylus for a portable punch well above its size. Rounding out the showcase was the acclaimed Artist Pro series, offering tools for creators at every stage.

A highlight of XPPen’s presence was its partnership with Silly Studios, an Italian independent comic publishing house best known for The Little Trashmaid and Simply Silly. XPPen sat down with CEO Davide Valente to discuss the studio’s creative vision and the role of digital tools in their work.

Davide emphasized that while strong foundational drawing skills remain critical, digital tools have become indispensable to how artists develop and work today. “XPPen has given us a huge hand in this fundamental aspect, and our artists are starting to use it much more frequently and expand their skills thanks to XPPen,” he said. “I definitely see a bright future in this respect and it will only get better.”

XPPen’s participation at Romics April 2026 reflects its ongoing commitment to engaging artists and creators around the world. True to its mission of delivering cutting-edge, accessible tools — for everyone from independent illustrators to studio professionals — XPPen looks forward to connecting with creative communities at leading events worldwide.

For more information about XPPen and its products, visit www.xp-pen.com.

Photo – https://mma.prnewswire.com/media/2961420/XPPen_x_Romics_2026.jpg
Photo – https://mma.prnewswire.com/media/2961421/XPPen_x_Romics_2026_1.jpg

View original content: https://www.prnewswire.co.uk/news-releases/xppen-engages-italys-creative-community-at-romics-april-2026-exploring-the-role-of-digital-tools-in-comic-creation-302748184.html


Antimatter Launches as the World’s First Vertically Integrated Neocloud for AI Inference


Combining over 1GW of power capacity secured through grid connection agreements and reserved sites across distributed micro-power sites in the US, Europe, and the GCC, Antimatter will deploy a global network of 1,000 distributed micro data centers to serve the growing AI inference market — deploying 5 times faster and at 50% lower cost than hyperscalers.

CANNES, France, April 21, 2026 /PRNewswire/ — Antimatter, a new category of neocloud purpose-built for the distributed AI economy, today announced its launch through the strategic combination of three companies: Datafactory (US-based energy and power infrastructure), Policloud (modular micro data center network), and Hivenet (distributed cloud provider).

The combined entity creates the industry’s first fully integrated AI infrastructure platform spanning energy sourcing, physical hardware, and cloud software — designed to serve the explosive global demand for AI inference at a fraction of hyperscale cost and dramatically faster time to market.

Antimatter is deploying capital at an unprecedented pace to build out the first global neocloud network optimized for AI inference. The company is securing €300 million to fund the deployment of its first 100 Policloud units by 2027, representing 40,000 GPUs and over 3.6 exaFLOPS of active compute capacity.

By the end of 2030, the planned network of 1,000 Policlouds will provide more than 400,000 GPUs and over 36 exaFLOPS of distributed AI inference capacity — the equivalent of five traditional hyperscale data centers, deployed across dozens of countries with 50% lower capital spending and significantly faster time to market.

Antimatter is led by David Gurlé, the serial high-tech entrepreneur who founded Microsoft’s Real-Time Communications business (today’s Microsoft Teams), led Skype’s enterprise division and its sale to Microsoft, and founded Symphony Communication Services.

“In the age of AI, intelligence is not the bottleneck — energy is,” said David Gurlé, Cofounder, Executive Chairman, and CEO of Antimatter. “The infrastructure built for the first era of cloud and AI was designed around centralized scale. But the inference era requires a different model: more distributed, faster to deploy, and sovereign by design. That is the infrastructure Antimatter is building.”

Why AI Inference is Breaking the Cloud Model

The first wave of AI was about training massive models in centralized data centers. But the next phase — inference — is about running those models billions of times per day, across applications like copilots, agents, and real-time decision systems.

That shift changes everything. Inference requires infrastructure that is closer to users, faster to deploy, more energy-efficient, and geographically distributed. Traditional hyperscalers were not built for this. Their model relies on massive, centralized campuses that can take years to build and require enormous upfront capital.

Antimatter’s answer: bring the data center to the energy, not the energy to the data center.

The global data center capacity market is projected to grow from 55GW in 2023 to 220GW by 2030 — a 22% CAGR — yet grid connection queues and infrastructure delays are emerging as the primary bottleneck. In Europe alone, more than 12 TWh of renewable electricity were curtailed in 2023, representing over €4.2 billion in lost value. At the same time, more than 1,000GW of additional renewable capacity remains stuck in permitting and grid-connection queues across Europe and the GCC.

A Full-Stack Neocloud Built for the AI Inference Era

Antimatter is uniquely positioned as the only neocloud that controls the complete value chain:

Energy-first model

More than 1GW of power capacity secured through formal grid connection agreements and site reservations, including over 160MW already operational across Texas and Oregon, USA. Antimatter deploys Policloud units directly at or near existing power assets — including wind, solar, hydro, or biogas sites — converting stranded generation into productive AI infrastructure in a matter of months, rather than waiting years for new transmission capacity.

Decentralized infrastructure layer

A fleet of modular, containerized micro data centers, each housing up to 400 GPUs and deployable in as little as five months, compared with 24+ months for traditional hyperscale builds. Antimatter currently operates 10 units across 8 sites and has a commercial pipeline of more than 500 additional units.

Distributed software layer

A proprietary distributed computing and storage platform providing the orchestration intelligence that connects distributed hardware into a single, sovereign cloud fabric with global default Tier 3 capability — supporting billions of inference requests each day, with sub-10ms latency for edge workloads and full data sovereignty for regulated industries.

Key Competitive Advantages

| Metric | Antimatter | Traditional Hyperscale |
|---|---|---|
| Capex per fully loaded MW | ~$7M | ~$35M |
| Deployment timeline | 5 months | 24+ months |
| Customer pricing | ~50% below hyperscalers | Market rate |
| Edge latency | Sub-10ms | Variable |
| Carbon reduction | ~70% lower; zero water cooling | Standard |
| Data sovereignty | Sovereign-by-design; local jurisdiction | Bolt-on solutions |

Strong Commercial Traction

Antimatter enters the market with demonstrated commercial momentum:

- $20M forward-looking revenue
- 3,344 GPUs deployed, with demand for 10,000+
- 100 Policlouds being deployed in 2027, representing 40,000+ GPUs
- 1,000 Policlouds planned by end of 2030, representing 400,000+ GPUs
- Diversified customer base: Energy sector (35%), Public sector (30%), Agriculture (15%), Corporates (20%)

The company is targeting $250M+ in revenue within the next 18 months and $3.0B+ by the end of 2030.

Investor Perspectives

“AI infrastructure is now a strategic asset class, and the winners will be those who can combine hard assets with software at scale. Antimatter’s vertically integrated model — from megawatts to APIs — is exactly the kind of infrastructure we believe can define the next decade of digital growth.” — Alex Manson, CEO of SC Ventures, Standard Chartered Bank

“France and Europe need sovereign, energy-efficient infrastructure to compete in AI. What convinced us about Antimatter is not just the technology, but the ability to deploy micro data centers in months, on existing power assets, while meeting the most demanding regulatory constraints.” — Stéphanie Hospital, Founder and CEO of OneRagtime

“We are witnessing first-hand how emerging markets are leapfrogging legacy infrastructure and going straight to AI-native architectures. Antimatter’s model — distributed, capital-efficient and deeply integrated with energy — is built for these environments and for an economy increasingly shaped by AI.” — Noor Sweid, Founder and Managing Partner, Global Ventures

“At Inria, we work every day at the frontier of AI and high-performance computing. Antimatter’s approach is compelling because it reconciles cutting-edge AI workloads with more frugal, sustainable infrastructure — distributed, software-defined, and close to available energy. It is a strong illustration of the deeptech industrial story we want to see emerge in Europe.” — Bruno Sportisse, Chairman and CEO of Inria

About the Founder

David Gurlé is a French entrepreneur, engineer, and Chevalier of the Légion d’Honneur. He has founded seven companies, including Symphony Communication Services ($1.4B valuation), and held senior leadership roles at Microsoft (where he founded the Real-Time Communications business), Thomson Reuters, and Skype (VP & General Manager, Enterprise). He holds an MSc in Computer Science and Telecommunications from EFREI Paris.

About Antimatter

Antimatter is the distributed neocloud for AI inference. By vertically integrating energy, modular infrastructure, and orchestration software, Antimatter deploys enterprise-grade AI compute infrastructure faster, cheaper, and more sustainably than traditional hyperscale providers. Headquartered in Cannes, France, with major operations in the United States, Antimatter serves enterprises, governments, and AI companies worldwide.

www.antimatter.com

Note on exaFLOPS calculation: RTX 5090 = ~90 TFLOPS FP32. 40,000 GPUs x 90 TFLOPS = 3,600 petaFLOPS = 3.6 exaFLOPS. For 400,000 GPUs = 36 exaFLOPS.
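The note's unit conversion can be checked directly. This is a simple arithmetic sketch using the note's own assumption of ~90 TFLOPS FP32 per RTX 5090 (1 exaFLOPS = 1,000,000 TFLOPS):

```python
TFLOPS_PER_GPU = 90  # RTX 5090 FP32 throughput, per the note above

def fleet_exaflops(num_gpus, tflops_per_gpu=TFLOPS_PER_GPU):
    """Aggregate FP32 throughput of a GPU fleet, in exaFLOPS.
    1 exaFLOPS = 10^18 FLOPS = 1,000,000 TFLOPS."""
    return num_gpus * tflops_per_gpu / 1_000_000

print(fleet_exaflops(40_000))   # 3.6  (100 Policlouds, 2027 target)
print(fleet_exaflops(400_000))  # 36.0 (1,000 Policlouds, 2030 target)
```

Both figures match the deployment targets stated above, confirming the note's arithmetic is internally consistent.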

CONTACT: Ariane Forgues, aforgues@mantu.com 

SOURCE Antimatter
