Become an Edge AI Earth Guardian

The world is facing unprecedented biodiversity loss — and we need technology that does more than just measure it. The Edge AI Earth Guardians Challenge, hosted by Hackster and the EDGE AI FOUNDATION, calls on innovators everywhere to design AI-powered solutions that actively protect and connect with nature.

This is more than a competition. It’s a collaborative effort to blend cutting-edge engineering with empathy, ethics, and ecological awareness.

Why This Matters

Hackster has a long history of turning bold ideas into real-world impact — from Elephant Edge tracking collars to IoT Into the Wild plug-and-play conservation tools. Each challenge proves that with the right tools and purpose, creators can design resilient, ethical, and scalable solutions that serve both people and the planet.

Edge AI Earth Guardians builds on this legacy, inviting students, entrepreneurs, and researchers to create systems that move beyond detection — to active protection.

Three Tracks, One Mission

Wherever you are in your journey, there’s a place for you:

  • Track 1 – Idea to Prototype: Turn early concepts into tangible prototypes with guidance, community resources, and hardware support.
  • Track 2 – Prototype to Market: Scale and refine working demos for real-world deployment.
  • Track 3 – Fast Track for Research: Showcase mature, research-backed solutions ready for global collaboration.

Support to Build Faster

Thanks to NextPCB, the top 30 hardware applicants will receive $500 in manufacturing credit to accelerate development. Whether you’re building wildlife trackers, environmental sensors, or edge AI field devices, this helps you get from breadboard to field-ready hardware faster.

Competition Timeline

  • July 29: Challenge opens
  • August 15: Hardware application deadline
  • September 24: Project submission deadline
  • October 1: Finalists announced
  • November 2025: Winners present at EDGE AI Taipei

Top prizes include $2,500 and a paid trip to Taiwan to share your innovation with a global audience.

How to Get Started

  1. Register for the competition on Hackster.
  2. Propose your idea and apply for hardware by August 15.
  3. Build, document, and submit your project before September 24.
  4. Compete for your chance to present at EDGE AI Taipei 2025.

🌍 Your skills can help protect our planet. Be part of the change.
👉 Sign up now on Hackster and learn more at edgeaifoundation.org

Generative Edge AI Working Group: Enabling Creativity and Intelligence at the Network’s Edge

The Generative Edge AI Working Group is a collaborative initiative within the EDGE AI FOUNDATION dedicated to advancing the frontiers of generative artificial intelligence in real-time, resource-constrained, and decentralized environments.

As large-scale generative models continue to reshape how we interact with technology—from multimodal assistants and real-time translation to autonomous systems and industrial monitoring—bringing generative capabilities to the edge is the next bold step in AI democratization. This working group brings together academic researchers, industry practitioners, and open-source contributors to make this vision a reality.

We aim to empower edge devices with generative AI capabilities that are energy-efficient, privacy-preserving, responsive, and autonomous, unlocking intelligent behavior closer to the user, the sensor, and the moment of interaction.

 

Charter

Generative Edge AI is a breakthrough field within edge AI and tinyML. It targets resource-constrained generative artificial intelligence technologies and applications, spanning hardware, algorithms, tools, ecosystems, and software solutions, capable of enabling natural interaction on edge devices at extremely high energy efficiency, typically in the peta-operations-per-watt (POp/W) range or higher.
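To put the POp/W figure in perspective, here is a back-of-the-envelope sketch; the workload numbers below are illustrative assumptions, not figures from the charter:

```python
# Back-of-the-envelope: what 1 POp/W efficiency implies for an always-on
# edge workload. The workload numbers are illustrative assumptions.

efficiency_ops_per_joule = 1e15   # 1 peta-operation per watt-second (1 POp/W)
ops_per_inference = 10e9          # assume a 10 GOP model
inferences_per_second = 10        # assume 10 Hz always-on inference

power_watts = ops_per_inference * inferences_per_second / efficiency_ops_per_joule
print(f"{power_watts * 1e6:.0f} microwatts")   # -> 100 microwatts
```

At roughly 100 microwatts, such a workload sits comfortably within the budget of small battery-powered devices, which is the point of the efficiency target.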

This new field is poised to enable an unprecedented generation of powerful yet low-power neural processor units (NPUs), in-memory computing, and systems-on-chip (SoCs) that leverage heterogeneous integration to support scalable and sustainable edge intelligence.

 

Definition

Generative Edge AI refers to deploying and running generative AI models directly on edge devices (e.g., smartphones, IoT devices, sensors, autonomous vehicles) rather than relying on centralized cloud infrastructure. These models generate outputs such as text, images, or actions in real time, at the point of data collection or user interaction, enabling low-latency, personalized, and private AI services.

 

Mission Statement

The Generative Edge AI Working Group empowers and connects academia, industry, and individuals to advance knowledge, collaboration, and innovation in Edge AI through education, community engagement, and recognition of groundbreaking achievements.

 

Objectives

To fulfill its mission, the Generative Edge AI Working Group has defined a set of objectives. These are designed to promote a dynamic, inclusive, and forward-thinking community that bridges the gap between cutting-edge research and practical deployment, bringing together perspectives from both industry and academia.

The goal is to facilitate knowledge exchange, active collaboration, and the celebration of innovation. In this spirit, the group aims to become a key reference point for sustained progress in the field of Generative Edge AI. Each objective reflects the belief that success in this domain depends on the convergence of diverse expertise, from hardware to software, from academic inquiry to real-world engineering.

The following are the Working Group's core objectives:

  • Foster Knowledge Sharing: Facilitate the exchange of ideas and insights through seminars, tutorials, roundtable discussions, and whitepapers.
  • Promote Collaboration: Build meaningful connections between academia, industry, and individual innovators to drive collective progress in Generative Edge AI.
  • Highlight Achievements: Recognize and amplify the contributions of members actively shaping the field to inspire and attract new participants.
  • Educate the Community: Provide accessible resources and updates on the latest breakthroughs, trends, and advancements in Generative Edge AI.
  • Encourage Innovation: Nurture a culture of exploration and creativity by sharing demos, showcasing individual contributions, and supporting cutting-edge initiatives.

 

Deliverables

The Working Group is committed to producing tangible outcomes that benefit both the community and the broader AI ecosystem. These deliverables are defined to support learning, promote collaboration, and accelerate the responsible deployment of generative technologies at the edge.

From educational content and hands-on resources to recognition programs and cross-sector publications, the group’s outputs are meant to serve as building blocks for continued innovation. In particular, the working group will maintain a strong focus on open access, interoperability, and practical relevance, ensuring that its contributions are both accessible and impactful across the edge AI landscape.

Educational Content

  • Tutorials, webinars, and seminars covering both foundational and advanced topics in Generative Edge AI.
  • Whitepapers and reports detailing industry trends, research advancements, and best practices.

Community Engagement Activities

  • Roundtable discussions to foster dialogue between academia, industry, and individual contributors.
  • Networking events to build relationships and encourage collaboration across sectors and disciplines.

Knowledge Dissemination

  • Regular updates on breakthroughs, tools, and technologies in Generative Edge AI.
  • Curated newsletters summarizing key developments and insights from the field.

Recognition and Amplification

  • Case studies and success stories showcasing member contributions and achievements.
  • Spotlight series on individuals and organizations advancing the field.

Practical Resources

  • Demonstrations and walkthroughs of innovative Generative Edge AI solutions.
  • Open-access repositories for tools, datasets, and frameworks to enable reproducibility and reuse.

Future-Oriented Initiatives

  • A dynamic and evolving definition of Edge AI that reflects current advancements in hardware, software, and applications.
  • Strategic plans to attract new participants, foster innovation, and ensure the community remains inclusive and forward-looking.

Collaborative Publications

  • Co-authored articles, research papers, or blog posts between academic and industry members.
  • Annual reviews summarizing the group’s impact and the broader progress in the field.

 

Working Group Leadership

The Generative Edge AI Working Group is led by two internationally recognized experts in the field of edge computing and AI:

  • Danilo Pietro Pau (STMicroelectronics)
    Danilo Pau (Fellow, IEEE) received his degree from Politecnico di Milano in 1992. He joined STMicroelectronics, where he worked on HDMAC and MPEG2 video memory reduction, video coding, embedded graphics, and computer vision. His current work focuses on developing solutions for deep learning tools and applications. With over 80 patents, 104 publications, 113 MPEG authored documents, and 39 invited talks and seminars at universities and conferences worldwide, his favorite activity remains mentoring undergraduate students, M.Sc. engineers, and Ph.D. students from various universities. He is currently a member of the IEEE Region 8 Action for Industry and of the Machine Learning, Deep Learning and AI in the CE (MDA) Technical Stream Committee of the IEEE Consumer Electronics Society (CESoc).
  • Prof. Hajar Mousannif (Cadi Ayyad University)
    Hajar Mousannif is a Full Professor at Cadi Ayyad University in Morocco, with over 19 years of experience in Artificial Intelligence, Machine Learning, and Data Science. She has published more than 100 research papers and holds several AI patents. She founded the first Bachelor’s and Master’s programs in Artificial Intelligence at her university. Hajar also co-chairs the Generative Edge AI Working Group (EDGE AI FOUNDATION) and the Artificial Intelligence Working Group at the OPCW (Organization for the Prohibition of Chemical Weapons). She is an active member of the global AI community and regularly speaks at conferences to promote responsible and impactful AI development.

Community Momentum: Generative Edge AI Forum

Even before the official formation of the Generative Edge AI Working Group, the EDGE AI FOUNDATION recognized the transformative potential of generative models at the edge. This vision was brought to life through two editions of the Generative Edge AI Forums, which gathered global experts to discuss cutting-edge research, share practical insights, and explore future directions for generative intelligence in resource-constrained environments.

In March and October 2024, the first two forums became cornerstone events, marking the transition from tinyML to a broader conversation around Generative Edge AI. They laid the groundwork for the working group’s creation and remain a core part of its ongoing activities, showcasing the community’s commitment to open dialogue, interdisciplinary collaboration, and real-world impact.

Since then, a surge of innovation has followed: new studies, novel applications, and a better understanding of edge-specific use cases. The EDGE AI FOUNDATION community continues to express a strong desire to stay up-to-date, share knowledge, and build a common foundation for the future of generative edge intelligence.

You can revisit the presentations from both editions here:

The journey continues with the third edition of the Generative Edge AI Forum (link), a two-day livestream event focused on the impact of Generative Edge AI platforms, highlighting progress in hardware, software, tooling, applications, and services, and exploring emerging paradigms such as agentic and physical AI.

Journey to Impact, Generative Edge AI (EDGE AI FOUNDATION livestream)

Stay tuned to the Generative Edge AI Working Group for updates, recordings, and opportunities to participate in upcoming events.

 

Highlights from the First Generative Edge AI Forum

The inaugural Generative Edge AI Forum set the stage for a vibrant, interdisciplinary exchange around deploying generative models on resource-constrained platforms. With contributions from academia, industry, and the open-source community, the event covered both visionary ideas and hands-on engineering advances. Key themes included:

  • Miniaturized LLMs and Efficient Inference
    Talks by Syntiant, NXP, and Arm highlighted strategies for distilling and quantizing LLMs to run efficiently on embedded platforms, including the use of NPUs, custom SoCs, and advanced model optimization techniques.
  • Generative AI for Hardware Design
    Speakers from Harvard, UC Davis, and Efabless explored how foundation models can be used to accelerate chip design, optimize architectures, and even auto-generate Verilog for edge-specific hardware.
  • Edge Applications in Real-World Domains
    Sessions from Bosch, Qualcomm, UNICEF, and Johns Hopkins showcased how GenAI is being applied to domains such as connected vehicles, education, healthcare, and embodied systems—often leveraging novel data modalities and hybrid architectures.
  • Human-AI Interaction and Design Futures
    Contributions from IDEO and Useful Sensors pushed the boundaries of how GenAI systems should interact with humans, with alternative models of AI experience inspired by calm technology and creative narratives.
  • Research Frontiers and System-Level Thinking
    Presentations by EPFL, Meta, and others offered a forward-looking lens on emerging capabilities—such as multimodal foundation models, agentic AI, and strategies for lifelong learning and adaptation at the edge.

 

Highlights from the Second Generative Edge AI Forum

Building on the momentum of the first event, the second edition of the Generative Edge AI Forum continued to expand the community’s understanding of deploying generative models in edge environments. The forum featured leaders from academia, industry, and research institutes, offering a wide-angle view of current innovations and real-world challenges.

Key highlights included:

  • Edge Infrastructure & Strategic Perspectives
    Dave McCarthy of IDC opened the forum with a forward-looking perspective on how LLMs and transformer models are reshaping the edge computing landscape, accelerating adoption and infrastructure readiness.
  • Model Deployment & Optimization
    Talks from Meta, Arm, and ETH Zurich explored techniques for compressing and optimizing generative models to fit within the tight constraints of edge hardware, including use of ExecuTorch, RISC-V SoCs, and ARM MPUs.
  • Lifecycle Integration & TinyML Synergies
    EURECOM and Fondazione Bruno Kessler presented work on merging TinyML lifecycles with LLMs and deploying advanced generative applications—such as neural style transfer—on ultra-low-power MCUs.
  • Domain-Specific Applications
    BOSCH and Wipro shared lessons from deploying Small Language Models in automotive and enterprise contexts, with applications ranging from custom code generation to in-vehicle personalization.
  • New Approaches to Privacy, Memory & Security
    Speakers from NXP, Kyung Hee University, and the Technology Innovation Institute discussed advances in memory optimization, secure fine-tuning, and model compression, using examples like Falcon Mamba and privacy-preserving inference.
  • Tools, Platforms & Future Directions
    The forum also showcased community-driven tools such as TinyRAG, hardware design strategies like SECDA-LLM, and deployment considerations for 5G edge platforms shared by Particle.io.

This second forum reinforced the community’s shared belief that Generative Edge AI is not just possible—it’s already happening, and it requires continued collaboration across disciplines to scale responsibly, efficiently, and inclusively.

 

Help Shape the Future: Generative EDGE AI Survey

As part of its commitment to inclusive innovation and global collaboration, the Generative Edge AI Working Group has launched a strategic community survey. Initially shared with partners of the EDGE AI FOUNDATION, this survey aims to gather insights from key stakeholders across academia, industry, and the open-source ecosystem to inform the group’s priorities, initiatives, and outputs.

The questionnaire explores a wide range of topics, from technical readiness and adoption barriers to preferred application domains, collaboration formats, and emerging trends. It also captures early community sentiment on key topics such as Agentic AI at the edge, education and outreach needs, and the types of deliverables that would bring the most value to participants.

Here’s a brief summary of initial findings and trends, which reflect early community input:

Survey Highlights

The initial wave of responses from the Generative Edge AI Working Group community survey offers a timely snapshot of expectations, priorities, and barriers in the evolving Generative Edge AI landscape.

Market timing expectations are optimistic: a clear majority of respondents (over 70%) anticipate that Generative Edge AI solutions will begin appearing on the market as early as 2025, with significant momentum expected to continue into 2026 and beyond. Only a small fraction projected timelines beyond 5 years or expressed uncertainty.


When asked about preferred applications, the community showed strong interest in Small Language Models, Visual Question Answering, Speech-to-Text, and Text-to-Speech technologies. These were followed by media-based use cases such as captioning, generation, and enhancement—underscoring the perceived value of multimodal generative capabilities in constrained environments.


On the solution front, respondents expect to see impact across the stack: hardware/chips, applications, and services were the most anticipated areas, with tools also seen as important enablers.


Beyond technical priorities and adoption timelines, the survey revealed several important trends shaping the direction of Generative Edge AI.

Adoption is primarily driven by the desire to improve human-machine interaction and to enable novel AI-native products, both cited by over 76% of respondents. Closely behind, over 70% highlighted the emergence of use cases that were previously not possible with traditional AI approaches.

 

In terms of organizational focus, product development and R&D lead the way, with 88% and 76% of respondents prioritizing them, respectively. Model deployment, while still relevant, was seen as secondary, suggesting that the community is still in a foundational exploration phase.

 

Collaboration interests reflect how organizations wish to engage with others in the ecosystem. The most common preference was for use-case–driven projects (82.4%), followed by collaborations around datasets and customer initiatives (64.7%), and joint research efforts or technical workshops (58.8%). These responses point to a strong desire for partnerships that are grounded in practical relevance and mutual experimentation, rather than abstract or siloed efforts.

 

When asked about desired forms of support from the foundation and the working group, the top responses included open-source initiatives, real-world case studies and demos, and access to cutting-edge research. In contrast, areas like policy guidance and access to large-scale compute resources were noted as lower priority for many respondents at this stage.

 

Several emerging trends were also identified. IoT and Industrial applications topped the list of sectors to watch, followed by consumer-facing systems, humanoid robotics, and multimodal AI.

 

The community also showed strong interest in Agentic AI at the edge, with over 76% supporting further exploration of the topic. That interest, however, was often paired with concerns about safety, hallucination risks, and trustworthiness—suggesting a need for transparent frameworks and continued education.

 

Multiple comments emphasized the need for proof-of-concept deployments and educational content, especially around new paradigms like agentic and autonomous systems. While excitement is clearly growing, practical grounding and responsible innovation remain top of mind.

 

The perceived impact of Generative Edge AI is overwhelmingly positive, with nearly all respondents rating it as either transformational or incremental, and very few expressing uncertainty or skepticism.

 

Finally, the survey highlighted key adoption barriers, led by the definition of use cases, ROI/investment concerns, and energy-efficiency limitations. The lack of production-ready silicon, high implementation costs, and education gaps were also cited frequently, suggesting where coordinated action and resources could have the most immediate effect.

 

These insights are helping to inform the working group’s agenda and will guide future initiatives.

We are considering opening the survey to the broader public. Whether your organization is already active in Generative Edge AI or just beginning to explore its potential, your input can help steer the group’s direction and ensure that its work is aligned with real-world challenges and opportunities.

Your voice matters, and together, we can build a stronger, more connected, and impactful Generative Edge AI ecosystem.

Stay tuned for updates and future opportunities to contribute!

 

Get Involved

Whether you’re developing models, building systems, optimizing hardware, or exploring novel applications, the Generative Edge AI Working Group welcomes your voice. Join us in shaping a future where generative intelligence is accessible, efficient, and embedded at the very edge of our connected world.

Leveling the Playing Field for Edge AI Research Through High Quality Datasets

Published by EDGE AI FOUNDATION Datasets & Benchmarks Working Group:
  • Adam Fuks – NXP, Chair
  • Petrut Bogdan – Innatera
  • Vijay Janapa Reddi – Harvard University
  • Eiman Kanjo – Imperial College
  • Colby Banbury – Harvard University
  • Sam Al Attiyah – Imagimob
  • Xianghui Wang – Renesas
  • Emil Jorgenson Njor – Technical University of Denmark


Introduction and the Challenge of Edge AI

The past decade has seen remarkable advancements in neural network (NN) techniques. These include innovative topologies, training methods, quantization-aware approaches, data augmentation, and model compression. This progress has significantly boosted fields like image recognition (powered by datasets such as ImageNet) and natural language processing (NLP) (driven by vast internet-scale corpora), enabling AI systems to rival and even surpass human performance in specific tasks. However, a critical challenge arises when deploying these increasingly complex AI systems on edge devices. Unlike cloud servers, edge devices operate under stringent constraints, including limited power, memory, and compute resources. They are often battery-powered and thermally limited, demanding smaller, more efficient models that maintain accuracy while fitting within tight energy and memory budgets.
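One of the compression techniques mentioned above, weight quantization, can be sketched in a few lines. This is a deliberately simplified illustration of post-training affine quantization under a single symmetric scale; production toolchains use per-tensor or per-channel scales, zero-points, and calibration data:

```python
# Minimal sketch of post-training quantization (float32 -> int8), the kind of
# model-compression step that shrinks NN weights to fit edge memory budgets.

def quantize_int8(weights):
    """Map a list of floats onto int8 [-127, 127] with a single symmetric scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

weights = [0.42, -1.3, 0.07, 0.9, -0.55]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))

# int8 storage is 1 byte/weight vs 4 bytes for float32: a 4x memory reduction,
# at the cost of a small, bounded rounding error (<= scale/2 per weight).
assert max_err <= scale / 2 + 1e-9
```

The 4x memory saving is exactly the kind of trade-off, footprint versus accuracy, that standardized edge benchmarks need to make measurable.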


This realm is what we refer to as “edge AI.” Success here hinges on developing tailored techniques and establishing robust benchmarking methods. Currently, a significant barrier exists: a fragmented landscape and a lack of standardized, high-quality datasets that accurately reflect real-world edge use cases. While numerous publications claim efficiency gains in “tiny” or edge ML, they often rely on simplistic “toy” examples that fail to translate to production-ready applications. What’s missing is a universally accepted, credible benchmark for comparing performance in realistic environments.

The EDGE AI FOUNDATION’s Response and Goals

The EDGE AI FOUNDATION Datasets & Benchmarks Working Group was formed to address this gap directly. The primary objective is to create a level playing field for tinyML and edge AI research by providing the necessary infrastructure for effective benchmarking.

Our key goals are threefold:

  1. Curate Realistic, Appropriately Sized Datasets: We aim to develop and curate datasets that are both realistic and appropriately sized for edge devices. These datasets will be continuously expanded and refined through collaborative efforts with the community, ensuring they remain relevant and reflective of diverse, real-world scenarios.
  2. Support Open Research into Performance Trade-offs: A critical aspect of edge AI development is understanding the trade-offs between various performance metrics such as power consumption, memory usage, and accuracy. We will provide datasets that enable open research and facilitate thorough evaluation of these trade-offs in the context of edge deployments.
  3. Foster Shared Learning and Optimization: By establishing a public repository of datasets specifically tailored to edge AI and tinyML use cases, we aim to foster a culture of shared learning and optimization within the ecosystem. This repository will empower researchers, developers, and companies to confidently evaluate their models against real-world benchmarks and align their innovations with practical deployment requirements.
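As a sketch of the trade-off analysis envisioned in goal 2, the snippet below filters a set of candidate models down to the Pareto front over memory footprint and accuracy. All model names and numbers here are invented for illustration:

```python
# Pareto-front filter over (memory, accuracy): keep models for which no other
# model is both smaller and at least as accurate. Candidates are hypothetical.

candidates = {
    "model_a": {"kb": 250, "accuracy": 0.91},
    "model_b": {"kb": 500, "accuracy": 0.90},   # dominated by model_a
    "model_c": {"kb": 120, "accuracy": 0.86},
    "model_d": {"kb": 60,  "accuracy": 0.80},
}

def dominates(x, y):
    """x dominates y if x is no worse on both axes and strictly better on one."""
    return (x["kb"] <= y["kb"] and x["accuracy"] >= y["accuracy"]
            and (x["kb"] < y["kb"] or x["accuracy"] > y["accuracy"]))

pareto = {name for name, m in candidates.items()
          if not any(dominates(o, m) for o_name, o in candidates.items()
                     if o_name != name)}
print(sorted(pareto))   # -> ['model_a', 'model_c', 'model_d']
```

A shared dataset makes the accuracy axis comparable across submissions; without it, every point on such a plot comes from a different, incommensurable test set.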

Importantly, the Working Group’s role is not to judge submission quality or conduct official benchmarks. Instead, we are focused on enabling honest, community-driven comparison by providing the necessary infrastructure. We are creating the tools and resources that will empower the community to drive innovation collaboratively.

Choosing Use Cases Thoughtfully

Selecting the right datasets and corresponding use cases is crucial for developing meaningful benchmarks. The edge AI community spans a broad range of applications, each with distinct technical requirements. To ensure our benchmarks are relevant and broadly applicable, we propose organizing use cases along several dimensions:

  • Real-time vs. Batched Processing: Distinguishing between tasks requiring instantaneous response (like fall detection) and those that benefit from batch analysis is critical.
  • Energy Constraints: Recognizing the significant impact of energy limitations, particularly for battery-powered devices like wearables and sensors, versus wall-powered devices.
  • Always-on Operation: Considering the unique challenges posed by applications that demand continuous inference, such as health monitoring or predictive maintenance.
  • Task Nature: Accounting for differences between classification tasks, regression tasks, and transformation tasks, each influencing model architecture and evaluation metrics.
  • Data Modality: Ensuring benchmarks reflect the specialized input types used in edge solutions, such as time-series data, images, or audio.

By mapping benchmarks to these categories, we aim to highlight a system’s actual capabilities in real-world scenarios, not just its performance on isolated, artificial tasks.
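The dimensions above lend themselves to a simple machine-readable taxonomy. A minimal sketch follows; the specific use cases and their tags are illustrative examples, not an official catalog:

```python
# Sketch: encoding the proposed benchmarking dimensions as a small taxonomy,
# so use cases can be tagged and grouped. Entries are illustrative examples.

from dataclasses import dataclass

@dataclass(frozen=True)
class UseCase:
    name: str
    realtime: bool          # real-time vs batched processing
    battery_powered: bool   # energy constraints
    always_on: bool         # continuous inference
    task: str               # "classification" | "regression" | "transformation"
    modality: str           # "time-series" | "image" | "audio" | ...

catalog = [
    UseCase("fall detection",         True,  True,  True, "classification", "time-series"),
    UseCase("visual wake words",      True,  True,  True, "classification", "image"),
    UseCase("predictive maintenance", False, False, True, "classification", "time-series"),
]

# Group by the constraint profile that matters most for benchmarking.
always_on_battery = [u.name for u in catalog if u.always_on and u.battery_powered]
print(always_on_battery)   # -> ['fall detection', 'visual wake words']
```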

Improving Datasets Together

High-quality data is the bedrock of trustworthy machine learning. This includes not just training data, but also testing and validation data that accurately reflect the complexities of the real world. A poor test set can misrepresent a model’s performance, leading to inaccurate conclusions.

Therefore, the EDGE AI FOUNDATION is committed to creating and maintaining continually updated, diverse, and well-labeled datasets. For each dataset, we will:

  • Build on existing work: Enhancing and expanding upon proven datasets where possible.
  • Ensure variety: Capturing a wide range of real-world scenarios and edge cases.
  • Provide rich metadata: Ensuring accurate and comprehensive data labeling.
  • Evolve continuously: Regularly updating test sets to stay aligned with state-of-the-art models.

Our initial focus is on Visual Wake Words, with future expansions planned for other modalities. Each dataset will be vetted for its generalization across edge use cases and will be equipped with the necessary metadata for effective benchmarking.

Your Role and the Community’s Contribution

The success of this initiative hinges on the active participation of the Edge AI community. We encourage contributions in several key areas:

  • Suggest datasets or base sets: Identify valuable starting points or areas for improvement.
  • Provide feedback: Offer insights on case coverage and diversity.
  • Contribute new test cases: Help create more realistic test scenarios.
  • Assist with labeling and annotations: Improve data quality and usability.
  • Expand edge case scenarios: Provide niche or underrepresented data.

Together, we can build a robust foundation that supports honest comparisons, accelerates development, and unlocks new possibilities for edge AI.

A Call to Action

We must move beyond toy benchmarks and embrace community-led, production-grade testing environments to truly advance edge AI and tinyML. We call on the EDGE AI FOUNDATION community to:

  • Share challenges: Help us prioritize use cases by highlighting the issues you’re facing.
  • Contribute datasets and evaluation techniques: Align your contributions with your organizational goals.
  • Collaborate on establishing optimization best practices: Ensure meaningful benchmarking methods.

Let’s collaborate to build a shared, open, and inclusive ecosystem that drives edge AI forward. Join us at joinus@edgeaifoundation.org and be part of this transformative journey.

A Call To Action: The Pipeline Is Stalling – The Higher Education Pledge

By Pete Bernard
CEO, EDGE AI FOUNDATION
Professor Vijay Janapa Reddi
HARVARD UNIVERSITY
Evgeni Gousev
Chairman of the Board, EDGE AI FOUNDATION

 

The EDGE AI FOUNDATION is committed to creating a highly active community of knowledge sharing, collaboration, networking, advocacy, and education that democratizes and advances edge AI technologies.

 

We are a place of limitless opportunity and a hotbed of activity, facilitating the sharing of knowledge, the dissemination of reference materials, the setting of industry best practices, and the nurturing of talent, ensuring that advancements in edge AI technology benefit all of society and the environment we share. In these challenging times, we recognize the need to speak up and use our platform to drive positive change on issues that profoundly affect our industry and our community.

 

The United States has long stood at the forefront of technological advancement, largely fueled by a world-class higher education system. For decades, research universities, particularly in computer science, engineering, and the broader STEM ecosystem, have been engines of innovation, driving breakthroughs that have transformed industries, created millions of high-paying jobs for Americans, and enriched the global economy. This success has never been the product of academia alone. It has relied on a synergistic model: consistent federal investment in basic research, strong partnerships with industry, and an open-door policy that welcomes top international talent.

 

Today, that model is under existential threat. Across the country, higher education institutions face growing financial pressure from internal constraints and a wave of federal and state-level budget cuts that the current administration is unleashing on higher education. Even institutions like Harvard that have long been viewed as financially insulated are now confronting massive budget cuts that directly threaten their research and teaching missions. These cuts are not occurring in a vacuum; they reflect retaliation against universities that have taken public stands on academic freedom, diversity, or democratic values.


At the same time, international student flows, which are critical to the health and vitality of STEM programs, are being disrupted by immigration policies and increasing geopolitical friction. These developments jeopardize more than university balance sheets; they threaten the long-term competitiveness of the U.S. innovation ecosystem and exacerbate critical workforce shortages in strategic STEM fields where American companies struggle to find qualified talent.


If left unaddressed, the erosion of academic research infrastructure will reverberate throughout the economy for decades to come. Rebuilding these complex innovation networks, once damaged, will take years even with renewed commitment and funding, weakening the very foundation on which so many American industries and technological revolutions have been built.


A fundamental rethinking of how we support and sustain academic research is urgently needed, particularly in computing and engineering. If public funding can no longer provide the level of support it once did, it is time for industry to step up – not out of charity, but out of enlightened self-interest. The companies that profit from the fruits of academic innovation must help sustain the ecosystem that makes it possible. Otherwise, the very engine of progress could halt in the U.S.

To ground this argument, the “Pipeline Is Stalling” whitepaper explores the historical foundations of academic research in the U.S., examines past and present models of industry-academic collaboration, and quantifies the immense contribution of international talent to American innovation. Drawing lessons from both domestic successes and international examples, Professor Reddi offers a set of policy and investment strategies that can help renew the partnership between universities, industry, and government, before it is too late.


Once the pipeline stalls completely, the game is over for America’s technological leadership.


Call to Action

The challenges outlined in this paper require collective action. We’ve created “The Higher Education Pledge: A Commitment to America’s Future” as a platform for stakeholders across sectors to demonstrate their support for sustaining America’s innovation ecosystem.

  • Who should sign: Students, alumni, educators, researchers, entrepreneurs, industry leaders, and anyone who values the role of higher education in driving innovation and economic prosperity.
  • What you’re supporting: Strong research funding, global talent mobility, industry-academic partnerships, and the reaffirmation of universities as engines of innovation.
  • Why it matters: Your voice adds strength to this crucial conversation about America’s technological future. Signatures will be shared with policymakers, university leaders, and industry executives to demonstrate widespread support for action.

➡️ Visit HERE to read The Pipeline Is Stalling whitepaper

➡️ Visit HERE to sign the pledge


The Robots Are Coming – Physical AI and the Edge Opportunity


By Pete Bernard
CEO, EDGE AI FOUNDATION

We have imagined “robots” for thousands of years, dating back to 3000 B.C., when Egyptian water clocks used human figurines to strike hour bells. Robots have infused our cultural imagination through movies like Metropolis in 1927, C-3PO and R2-D2 in Star Wars, and more.

Practically speaking, today’s working robots are much less glamorous. They have been developed over the past decades to handle dangerous and repetitive tasks, and they bear little resemblance to humans. They roll through warehouses and mines and deposit fertilizer on our farms. They also extend our perceptual reach through aerial and ground-based inspection systems, using visual and other sensor input.

Now that edge AI technology has evolved and matured, the notion of physical AI is taking hold, and it promises to be a critical platform fundamentally enabled by edge AI technologies. A generally agreed definition of physical AI is:

A combination of AI workloads running on autonomous robotic systems that include physical actuators.

This is truly “AI in the real world”: these systems physically interact with the real world through motion, touch, vision, and physical control mechanisms, including grasping, carrying, and more. A single machine can combine a full suite of edge AI technologies. Executing AI workloads where the data is created will be critical to meet the low-latency, low-power needs of these platforms. These workloads could include:

  • tinyML workloads running in sensor networks and performing sensor fusion
  • Neuromorphic computing for high-performance/ultra-low-power, low-latency, and wide-dynamic-range scenarios
  • CNN/RNN/DNN models running AI vision on image feeds, LIDAR, or other “seeing” and “perceiving” platforms
  • Transformer-based generative AI models (including reasoning) performing context, understanding, and human-machine interface functions

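To make the mix of workloads concrete, here is a minimal, hypothetical Python sketch of a robot-side dispatcher that routes each sensor reading to the workload class suited to its modality. All names are illustrative assumptions, not a real robotics API, and each tiny function is a stand-in for a trained model.

```python
from dataclasses import dataclass

# Hypothetical sketch (not a real robotics API): one machine routing
# sensor data to the edge AI workload class suited to each modality.

@dataclass
class Reading:
    sensor: str      # which sensor produced the data
    payload: object  # raw data: audio frame, image, text, ...

def tinyml_wakeword(audio) -> bool:
    # Stand-in for a tinyML model on a sensor MCU: flags audio
    # frames whose mean energy crosses a threshold.
    return sum(x * x for x in audio) / len(audio) > 0.5

def cnn_detect(image) -> list:
    # Stand-in for a CNN vision model on the camera/LIDAR feed.
    return ["person"] if max(image) > 0.9 else []

def llm_respond(text) -> str:
    # Stand-in for a transformer-based model handling the
    # human-machine interface.
    return f"ack: {text}"

def dispatch(reading: Reading):
    # Orchestration layer: route each reading to the right workload.
    handlers = {
        "mic": tinyml_wakeword,
        "camera": cnn_detect,
        "console": llm_respond,
    }
    return handlers[reading.sensor](reading.payload)
```

For example, `dispatch(Reading("console", "stop"))` returns `"ack: stop"` from the language-interface stub, while microphone frames stay on the tinyML path – a toy version of the orchestration a real physical AI system performs across heterogeneous models.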
All of these are designed into one system, with the complex orchestration, safety/security, and controls needed for enterprise-grade deployment, management, and servicing. In addition, as higher-TOPS/watt, lower-power/higher-performance edge AI platforms come to market, they will positively impact the mobility, cost, and battery life of these systems.


Robotics is where AI meets physics. Robots require sophisticated physical capabilities to move, grasp, extend, sense, and perform a wide range of tasks, but they are also software platforms that require training and decision making, making them prime candidates for one of the most sophisticated combinations of AI capabilities. The advent of accelerated semiconductor platforms, advanced sensor networks, sophisticated middleware for orchestration, tuned AI models, emerging powerful SLMs, applications, and high-performance communication networks is ushering in a new era of physical AI.

Let’s level-set with a taxonomy of robots and a definition of terms. There are many ways to describe robots – they can be sliced by environment (warehouse), by function (payload), or even by mobility (unmanned aerial vehicles). Here is a sample of some types of robots in deployment today:

  • Pre-programmed robots
    • These are heavy industrial robots, used in very controlled environments for repetitive and precise manufacturing tasks. They are typically fixed behind protective barriers and cost hundreds of thousands of dollars.
  • Tele-operated robots
    • These are used as “range extenders” for humans to perform inspections, observations, or repairs in environments that are challenging for humans – including drones or underwater robots for welding and repair. Perhaps the best-known tele-operated robots are the rovers NASA has sent to Mars over the last few decades. There has also been a fish robot named SoFi, designed to mimic propulsion via its tail and twin fins, swimming in the Pacific Ocean at depths of up to 18 meters.[1]
  • Autonomous robots
    • You probably have one of these in your house in the form of a vacuum-cleaner robot, navigating without supervision and relying on its sensors. Recently we have seen a number of “lawnmower” robots introduced to take on that laborious task. In agriculture, robots are already inspecting and even harvesting crops in an industry with chronic labor shortages.[2] There is also a thriving industry for autonomous warehouse robots – including in Amazon warehouses.[3]
  • Augmenting robots
    • These are designed to aid or enhance human capabilities, such as prosthetic limbs or exoskeletons. You were probably first exposed to this category of robots when you watched “The Six Million Dollar Man” on TV – but on a more serious note, they are providing incredible capabilities for amputees and enabling safer work environments for physical labor.[4]
  • Humanoid robots
    • Here’s where it gets interesting. We have developed a bipedal world – why not develop robots that work in that world as it’s been designed? Humanoid robots resemble humans: they are bipedal (or quadrupedal, in the case of Boston Dynamics), can communicate through natural language and facial expressions, and can perform a broad range of tasks using their limbs, hands, and human-like appendages. Quadrupedal robots have been deployed only in the low thousands worldwide, and we are still in the very early stages of development, deployment, and reasonable cost. Companies like Enchanted Tools[5] are demonstrating humanoid robots that can move among humans to carry lighter loads, deliver items, and communicate in natural language. Although humanoid robots will catch the bulk of media attention in the coming years, and face the most “cultural impact,” the other robot categories will also benefit greatly from generative AI and drive significantly greater efficiencies across industries.


How Generative AI on the edge will impact Physical AI

It’s hard to overstate the impact that generative AI will have on the field of robotics. Beyond enabling much more natural communication and understanding, generative AI model architectures like Transformers will be combined with other model architectures like CNNs, Isolation Forests, and others to provide context and human-machine interfaces for image recognition, anomaly detection, and observational learning. It will be a “full stack” of edge AI, from metal to cloud.

Let’s take a look at the differences between traditional AI used in robotics and what Generative AI can bring:

Traditional AI

  • Rule-Based Approach: Traditional AI relies on strict rules set by programmers – like an actor following a precise script. These rules dictate how the AI system behaves, processes data, and makes decisions.
  • Focused Adaptability: ML models such as CNNs/RNNs/DNNs are designed for focused tasks and operate based on predefined instructions. They run in very resource-constrained environments at very low power and cost.
  • Data Analysis and Prediction: Non-generative AI excels at data analysis, pattern recognition, and making predictions. However, there is no creation of new data; it merely processes existing information.

Generative AI

  • Learning from Data Examples: Generative AI learns from data examples – essentially “tokenized movement.” It adapts and evolves based on the patterns it recognizes in the training data – like a drummer who watches their teacher and keeps improving. This can be done in the physical world or in a simulated world for safer and more extensive “observational training.”
  • Creating New Data: Unlike traditional AI, generative AI can create new data based on experience and can adapt to new surroundings or conditions. However, this requires significantly more TOPS/W and RAM, which can affect cost and battery-powered applicability.
  • Applications in Robotics: Generative AI can drive new designs and implementations in robotics that leverage its ability to generate new data, whether new communication/conversational techniques (in multiple languages), new movement scenarios, or other creative problem solving.

In summary, while many forms of edge AI are excellent and necessary for analyzing existing data and making predictions in resource-constrained, low-power environments, generative AI at the edge adds the ability to create new data and adapt dynamically based on experience. The application of generative AI to robotics will unlock observational learning, rich communication, and a much broader application of robots across our industries and our lives.
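The rule-based-versus-learned distinction can be illustrated with a deliberately tiny sketch: a controller with a hand-coded threshold next to a policy that derives its threshold from demonstrated examples. The grip scenario and all names are hypothetical; real generative systems learn far richer behavior than a single threshold.

```python
# Hypothetical grip-control example; the task and names are
# illustrative, not from a real system.

def rule_based_grip(weight_kg: float) -> str:
    # Traditional AI: the threshold is hand-coded by a programmer,
    # like an actor following a precise script.
    return "firm" if weight_kg > 1.0 else "gentle"

def fit_grip_policy(examples):
    # "Learning from data examples": derive the threshold from
    # demonstrated (weight, grip) pairs – a stand-in for training.
    firm = [w for w, grip in examples if grip == "firm"]
    gentle = [w for w, grip in examples if grip == "gentle"]
    threshold = (min(firm) + max(gentle)) / 2
    return lambda w: "firm" if w > threshold else "gentle"

# Demonstrations observed in the field (or in simulation).
demos = [(0.2, "gentle"), (0.6, "gentle"), (2.5, "firm"), (4.0, "firm")]
learned_grip = fit_grip_policy(demos)
```

Given these demonstrations the learned threshold lands at 1.55 kg, so `learned_grip(1.2)` returns `"gentle"` where the hand-coded rule says `"firm"` – the learned policy adapted to the data it was shown.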


Safe and Ethical Robotics

Whenever robots are mentioned, the comparison to “evil robots” from our culture is not far behind: The Terminator, Ultron, or the Gunslinger from Westworld. At the same time, we have enjoyed anthropomorphized robots like C-3PO, R2-D2, and WALL-E. And then there are the ones in between, like those from the movie The Creator.

As attention turns to the prospect of generative AI moving toward AGI, what guardrails, best practices, and outright legislation exist to keep robotic efforts – paired with generative AI – in the category of good or neutral?

Isaac Asimov famously penned his three laws of robotics as part of his short story “Runaround” in 1942:[6]

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm
  • A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law

In 2021, Dr. Kate Darling – a research specialist in human-robot interaction, robot ethics, and intellectual property theory and policy at the Massachusetts Institute of Technology (MIT) Media Lab – wrote an article in The Guardian proposing that we think about robots more like animals than as rivals to humans. Once we make that shift, we can better discuss who is responsible for robot actions and for the societal impacts that robots bring, such as transformations in the labor market.[7]

The European Union published “Civil law rules on robotics” back in 2017 that addressed the definition of a robot, where liability lies, the role of insurance and other key items. In 2023 a law was introduced in Massachusetts in the US that would 1) ban the sale and use of weapons-mounted robotic devices, 2) ban the use of robotic devices to threaten or harass, and 3) ban the usage of robotic devices to physically restrain an individual. It’s unclear how or when similar legislation will make it to the federal level.


Observational Learning Is a Game Changer

In the world of edge AI, training has happened in “the cloud” or in server-class GPU environments, and inferencing has happened on the light edge. With the introduction of reinforcement learning and new work in continuous learning, we will see the edge become a much more viable place for training.

However, in physical AI platforms, observational learning (sometimes referred to as behavior cloning) allows robots to learn new skills simply by watching humans – in reality or in a simulated physical environment. Instead of being programmed step by step, robots can make connections in their neural networks based on observing human behavior and actions. This kind of unstructured training will enable robots to better understand the nuances of a given task and make their interaction with humans much more natural.
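A minimal behavior-cloning sketch, assuming a toy 2-D gripper task: record (state, action) pairs from a human demonstration, then imitate by recalling the action whose recorded state is nearest to the current one. Production systems replace this lookup with a trained network (CNN or diffusion policy), but the learning-by-watching structure is the same.

```python
# Toy behavior-cloning sketch: all names and the 2-D gripper task
# are illustrative assumptions, not a real robotics framework.

def record_demonstration(states, actions):
    # Pair each observed state with the action the human took.
    return list(zip(states, actions))

def clone_policy(dataset):
    def policy(state):
        # Imitate: recall the action whose demonstrated state is
        # closest (squared Euclidean distance) to the current state.
        _, action = min(
            dataset,
            key=lambda pair: sum((a - b) ** 2 for a, b in zip(pair[0], state)),
        )
        return action
    return policy

# Demonstration: gripper position (x, y) -> action the human performed.
demo = record_demonstration(
    states=[(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)],
    actions=["reach", "grasp", "lift"],
)
policy = clone_policy(demo)
```

A query near a demonstrated state recalls its action – `policy((0.6, 0.1))` returns `"grasp"` – which is the lookup-table analogue of what a trained imitation model generalizes across unseen states.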

There have been a number of key advances in AI models for observational learning, starting with CNN model types and recently leveraging diffusion model types such as the one presented in the 2023 Microsoft Research paper, Imitating Human Behaviour with Diffusion Models.[8]

In March 2024, NVIDIA introduced GR00T[9], its own foundation model designed for observational learning on its Isaac/Jetson robotics platforms. It was demonstrated at the NVIDIA GTC keynote by Jensen Huang, and it also leverages NVIDIA’s Omniverse “digital twin” environment to build virtualized physical environments that can train robots via observational learning safely and flexibly. In 2025 it was updated to GR00T N1, alongside a new “Newton” physics engine. We’re now seeing foundation models tuned for robotics platforms[10], such as GR00T and RFM-1 by Covariant, among others. Expect this area to proliferate with options, much like foundation models for LLMs in the cloud.

Robotics is often described as a “three-computer problem”: an AI model is trained in the cloud using generative AI and LLMs; model execution and ROS run on the robotics platform itself; and a simulation/digital-twin environment is used to develop and train safely and efficiently.


The Edge AI Opportunity for Robotics 

 “Everything That Moves Will Be Robotic” – Jensen Huang

The confluence of generative AI and robotics is swinging the robotic pendulum back into the spotlight. Although Boston Dynamics has deployed only around 1,500 Spot robots worldwide so far, expect many more, and in many more configurations, throughout our warehouses, our farms, and our manufacturing floors. Expect many more humanoid experiments, and expect a hype wave washing over us, with plenty of media coverage of every failure.

Running generative AI on these platforms will require significant TOPS horsepower and high-performance memory subsystems, in addition to advanced controls, actuators, and sensors. We will see “datacenter”-class semiconductors moving down into these platforms, but just as interesting will be edge-native semiconductor platforms moving up into this space, with the ruggedized thermal and physical properties, low power, and integrated communications needed. We will also see many new stand-alone AI acceleration silicon options paired with traditional server-class silicon. Mainstream platforms like phones and AI PCs will help drive down costs with their market scale.

However, in addition to requiring top end semiconductors and plenty of RAM, robotic platforms – especially humanoid ones – will require very sophisticated sensors, actuators, and electro-mechanical equipment – costing tens of thousands of dollars for the foreseeable future.

To keep things in perspective, Goldman Sachs[11] forecasted a 2035 humanoid robot TAM of US$38bn, with shipments reaching 1.4m units. That’s not a tremendous unit volume for humanoid robots (PCs ship around 250m units per year, smartphones north of a billion) – we can expect orders of magnitude more “functional form factor” robots working in warehouses, vacuuming homes, and doing other focused tasks.

These platforms – like the ones now available from Qualcomm, NVIDIA, NXP, Analog Devices, and more – are attracting developers who are combining their server-class software skills with embedded computing expertise. Like mobility, robotics and physical AI are challenging developers and designers in new ways and provide a unique opportunity for workforce development, skill enhancement, and career growth.

A key challenge here is to avoid the pitfalls of Industry 4.0 and IoT: how do we collaborate as an industry to standardize data-sharing models, digital-twin models, code portability, and other elements of the robotics stack? If this area becomes fractured and siloed, we could see significant delays in real deployments of more advanced genAI-driven robots.

Developers, designers, and scientists are pushing the envelope and closing the gap between our imaginations and reality. As with cloud-based AI, the use of physical AI will require important guardrails and best practices, not only to keep us safe but to make this newfound expansion of physical AI capabilities accretive to our society.

We should not underestimate the impact that new robotics platforms will have on our culture, our labor force, and our existential mindset. We’re at a turning point: edge AI technologies like physical AI are combining traditional sensor AI and machine learning with generative AI. That is a call to action for every technology provider in the edge AI “stack,” from metal to cloud, and an opportunity for businesses across segments to rethink how these new platforms will leverage edge AI technology in ways that are still in our imagination.


[1] https://www.csail.mit.edu/research/sofi-soft-robotic-fish

[2] https://builtin.com/robotics/farming-agricultural-robots

[3] https://www.aboutamazon.com/news/operations/amazon-introduces-new-robotics-solutions

[4] https://www.automate.org/robotics/service-robots/service-robots-exoskeleton

[5] https://enchanted.tools/

[6] https://www.goodreads.com/en/book/show/48928553

[7] https://tdwi.org/articles/2021/06/16/adv-all-building-ethical-guardrails-into-ai-driven-robotic-assistants.aspx

[8] https://www.microsoft.com/en-us/research/publication/imitating-human-behaviour-with-diffusion-models/

[9] https://nvidianews.nvidia.com/news/foundation-model-isaac-robotics-platform

[10] Foundation Models in Robotics: Applications, Challenges, and the Future – https://arxiv.org/html/2312.07843v1

[11] https://www.goldmansachs.com/intelligence/pages/gs-research/global-automation-humanoid-robot-the-ai-accelerant/report.pdf

The 2025 Edge AI Technology Report

The guide to understanding the current state of the art in hardware & software for Edge AI.

  • Introduction

  • 1. Industry Trends Driving Edge AI Adoption

  • 2. The Role of Edge AI in Transforming Industry Trends

  • 3. The Technological Enablers of Edge AI

  • 4. Building an Edge AI Ecosystem

  • 5. The Future of Edge AI

Cutting AI Down To Size – SCIENCE MAGAZINE

From SCIENCE MAGAZINE:

“Many artificial intelligence models are power hungry and expensive. Researchers in the Global South are increasingly embracing low-cost, low-power alternatives.”

The EDGE AI FOUNDATION Scholarship Fund supports these Global South initiatives.

Revolutionizing Wi-Fi Sensing with Machine Learning and Advanced Radio Frequency Techniques

What if the future of Wi-Fi could pinpoint your location down to 30 centimeters? Join us as Joseph Chueh from National Tsing Hua University unveils the astonishing potential of Wi-Fi sensing when integrated with machine learning. Joseph brings his wealth of experience in semiconductor research and business development to the table, discussing the revolutionary application of existing frequencies like 2.4 GHz and 5 GHz for tasks including human activity recognition and intruder detection. This episode unpacks how Channel State Information (CSI) is at the heart of extracting precise data for machine learning, while also addressing the technical hurdles of hardware optimization and interference management.

Discover how increasing the degrees of freedom in Wi-Fi systems can be a game-changer for radio frequency technology. Joseph explains how adding more channels or phase coordination expands the sample space for channel information, paving the way for more efficient decision-making. We explore solutions like transmitter-side coding and the impact of transmission models like OFDM and OFDMA on Wi-Fi sensing capabilities. Joseph paints a vivid picture of a future where Wi-Fi sensing becomes not only more accurate but also more cost-effective and accessible, making it a promising feature in both today’s Wi-Fi technologies and upcoming 6G systems. Whether for robotics or enhancing room-scale environments, the insights shared in this episode offer a glimpse into an exciting wireless frontier.
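As a rough illustration of the CSI idea discussed above (an assumed toy example, not from the episode): each CSI snapshot is one complex channel gain per OFDM subcarrier, and motion in a room perturbs the amplitude pattern across subcarriers. The sketch below reduces each snapshot to its amplitude variance and learns a “still” vs. “motion” threshold from labeled snapshots.

```python
# Assumed toy example of CSI-based Wi-Fi sensing.

def amplitude_variance(csi):
    # csi: one complex channel gain per OFDM subcarrier.
    amps = [abs(h) for h in csi]
    mean = sum(amps) / len(amps)
    return sum((a - mean) ** 2 for a in amps) / len(amps)

def fit_threshold(labeled):
    # labeled: list of (csi_snapshot, "still" | "motion") pairs.
    still = [amplitude_variance(c) for c, y in labeled if y == "still"]
    motion = [amplitude_variance(c) for c, y in labeled if y == "motion"]
    # Split the gap between the busiest still and calmest motion case.
    return (max(still) + min(motion)) / 2

def classify(csi, threshold):
    return "motion" if amplitude_variance(csi) > threshold else "still"

# Labeled snapshots (4 subcarriers each) gathered during calibration.
calibration = [
    ([1.0+0j, 1.1+0j, 0.9+0j, 1.0+0j], "still"),
    ([2.0+0j, 0.5+0j, 3.0+0j, 0.1+0j], "motion"),
]
threshold = fit_threshold(calibration)
```

With this calibration, a flat snapshot like `[1.0+0j]*4` classifies as `"still"`, while one with widely spread amplitudes classifies as `"motion"`. Real CSI pipelines feed far richer features – per-subcarrier time series and phase – into trained models.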

Edge AI Technology Report: Generative AI Edition