GENERATIVE EDGE AI: Architectures, Agents & Apps – DAY 1

The Edge AI Revolution is Here

The cloud’s dominance is being challenged—and the Generative Edge AI community is leading the charge.

Across three groundbreaking EDGE AI FOUNDATION forums, we’ve witnessed a seismic shift: the cloud must evolve beyond its role as a centralized AI powerhouse. After years of fixed-function AI investments at the edge, a new era of practical innovation is exploding into the gap—and it’s happening faster than anyone predicted.

What We’ve Proven

Forum Three revealed the game-changer: power-efficient hardware now runs generative AI directly at the edge, supported by robust on-premises solutions. This isn’t incremental progress—it’s liberation.

What’s Next

Forum Four raises the stakes even higher. We’re convening the brightest minds to unveil:

– Next-generation algorithms operating at both generative and reasoning levels
– Revolutionary power-efficient hardware that brings AI within arm’s reach of users
– Open development suites democratizing innovation
– Transformative applications leveraging distributed agentic architectures
– Near-zero latency connectivity making real-time intelligence truly real-time

Tune in to hear these breakthrough thought leaders:

Day 1 — Agentic Systems, Optimization & Toolchains (8:00–12:10 PT)

08:00–08:05 Welcome (organizers)

08:05–08:20 Fireside chat – Generative Edge AI in Automotive – Marin Kellner (McKinsey) / Pete Bernard (EDGE AI FOUNDATION)

Session 1 — Optimization & Co-Design for Efficient Edge GenAI
08:20–08:45 Tinoosh Mohsenin (Johns Hopkins University) — E2EdgeGenAI

08:45–09:10 José Cano (University of Glasgow) — Accelerating LLMs at the Edge (HW–SW Co-Design)

09:10–09:30 Thomas Ziereis (Roofline AI) — Compiling & Running SLMs on Edge

09:30–09:50 Andrea Basso (MITO Technology) — Small LMs on Resource-Constrained Platforms

Session 2 — Agentic & Distributed Edge Systems
09:50–10:15 SiYoung Jang (Nokia Bell Labs) — Distributed SLM-based Agentic AI for the Edge

10:15–10:35 Marcus Rueb (EnBW) — Agent Systems on the Edge

10:35–10:55 Pratik Sharda (CraftifAI) — Agentic AI in Action

Session 3 — Foundations & Toolchains
10:55–11:15 Mathias Lechner (LiquidAI) — LFM2: Designing the next-generation foundation model architecture for edge AI

11:15–11:40 Parmeet Kohli (Qualcomm) — Backbone Toolchains for GenAI

11:40–12:05 Ashutosh Kumar (Intel) — Edge AI Suites

12:05–12:10 Closing (organizers)

This is the future of AI—decentralized, efficient, and unstoppable. The question isn’t whether to tune in. It’s whether you can afford to miss it.

GENERATIVE EDGE AI: Architectures, Agents & Apps – DAY 2

Day 2 — Sensing, Applications & Platforms (8:00–12:10 PT)
08:00–08:05 Welcome (organizers)

08:05–08:20 Fireside chat – Generative Edge AI in Industry – Jennifer Cooke (IDC) / Pete Bernard (EDGE AI FOUNDATION)

Session 4 — Sensing, Interaction & Health
08:20–08:45 Michele Magno (ETH Zurich) — GenAI at the Edge: Wearables→AVs

08:45–09:05 Xiaofeng Tan (Pison) — Multimodal Hand Gesture Modeling

09:05–09:25 Ritik Shrivastava & Anusha Gopal (BrainChip) — aTENNUate: Real-Time Audio Denoising

09:25–09:50 Luigi Occhipinti (Cambridge) — Artificial Sensor Intelligence & Health

09:50–10:15 Giovanni Scapellato (STMicroelectronics) — GenAI for Biosensors/Cardio

Session 5 — Endpoints, Industrial & Ecosystems

10:15–10:40 Davis Sawyer (NXP) — Industrial Edge: Old Meets New

10:40–11:00 Henrik Flodell (Alif Semiconductor) — GenAI at the Edge for Endpoints

11:00–11:25 David Cuartielles (Arduino) — Proposal of workflow and software architecture for the development of complex EdgeAI applications

11:25–11:45 Zechun Liu (Meta) — Advancing Large Language Models in Resource-Constrained Environments

11:45–12:00 Closing (organizers)


EDGE AI Blueprints: VillageOS™: Generative AI for Sustainable, Autonomous Housing

VillageOS™ is machine learning software for the generative design and autonomous operation of climate-resilient, self-sustainable housing developments around the world.

EDGE AI Talks: Fully Automated Model Generation for Edge Devices

EDGE AI Talks: Faster Time-To-Device with Embedl Hub

Taking models from development to production at the edge is hard, sometimes really hard. Phones, SoCs, and devboards each introduce their own constraints that make performance unpredictable.

This session presents a practical, vendor-neutral workflow for on-device verification based on remote device execution and performance profiling. The approach replaces trial-and-error with repeatable processes, enables fair comparisons across hardware, and delivers on-device data that empowers developers to make informed design decisions.
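
For a concrete sense of what such a workflow can look like, here is a minimal sketch in Python, assuming a hypothetical device-farm client. The device names, the submit_job() helper, and the synthetic latency numbers are illustrative placeholders, not the Embedl Hub API.

    # Minimal sketch of a remote-execution benchmarking loop. Everything here is a
    # hypothetical placeholder (device names, submit_job(), synthetic latencies),
    # not the Embedl Hub API.
    import random
    import statistics

    DEVICES = ["phone-soc-a", "devboard-npu-b"]  # illustrative device identifiers

    def submit_job(model_path: str, device: str, runs: int = 50) -> list[float]:
        """Stand-in for a device-farm client: run the model remotely and return
        per-inference latencies in milliseconds (synthetic values here)."""
        random.seed(hash((model_path, device)) & 0xFFFF)
        return [random.uniform(5.0, 25.0) for _ in range(runs)]

    def compare(model_path: str) -> None:
        # Same model, same input pipeline, different hardware: the goal is a fair,
        # repeatable comparison rather than per-board trial and error.
        for device in DEVICES:
            latencies = sorted(submit_job(model_path, device))
            p50 = statistics.median(latencies)
            p95 = latencies[int(0.95 * len(latencies))]
            print(f"{device}: p50={p50:.1f} ms  p95={p95:.1f} ms")

    compare("mobilenet_v3.tflite")

The structure, not the placeholder numbers, is the point: the same script runs unchanged against every device, which is what makes the comparison repeatable.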

EDGE AI TALKS: Enabling Heterogeneous Compute on Edge-AI Systems

Local AI model execution is essential for time-critical and data-sensitive edge AI applications. Modern edge-AI chips offer improved performance by integrating heterogeneous compute cores—such as CPUs, GPUs, and NPUs—into a single system. However, current deployment frameworks fall short in effectively leveraging such heterogeneous platforms.

We present a flexible and lightweight SDK designed to address this gap. Our SDK supports ahead-of-time model compilation for a range of formats, including PyTorch (custom or Hugging Face), TensorFlow Lite, and ONNX. It abstracts the complexities of heterogeneous systems, enabling seamless deployment across multiple compute targets without requiring specialized hardware knowledge.

With a Python-friendly API, the SDK empowers developers to switch workloads dynamically between different cores—for example, balancing inference between CPU and GPU on ARM-based SoCs. We demonstrate how this capability impacts model throughput for both image classification tasks and on-device generative AI workloads.
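
As a rough illustration of what a Python-friendly, target-switching API can look like, here is a small sketch. The EdgeRuntime class, target names, and placeholder model are assumptions for illustration only and do not reflect the SDK presented in the talk.

    # Hypothetical shape of a heterogeneous-dispatch API; EdgeRuntime, the target
    # names, and the placeholder model are illustrative, not the SDK from the talk.
    import numpy as np

    class EdgeRuntime:
        """Toy dispatcher that hides which core actually executes the model."""

        def __init__(self, compiled_model):
            self._model = compiled_model  # imagine an ahead-of-time compiled artifact

        def run(self, inputs: np.ndarray, target: str = "cpu") -> np.ndarray:
            # A real SDK would hand off to a CPU, GPU, or NPU backend here; this
            # sketch just tags the call so the control flow is visible.
            assert target in {"cpu", "gpu", "npu"}, f"unknown target: {target}"
            print(f"running on {target}")
            return self._model(inputs)

    # The same compiled model, moved between cores per request: e.g. keep
    # latency-critical frames on the NPU and background batches on the CPU.
    compiled = lambda x: x.mean(axis=-1)          # placeholder "model"
    runtime = EdgeRuntime(compiled)
    frame = np.random.rand(1, 224, 224, 3).astype(np.float32)
    runtime.run(frame, target="npu")   # latency-critical path
    runtime.run(frame, target="cpu")   # throughput/background path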

EDGE AI TALKS: Accelerating Edge AI with Intel’s Open Edge Platform

The Open Edge Platform from Intel brings together a comprehensive software stack, available royalty-free on GitHub, to help accelerate edge AI solution development, deployment, and management for vision AI, time series, and generative AI applications. The Edge AI Suites within the Open Edge Platform provide curated, industry-specific building blocks that help you build the edge-native, optimized use cases that matter to your organization. In this session, you will learn about some of these key AI suites and how to leverage them in your application development.

EDGE AI TALKS: Advancing Ultra-Low-Power Edge AI with EMASS & ECS-DoT

As edge devices push the limits of performance and battery life, EMASS’s “Atoms-to-Apps” vision connects every layer of the stack. ECS-DoT is the embodiment of that philosophy: an ultra-efficient, event-driven SoC that fuses tightly interwoven compute and memory architectures with a lean AI software toolchain—purpose-built for intelligent sensing at the very edge.

This talk explores how ECS-DoT collapses the traditional trade-off between energy and intelligence, delivering always-on sensing, context-aware decision-making, and instant power-gated wake-up. Whether extending drone flight time or enabling wearables to monitor health for days on a single charge, ECS-DoT proves that true edge AI excellence starts at the atom and ends in the app—with no energy wasted in between.

EDGE AI TALKS: Write Once, Deploy Everywhere: The Model-Agnostic AI Deployment

We unveil the principles behind our model-agnostic AI deployment framework, OwLite, which lets developers reuse a single pipeline for any neural network on diverse edge devices. By abstracting away device-specific quirks and conversion scripts, teams can eliminate redundant engineering effort and focus on optimizing core model performance. We’ll discuss how this unified approach accelerates roll-out, simplifies maintenance, and future-proofs your AI infrastructure against an evolving AI landscape.
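
To make the single-pipeline idea concrete, here is a conceptual sketch of one deployment function shared across targets. The deploy() function, target names, and step names are hypothetical and do not reflect the OwLite API.

    # Conceptual "write once, deploy everywhere" pipeline. The deploy() function,
    # target list, and step names are hypothetical, not the OwLite API.
    from pathlib import Path

    SUPPORTED_TARGETS = {"arm-cpu", "mali-gpu", "generic-npu"}  # illustrative targets

    def deploy(model_path: str, target: str, quantize: bool = True) -> Path:
        """One pipeline for any model/target pair: load, optionally quantize,
        convert to the target's runtime format, and emit a deployable artifact."""
        if target not in SUPPORTED_TARGETS:
            raise ValueError(f"unsupported target: {target}")
        # In a real framework each step would be a backend plugin; keeping them
        # behind one interface is what removes per-device conversion scripts.
        steps = ["load", "quantize" if quantize else "skip-quantize", "convert", "package"]
        artifact = Path(f"{Path(model_path).stem}.{target}.bin")
        print(f"{model_path} -> {artifact.name}: " + " -> ".join(steps))
        return artifact

    # The same call covers every device; supporting new hardware means adding a
    # backend, not rewriting the pipeline.
    for tgt in sorted(SUPPORTED_TARGETS):
        deploy("classifier.onnx", tgt)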

EDGE AI TALKS: Generative AI at the Edge: Challenges and Opportunities

As generative AI moves from the cloud to the edge, balancing safety, efficiency, and performance becomes increasingly complex. This livestream will explore critical innovations—such as federated learning, multi-agent systems, and embodied AI—that are shaping the future of edge-native foundation models. Join us to discuss how engineers and researchers must evolve AI infrastructure into a full-fledged systems discipline to enable adaptive, context-aware intelligence across diverse edge environments.