Archives: Livestream
We’ll cover sensor-based ML project structure, feature exploration concepts with examples, common principles of model evaluation frameworks, and the deployment concept (e.g., how embedded/edge deployment works in the industry, along with standard considerations and constraints), closing with a conceptual demo of the workflow steps.
Highlights and interviews live from the show floor of Embedded World in Nuremberg, Germany, with industry leaders in edge AI from Qualcomm, Microchip, Intel, Advantech, Avnet, Ceva, EMASS, femto.AI, Dell, and more!
Plenty of surprise guests will be dropping by... get your questions ready!
A livestream recap of what happened down in San Diego during this amazing EDGE AI event – with Pete Bernard and Ed Doran, plus special guests!
Compiler design has evolved through distinct generations. Early compilers translated human-written assembly into machine code (Compiler 1.0). Optimizing compilers such as GCC and LLVM/Clang automated code transformation and hardware-specific lowering for CPUs (Compiler 2.0). More recent ML compilers, including XLA, TVM, TorchInductor, and MLIR-based stacks, shifted compilation from programs to computational graphs, enabling operator fusion and accelerator-specific kernel generation for GPUs and NPUs (Compiler 3.0). Despite these advances, Compiler 3.0 systems face scalability limits: the growing diversity of model architectures and hardware targets creates a combinatorial optimization problem that cannot be solved efficiently with static heuristics or bounded kernel search. At yasp we are building yasp.compile, a Compiler 4.0: an agentic ML compiler paradigm that reasons explicitly about hardware constraints and generates low-level implementations tailored to a specific model–hardware pair. By combining hardware-aware graph optimization with learned code generation and cost modeling, yasp.compile aims to reduce manual kernel engineering and improve adaptability across heterogeneous edge accelerators.
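To make the operator-fusion idea concrete, here is a minimal, illustrative sketch (not code from yasp.compile or any named compiler): an unfused pipeline materializes an intermediate tensor between two elementwise ops, while a fused version computes both ops in a single pass over the data, cutting memory traffic.

```python
import numpy as np

# Unfused: ReLU then scale, with an intermediate array written to and
# read back from memory between the two operators.
def relu_then_scale_unfused(x, scale):
    y = np.maximum(x, 0.0)  # intermediate buffer materialized here
    return y * scale        # second full pass over the data

# Fused: both operators applied element by element in one pass,
# with no intermediate buffer (what an ML compiler's fused kernel does).
def relu_then_scale_fused(x, scale):
    out = np.empty_like(x)
    for i in range(x.size):
        v = x.flat[i]
        out.flat[i] = (v if v > 0.0 else 0.0) * scale
    return out

x = np.array([-1.0, 2.0, -3.0, 4.0])
print(relu_then_scale_unfused(x, 2.0))  # → [0. 4. 0. 8.]
print(relu_then_scale_fused(x, 2.0))    # → [0. 4. 0. 8.]
```

In a real compiler the fused loop is generated as a hardware-specific kernel rather than interpreted Python, but the memory-traffic argument is the same.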
Explore the power of Arduino UNO Q, the new hybrid board combining a microprocessor and a microcontroller in the iconic UNO form factor. In this hands-on session, we’ll dive into its dual-core capabilities for intelligent and connected projects. We will demonstrate how to move from a simple “Blink” to a “Thinking” system, showing how to deploy AI models locally and manage complex workloads directly at the edge.
As edge AI systems scale, the limitations of traditional von Neumann computing—separate memory and processing, high data movement, and power inefficiency—are becoming increasingly apparent. Neuromorphic computing offers a fundamentally different approach, inspired by the structure and operation of the human brain, enabling event-driven, ultra-low-power, real-time intelligence at the edge.
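As a rough intuition for "event-driven" computation, here is a toy leaky integrate-and-fire (LIF) neuron sketch; the constants (`leak`, `threshold`) and the function name are illustrative, not taken from any particular neuromorphic platform. The neuron does meaningful work only when input events arrive and fires only once enough charge has accumulated, which is the mechanism behind the ultra-low-power claims.

```python
# Toy leaky integrate-and-fire neuron (illustrative constants).
def lif_run(events, leak=0.9, threshold=1.0):
    """Return the time steps at which the neuron spikes."""
    v = 0.0          # membrane potential
    spike_times = []
    for t, inp in enumerate(events):
        v = v * leak + inp       # leak a little, then integrate the input event
        if v >= threshold:       # fire once the potential crosses threshold
            spike_times.append(t)
            v = 0.0              # reset after the spike
    return spike_times

# Sparse input stream: the neuron spikes only after inputs accumulate.
print(lif_run([0.0, 0.6, 0.6, 0.0, 0.0, 0.6]))  # → [2]
```

Silent time steps cost (almost) nothing, in contrast to a von Neumann pipeline that processes every sample at a fixed rate regardless of activity.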
In this inaugural EDGE AI Neuromorphic Livestream, we bring together industry leaders, researchers, and system builders to explore how neuromorphic AI is moving from research into real-world deployment. The session will examine architectures, sensing and control applications, training methods, and benchmarking practices across both small-scale and large-scale systems.
Designed for technologists, researchers, and decision-makers, this livestream will provide practical insights into where neuromorphic AI delivers real value today—and where it is headed next.
MemryX has redefined Edge AI with a revolutionary compute-in-memory dataflow architecture and robust software toolkit. Learn how this innovative approach delivers better than GPU-level performance in a low-cost, scalable platform tailored for Edge applications. With MemryX’s intuitive SDK, developers can easily port and optimize existing AI models, unlocking efficiency and performance for virtually any Edge AI challenge.
