BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Drupal//recurring_events_ical//2.0//EN
BEGIN:VEVENT
UID:54a3796f-ef9b-4786-9b7e-f6ae60e343bc@support.access-ci.org
DTSTAMP:20260331T132652Z
DTSTART:20260415T180000Z
DTEND:20260415T190000Z
SUMMARY:Neocortex Seminar Series: Physics-Aware AI at Scale: Neural Compres
 sion and Vision Transformers for Simulation Data
DESCRIPTION:We are pleased to invite you to a new delivery of the
  Neocortex Seminar Series, highlighting research and applications
  enabled by the specialized accelerator featured in the NSF-funded
  Neocortex project, the Cerebras Wafer Scale Engine. This series
  showcases user experiences, best practices, and emerging workflows
  from the community.\n\nTalk details:\nTitle: Physics-Aware AI at
  Scale: Neural Compression and Vision Transformers for Simulation
  Data\nSpeaker: Jessica Ezemba, Carnegie Mellon University\nDate:
  April 15th, 2026\nTime: 2:00 PM ET\nLocation / Format: Zoom\n
 \nAbstract:\nModern scientific computing generates terabyte-scale
  simulation data across physics domains, yet researchers lack
  efficient tools for storage, retrieval, and analysis. This work
  presents two complementary approaches to addressing this
  bottleneck, both leveraging wafer-scale computing on the Cerebras
  CS-3. First, SINCPS (Semantic-aware Implicit Neural Compression
  for Physics Simulations) compresses physics simulation data by 150×
  to 25,000× using implicit neural representations, reducing
  per-dataset training time to 2 to 3 hours while preserving
  physics-critical conservation laws across 22 datasets from The Well
  benchmark. Second, PhySiViT, a domain-specific Vision Transformer
  trained on approximately 7 million physics simulation images from
  The Well in just 22 hours, produces embeddings with distinct
  physics-informed structure, outperforming general-purpose models
  like CLIP and DINOv2 on physics-specific tasks, achieving 43%
  better temporal forecasting (R² = 0.33 vs. 0.23) and superior
  physics clustering (silhouette score = 0.23 vs. 0.20). Together,
  these systems form a pipeline for transforming large simulation
  archives into compact, queryable representations suitable for
  downstream machine learning workflows, demonstrating that
  domain-focused models trained efficiently on specialized hardware
  can outperform general-purpose counterparts on scientific tasks.
 \n\nSpeaker Bio:\nJessica Ezemba is a Ph.D. candidate in Mechanical
  Engineering at Carnegie Mellon University, where she is advised by
  Dr. Christopher McComb and Dr. Conrad Tucker. Her research sits at
  the intersection of artificial intelligence and engineering
  design, with a focus on making the design process faster and less
  prone to errors. She is particularly interested in engineering
  simulations, a critical yet time-consuming and expertise-dependent
  stage of design where products are tested before manufacturing.
  Her work investigates how AI can accelerate simulation
  interpretation by enabling faster integration of multidisciplinary
  expertise. Through developing benchmarks, foundation models, and
  surrogate modeling approaches, Jessica has demonstrated that
  current AI tools struggle to understand engineering simulations,
  motivating her ongoing work on alternative paradigms, including
  agentic workflows that keep humans in the design loop while
  enabling automated understanding. She has leveraged wafer-scale
  computing on the Cerebras CS-3 to develop physics-focused
  foundation models and neural compression methods for large-scale
  simulation data, including PhySiViT, a domain-specific Vision
  Transformer for physics simulations, and SINCPS, an implicit
  neural compression framework achieving up to 25,000× compression
  on terabyte-scale simulation archives. She has collaborated with
  industry partners including Ansys, and her research has been
  published in venues including the ASME Journal of Mechanical
  Design and the ACM/IEEE Supercomputing Conference (SC).
 \n\nRegistration:\nRegister here:
  https://cmu.zoom.us/meeting/register/s6v2-Zu_QxCeM6eq46VR6w\nAfter
  registering, you will receive a confirmation email with details on
  joining the session.\n\nFor questions about this event or the
  Neocortex Seminar Series, please contact us at neocortex@psc.edu.
 \n\nAbout the Neocortex Seminar Series:\nThe Neocortex Seminar Series
  showcases talks from researchers and practitioners leveraging the
  specialized accelerator featured in the NSF-funded Neocortex
  project (the Cerebras Wafer-Scale Engine) to advance AI, machine
  learning, and data-intensive workloads. The series shares practical
  experiences, fosters community engagement, and encourages
  researchers, engineers, and members of the academic community to
  explore and adopt these cutting-edge capabilities.\n\n-- The
  Neocortex team.\n\nLinks of interest:\nNeocortex Slack User Support
  \nNeocortex Portal\nCalendly AI Office-hours\nCalendly SDK
  Office-hours
URL:https://support.access-ci.org/events/9033
END:VEVENT
END:VCALENDAR