January 10, 2025
2025-01-10 13:00:00
2025-01-10 14:00:00
America/New_York
THESIS DEFENSE: Contact-aware and multi-modal robotic manipulation
Access instructions for 32-G882: On the first floor of Building 32, take the elevator located next to the café. Proceed to the 8th floor, turn left, and continue straight ahead.

Thesis Committee: Edward Adelson, John Leonard, Kaiming He, Faez Ahmed

Abstract:
Intelligent robotic manipulation has advanced significantly in recent years, driven by progress in foundational cognitive models, sensor-fusion techniques, and improvements in actuators and sensors. However, most contemporary robotic systems still lack the ability to effectively recognize and understand contact dynamics, which are critical for performing manipulation tasks beyond basic pick-and-place operations. This thesis argues, and demonstrates, that contact awareness is essential for the successful deployment of robotic systems, not only in structured environments such as factories but also in unstructured settings like domestic households. Achieving contact awareness requires advances in three key areas: improved contact-sensing hardware, more expressive frameworks for representing and interpreting contact information, and efficient modality-fusion algorithms that integrate these capabilities into robotic action planning. This thesis addresses these challenges by (1) proposing novel mechanical designs that enable touch sensors to adopt more compact and versatile forms while enhancing their durability and manufacturability, (2) introducing a foundational representation learning framework capable of learning a shared tactile latent representation that can be transferred across different sensors and downstream tasks, and (3) developing a compositional diffusion-based approach for action prediction that integrates tactile sensing signals with other perception modalities, thereby enabling learning across diverse environments and promoting policy reuse. Along the way, this thesis demonstrates that tactile sensors can be both compact and versatile, challenging common perceptions to the contrary. It also establishes that tactile sensing is indispensable not only for high-precision tasks, such as electronics assembly, but also for everyday activities, including cooking and tool usage.

Zoom: https://mit.zoom.us/j/99109861981
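As a rough sketch of the shared-latent idea in (2), where sensor-specific encoder stems feed one latent space that downstream task heads consume, here is a generic illustration (the sensor names, dimensions, and grasp-stability head are invented for this example; it is not the thesis architecture):

    import torch
    import torch.nn as nn

    class SharedTactileEncoder(nn.Module):
        """Sensor-specific stems map raw readings into one shared latent
        space; task heads see only the latent, so they can transfer
        across sensors without retraining from scratch."""
        def __init__(self, latent_dim=64):
            super().__init__()
            self.stems = nn.ModuleDict({
                "sensor_a": nn.Sequential(nn.Linear(128, 256), nn.ReLU(),
                                          nn.Linear(256, latent_dim)),
                "sensor_b": nn.Sequential(nn.Linear(32, 256), nn.ReLU(),
                                          nn.Linear(256, latent_dim)),
            })
            self.grasp_head = nn.Linear(latent_dim, 1)  # hypothetical task head

        def forward(self, sensor, reading):
            z = self.stems[sensor](reading)  # shared tactile latent
            return self.grasp_head(z)

    model = SharedTactileEncoder()
    print(model("sensor_a", torch.randn(4, 128)).shape)  # torch.Size([4, 1])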
32-G882 (Stata Center, Hewlett Room)
January 16, 2025
2025-01-16 9:30:00
2025-01-16 11:00:00
America/New_York
THESIS DEFENSE - Beichen Li: Quality-Centric Single-Image Procedural Material Generation
Procedural materials, represented as functional node graphs of texture generation and image processing operators, are ubiquitous in modern computer graphics production for photorealistic material appearance design. They allow users to perform intuitive and precise editing to achieve desired visual appearances. However, even for experienced artists, creating a procedural material to visually match an input image requires professional knowledge and significant effort. Current inverse procedural material modeling approaches employ multi-modal Transformers to automatically generate procedural materials from single flash photos of object surfaces. However, the generated materials are fundamentally limited in visual quality due to: 1) insufficient high-quality training data from artist-created materials; 2) a lack of visual feedback in token-space supervised training; 3) the absence of approximation-free node parameter post-optimization for noise/pattern generator nodes. My thesis proposes advanced dataset augmentation techniques, training methodologies, and parameter post-optimization workflows to address these challenges, significantly improving the perceptual match between the generated procedural material and the input image. Furthermore, the research ideas are applicable to other inverse design problems in procedural graphics to expedite similar artistic creation processes.
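To give a feel for what node-parameter post-optimization means in this setting, here is a toy, gradient-free sketch: a one-parameter "pattern generator" node whose output cannot be differentiated (it thresholds), matched to a target image by direct parameter search. Everything here, including the generator itself, is invented for illustration and is not the workflow proposed in the thesis:

    import numpy as np

    def pattern_node(frequency, size=64):
        """Toy procedural node: a thresholded sinusoidal pattern.
        The threshold makes it non-differentiable in its parameter."""
        xs = np.linspace(0.0, 1.0, size)
        grid = (np.sin(2 * np.pi * frequency * xs)[None, :] *
                np.sin(2 * np.pi * frequency * xs)[:, None])
        return (grid > 0).astype(float)

    target = pattern_node(frequency=7.0)  # stands in for the input photo

    # Because the node is non-differentiable, search its parameter directly
    # instead of backpropagating an approximation through the node graph.
    best_f = min(np.arange(1.0, 16.0, 0.25),
                 key=lambda f: np.mean((pattern_node(f) - target) ** 2))
    print(best_f)  # recovers 7.0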
32-G882
January 22, 2025
Thesis Defense: Taming Data Movement Overheads in Latency-Critical Cloud Services
Nikita Lazarev
CSAIL
2025-01-22 15:00:00
2025-01-22 16:30:00
America/New_York
Cloud providers are being urged to enhance the efficiency, performance, and reliability of datacenter infrastructures to support applications across many domains with diverse quality-of-service requirements. Data movement is a significant source of overhead in today's servers, and it is particularly critical for recently emerging interactive and real-time cloud applications. In this thesis, I investigate and propose a set of novel approaches to mitigating data movement overheads in general-purpose datacenters. This establishes a roadmap towards more efficient and reliable cloud services that are today severely bottlenecked by data movement. In particular, I propose, implement, and evaluate three systems for applications in (1) microservices, (2) serverless, and (3) real-time cloud-native services, using the example of virtualized radio access networks (vRAN), which are known to challenge existing cloud infrastructures.

First, we discuss Dagger, a system for mitigating the overheads of remote procedure calls in interactive cloud microservices. Dagger introduces a novel yet practical solution enabling fast, low-latency communication between distributed fine-grained application components. We then present Sabre, a practical and efficient system for mitigating the challenging overhead of cold starts in serverless computing. Sabre relies on emerging tightly-coupled compression accelerators and dramatically reduces the latency of page movement in serverless microVMs without increasing CPU cost. Finally, we build Slingshot, the first infrastructure, to the best of our knowledge, that enables fault tolerance in real-time cloud-native services such as vRAN. With Slingshot, we make substantial progress towards deploying reliable distributed systems that operate in real time in the general-purpose cloud by addressing the key challenges of fast state migration, real-time fault detection, and low-latency disaggregation.

Thesis Committee: Christina Delimitrou (MIT), Zhiru Zhang (Cornell University), Mohammad Alizadeh (MIT)
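As a back-of-the-envelope illustration of the idea behind Sabre (moving compressed pages trades a little compute for much less data movement), here is a sketch using Python's generic zlib as a stand-in for a tightly-coupled compression accelerator; the page contents and compression level are arbitrary, and none of this is code from the thesis:

    import zlib

    # One synthetic, somewhat compressible 4 KiB guest-memory page.
    page = (b"serverless cold start " * 200)[:4096]

    compressed = zlib.compress(page, level=1)  # accelerator stand-in
    assert zlib.decompress(compressed) == page
    print(len(page), "->", len(compressed), "bytes moved per page")

Restoring a microVM snapshot moves many such pages, so shrinking each page shrinks total data movement; the approach pays off only when (de)compression is cheap enough, which is exactly what a tightly-coupled hardware accelerator is meant to provide.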
TBD
[Scale ML + MLSys Reading Group] Hymba: A Hybrid-head Architecture for Small Language Models
Xin Dong
NVIDIA
2025-01-22 16:00:00
2025-01-22 17:00:00
America/New_York
Speaker: Xin Dong
Topic: Hymba: A Hybrid-head Architecture for Small Language Models
Date: Wednesday, Jan 22
Time: 4:00 PM (EST)
Zoom: https://mit.zoom.us/j/91697262920 (password: mitmlscale)

Abstract:
We propose Hymba, a family of small language models featuring a hybrid-head parallel architecture that integrates transformer attention mechanisms with state space models (SSMs) for enhanced efficiency. Attention heads provide high-resolution recall, while SSM heads enable efficient context summarization. Additionally, we introduce learnable meta tokens that are prepended to prompts, storing critical information and alleviating the "forced-to-attend" burden associated with attention mechanisms. The model is further optimized by incorporating cross-layer key-value (KV) sharing and partial sliding-window attention, resulting in a compact cache size. During development, we conducted a controlled study comparing various architectures under identical settings and observed significant advantages for our proposed architecture. Notably, Hymba achieves state-of-the-art results for small LMs: our Hymba-1.5B-Base model surpasses all sub-2B public models in performance and even outperforms Llama-3.2-3B, with 1.32% higher average accuracy, an 11.67x cache size reduction, and 3.49x higher throughput.

Bio:
Xin Dong is a research scientist at NVIDIA Research interested in designing accurate, efficient, and trustworthy systems for LLMs and foundation models. Xin received his PhD in Computer Science from Harvard University in 2023, advised by Professor H.T. Kung.
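A minimal numerical sketch of the hybrid-head idea described above: meta tokens prepended to the input, a causal attention head and a cheap diagonal-SSM head run in parallel over the same sequence, and their outputs fused. The dimensions, random weights, and simple mean fusion are toy choices for illustration, not NVIDIA's implementation:

    import numpy as np

    rng = np.random.default_rng(0)
    T, d = 6, 8                        # sequence length, model width
    meta = rng.normal(size=(2, d))     # learnable meta tokens, prepended
    x = np.vstack([meta, rng.normal(size=(T, d))])   # (2 + T, d)
    n = len(x)

    def softmax(s):
        e = np.exp(s - s.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    # Attention head: high-resolution recall over the whole prefix.
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
    scores = (x @ Wq) @ (x @ Wk).T / np.sqrt(d)
    scores[~np.tril(np.ones((n, n), dtype=bool))] = -np.inf  # causal mask
    attn_out = softmax(scores) @ (x @ Wv)

    # SSM head: a diagonal linear recurrence that summarizes the prefix
    # with a constant-size state (O(d) work per token, no growing cache).
    a = rng.uniform(0.8, 0.99, size=d)   # per-channel decay
    h = np.zeros(d)
    ssm_out = np.empty_like(x)
    for t in range(n):
        h = a * h + x[t]
        ssm_out[t] = h

    # Hybrid head: fuse the two parallel paths (a learned gate in general).
    out = 0.5 * attn_out + 0.5 * ssm_out
    print(out.shape)   # (8, 8): 2 meta + 6 real tokens, width d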
TBD
January 30, 2025
Charles River Crypto Day @ MIT
Srini Devadas, Salil Vadhan and Nadia Heninger
MIT, Harvard and UC San Diego
2025-01-30 9:30:00
2025-01-30 16:30:00
America/New_York
MIT's Schwarzman College of Computing is pleased to announce a day-long workshop surveying recent developments in cryptography and computer security. The event will feature invited talks by Florian Tramer (ETH Zurich), Salil Vadhan (Harvard), and Nadia Heninger (UC San Diego). In addition, MIT faculty member Srini Devadas and three MIT PhD students (Noah Golowich, Alexandra Henzinger, and Seyoon Ragavan) will give an overview of research activity in these areas on campus.

The event is open to the public, so please share this invitation widely. *Please register at the link below so that we can make sure that there is enough coffee and food for everyone.*

~~~ Logistical Information ~~~

Date: Thursday, January 30, 2025
Time: 9:30am-4:30pm
Location: MIT Building 45, 8th Floor, 51 Vassar Street, Cambridge, MA 02139
Please register here: https://forms.gle/BRjVYFrkuS9ym9VM9

~~~ Program ~~~

9:30-10:30am: Srini Devadas (MIT): "Designing Hardware for Cryptography and Cryptography for Hardware"
10:30-11:00am: Coffee break
11:00am-12:00pm: Invited talk: Salil Vadhan (Harvard): "Multicalibration: A New Tool for Security Proofs in Cryptography"
12:00-1:30pm: Lunch (provided)
1:30-3:00pm: MIT Student Talks
1:30-2:00pm: Noah Golowich: "Edit Distance Robust Watermarking"
2:00-2:30pm: Alexandra Henzinger: "Somewhat Homomorphic Encryption from Sparse LPN"
2:30-3:00pm: Seyoon Ragavan: "Factoring with a Quantum Computer: The State of the Art"
3:00-3:30pm: Coffee break
3:30-4:30pm: Invited talk: Nadia Heninger (UC San Diego): "Cryptanalynomics"

~~~ Abstracts for Hour-Long Talks ~~~

Title: "Designing Hardware for Cryptography and Cryptography for Hardware"
Speaker: Srini Devadas (MIT)
Abstract:
There have been few high-impact deployments of hardware implementations of cryptographic primitives. We present the benefits and challenges of hardware acceleration of sophisticated cryptographic primitives and protocols, and describe our past work on accelerating fully homomorphic encryption. We argue that there is significant potential for synergistic codesign of cryptography and hardware, where customized hardware accelerates cryptographic protocols that are designed with hardware acceleration in mind. As a concrete example, we present a new design of a zero-knowledge proof (ZKP) accelerator that leverages hardware-algorithm co-design to generate proofs 500 times faster than a 32-core CPU.
This work was done in collaboration with Simon Langowski, Nikola Samardzic, and Daniel Sanchez.

***

Title: "Multicalibration: A New Tool for Security Proofs in Cryptography"
Speaker: Salil Vadhan (Harvard)
Abstract:
In this talk, I will describe how multicalibration, a new concept arising from the algorithmic fairness literature, is a powerful tool for security proofs in cryptography. Specifically, the Multicalibration Theorem of [Hébert-Johnson-Kim-Reingold-Rothblum '18] asserts that every boolean function g, no matter how complex, is "indistinguishable" from a "simple" randomized function. Specifically, there is a "low-complexity" partition of the domain of g into a small number of pieces such that on almost every piece P_i, if we choose an input X uniformly at random from P_i, (X, g(X)) is computationally indistinguishable from (X, Bernoulli(p_i)), where p_i is the expectation of g on P_i.
As shown by [Dwork-Lee-Lin-Tankala '23], this is a complexity-theoretic analogue of Szemerédi's Regularity Lemma in graph theory, which partitions the vertex set of every graph G into a small number of pieces P_i, such that on almost all pairs P_i x P_j, the graph G is, in a certain sense, indistinguishable from a random bipartite graph with edge density matching that of G on P_i x P_j.
The Multicalibration Theorem allows us to reduce many questions about computational hardness and computational indistinguishability to their information-theoretic analogues. Thus, it can be viewed as a qualitative strengthening of several complexity-theoretic results that were already known to have many applications to security proofs in cryptography, such as Impagliazzo's Hardcore Lemma [Impagliazzo '95, Holenstein '06], the Complexity-Theoretic Dense Model Theorem [Reingold-Trevisan-Tulsiani-Vadhan '08], and the Weak Complexity-Theoretic Regularity/Leakage Simulation Lemma of [Trevisan-Tulsiani-Vadhan '09, Jetchev-Pietrzak '14]. In particular, we show that these latter results all follow easily as corollaries of the Multicalibration Theorem. Furthermore, we also use it to obtain new results characterizing how many samples are required to efficiently distinguish two distributions X and Y in terms of their "pseudo-Hellinger-distance" (or the "pseudo-Rényi-1/2 entropy" of X in case Y is uniform). (A symbolic restatement of the Multicalibration Theorem appears after these abstracts.)
Joint works with Sílvia Casacuberta and Cynthia Dwork, and with Cassandra Marcussen and Louie Putterman.

***

Title: "Cryptanalynomics"
Speaker: Nadia Heninger (UC San Diego)
Abstract:
This talk is a meditation on the current state of cryptanalysis research in public-key cryptography. I will explore the incentives for and against cryptanalysis in the academic community, and how this is reflected in the current state of classical and post-quantum cryptanalysis research. This discussion is informed by my own experience, as well as a pseudorandomly chosen selection of unscientific personal discussions with a variety of researchers across our community.
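In symbols, the guarantee described in Vadhan's abstract can be written roughly as follows (an informal paraphrase for orientation, not the precise statement from [Hébert-Johnson-Kim-Reingold-Rothblum '18]):

    For every $g : \{0,1\}^n \to \{0,1\}$ and every class of efficient
    distinguishers, there is a low-complexity partition
    $\{0,1\}^n = P_1 \cup \dots \cup P_k$ with $k$ small such that,
    for almost every piece $P_i$,
    \[
      (X, g(X)) \;\approx_c\; (X, B_i), \qquad
      X \sim \mathrm{Unif}(P_i), \;
      B_i \sim \mathrm{Bernoulli}(p_i), \;
      p_i = \mathbb{E}_{x \sim P_i}[g(x)],
    \]
    where $\approx_c$ denotes computational indistinguishability.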
MIT Building 45, 8th Floor, 51 Vassar Street
Learning to Reason with LLMs
Noam Brown
OpenAI
2025-01-30 14:00:00
2025-01-30 15:30:00
America/New_York
Large language models (LLMs) have demonstrated remarkable capabilities in generating coherent text and completing various natural language tasks. Nevertheless, their ability to perform complex, general reasoning has remained limited. In this talk, I will describe OpenAI's new o1 model, an LLM trained via reinforcement learning to generate a hidden chain of thought before its response. We have found that the performance of o1 consistently improves with more reinforcement learning compute and with more inference compute. o1 surpasses previous state-of-the-art models in a variety of benchmarks that require reasoning, including mathematics competitions, programming contests, and advanced science question sets. I will discuss the implications of scaling this paradigm even further.
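One generic way to make "more inference compute helps" concrete is best-of-n sampling: draw several candidate solutions and keep the one a scorer likes best. The toy below illustrates that scaling axis only; it is not a description of how o1 actually works, and the noisy solver and exact verifier are invented for the example:

    import random

    def sample_answer(question, rng):
        """Stand-in for one model sample: a noisy solver for a + b."""
        a, b = question
        return a + b + rng.choice([-2, -1, 0, 0, 0, 1, 2])

    def score(question, answer):
        """Stand-in verifier: higher is better (here we can check exactly)."""
        a, b = question
        return -abs((a + b) - answer)

    def best_of_n(question, n, seed=0):
        rng = random.Random(seed)
        candidates = [sample_answer(question, rng) for _ in range(n)]
        return max(candidates, key=lambda ans: score(question, ans))

    question = (17, 25)
    for n in (1, 4, 16):
        print(n, best_of_n(question, n))

With more samples, the chance that at least one candidate is exactly right grows, so spending more inference compute improves the returned answer.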
Seminar Room G449 (Patil/Kiva)
February 05, 2025
Relational Diagrams and the Pattern Expressiveness of Relational Languages
Khoury College of Computer Sciences, Northeastern University
2025-02-05 13:00:00
2025-02-05 14:00:00
America/New_York
ABSTRACT
Comparing relational languages by their logical expressiveness is well understood. Less well understood is how to compare relational languages by their ability to represent relational query patterns. Indeed, what are query patterns other than "a certain way of writing a query"? And how can query patterns be defined across procedural and declarative languages, irrespective of their syntax? To the best of our knowledge, we provide the first semantic definition of relational query patterns by using a variant of structure-preserving mappings between the relational tables of queries. This formalism allows us to analyze the relative pattern expressiveness of relational language fragments and create a hierarchy of languages with equal logical expressiveness yet different pattern expressiveness (an illustrative example appears after the related-work list below). Our language-independent definition of query patterns opens novel paths for assisting database users. For example, these patterns could be leveraged to create visual query representations that faithfully represent query patterns, speed up interpretation, and provide visual feedback during query editing. As a concrete example, we propose Relational Diagrams, a complete and sound diagrammatic representation of safe relational calculus that is provably (i) unambiguous, (ii) relationally complete, and (iii) able to represent all query patterns for unions of non-disjunctive queries. Among all diagrammatic representations for relational queries that we are aware of, ours is the only one with these three properties. Furthermore, our anonymously preregistered user study shows that Relational Diagrams allow users to recognize patterns meaningfully faster and more accurately than SQL.

RELATED WORK

* SIGMOD 2024: "On the Reasonable Effectiveness of Relational Diagrams: Explaining Relational Query Patterns and the Pattern Expressiveness of Relational Languages"
paper: https://dl.acm.org/doi/pdf/10.1145/3639316
full version: https://arxiv.org/pdf/2401.04758
project page: https://relationaldiagrams.com/

* ICDE 2024: "A Comprehensive Tutorial on over 100 years of Diagrammatic Representations of Logical Statements and Relational Queries"
tutorial proposal: https://arxiv.org/pdf/2404.00007
tutorial slides: https://northeastern-datalab.github.io/diagrammatic-representation-tutorial/ICDE_2024-Diagrammatic-Representations-Tutorial.pdf

* "A Principled Solution to the Disjunction Problem of Diagrammatic Query Representations"
https://arxiv.org/pdf/2412.08583
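To illustrate the distinction between logical and pattern expressiveness (an invented textbook example, not one taken from the paper): the two relational calculus queries below return the same answer on every database, so they are logically equivalent, yet they are written with different patterns, universal quantification versus double negation:

    \[
      Q_1 = \{\, s \mid \mathrm{Student}(s) \wedge
            \forall c\, (\mathrm{Course}(c) \rightarrow \mathrm{Takes}(s,c)) \,\}
    \]
    \[
      Q_2 = \{\, s \mid \mathrm{Student}(s) \wedge
            \neg \exists c\, (\mathrm{Course}(c) \wedge \neg \mathrm{Takes}(s,c)) \,\}
    \]

A semantic notion of query patterns must keep $Q_1$ and $Q_2$ apart even though standard logical equivalence identifies them.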
32-G449
2025-02-05 15:00:00
2025-02-05 16:30:00
America/New_York
Thesis Defense: Designing Hardware Accelerators for Solving Sparse Linear Systems - Axel Feldman
Solving systems of linear equations with sparse coefficient matrices is a key primitive that sits at the heart of many important numeric algorithms. Because of this primitive's importance, algorithm designers have spent decades optimizing linear solvers for high-performance hardware. Despite their efforts, however, existing hardware has let them down: state-of-the-art linear solvers often utilize less than 1% of the available compute throughput on existing architectures such as CPUs and GPUs.

There are many different algorithms for solving sparse linear systems. These algorithms are diverse and often have very different computational bottlenecks, including low arithmetic intensity, fine-grained parallelism, frequent control dependences, and sparsity-induced load imbalance.

This thesis studies the problem of designing hardware accelerators for sparse linear solvers. We propose three novel architectures that explore different parts of the design space. First, we introduce Spatula, an architecture designed to accelerate direct solvers. Then, we propose Azul, a hardware accelerator targeted at iterative solvers. Taken together, Spatula and Azul demonstrate significant speedups on both of the main classes of sparse linear solver algorithms. Finally, to show that our techniques are useful for end-to-end applications, we present Ōmeteōtl, an accelerator targeted at applications that use iterative solvers in their inner loop. Ōmeteōtl also shows that the techniques in this thesis generalize to sparse matrix computations beyond linear solvers.

Zoom: https://mit.zoom.us/j/98122373906 (no password)
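For a feel of why these kernels underutilize hardware: the inner loop of an iterative solver such as conjugate gradient is dominated by a sparse matrix-vector product, which performs very little arithmetic per byte of memory traffic. Below is a generic textbook CG on a 1D Poisson system in NumPy (an illustration of the workload's structure, not code from the thesis):

    import numpy as np

    def cg(A_mul, b, tol=1e-8, max_iter=1000):
        """Conjugate gradient for SPD systems; A_mul(x) computes A @ x."""
        x = np.zeros_like(b)
        r = b - A_mul(x)
        p = r.copy()
        rs = r @ r
        for _ in range(max_iter):
            Ap = A_mul(p)              # the memory-bound sparse mat-vec
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    # 1D Poisson matrix (2 on the diagonal, -1 off), applied stencil-wise.
    def A_mul(x):
        y = 2.0 * x
        y[:-1] -= x[1:]
        y[1:] -= x[:-1]
        return y

    b = np.ones(1000)
    x = cg(A_mul, b)
    print(np.max(np.abs(A_mul(x) - b)))  # small residual

Each iteration streams the whole matrix through memory to do roughly one multiply-add per nonzero, which is exactly the low-arithmetic-intensity pattern the abstract describes.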
TBD
February 12, 2025
NECSTLab Technical Talks
NECSTLab
Politecnico di Milano
2025-02-12 9:00:00
2025-02-12 11:00:00
America/New_York
NECSTLab (Politecnico di Milano, Italy), led by Prof. Marco D. Santambrogio, will be visiting MIT and giving a series of technical talks as part of the NECST Group Conference (NGC), an initiative offering participants a unique opportunity to present their work at leading companies' headquarters and engage with research groups and laboratories at top-tier universities. Here is the list of talks (abstracts and bios are reported below):

ALVEARE: A Full-Stack Domain-Specific Framework for Regular Expressions
Speaker: Filippo Carloni

GrOUT: A Modular Framework for Scalable Multi-GPU Systems in Oversubscribed Scenarios
Speaker: Ian Di Dio Lavore

A Quantum Method to Match Vector Boolean Functions Using Simon's Solver
Speaker: Marco Venere

Moyogi: Exploiting Random Forest Parallelism for Low-Latency Inference on Embedded Devices
Speaker: Alessandro Verosimile

Food will be provided.

The NECSTLab is a laboratory inside the DEIB department of Politecnico di Milano (Dipartimento di Elettronica, Informazione e Bioingegneria). It is a place where research meets teaching, and teaching meets research, through both academic and industrial events.

------------

ALVEARE: A Full-Stack Domain-Specific Framework for Regular Expressions
Speaker: Filippo Carloni
Abstract: Regular Expressions (REs) represent one of the most pervasive yet challenging computational kernels to execute. RE matching enables the identification of functional data patterns in heterogeneous fields ranging from personalized medicine to computer security. However, such applications require massive data analysis which, combined with the high data dependency of REs, leads to long computational times and high energy consumption. Current RE engines either favor flexibility, with run-time RE adaptability and broad operator support, at the cost of performance, or rely on fixed, high-performance accelerators that implement only a few simple RE operators. This talk describes ALVEARE, a hardware-software approach combining a Domain-Specific Language (DSL) with an RE-tailored Domain-Specific Architecture (DSA) into a full-stack framework. Specifically, ALVEARE treats REs as a DSL by translating them into executables via the proposed compiler, while the DSA performs RE matching efficiently through a speculation-based RISC microarchitecture. The microarchitecture is built on a proposed Instruction Set Architecture that effectively expresses RE operators, from standard and simple primitives to the advanced ones widely employed in real benchmarks. The RE-centric optimizing compiler lifts part of the RE-matching complexity from hardware to software, simplifying the architecture design while keeping high performance and better flexibility. ALVEARE shows attractive results in execution time and energy efficiency against existing CPU-based and ASIC-based solutions in low-latency and near-data scenarios.
Bio: Filippo Carloni is a PhD candidate in Information Technology (Computer Science and Engineering) at Politecnico di Milano. His PhD research focuses on domain-specific architectures and compilers, with a particular emphasis on the regular-expression domain. He works extensively with hardware description languages and FPGAs to address architectural challenges and implement advanced solutions. During the final year of his PhD, he was a visiting student at the COMMIT lab at MIT, where he began exploring SmartNIC hardware acceleration. His broader research interests include RISC-V architecture and ISA design.

GrOUT: A Modular Framework for Scalable Multi-GPU Systems in Oversubscribed Scenarios
Speaker: Ian Di Dio Lavore
Abstract: Hardware accelerators are vital in modern computing but face significant challenges when handling workloads that exceed available memory capacity. Unified Virtual Memory (UVM) offers a promising solution by enabling oversubscription, allowing the end user to handle datasets larger than the hardware's physical memory. However, the page-faulting mechanism behind oversubscription often introduces severe performance overheads, particularly for large-scale workloads. This work presents GrOUT, a language- and domain-agnostic framework designed to address these challenges in oversubscribed multi-GPU systems. GrOUT employs a modular architecture with GraalVM-based and C++ header-only frontends, providing flexibility and ease of integration. The framework optimizes memory usage by eliminating redundant data copies through inter-process communication (IPC) and enhances scalability by transitioning to an MPI-based communication model. These advancements enable developers to scale out workloads efficiently, mitigating the impact of oversubscription and ensuring robust performance across distributed GPU systems.
Bio: Ian Di Dio Lavore is a PhD student in Information Technology - Computer Science and Engineering at Politecnico di Milano. He holds an M.Sc. (2022) and a B.Sc. (2020) in Computer Science and Engineering from Politecnico di Milano. Ian worked in the HPC Team of the Scalable Computing and Data group at the Pacific Northwest National Laboratory (PNNL). His research mainly focuses on parallel and distributed computing and programming models, with a particular interest in high-level abstractions for heterogeneous HPC systems.

A Quantum Method to Match Vector Boolean Functions Using Simon's Solver
Speaker: Marco Venere
Abstract: The Boolean Matching Problem is a fundamental step in modern Electronic Design Automation (EDA) toolchains, which enable the efficient design of large classical computers. In particular, deciding the equivalence under negation-permutation-negation of two n-to-n vector Boolean functions requires exploring a super-exponential number of possible negations and permutations of input and output variables, and is widely regarded as a daunting challenge. Its classical complexity, O(n!2^(2n)) where n is the number of input and output variables, is rarely tolerated by EDA tools, which typically solve small instances of the Boolean Matching Problem for n-to-1 Boolean functions. In this work, we present a method that exploits the solver for Simon's problem to speed up the matching of n-to-n vector Boolean functions; we show that, despite its higher complexity, this problem is friendlier to a quantum solver than matching single-output Boolean functions. Our solution saves a factor of 2^n in the overall worst-case computational effort and is amenable to combined approaches such as the so-called Grover-meets-Simon, which have the potential of reducing it below the cost of classical n-to-1 matching. We provide a fully detailed quantum circuit implementing our proposal and compute its cost, counting both the required number of qubits and quantum gates. Furthermore, our experimental evaluation employs the ISCAS benchmark suite, a de facto standard in classical EDA, to derive our sample Boolean functions.
Bio: Marco Venere is a second-year PhD student in Information Technology. His research focuses on the design of quantum algorithms that achieve superpolynomial speedup with respect to classical computation, and on quantum error correction accelerated on FPGAs. Besides his main research topics, he also works on the compilation of quantum circuits. He has served as a TPC member for IEEE QCE 2024, a member of the Quantum Open-Source Foundation, and a reviewer for IEEE TCAD, and was awarded a microgrant from UnitaryFund.

Moyogi: Exploiting Random Forest Parallelism for Low-Latency Inference on Embedded Devices
Speaker: Alessandro Verosimile
Abstract: The convergence of Artificial Intelligence (AI) and the Internet of Things (IoT) is driving the need for real-time, low-latency architectures for trustworthy inference of complex Machine Learning (ML) models in critical applications like autonomous vehicles and smart healthcare. While traditional cloud-based solutions introduce latency due to the need to transmit data to and from centralized servers, edge computing offers lower response times by processing data locally. In this context, Random Forests (RFs) are highly suited for building hardware accelerators on resource-constrained edge devices due to their inherent parallelism (a toy sketch of this tree-level parallelism follows the bios below). Nevertheless, maintaining low latency as the size of the RF grows is still critical for state-of-the-art (SoA) approaches. To address this challenge, this work proposes Moyogi, a hardware-software codesign framework for memory-centric RF inference that optimizes the architecture for the target ML model, employing RFs with Decision Trees (DTs) of multiple depths and exploring several architectural variations to find the best-performing configuration. We propose a resource estimation model based on the most relevant architectural features to enable effective Design Space Exploration. Moyogi achieves a geomean latency reduction of 3.88x on RFs trained on relevant IoT datasets, compared to the best-performing SoA memory-centric architecture.
Bio: Alessandro Verosimile is a second-year PhD student in Information Technology at Politecnico di Milano. He worked for six months as a research intern in the RAD team at Advanced Micro Devices (AMD). His research focuses on HW-SW co-design techniques that co-optimize the training of large Machine Learning models and the design of the hardware architecture for their inference on embedded devices, with a focus on both Deep Learning models and Decision Tree based ensemble models.
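To make the parallelism that Moyogi exploits concrete: every tree in a random forest is traversed independently, so inference is embarrassingly parallel across trees (and across inputs). Below is a generic sketch of that dependency structure in plain Python, unrelated to Moyogi's actual hardware design; in CPython the thread pool mainly illustrates the independence, while real speedups come from hardware that evaluates trees concurrently:

    from collections import Counter
    from concurrent.futures import ThreadPoolExecutor

    # A tree node is either ("leaf", class_label) or
    # ("split", feature_index, threshold, left_subtree, right_subtree).
    def predict_tree(tree, x):
        while tree[0] == "split":
            _, f, t, left, right = tree
            tree = left if x[f] <= t else right
        return tree[1]

    def predict_forest(trees, x):
        # Trees share nothing: evaluate all of them in parallel, then vote.
        with ThreadPoolExecutor() as pool:
            votes = list(pool.map(lambda tr: predict_tree(tr, x), trees))
        return Counter(votes).most_common(1)[0][0]

    # Two toy decision stumps over a 2-feature input.
    t1 = ("split", 0, 0.5, ("leaf", 0), ("leaf", 1))
    t2 = ("split", 1, 0.3, ("leaf", 1), ("leaf", 0))
    print(predict_forest([t1, t2, t1], [0.7, 0.2]))  # majority vote -> 1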
TBD
February 19, 2025
Scalable Parallel Algorithms
Yihan Sun
UC Riverside
2025-02-19 14:00:00
2025-02-19 15:00:00
America/New_York
Abstract:
Recent hardware advances have brought multicore parallel machines to the mainstream, but this progress does not provide "free" performance improvements as the number of cores increases. For many applications, including ones that seem straightforward in the sequential setting, state-of-the-art parallel implementations can be slower than a sequential algorithm. Many factors contribute to these performance issues, such as the "parallel" overheads of space and thread synchronization, and the unparallelizable components of conventional sequential algorithms (such as certain data structures and iterative processes).

In this talk, we share our recent work on improving the scalability of parallel graph algorithms (a toy parallel-connectivity kernel follows the bio below). We select three examples: Strongly Connected Components (SCC), Biconnected Components (BCC), and Influence Maximization (IM). For all three applications, we observe that existing parallel solutions can have poor scalability and can be slower than a sequential solution on certain graphs; some of the applications suffer from several of the aforementioned challenges. Our solutions tackle these issues by designing theoretically efficient algorithms with practical considerations. Tested on more than 20 graphs of various sizes and degree distributions, our algorithms outperform the state-of-the-art solutions for each problem on almost all graphs, owing to better parallelism and strong theoretical guarantees.

Brief Bio:
Yihan Sun is an Assistant Professor in the Computer Science and Engineering (CSE) Department at the University of California, Riverside (UCR). She received her Ph.D. from Carnegie Mellon University in 2019, advised by Guy Blelloch. Prior to that, she received her Bachelor's degree in Computer Science from Tsinghua University in 2014. Her research interests include broad topics in the theory and practice of parallel computing, including algorithms, data structures, frameworks, implementations, programming tools, and their applications. Much of her work aims at bridging the gap between the theory and practice of parallel computing. She is a recipient of the NSF CAREER Award (2023) and a Google Research Scholar award (2024). Her work has won Best Paper Awards at PPoPP'23 and ESA'23, and Outstanding Paper Awards at SPAA'20 and SPAA'24.
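For a flavor of the parallel-friendly iteration style such algorithms rely on, here is a generic label-propagation connectivity kernel in NumPy (a textbook illustration, not one of the algorithms from the talk): each round updates every vertex's label from its neighbors' labels, and all updates within a round are independent, so each round parallelizes cleanly.

    import numpy as np

    def connected_components(num_vertices, edges):
        """Every vertex repeatedly adopts the smallest label in its
        neighborhood; each round is a fully parallel edge map."""
        u, v = edges[:, 0], edges[:, 1]
        labels = np.arange(num_vertices)
        while True:
            new = labels.copy()
            # Push the smaller endpoint label across every edge, both ways.
            np.minimum.at(new, u, labels[v])
            np.minimum.at(new, v, labels[u])
            if np.array_equal(new, labels):
                return labels
            labels = new

    edges = np.array([[0, 1], [1, 2], [3, 4]])
    print(connected_components(5, edges))  # -> [0 0 0 3 3]

The number of rounds grows with graph diameter, which is one reason naive parallel connectivity can lose to a good sequential algorithm on high-diameter graphs; combining theoretical efficiency with such practical considerations is exactly the theme of the talk.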
32-D463
Dertouzos Distinguished Lecture: Deborah Estrin, Transforming longitudinal care with digital biomarkers and therapeutics
Deborah Estrin
Cornell Tech
2025-02-19 17:00:00
2025-02-19 18:00:00
America/New_York
Abstract:
This talk explores how patient-generated data from wearables, ambient devices, and digital health tools can transform the delivery and quality of individualized clinical care. Digital Biomarkers (DBx) and Digital Therapeutics (DTx) leverage AI to convert raw data into action, helping clinicians adjust treatments, patients manage conditions, and researchers understand differentiated outcomes. Successes in Parkinson's management and metabolic interventions demonstrate the value of integrating these technologies into specific care pathways. However, realizing scalable and affordable benefits for patients, providers, and payers will require implementing hybrid care systems that optimize patient-clinician collaboration across conditions and episodes of care.

Bio:
Deborah Estrin is a Professor of Computer Science at Cornell Tech in New York City, where she holds The Robert V. Tishman Founder's Chair, serves as the Associate Dean for Impact, and is an Affiliate Faculty at Weill Cornell Medicine. Her research interests are in digitally enabled innovations that support patients and providers in optimizing clinical outcomes and quality of life. Estrin founded the Public Interest Technology Initiative (PiTech) at Cornell Tech, which promotes public impact as a component of students' training and future careers. Estrin was previously the Founding Director of the NSF Center for Embedded Networked Sensing (CENS) at UCLA, pioneering the development of mobile and wireless systems to collect and analyze real-time data about the physical world. Estrin's honors include the IEEE Internet Award (2017), a MacArthur Fellowship (2018), and the IEEE John von Neumann Medal (2022). She is an elected member of the American Academy of Arts and Sciences (2007), the National Academy of Engineering (2009), and the National Academy of Medicine (2019).
32-123