2024-04-18 14:00:00
2024-04-18 15:00:00
America/New_York
THESIS DEFENSE: Advancing Equity & Reliability in Machine Learning
The data we collect are often not the data we wish we had. Healthcare data reflects patterns of underdiagnosis, demographic data is shaped by evolving social norms, and benchmark data can be unrepresentative of deployment settings. For domains in which flawed data is common, these systematic differences present a barrier to the widespread adoption of machine learning. In this talk, we aim to characterize and mitigate the impact of imperfect data on machine learning models. We address three ways in which data can be flawed: imperfect labels, coarse demographics, and limited evaluation datasets. First, we develop a method to correct for imperfect labels in the form of underdiagnosis between demographic cohorts. We then show how coarse race data obscures disparities across more granular race groups, suggesting existing algorithmic audits may significantly underestimate racial disparities in performance. Finally, we present a method to select between multiple machine learning models in the absence of abundant labeled data. In sum, we discuss work that represents a step towards a machine learning methodology that is robust to systematic errors in data collection across domains.

Zoom link: https://mit.zoom.us/j/92314938542
32-G449 (Patil/Kiva)
Events
April 18, 2024
Improving data efficiency and accessibility for general robotic manipulation
Hao-Shu Fang
CSAIL MIT
2024-04-18 16:00:00
2024-04-18 16:30:00
America/New_York
Abstract: How can data-driven approaches endow robots with diverse manipulative skills and robust performance in unstructured environments? Despite recent progress, many open questions remain in this area, such as: (1) How can we define and model the data distribution for robotic systems? (2) In light of data scarcity, what strategies can algorithms employ to enhance performance? (3) What is the best way to scale up robotic data collection? In this talk, Hao-Shu Fang will share his research on enhancing the efficiency of robot learning algorithms and democratizing access to large-scale robotic manipulation data. He will also discuss several open questions in data-driven robotic manipulation, offering insights into the challenges posed.

Bio: Hao-Shu Fang is a postdoctoral researcher collaborating with Pulkit Agrawal and Edward Adelson. His research focuses on general robotic manipulation. Recently, he has been investigating how to integrate visual-tactile perception for improved manipulation and how to train a multi-task robotic foundation behavioral model.
Room 32-370
Optimal Sample Complexity of Contrastive Learning
Northwestern University
2024-04-18 16:00:00
2024-04-18 17:00:00
America/New_York
Abstract: Contrastive learning is a highly successful technique for learning representations of data from labeled tuples, specifying the distance relations within the tuple. We study the sample complexity of contrastive learning, i.e., the minimum number of labeled tuples sufficient for getting high generalization accuracy. We give tight bounds on the sample complexity in a variety of settings, focusing on arbitrary distance functions, both general ℓp-distances, and tree metrics. Our main result is an (almost) optimal bound on the sample complexity of learning ℓp-distances for integer p. For any p ≥ 1 we show that Θ̃(min(nd, n²)) labeled tuples are necessary and sufficient for learning d-dimensional representations of n-point datasets. Our results hold for an arbitrary distribution of the input samples and are based on giving the corresponding bounds on the Vapnik-Chervonenkis/Natarajan dimension of the associated problems. We further show that the theoretical bounds on sample complexity obtained via VC/Natarajan dimension can have strong predictive power for experimental results, in contrast with the folklore belief about a substantial gap between statistical learning theory and the practice of deep learning.
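For readers unfamiliar with the setup, the sketch below shows what a single labeled tuple looks like in the common triplet form, together with the margin loss typically minimized over such tuples; this is generic background illustration (the identity map stands in for a learned representation), not part of the talk's results.

```python
import numpy as np

def triplet_margin_loss(f, anchor, positive, negative, p=2, margin=1.0):
    """A labeled tuple (anchor, positive, negative) asserts that, under the
    representation f, the anchor is closer in l_p distance to the positive
    than to the negative; the loss is zero once this holds with a margin."""
    d_pos = np.linalg.norm(f(anchor) - f(positive), ord=p)
    d_neg = np.linalg.norm(f(anchor) - f(negative), ord=p)
    return max(0.0, d_pos - d_neg + margin)

# Toy usage with the identity map as a stand-in for a learned d-dimensional representation.
f = lambda x: np.asarray(x, dtype=float)
print(triplet_margin_loss(f, anchor=[0, 0], positive=[0, 1], negative=[3, 4]))  # 0.0
```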
32-D507
EI Seminar - Lawson Wong - High-level guidance for generalizable reinforcement learning
Northeastern University
2024-04-18 16:00:00
2024-04-18 17:00:00
America/New_York
Title: High-level guidance for generalizable reinforcement learning

Abstract: Reinforcement learning (RL) is a compelling framework for robotics and embodied intelligence when the environment/task is not fully known. However, it is difficult to make RL work. My thesis is that RL is difficult because it is too general. We need to, and often can, provide RL a helping hand by providing a modicum of task-relevant high-level information. In this talk, I will discuss various thrusts in my research group on this theme: (1) Using symmetry to quickly learn to plan and navigate; (2) Following a single high-level trajectory such as a path on a coarse map; (3) Integrating a wider range of guidance into the RL loop.

Bio: Lawson L.S. Wong is an assistant professor in the Khoury College of Computer Sciences at Northeastern University. At Northeastern, he leads the Generalizable Robotics and Artificial Intelligence Laboratory (GRAIL). The group's research focuses on learning, representing, estimating, and using knowledge about the world that an autonomous robot may find useful. His research agenda is to identify and learn intermediate state representations that enable effective robot learning and planning, and therefore enable robot generalization. Prior to Northeastern, Lawson was a postdoctoral fellow at Brown University, working with Stefanie Tellex. He completed his PhD at the Massachusetts Institute of Technology, advised by Leslie Pack Kaelbling and Tomás Lozano-Pérez.
32-D463 (Star)
April 19, 2024
2024-04-19 15:00:00
2024-04-19 16:00:00
America/New_York
ParlayANN: Scalable and Deterministic Parallel Graph-Based Approximate Nearest Neighbor Search Algorithms
Abstract: Approximate nearest-neighbor search (ANNS) algorithms are a key part of the modern deep learning stack because they enable efficient similarity search over high-dimensional vector space representations (i.e., embeddings) of data. Among various ANNS algorithms, graph-based algorithms are known to achieve the best throughput-recall tradeoffs. Despite the large scale of modern ANNS datasets, existing parallel graph-based implementations face significant challenges in scaling to large datasets due to heavy use of locks and other sequential bottlenecks, which 1) prevents them from efficiently scaling to a large number of processors, and 2) results in nondeterminism that is undesirable in certain applications.

In this paper, we introduce ParlayANN, a library of deterministic and parallel graph-based approximate nearest neighbor search algorithms, along with a set of useful tools for developing such algorithms. In this library, we develop novel parallel implementations for four state-of-the-art graph-based ANNS algorithms that scale to billion-scale datasets. Our algorithms are deterministic and achieve high scalability across a diverse set of challenging datasets. In addition to the new algorithmic ideas, we also conduct a detailed experimental study of our new algorithms as well as two existing non-graph approaches. Our experimental results both validate the effectiveness of our new techniques and lead to a comprehensive comparison among ANNS algorithms on large-scale datasets, with a list of interesting findings. This work is joint with Zheqi Shen, Guy Blelloch, Laxman Dhulipala, Yan Gu, Harsha Vardhan Simhadri, and Yihan Sun and appeared at PPoPP 2024.

Bio: Magdalen Dobson Manohar is a 5th-year PhD student in the Computer Science Department at Carnegie Mellon University advised by Guy Blelloch. She is interested in designing parallel and concurrent algorithms for solving problems related to similarity search, information retrieval, and computing nearest neighbors, with a particular focus on similarity search in high dimensions. She will be joining Microsoft as a Senior Researcher in Summer 2024. She completed her undergraduate degree in mathematics at MIT in Spring 2019.
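As background, the toy sketch below shows the greedy beam search over a proximity graph that graph-based ANNS indexes (including those in ParlayANN) are built around; it is a single-threaded illustration with made-up names and a brute-force graph, not ParlayANN's parallel implementation.

```python
import heapq
import numpy as np

def beam_search(graph, points, query, start, k=10, beam=32):
    """Greedy beam search over a proximity graph.
    graph: dict node -> iterable of neighbor ids, points: (n, d) array, query: (d,)."""
    dist = lambda i: float(np.linalg.norm(points[i] - query))
    visited = {start}
    frontier = [(dist(start), start)]   # min-heap of candidates to expand
    best = [(-dist(start), start)]      # max-heap (negated) of the closest `beam` nodes seen
    while frontier:
        d, u = heapq.heappop(frontier)
        if len(best) >= beam and d > -best[0][0]:
            break                        # nothing left that can improve the beam
        for v in map(int, graph[u]):
            if v in visited:
                continue
            visited.add(v)
            dv = dist(v)
            if len(best) < beam or dv < -best[0][0]:
                heapq.heappush(frontier, (dv, v))
                heapq.heappush(best, (-dv, v))
                if len(best) > beam:
                    heapq.heappop(best)
    return [i for _, i in sorted((-nd, i) for nd, i in best)][:k]

# Toy usage: random points with a brute-force 8-nearest-neighbor graph as the index.
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 16))
nbrs = {i: np.argsort(np.linalg.norm(pts - pts[i], axis=1))[1:9] for i in range(len(pts))}
print(beam_search(nbrs, pts, query=rng.normal(size=16), start=0))
```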
32-G575
April 22, 2024
Thesis Defense: Towards Object-based SLAM
Yihao Zhang
MIT MechE
2024-04-22 10:00:00
2024-04-22 11:30:00
America/New_York
Abstract: Simultaneous localization and mapping (SLAM) is a fundamental capability for a robot to perceive its surrounding environment. The research area has developed for more than two decades, from the original sparse landmark-based SLAM to dense SLAM, and now there is a demand for semantic understanding of the environment beyond pure geometric understanding. This thesis makes a number of contributions to help realize object-based SLAM, in which the map consists of a set of objects with their semantic categories recognized and their poses and shapes estimated. Such a map provides vital object-level semantic and geometric perception for applications such as augmented reality (AR), mixed reality (MR), mobile manipulation, and autonomous driving.

In order to perform object-based SLAM, the sensor measurements have to undergo a series of processes. First, objects are semantically segmented in the sensor measurements. This step is typically done by a neural network. As robots are often required to bootstrap from some initial labeled datasets and adapt to different environments where labeled data are unavailable, it is important to enable semi-supervised learning to improve the robot's performance with the unlabeled data collected by the robot itself. Second, after the objects are segmented, measurements for each object across different viewpoints have to be associated together for downstream processing. Lastly, the robot must be able to extract the object pose and shape information from the measurements without access to detailed CAD models of the objects. This thesis studies these three aspects of object-based SLAM, namely semi-supervised learning of semantic segmentation in a robotics context, data association for object-based SLAM, and category-level object pose and shape estimation.

For category-level object pose and shape estimation, we developed ShapeICP (ICP: iterative closest point), an algorithm that does not use pose-annotated data and generates meshes as the object shape representation. For data association, we developed DAF-SLAM (DAF: data association free) to estimate the associations in the back-end instead of relying on sensor-dependent front-end methods. For semi-supervised learning, we applied temporal semantic consistency inspired by the photometric consistency technique in traditional SLAM methods. Each contribution is evaluated on experimental datasets, demonstrating improvements over previous techniques.

Committee Members:
John J. Leonard (Advisor), Department of Mechanical Engineering
Faez Ahmed, Department of Mechanical Engineering
Nicholas Roy, Department of Aeronautics and Astronautics
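For context on the "ICP" in ShapeICP's name, here is a minimal sketch of the classic point-to-point ICP loop (nearest-neighbor correspondences followed by a closed-form rigid alignment via SVD). This is the textbook baseline, not ShapeICP, which additionally estimates category-level shape without CAD models; the function and variable names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=50):
    """Align source (N, 3) points to target (M, 3) points.
    Returns R (3, 3) and t (3,) such that R @ source_i + t approximates the target."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    for _ in range(iters):
        # 1. Correspondences: nearest target point for every source point.
        _, idx = tree.query(src)
        tgt = target[idx]
        # 2. Closed-form rigid transform between matched centroids (Kabsch/SVD).
        src_c, tgt_c = src.mean(axis=0), tgt.mean(axis=0)
        H = (src - src_c).T @ (tgt - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c
        # 3. Apply the increment and accumulate the total transform.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```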
32-D463 (https://mit.zoom.us/j/6524251299)
April 23, 2024
2024-04-23 12:00:00
2024-04-23 13:00:00
America/New_York
Visual Computing Seminar | Tim Brooks - Sora: Video Generation Models as World Simulators
Virtual session of the MIT Visual Computing Seminar, Spring 2024, featuring invited speaker (remote) Tim Brooks from OpenAI. The format is ~25 min of talk followed by Q&A. Given the anticipated audience size, we use Slido for live Q&A and answer the top questions from the upvote queue. [live Q&A link] https://tinyurl.com/TimBrooksMIT

Please DO NOT record this talk by any means. Thanks for your understanding.

Title: Sora: Video Generation Models as World Simulators

Abstract: We explore large-scale training of generative models on video data. Specifically, we train text-conditional diffusion models jointly on videos and images of variable durations, resolutions, and aspect ratios. We leverage a transformer architecture that operates on spacetime patches of video and image latent codes. Our largest model, Sora, is capable of generating a minute of high-fidelity video. Our results suggest that scaling video generation models is a promising path towards building general-purpose simulators of the physical world.

Bio: Tim Brooks is a research scientist at OpenAI where he co-leads Sora, their video generation model. His research investigates large-scale generative models that simulate the physical world. Tim received a PhD at Berkeley AI Research advised by Alyosha Efros, where he invented InstructPix2Pix. He previously worked on AI that powers the Pixel phone's camera at Google and on video generation models at NVIDIA.
https://mit.zoom.us/j/95167636032?pwd=U0dyaEx1a3A3QkZrbmIvMkcvUFkyUT09 (password: mitvc)
Cindy Hsin-Liu Kao - Designing Hybrid Skins
Cornell University
2024-04-23 16:00:00
2024-04-23 17:00:00
America/New_York
Abstract: Hybrid Skins are an emerging form of conformable interface situated at all scales of the human experience. These conformable interfaces are hybrid in their integration of technological function with social and cultural perspectives, blending historical craft with miniaturized robotics, machines, and materials in their development. The resulting skins also serve social, cultural, and technological purposes while supporting the construction of individual identities. This seminar examines recent work from the Hybrid Body Lab in designing Hybrid Skins through under-explored approaches of textile robotics, bio-fluid sensing, modular flexible electronics, and sustainable materials exploration. With their seamless and conformable form factor, Hybrid Skins afford unprecedented intimacy to the human experience and an opportunity for us to carefully rethink and redesign how our relationship with technology can, should (or should not) be. By blending engineering, design, and committed engagement with diverse communities, Kao and her lab's research aims to foster inclusive design for future wearable technology that can celebrate (instead of constrict) the diversity of the human experience.

Bio: Cindy Hsin-Liu Kao is an assistant professor at Cornell University. She directs the Hybrid Body Lab, which focuses on integrating cultural and social perspectives into the design of on-body interfaces. Through her research, she aims to foster inclusive designs for soft wearable technologies, like smart tattoos and textiles, and develops novel digital fabrication methods. Kao, honored with a National Science Foundation CAREER Award, has received accolades in major ACM Human-Computer Interaction venues and media attention from Forbes, CNN, WIRED, and VOGUE. Her work has been showcased internationally, including at the Pompidou Centre in Paris and New York Fashion Week, earning multiple design awards. Kao holds a Ph.D. from the MIT Media Lab.

This talk will also be streamed over Zoom: https://mit.zoom.us/j/99183558682.
Star (D463)
On some Recent Dynamic Graph Algorithms
Yang Liu
IAS
2024-04-23 16:15:00
2024-04-23 17:15:00
America/New_York
Abstract: We discuss some recent algorithms for problems in dynamic graphs undergoing edge insertions and deletions. In the first part of the talk, we will discuss connections between the approximate bipartite matching problem in fully dynamic graphs and versions of the online matrix-vector multiplication conjecture. These connections will lead to faster algorithms in online and offline settings, as well as some conditional lower bounds.

In the second part of the talk, we will briefly discuss how interior point methods can be used to design algorithms for partially dynamic graphs: those undergoing only edge insertions (incremental) or only edge deletions (decremental). This leads to almost-optimal algorithms for problems including incremental cycle detection and decremental s-t distance.
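To make "incremental cycle detection" concrete, here is the naive baseline such results improve on: after each edge insertion, a DFS checks whether the new edge closes a cycle, costing up to O(m) per insertion, whereas the almost-optimal algorithms mentioned above do far better in total. The class and method names are illustrative.

```python
from collections import defaultdict

class IncrementalCycleDetector:
    """Naive baseline: maintain the digraph and, on inserting u -> v, check by DFS
    whether u is already reachable from v (in which case the edge closes a cycle)."""

    def __init__(self):
        self.adj = defaultdict(set)

    def insert(self, u, v):
        """Insert edge u -> v; return True (and reject the edge) if it creates a cycle."""
        if u == v or self._reaches(v, u):
            return True
        self.adj[u].add(v)
        return False

    def _reaches(self, src, dst):
        stack, seen = [src], {src}
        while stack:
            x = stack.pop()
            if x == dst:
                return True
            for y in self.adj[x] - seen:
                seen.add(y)
                stack.append(y)
        return False

d = IncrementalCycleDetector()
print(d.insert(1, 2), d.insert(2, 3), d.insert(3, 1))  # False False True
```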
32-G882
April 24, 2024
2024-04-24 11:00:00
2024-04-24 12:00:00
America/New_York
Unblocking CPU Performance Bottlenecks for Data Center Workloads
Abstract: Modern complex data center applications exhibit unique characteristics such as extensive data and instruction footprints, complex control flow, and hard-to-predict branches that are not adequately served by existing microprocessor architectures. In particular, these workloads exceed the capabilities of microprocessor structures such as the instruction cache, BTB, branch predictor, and data caches, causing significant degradation of performance and energy efficiency.

In my talk, I will provide a characterization of data center applications, highlighting the importance of addressing frontend and backend performance issues. I will then introduce new techniques to address these challenges by improving the branch predictor, data cache, and instruction scheduler. I will make the case for profile-guided optimizations that amortize overheads across the fleet, which have been successfully deployed at Google and Intel, serving millions of users daily.

Bio: Heiner Litz is an Associate Professor at UC Santa Cruz, a visiting Professor at MIT, and a consulting CPU architect at ARM. His research focuses on improving the performance, cost, and efficiency of data center systems. Heiner is the recipient of the NSF CAREER award, Intel's Outstanding Researcher award, a MICRO Best Paper award, two IEEE MICRO Top Pick awards, and multiple Google Faculty Awards. Before joining UCSC, Heiner Litz was a researcher at Google and a postdoctoral research fellow at Stanford University with Prof. Christos Kozyrakis and David Cheriton. He received his Diplom and Ph.D. from the University of Mannheim, Germany, advised by Prof. Bruening.

Headshot: https://people.ucsc.edu/~hlitz/hlitz.jpeg
Web: https://people.ucsc.edu/~hlitz/
32-G882 Hewlett Room
April 25, 2024
Constrained Pseudorandom Functions from Weaker Assumptions
Sacha Servan-Schreiber
MIT
2024-04-25 12:00:00
2024-04-25 13:00:00
America/New_York
In this talk, I will present a framework for constructing Constrained Pseudorandom Functions (CPRFs) with inner-product constraint predicates, using ideas from subtractive secret sharing and related-key-attack (RKA) security. I will show three instantiations of the framework:

1. an adaptively-secure construction in the random oracle model;
2. a selectively-secure construction under the DDH assumption; and
3. a selectively-secure construction under the assumption that one-way functions exist.

All three instantiations are constraint-hiding and support inner-product predicates, leading to the first constructions of such expressive CPRFs under each corresponding assumption. Moreover, while the OWF-based construction is primarily of theoretical interest, the random oracle and DDH-based constructions are concretely efficient, which is shown via an implementation.
D-463 (Star)
April 30, 2024
Privacy-Preserving ML with Fully Homomorphic Encryption
Jordan Frery and Benoit Chevallier-Mames
Zama
2024-04-30 14:00:00
2024-04-30 15:00:00
America/New_York
Abstract: In the rapidly evolving field of artificial intelligence, the commitment to data privacy and intellectual property protection during machine learning operations has become a foundational necessity for society and for businesses handling sensitive data. This is especially critical in sectors such as healthcare and finance, where ensuring confidentiality and safeguarding proprietary information are not just ethical imperatives but essential business requirements.

This presentation examines the role of Fully Homomorphic Encryption (FHE), based on the open-source library Concrete ML, in advancing secure and privacy-preserving ML applications. We begin with an overview of Concrete ML, emphasizing how practical FHE for ML was made possible. This sets the stage for discussing how FHE is applied to ML inference, demonstrating its capability to perform secure inference on encrypted data across various models. After inference, we turn to another important FHE application, FHE training, and how encrypted data from multiple sources can be used for training without compromising any individual user's privacy.

FHE has strong synergies with other technologies, in particular Federated Learning: we show how this integration strengthens the privacy-preserving features of ML models across the full pipeline, training and inference. Finally, we address the application of FHE in generative AI and the development of Hybrid FHE models (which are the subject of our RSA 2024 presentation). This approach represents a strategic balance between intellectual property protection, user privacy, and computational performance, offering solutions to the challenges of securing one of the most important AI applications of our times.

Bios:
Jordan Frery is a research scientist and engineer in machine learning at Zama. As a researcher, he has published in application domains such as fraud detection, author verification, and risk prediction. He holds a PhD in machine learning and has worked in the field for 8+ years as a data and research scientist. His current work at Zama focuses on bridging the gap between machine learning and fully homomorphic encryption, with the goal of applying machine learning techniques to encrypted data.

Benoit Chevallier-Mames is a security engineer and researcher serving as VP of Cloud & Machine Learning at Zama. He has spent 20+ years between cryptographic research and secure implementations in a wide range of domains such as side-channel security, provable security, whitebox cryptography, and fully homomorphic encryption. Prior to Zama, he securely implemented public-key algorithms on smartcards at Gemplus for seven years, worked for the French governmental agency ANSSI for one year, and then designed and developed whitebox implementations at Apple for 12 years. Benoit has co-written 15+ peer-reviewed papers and is the co-author of 50+ patents. He holds a PhD from Ecole Normale Superieure / Paris University and a master's degree from CentraleSupelec.
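For a sense of what the Concrete ML workflow looks like in practice, here is a minimal quick-start sketch following the library's scikit-learn-style API. Treat it as an illustration under the assumption of a recent library version: class names and the fhe argument may differ between releases, so consult the Concrete ML documentation for authoritative usage.

```python
# Illustrative Concrete ML quick start (API details may vary by version).
from concrete.ml.sklearn import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)                     # train in the clear on quantized data
model.compile(X_train)                          # compile the quantized model to an FHE circuit
y_pred = model.predict(X_test, fhe="execute")   # run inference homomorphically on encrypted inputs
```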
32-G882
May 07, 2024
Quest | CBMM Seminar Series: Invariance and equivariance in brains and machines
Bruno Olshausen
UC Berkeley
2024-05-07 16:00:00
2024-05-07 17:30:00
America/New_York
Abstract: The goal of building machines that can perceive and act in the world as humans and other animals do has been a focus of AI research efforts for over half a century. Over this same period, neuroscience has sought to achieve a mechanistic understanding of the brain processes underlying perception and action. It stands to reason that these parallel efforts could inform one another. However, recent advances in deep learning and transformers have, for the most part, not translated into new neuroscientific insights; and other than deriving loose inspiration from neuroscience, AI has mostly pursued its own course, which now deviates strongly from the brain. Here I propose an approach to building both invariant and equivariant representations in vision that is rooted in observations of animal behavior and informed by both neurobiological mechanisms (recurrence, dendritic nonlinearities, phase coding) and mathematical principles (group theory, residue numbers). What emerges from this approach is a neural circuit for factorization that can learn about shapes and their transformations from image data, and a model of the grid-cell system based on high-dimensional encodings of residue numbers. These models provide efficient solutions to long-studied problems that are well-suited for implementation in neuromorphic hardware or as a basis for forming hypotheses about visual cortex and entorhinal cortex.

Bio: Professor Bruno Olshausen is a Professor in the Helen Wills Neuroscience Institute, the School of Optometry, and has a below-the-line affiliated appointment in EECS. He holds B.S. and M.S. degrees in Electrical Engineering from Stanford University, and a Ph.D. in Computation and Neural Systems from the California Institute of Technology. He did his postdoctoral work in the Department of Psychology at Cornell University and at the Center for Biological and Computational Learning at the Massachusetts Institute of Technology. From 1996-2005 he was on the faculty in the Center for Neuroscience at UC Davis, and in 2005 he moved to UC Berkeley. He also directs the Redwood Center for Theoretical Neuroscience, a multidisciplinary research group focusing on building mathematical and computational models of brain function (see http://redwood.berkeley.edu).

Olshausen's research focuses on understanding the information processing strategies employed by the visual system for tasks such as object recognition and scene analysis. Computer scientists have long sought to emulate the abilities of the visual system in digital computers, but achieving performance anywhere close to that exhibited by biological vision systems has proven elusive. Dr. Olshausen's approach is based on studying the response properties of neurons in the brain and attempting to construct mathematical models that can describe what neurons are doing in terms of a functional theory of vision. The aim of this work is not only to advance our understanding of the brain but also to devise new algorithms for image analysis and recognition based on how brains work.
Singleton Auditorium (46-3002)
Parallel Derandomization for Chernoff-like Concentrations
Mohsen Ghaffari
CSAIL MIT
2024-05-07 16:15:00
2024-05-07 17:15:00
America/New_York
Abstract: Randomized algorithms frequently use concentration results such as Chernoff and Hoeffding bounds. A longstanding challenge in parallel computing is to devise an efficient method to derandomize parallel algorithms that rely on such concentrations. Classic works of Motwani, Naor, and Naor [FOCS'89] and Berger and Rompel [FOCS'89] provide an elegant parallel derandomization method for these, via a binary search in a k-wise independent space, but with one major disadvantage: it blows up the computational work by a (large) polynomial. That is, the resulting deterministic parallel algorithm is far from work-efficient and needs polynomially many processors even to match the speed of single-processor computation. This talk overviews a duo of recent papers that provide the first nearly work-efficient parallel derandomization method for Chernoff-like concentrations.

Based on joint work with Christoph Grunau and Vaclav Rozhon, which can be accessed via https://arxiv.org/abs/2311.13764 and https://arxiv.org/abs/2311.13771.
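For intuition about derandomization in general (though not the Chernoff-specific, parallel technique of these papers), here is the classic sequential method of conditional expectations applied to MAX-CUT: each vertex is placed deterministically on the side that keeps the conditional expectation of cut edges at least as large as the random assignment's, guaranteeing at least half of the edges are cut.

```python
def derandomized_max_cut(n, edges):
    """Place vertices 0..n-1 one at a time on the side that cuts more of the
    already-decided edges; undecided neighbors stay 'random', so each choice never
    decreases the conditional expectation of cut edges, which starts at |E|/2."""
    side = {}
    for v in range(n):
        nbrs = [b if a == v else a for a, b in edges if v in (a, b)]
        placed = [u for u in nbrs if u in side]
        cut_if_0 = sum(1 for u in placed if side[u] == 1)
        cut_if_1 = sum(1 for u in placed if side[u] == 0)
        side[v] = 0 if cut_if_0 >= cut_if_1 else 1
    cut = sum(1 for a, b in edges if side[a] != side[b])
    return side, cut

# Toy usage: a 4-cycle, where the deterministic choices end up cutting all 4 edges.
print(derandomized_max_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
```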
32-G882
May 10, 2024
Pseudorandom Error-Correcting Codes
Miranda Christ (Columbia University)
2024-05-10 10:30:00
2024-05-10 12:00:00
America/New_York
We construct pseudorandom error-correcting codes (or simply pseudorandom codes), which are error-correcting codes with the property that any polynomial number of codewords are pseudorandom to any computationally-bounded adversary. Efficient decoding of corrupted codewords is possible with the help of a decoding key.

We build pseudorandom codes that are robust to substitution and deletion errors, where pseudorandomness rests on standard cryptographic assumptions. Specifically, pseudorandomness is based on either 2^{O(\sqrt{n})}-hardness of LPN, or polynomial hardness of LPN and the planted XOR problem at low density.

As our primary application of pseudorandom codes, we present an undetectable watermarking scheme for outputs of language models that is robust to cropping and a constant rate of random substitutions and deletions. The watermark is undetectable in the sense that any number of samples of watermarked text are computationally indistinguishable from text output by the original model. This is the first undetectable watermarking scheme that can tolerate a constant rate of errors.

Our second application is to steganography, where a secret message is hidden in innocent-looking content. We present a constant-rate stateless steganography scheme with robustness to a constant rate of substitutions. Ours is the first stateless steganography scheme with provable steganographic security and any robustness to errors.

This is based on joint work with Sam Gunn: https://eprint.iacr.org/2024/235
32-G882 Hewlett Room
May 24, 2024
Dynamic Adaptive Optimization: Recovering from Hardware Errors and Software Crashes in a Distributed Virtual Machine
University of California, Santa Cruz
2024-05-24 14:00:00
2024-05-24 15:00:00
America/New_York
Abstract: TidalScale was a startup acquired by HPE in December 2022. TidalScale developed a software architecture called distributed virtual machines. The virtual machines in widespread use today allow multiple operating systems to share a server. TidalScale inverts this paradigm. A single virtual machine running on TidalScale runs a single operating system instance across a cluster of standard servers. This virtual machine sits between an operating system and a cluster of servers. It runs on premise or in the cloud. Because they are virtual, resources like processors and memory can migrate among nodes in the cluster. The virtual machine dynamically self-optimizes resource placement in real time under control of a set of machine learning algorithms. Servers can automatically and dynamically be added and removed depending on fluctuating workloads, allowing for dynamic hardware scalability but also increasing reliability and resiliency. In this talk, we specifically show how these servers automatically, without any human intervention, recover from most hardware failures, and provide excellent restart performance should OS failures occur.

Bio: Ike Nassi is a consultant and an Adjunct Professor of Computer Science at UC Santa Cruz, a Founding Trustee of the Computer History Museum, and an advisory board member of TTI/Vanguard. Ike was the founder of TidalScale, sold to HPE in December 2022. Previously, he was an Executive Vice President and Chief Scientist at SAP. Ike started or helped to start four companies: Encore Computer Corporation, building hierarchical strongly coherent shared-memory symmetric multiprocessors (1984); InfoGear Technology, which developed Internet appliances (including the first iPhone) (1996); Firetide, an early wireless mesh networking company (2000); and TidalScale (2012). He was SVP for Software at Apple Computer and a Corporate Officer. He worked at Visual Technology and Digital Equipment Corporation. In the past, Dr. Nassi was a Visiting Scholar at Stanford University, twice a Research Scientist at MIT, and a Visiting Scholar at the University of California, Berkeley. He has served on the board of the Anita Borg Institute for Women and Technology and the IEEE Computer Society Industry Advisory Board. He holds a PhD in Computer Science from Stony Brook University.

He was awarded two certificates for Distinguished Service from the Department of Defense, one for his work on the design of the programming language Ada and one for his work on DARPA ISAT. He is a Life Fellow of IEEE and a Life Member of ACM. He is named on over 35 patents.
32-G575
June 07, 2024
2024-06-07 9:00:00
2024-06-07 18:00:00
America/New_York
CSAIL + Imagination in Action Symposium 2024
The symposium will showcase the extraordinary and substantive contributions CSAIL research groups have made, and highlight the remarkable impacts of our work.
Kirsch Auditorium