2022-2024 Board of Governors
Saurabh Bagchi is a Professor in the School of Electrical and Computer Engineering and the Department of Computer Science at Purdue University in West Lafayette, Indiana. His research interests are in dependable computing and distributed systems. He is the founding Director of CRISP, a university-wide resilience center at Purdue (2017-present), and a PI of the Army's Artificial Intelligence Innovation Institute (A2I2) (2020-25), which spans nine universities. He is the recipient of the Alexander von Humboldt Research Award (2018), the Adobe Faculty Award (2017, 2020), the AT&T Labs VURI Award (2016), the Google Faculty Award (2015), and the IBM Faculty Award (2014). He is an IEEE Golden Core member (2018), an ACM Distinguished Scientist (2013), and a Distinguished Speaker for ACM (2012). He was selected as a member of the International Federation for Information Processing (IFIP) in 2020.
Saurabh is proudest of the 21 PhD students and 50 Master's thesis students who have graduated from his research group and who are in various stages of building wonderful careers in industry or academia. In his group, he and his students have way too much fun building and breaking real systems. Along the way, this work has led to 12 best paper or runner-up awards at IEEE/ACM conferences. Saurabh received his MS and PhD degrees from the University of Illinois at Urbana-Champaign and his BS degree from the Indian Institute of Technology Kharagpur, all in Computer Science. He served as the inaugural International Visiting Professor at IIT Kharagpur in 2018.
DVP term expires December 2023
Dependability: Meet Data Analytics
We live in a data-driven world, as everyone has been telling us for some time. Everything around us generates data, in large volumes and at high rates, from the sensors embedded in our physical spaces to the vast numbers of machines in data centers being monitored for a wide variety of metrics. The question that we pose is: can this flood of data be harnessed to make our computing systems more dependable?
Dependability is the property that a computing system continues to provide its functionality despite the introduction of faults, whether accidental (design defects, environmental effects, etc.) or malicious (security attacks, external or internal). We have been addressing the dependability challenge through large-scale data analytics applied end-to-end, from the small (networked embedded systems, mobile and wearable devices) [e.g., NeurIPS-20, Sensys-20, UsenixSec-20, NDSS-20, DSN-19, UsenixSec-18, S&P-17] to the large (edge and cloud systems, distributed machine learning clusters) [e.g., DSN-20, UsenixATC-20, UsenixATC-19, ICS-19, TDSC-18]. In this talk, I will first give a high-level view of how data analytics has been brought to bear on dependability challenges, and distill key insights from work done broadly across the technical community. I will then do a deep dive into the problem of configuring complex systems to meet dependability and performance requirements using data-driven decisions.
Data Analytics Becomes Secure
Relief and rescue operations of the near and far future will involve autonomous operations among multiple cyber, physical, and kinetic assets, together with interactions with humans. Such autonomous operation will rely on a pipeline of machine learning (ML) algorithms executing in real time on a distributed set of heterogeneous platforms, both stationary and maneuverable. The algorithms will have to deal with adversarial control and data planes. An adversarial control plane means that some of the nodes on which the algorithms execute cannot be trusted: they may have been compromised to leak information or to violate the integrity of the results. An adversarial data plane means that the algorithms will have to operate with uncertain, incomplete, and potentially maliciously manipulated data sources. This talk will show the basics of how to design secure algorithms that can provide probabilistic guarantees on security and latency under powerful, rigorously quantified adversary models. It will cover the three pillars needed to achieve this outcome: robust adversarial algorithms; interpretable algorithms that build humans' trust in the results of the autonomous algorithms; and secure, distributed execution of the autonomy pipeline among multiple platforms.
Learning about protecting distributed infrastructure from behavioral economists
Many of our critical distributed infrastructures (transportation, distributed manufacturing, power grid, etc.) comprise multiple interdependent assets and a set of defenders, each responsible for securing a subset of the assets against an attacker. The practical questions that arise are how the defenders should make their security investments and whether they should cooperate. While prior work has answered these questions, it has done so under the assumption of perfect rationality of the decision makers. In this talk, we will show that those answers can be dangerously sub-optimal when the defenders exhibit characteristics of human decision-making identified by the behavioral psychology and economics communities. In particular, humans have been shown to perceive probabilities in a nonlinear manner, typically overweighting low probabilities and underweighting high probabilities. By applying results from two Nobel Prize winning economists (Kahneman-2002 and Thaler-2017), we get glimpses of where a biased defender can be beneficial for the other defenders in the network. We want to spur discussion of where we should, and should not, learn from behavioral economists in securing our distributed infrastructures.
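As a concrete illustration of the nonlinear probability perception mentioned above, the sketch below implements the Prelec weighting function, one standard model from the prospect-theory literature in which low probabilities are overweighted and high probabilities underweighted. This is an illustrative example only; the specific weighting model used in the speaker's work may differ.

```python
import math

def prelec_weight(p: float, alpha: float = 0.5) -> float:
    """Prelec (1998) probability weighting function:
    w(p) = exp(-(-ln p)^alpha).
    For 0 < alpha < 1, a behavioral decision maker overweights
    low probabilities and underweights high ones; alpha = 1
    recovers an unbiased (perfectly rational) perception.
    """
    if p <= 0.0:
        return 0.0
    if p >= 1.0:
        return 1.0
    return math.exp(-((-math.log(p)) ** alpha))

# A rare attack (true probability 0.01) is perceived as far more
# likely, while a near-certain one (0.95) is discounted.
print(prelec_weight(0.01))  # ~0.12, overweighted
print(prelec_weight(0.95))  # ~0.80, underweighted
```

A biased defender allocating budget according to `prelec_weight(p)` rather than `p` will over-invest in defenses against rare attacks, which, as the talk discusses, can in some network topologies benefit the other defenders.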
Recent Volunteer Positions
2020 Board of Governors
2017-2019 Board of Governors