Talks by visitors to the Department

2020 talks

  • Towards automated debugging and optimization


    Speaker:

    Dr. Abhilash Jindal, Co-founder and CTO of Mobile Enerlytics, San Francisco, CA

    Date:2020-02-20
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Debugging and optimization are largely ad-hoc manual processes, taking up 35-75 percent of programmers' time and costing billions of dollars annually. The problem is further exacerbated in the modern programming paradigm, where programmers stand on the "shoulders" of giant software frameworks to quickly program complex systems but have a limited understanding of the intricacies of the underlying system.

    In the first part of this talk, we will see how new power management APIs on smartphones seriously encumber programming. These APIs, when not used correctly, give rise to a whole new class of energy bugs called sleep disorder bugs. These bugs plague the complete smartphone software stack including apps, framework, kernel, and device drivers. I will present a taxonomy of sleep disorder bugs with precise bug definitions which enabled us to create a suite of automated tools to identify different flavors of sleep disorder bugs. I will then present an automated static analysis tool KLOCK that identified 63 sleep-induced time bugs, a subclass of sleep disorder bugs, in the Android Linux kernel.

    In the second part of the talk, we will study how traditional profilers fall short of providing an actionable diagnosis for optimizing programs: even after being presented with performance hotspots, developers still need significant manual inspection of and reasoning about the source code to answer the two remaining optimization questions: 1. Is there a more efficient implementation? 2. How can one arrive at it? I will present a new optimization methodology called differential profiling that automates answering these two questions. In particular, differential profiling automatically uncovers more efficient API usage by leveraging existing implementations of similar apps, which are plentiful in app marketplaces. Our differential energy profiling tool, DIFFPROF, employs a novel tree-matching algorithm to compare the energy profiler outputs of pairs of similar apps; it found 12 energy improvements in popular Android apps.
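    The comparison step at the heart of differential profiling can be sketched in a few lines. The code below is purely illustrative (the node-matching rule, the 2x threshold, and all names are assumptions, not DIFFPROF's actual algorithm): it walks two call trees annotated with energy costs and flags call paths where one app spends noticeably more energy than a similar app.

    ```python
    # Toy sketch: diff two energy-annotated call trees of similar apps and
    # report the call paths where app A spends >= `threshold` times app B.
    def diff_profiles(tree_a, tree_b, path=(), threshold=2.0):
        """Match child nodes by API name; flag large energy ratios."""
        findings = []
        cost_a, children_a = tree_a
        cost_b, children_b = tree_b
        if cost_b > 0 and cost_a / cost_b >= threshold:
            findings.append((path, cost_a, cost_b))
        for name, child_a in children_a.items():
            if name in children_b:  # only compare APIs both apps use
                findings += diff_profiles(child_a, children_b[name],
                                          path + (name,), threshold)
        return findings

    # Energy profiles as (cost_in_mJ, {callee_name: subtree}) pairs.
    app_a = (100, {"network": (80, {"poll": (70, {})}),   # polls the network
                   "ui":      (20, {})})
    app_b = (40,  {"network": (15, {"push": (10, {})}),   # push-based instead
                   "ui":      (25, {})})

    print(diff_profiles(app_a, app_b))
    # flags the root and the "network" path, where app A spends >= 2x app B
    ```

    A real tool must additionally decide which apps are "similar" and which tree nodes correspond; the dictionary-key matching above stands in for that harder step.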


    Bio:

    Abhilash Jindal received B.Tech. in Electrical Engineering from IIT Kanpur. He received his Ph.D. in Computer Science from Purdue University where he researched software-based approaches for extending mobile battery life. He has published papers in top system conferences including OSDI, ATC, Eurosys, Mobisys, Sigmetrics, and Mobicom. Currently, he is commercializing his research work serving as the CTO of Mobile Enerlytics, a silicon valley startup. His research interests include mobile systems, software engineering, and operating systems.



  • Social media-based epidemic intelligence


    Speaker:

    Dr. Aditya Joshi, Postdoctoral Fellow at CSIRO, the national science agency of Australia

    Date:2020-02-06
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Epidemic intelligence is the use of technology to detect and manage outbreaks of diseases. In this talk, I will present our work on early detection of epidemics using social media posts. This work is divided into two parts: (a) health mention classification (the use of natural language processing to detect illness reports in a social media post), and (b) health event detection (the use of time series monitoring to detect epidemics from a social media stream). For health mention classification, we propose a linguistically motivated deep learning-based architecture. For health event detection, we experiment with an asthma outbreak in Melbourne in 2016. I will end the talk with a discussion on opportunities for social media-based epidemic intelligence in India: the world's second-largest mobile phone market and a richly multilingual country with a distinctive social media usage pattern.
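    The health mention classification task above can be made concrete with a deliberately simple sketch. The talk's system is a deep learning model; the rule-based classifier, word lists, and example posts below are illustrative assumptions only, meant to show what the task is: separating genuine illness reports from figurative uses of disease words.

    ```python
    # Toy health mention classifier: a post is a "health mention" if it
    # contains a disease term and no cue of figurative/joking usage.
    import re

    DISEASE_TERMS = {"flu", "asthma", "fever", "migraine", "cough"}
    FIGURATIVE_CUES = {"like", "lol"}  # e.g. "spreads like the flu lol"

    def is_health_mention(post: str) -> bool:
        words = set(re.findall(r"[a-z]+", post.lower()))
        has_disease = bool(words & DISEASE_TERMS)
        figurative = bool(words & FIGURATIVE_CUES)
        return has_disease and not figurative

    posts = [
        "Down with the flu, third day of fever",         # genuine report
        "This song spreads like the flu lol",            # figurative
        "Heavy cough since the thunderstorm yesterday",  # genuine report
    ]
    print([is_health_mention(p) for p in posts])  # -> [True, False, True]
    ```

    Figurative usage is exactly where such keyword rules break down, which is the motivation for the linguistically informed deep architecture the talk proposes.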


    Bio:

    Aditya Joshi is a Postdoctoral Fellow at CSIRO, the national science agency of Australia. He holds a joint PhD degree from IIT Bombay and Monash University, Australia. He has worked for a database MNC, an artificial intelligence startup and a government research organisation, each of which provided a valuable perspective. His teaching talks span diverse forums: NLP schools organised by IITB and IIIT-H, a tutorial at EMNLP (a leading NLP conference), and a TEDx talk. His research has been covered in media stories in The Indian Express, The Times of London and The Australian. He received the best PhD thesis award from the IITB-Monash Research Academy.



  • A Wakeup Call: Databases in an Untrusted Universe


    Speaker:

    Prof. Amr El Abbadi, University of California, Santa Barbara (UCSB)

    Date:2020-01-31
    Time:15:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    Once upon a time, databases were structured, one size fit all, and they resided on machines that were trustworthy; even when they failed, they simply crashed. This era has come and gone, as eloquently stated by Mike Stonebraker. We now have key-value stores, graph databases, text databases, and a myriad of unstructured data repositories. However, we, as a database community, still cling to our 20th-century belief that databases always reside on trustworthy, honest servers. This notion has been challenged and abandoned by many other Computer Science communities, most notably the security and the distributed systems communities. The rise of the cloud computing paradigm, as well as the rapid popularity of blockchains, demands a rethinking of our naïve, comfortable belief in an ideal benign infrastructure. In the cloud, clients store their sensitive data on remote servers owned and operated by cloud providers. The Security and Crypto communities have made significant inroads in protecting both data and access privacy from malicious untrusted storage providers using encryption and oblivious data stores. The Distributed Systems and Systems communities have developed consensus protocols to ensure the fault-tolerant maintenance of data residing on untrusted, malicious infrastructure. However, these solutions face significant scalability and performance challenges when incorporated into large-scale data repositories. Novel database designs need to directly address the natural tension between performance, fault tolerance, and trustworthiness. This is a perfect setting for the database community to lead and guide. In this talk, I will discuss the state of the art in data management in malicious, untrusted settings, its limitations, and potential approaches to mitigate these shortcomings. As examples, I will use cloud and distributed databases that reside on untrustworthy, malicious infrastructure and discuss specific approaches for standard database problems like commitment and replication. I will also explore blockchains, which can be viewed as asset management databases in untrusted infrastructures.


    Bio:

    Amr El Abbadi is a Professor of Computer Science at the University of California, Santa Barbara. He received his B.Eng. from Alexandria University, Egypt, and his Ph.D. from Cornell University. His research interests are in the fields of fault-tolerant distributed systems and databases, focusing recently on Cloud data management and blockchain-based systems. Prof. El Abbadi is an ACM Fellow, AAAS Fellow, and IEEE Fellow. He was Chair of the Computer Science Department at UCSB from 2007 to 2011. He has served as a journal editor for several database journals, including The VLDB Journal, IEEE Transactions on Computers, and The Computer Journal. He has been Program Chair for multiple database and distributed systems conferences. He currently serves on the executive committee of the IEEE Technical Committee on Data Engineering (TCDE) and was a board member of the VLDB Endowment from 2002 to 2008. In 2007, Prof. El Abbadi received the UCSB Senate Outstanding Mentorship Award for his excellence in mentoring graduate students. In 2013, his student, Sudipto Das, received the SIGMOD Jim Gray Doctoral Dissertation Award. Prof. El Abbadi is also a co-recipient of the Test of Time Award at EDBT/ICDT 2015. He has published over 300 articles in databases and distributed systems and has supervised over 35 Ph.D. students.



  • Fairness in Algorithmic Decision Making


    Speaker:

    Dr. Abhijnan Chakraborty, Post-doctoral Researcher at the Max Planck Institute for Software Systems (MPI-SWS), Germany

    Date:2020-01-28
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Algorithmic (data-driven) decision making is increasingly being used to assist or replace human decision making in domains with high societal impact, such as banking (estimating creditworthiness), recruiting (ranking job applicants), judiciary (offender profiling), healthcare (identifying high-risk patients who need additional care) and journalism (recommending news stories). Consequently, in recent times, multiple research works have uncovered the potential for bias (unfairness) in algorithmic decisions in different contexts, and proposed mechanisms to control (mitigate) such biases. However, the emphasis of existing works has largely been on fairness in supervised classification or regression tasks, and fairness issues in other scenarios remain relatively unexplored. In this talk, I will cover our recent works on incorporating fairness in recommendation and matching algorithms in multi-sided platforms, where the algorithms need to fairly consider the preferences of multiple stakeholders. I will discuss the notions of fairness in these contexts and propose techniques to achieve them. I will conclude the talk with a list of open questions and directions for future work.
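    The tension between stakeholders in a multi-sided platform can be seen in a tiny example. The allocation rule below is not a method from the talk; it is an assumed illustration contrasting relevance-only ranking, which can give all exposure to one producer, with a simple producer-fair round-robin that guarantees each producer some exposure at a small relevance cost.

    ```python
    # Toy catalogue: (producer, relevance-to-consumer score).
    items = [
        ("p1", 0.95), ("p1", 0.90), ("p1", 0.85),
        ("p2", 0.80), ("p3", 0.75),
    ]

    def top_k_by_relevance(items, k):
        """Consumer-optimal ranking: ignores producers entirely."""
        return sorted(items, key=lambda it: -it[1])[:k]

    def producer_fair(items, k):
        """Round-robin over producers, best-scored item of each first."""
        by_producer = {}
        for prod, score in sorted(items, key=lambda it: -it[1]):
            by_producer.setdefault(prod, []).append((prod, score))
        ranking = []
        while len(ranking) < k and any(by_producer.values()):
            for prod in list(by_producer):
                if by_producer[prod] and len(ranking) < k:
                    ranking.append(by_producer[prod].pop(0))
        return ranking

    print([p for p, _ in top_k_by_relevance(items, 3)])  # -> ['p1', 'p1', 'p1']
    print([p for p, _ in producer_fair(items, 3)])       # -> ['p1', 'p2', 'p3']
    ```

    Choosing how much consumer relevance to trade for producer exposure is precisely the kind of fairness notion the talk formalizes.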


    Bio:

    Abhijnan Chakraborty is a Post-doctoral Researcher at the Max Planck Institute for Software Systems (MPI-SWS), Germany. He obtained his PhD from the Indian Institute of Technology (IIT) Kharagpur under the supervision of Prof. Niloy Ganguly (IIT Kharagpur) and Prof. Krishna Gummadi (MPI-SWS). During his PhD, he was awarded the Google India PhD Fellowship and the Prime Minister's Fellowship for Doctoral Research. Prior to his PhD, he spent two years at Microsoft Research India, working in the area of mobile systems. His current research interests fall under the broad theme of Computing and Society, covering the research areas of Social Computing, Information Retrieval and Fairness in Machine Learning. He has authored several papers in top-tier computer science conferences including WWW, KDD, AAAI, CSCW, ICWSM and MobiCom. His research works have won the best paper award at ASONAM'16 and the best poster award at ECIR'19. He is one of the recipients of an internationally competitive research grant from the Data Transparency Lab to advance his research on fairness and transparency in algorithmic systems. More details about him can be found at

    https://people.mpi-sws.org/~achakrab



  • Scalable algorithms for rapidly advancing DNA sequencing technologies


    Speaker:

    Dr. Chirag Jain, Postdoctoral Fellow with Dr. Adam Phillippy, Genome Informatics Section, National Institutes of Health (NIH), USA

    Date:2020-01-27
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Genomics continues to have an immense impact in the life sciences, from elucidating fundamental genetic mechanisms to providing new keys to understanding diseases. Catalyzed by breakthroughs in genomic technologies, high-throughput DNA sequencing has become a major generator of data, catching up with the most prominent big-data applications such as astronomy and social media. As a result, it has become challenging to design scalable DNA sequence algorithms with suitable accuracy guarantees.


    Bio:

    Dr. Chirag Jain is currently a postdoctoral fellow with Dr. Adam Phillippy in the Genome Informatics Section at the National Institutes of Health (NIH), USA. He received his doctorate in Computational Science from Georgia Tech under the supervision of Prof. Srinivas Aluru, and his bachelor's degree in Computer Science from IIT Delhi. He is a recipient of the Best GPU Paper Award at IEEE HiPC'14, Best Poster awards at RECOMB'19, CRIDC'19, and IITD-OpenHouse'14, and the Reproducibility-Initiative Award at ACM Supercomputing'16.



  • Security and Privacy of Connected Autonomous Vehicles


    Speaker:

    Dr. Vireshwar Kumar, postdoc at Purdue University

    Date:2020-01-22
    Time:00:00:00 (IST)
    Venue:Bharti Building #101
    Abstract:

    The upcoming smart transportation systems, which consist of connected autonomous vehicles, are poised to transform our everyday life. The sustainability and growth of these systems to their full potential will significantly depend on their robustness against security and privacy threats. Unfortunately, the communication protocols employed in these systems lack mainstream network security capabilities due to the energy constraints of the deployed platforms and the bandwidth constraints of the communication medium. In this talk, I will present the results of my efforts in anatomizing the two vital communication protocols employed in smart transportation: (1) the vehicle-to-everything (V2X) communication protocol, which is utilized to facilitate wireless communication among connected vehicles, and (2) the controller area network (CAN) protocol, which is utilized within an autonomous vehicle to enable real-time control of critical automotive components, including brakes. For each of these two protocols, I will first describe the inquisitive approach that led to the discovery of new security vulnerabilities. Then, through experiments on real-world systems, I will demonstrate how these vulnerabilities can be exploited to launch malicious attacks that evade the state-of-the-art defense mechanisms employed in these systems. I will conclude the talk by discussing the novel countermeasures required to mitigate these fundamental vulnerabilities and prevent their exploitation.


    Bio:

    Dr. Vireshwar Kumar is a Postdoctoral Research Associate in the Department of Computer Science at Purdue University. Vireshwar earned his B.Tech. in Electrical Engineering at the Indian Institute of Technology Delhi in 2009, and his Ph.D. in Computer Engineering at Virginia Tech in 2016. He was the recipient of the Outstanding Ph.D. Student Award from the Center for Embedded Systems for Critical Applications at Virginia Tech. He also had a short stint as a Project Assistant in the Department of Electrical Communication Engineering at the Indian Institute of Science in 2010. His research interests include discovering and mitigating security vulnerabilities in the communication protocols employed in cyber-physical systems, e.g., smart home, smart transportation, and smart city systems. Vireshwar's research work has been featured in top-tier security venues, including the ACM Conference on Computer and Communications Security (CCS) and IEEE Transactions on Information Forensics and Security (TIFS). He has also served on the TPC of flagship conferences, including the IEEE Conference on Communications and Network Security (CNS) and the IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN).



  • Redesigning how networks work to make the net"work"


    Speaker:

    Dr. Praveen Tammana, Postdoctoral researcher at Princeton University

    Date:2020-01-16
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Solving network-management problems in real time becomes increasingly difficult with a human in the loop and the ever-growing demands of modern applications for high performance, high availability, and high security. An alternative for making network management easier while meeting these demands is to work towards the ambitious goal of self-managing networks, which manage themselves with minimal human involvement.

    Practically deployable self-managing networks rely heavily on fine-grained telemetry data to understand what is going on in the network (visibility) and then make appropriate network-management decisions. However, building a telemetry system that provides the necessary visibility is extremely challenging and expensive, mainly because of the large resource overhead of monitoring and collecting telemetry data from massive networks that can have up to millions of links and thousands of high-speed switches. Today, due to limited resources, network infrastructure offers poor visibility; as a consequence, operators have to do a lot of manual work or estimation, which often leads to inaccurate decisions.

    In this talk, I will highlight a scalable network telemetry system, SwitchPointer, which obtains high visibility by making architectural changes across the network entities (e.g., switches, servers) in a large-scale data center. The main challenge is how to effectively use limited resources at switches and obtain high visibility. I will discuss in detail how SwitchPointer addresses this challenge and enables taking fast and accurate management decisions.
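    The division of labor described above can be sketched with simple data structures. The sketch below is an assumption-laden illustration of the pointer idea (the function names and record formats are not SwitchPointer's actual interfaces): switches keep only tiny per-epoch "pointers" to the end hosts whose packets they carried, while the bulky per-packet telemetry stays on the end hosts themselves.

    ```python
    # Toy model: lightweight state at switches, detailed telemetry at hosts.
    from collections import defaultdict

    switch_pointers = defaultdict(set)   # (switch, epoch) -> end hosts seen
    host_telemetry = defaultdict(list)   # host -> per-packet records

    def record_packet(switch, epoch, host, pkt_info):
        switch_pointers[(switch, epoch)].add(host)              # tiny switch state
        host_telemetry[host].append((switch, epoch, pkt_info))  # detail at host

    def diagnose(switch, epoch):
        """Fetch detailed records only from hosts the switch points to."""
        hosts = switch_pointers[(switch, epoch)]
        return [r for h in hosts for r in host_telemetry[h]
                if r[0] == switch and r[1] == epoch]

    record_packet("s1", 0, "hostA", {"latency_us": 40})
    record_packet("s1", 0, "hostB", {"latency_us": 900})  # a congested flow
    record_packet("s2", 1, "hostA", {"latency_us": 35})
    print(len(diagnose("s1", 0)))  # -> 2 records, fetched from just two hosts
    ```

    The point of the design is in the `diagnose` step: the operator touches only the hosts relevant to a given switch and epoch instead of collecting everything from everywhere.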


    Bio:

    Praveen Tammana is currently a Postdoctoral researcher at Princeton University. His research interests are in Systems and Networking. He develops scalable, fast, and easy-to-use real-world networked systems. His recent work focuses on two aspects of network management: network telemetry and traffic engineering. Prior to Princeton, he received his Ph.D. from The University of Edinburgh, UK, and his Master's degree from IIT-Madras, India. He has over three years of industrial experience working with Intel technology and Juniper Networks, at Bangalore, and Cisco Systems, San Jose, USA.



  • Text is not Text: Challenges in deep text understanding in professional domains


    Speaker:

    Vijay Saraswat, Global Head of AI R&D, Goldman Sachs

    Date:2020-01-14
    Time:16:30:00 (IST)
    Venue:SIT Building #001
    Abstract:

    Thanks to Big Data, Compute, Code (and, surprisingly, People), NLU research has entered a golden period. Hitherto, much of the impetus for this work has come from the desire to computationally understand "mass content": content from the web, social media, and news sources. Here, relatively shallow meaning-extraction techniques have worked reasonably well, without needing to use linguistically motivated, deep, constraint-based NL systems such as LFG and HPSG.

    Significant challenges arise, however, when one starts to work with text in professional domains (e.g., financial or legal). Here documents such as regulations, contracts, agreements (e.g., loan, credit, master service), financial prospectuses, company and analyst reports must be addressed.

    A contract (e.g. commercial line of credit) may involve multiple agreements with (typically, overriding) amendments, negotiated over many years. References are used at multiple semantic levels, and written using genre-specific conventions (e.g., |Casualty Event pursuant to Section 2.05(b)(ii)(B)|, |the meaning specified in Section 5.14|). Documents (e.g. EU regulations) may contain editing instructions that specify amendments to previous documents by referencing their clauses and supplying (quoted) replacement text.

    Such documents typically contain highly complex text (very low Flesch scores), with single sentences spreading over multiple paragraphs, with named sub-clauses. They may use specific conventions (e.g. parenthetical structures) to present parallel semantic propositions in a syntactically compact way. They may use technical terms, whose meaning is well-understood by professionals, but may not be available in formalized background theories. Moreover, documents typically contain definitional scopes so that the same term can have various meanings across documents.

    Further, documents usually define hypothetical or potential typical events (e.g. |events of default|), rather than actual (or fake) concrete events (e.g. |Barack Obama was born in Kenya|); text may be deontic, not factual. Text may specify complex normative definitions, while carving out a series of nested exceptions. It may include sophisticated argumentation structures (e.g. about company valuations) that capture critical application-specific distinctions. Ironically, in some cases we see rather contorted text (e.g. defining contingent interest rates) which is essentially a verbalization of mathematical formulas.

    In short: professional text has an enormously rich structure, refined over centuries of human business interactions. This structure is distant from the news (WSJ, NYT) corpora used for "broad domain" NL research, and even the "English as a Formal Language" approach of traditional linguists.

    We outline a long-term research agenda to computationalize such documents. We think of language processors as compilers, operating on the input document at varying levels of abstraction (abstract syntax tree, intermediate representation) and using a variety of techniques (partial evaluation, abstract interpretation) to generate meaning representations (rather than object code), intended primarily for use with back-end reasoners. We hypothesize the need to extend the highly sophisticated, large-parameter pattern matching characteristic of today's deep learning systems with linguistically rigorous analyses leveraging logical representations.


    Bio:

    Vijay Saraswat graduated from IIT Kanpur in 1982 with a B Tech in Electrical Engineering, and from Carnegie-Mellon University in 1989 with a PhD in Computer Science. Over thirty years, he has been a Member of the Research Staff at Xerox PARC, a Technology Consultant at AT&T Research, and a Distinguished Research Staff Member and Chief Scientist at IBM TJ Watson Research Center.

    Vijay's research interests span a number of areas in Computer Science, across AI, logic and programming systems. He is particularly known for his work on concurrent constraint programming, (with Mary Dalrymple and colleagues) on "glue semantics", and on the X10 programming language for high performance computation. His work has won numerous awards.

    Vijay joined Goldman Sachs in 2017 to help found corporate R&D. He currently leads the AI R&D work at GS, with team members in New York, Bangalore, Frankfurt, Hong Kong and other locations. The team is focused on natural language understanding, knowledge extraction, representation and reasoning. The team is looking to establish relationships with key academic and industrial partners, with engagements geared towards creation of public data-sets and associated publications.



  • Attention and its (mis)interpretation


    Speaker:

    Danish Pruthi, CMU

    Date:2020-01-14
    Time:14:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    Attention mechanisms are ubiquitous components in neural architectures applied to natural language processing. In addition to yielding gains in predictive accuracy, attention weights are often claimed to confer interpretability, purportedly useful for explaining why a model makes its decisions to stakeholders. In this talk, I will discuss a simple technique for training models to produce deceptive attention masks, calling into question the use of attention for interpretation. Further, the talk will characterize the cost of deception (in terms of accuracy) and shed light on the alternative mechanisms that the models use to cheat.
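    One intuition for why attention masks can be trained to deceive is worth making concrete. The toy numpy example below (an illustration, not the talk's training technique) shows that when two tokens carry the same value vector, any attention split between them produces an identical output, so a model can display near-zero weight on a "sensitive" token while its information still flows through.

    ```python
    # Two attention distributions, same attended output: weights alone do not
    # reveal which token's information the model actually used.
    import numpy as np

    values = np.array([[1.0, 0.5],    # value vector of a sensitive token
                       [1.0, 0.5],    # another token carrying the same content
                       [0.0, 0.2]])   # an unrelated token

    honest    = np.array([0.6, 0.1, 0.3])  # visibly attends to the sensitive token
    deceptive = np.array([0.1, 0.6, 0.3])  # appears to ignore it

    out_honest = honest @ values           # weighted sum of value vectors
    out_deceptive = deceptive @ values
    print(np.allclose(out_honest, out_deceptive))  # -> True
    ```

    Training with a penalty on attention over impermissible tokens, as the talk describes, pushes real models toward exactly this kind of weight shuffling while accuracy is preserved.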


    Bio:

    Danish Pruthi is a Ph.D. student at the School of Computer Science at Carnegie Mellon University. Broadly, his research aims to enable machines to understand and explain natural language phenomena. He completed his bachelor's degree in computer science from BITS Pilani, Pilani in 2015. He has also spent time doing research at Facebook AI Research, Microsoft Research, Google, and the Indian Institute of Science. He is a recipient of the Siebel Scholarship and the CMU Presidential Fellowship.



  • Models for Human-AI Decision Making


    Speaker:

    Prof. Ambuj Singh, University of California, Santa Barbara (UCSB)

    Date:2020-01-14
    Time:15:30:00 (IST)
    Venue:Bharti Building #101
    Abstract:

    Teams of the future will likely consist of humans and AI agents. To this end, we conducted experiments to explore how such teams integrate their individual decisions into a group response. We propose a set of models to explain team decision making using appraisal dynamics, Prospect Theory, Bayes rule, and eigenvector centrality. The first two models encode how individuals in a group appraise one another's expertise and the risk-rewards of their opinions. The second two models encode how groups integrate their members' preferences to form a group decision. Decision making in the experiments consists of two sequential tasks: a first task in which the teams decide either to report one of the presented options to a multiple-choice question or to choose one of the agents. If the teams decide to use an agent, the second decision-making task consists of integrating the agent's response with their previous responses and reporting an answer. Our findings suggest that the proposed models explain decision making in teams. Furthermore, the teams exceed the performance of the best human member most of the time, and of the AI agents in the first decision task but not in the second.


    Bio:

    Ambuj K. Singh is a Professor of Computer Science at the University of California, Santa Barbara, with part-time appointments in the Biomolecular Science and Engineering Program and the Technology Management Program. He received a B.Tech. degree from the Indian Institute of Technology, Kharagpur, and a Ph.D. degree from the University of Texas at Austin. His research interests are broadly in the areas of network science, machine learning, and bioinformatics. He has led a number of multidisciplinary projects including UCSB's Interdisciplinary Graduate Education Research and Training (IGERT) program on Network Science funded by the National Science Foundation (NSF). He is currently directing a Multidisciplinary University Research Initiative (MURI) on the Network Science of Teams. He has graduated over 25 Ph.D. students over his career.



  • A Regularization view of Dropout in Neural Networks


    Speaker:

    Ambar, PhD student, Johns Hopkins University

    Date:2020-01-13
    Time:12:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    Dropout is a popular training technique used to improve the performance of Neural Networks. However, a complete understanding of the theoretical underpinnings behind this success remains elusive. In this talk, we will take a regularization view of explaining the empirically observed properties of Dropout. In the first part, we will investigate the case of a single-layer linear neural network with Dropout applied to the hidden layer, and observe how the Dropout algorithm can be seen as an instance of Gradient Descent applied to a changing objective. Then we will see how training with Dropout is equivalent to adding a regularizer to the original network. With these tools, we will show that Dropout is equivalent to a nuclear-norm regularized problem, where the nuclear norm is taken on the product of the weight matrices of the network. Inspired by the success of Dropout, several variants have been proposed recently in the community. In the second part of the talk, we will analyze some of these variants (DropBlock and DropConnect) and obtain theoretical reasons for their success over vanilla Dropout. Finally, we will end with a unified theory to analyze Dropout variants and understand some of their implications.
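    The "Dropout adds a regularizer" claim can be checked numerically in the single-hidden-layer linear setting the talk starts from. The snippet below (a sketch of the standard identity, with arbitrary random weights, not the talk's full nuclear-norm result) enumerates all dropout masks exactly and confirms that the expected dropout loss equals the plain squared loss plus a data-dependent penalty on products of the two layers' weights.

    ```python
    # Exact check: E_b || y - W2 diag(b/q) W1 x ||^2
    #            = || y - W2 W1 x ||^2 + (p/q) * sum_i ||W2[:,i]||^2 (W1[i]·x)^2
    # where b_i ~ Bernoulli(q), q = 1 - p (inverted dropout).
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, m, p = 4, 3, 2, 0.5          # input, hidden, output dims; drop prob
    q = 1 - p
    W1 = rng.normal(size=(d, n))
    W2 = rng.normal(size=(m, d))
    x = rng.normal(size=n)
    y = rng.normal(size=m)

    # Expected dropout loss, enumerating all 2^d masks exactly.
    expected = 0.0
    for mask in itertools.product([0, 1], repeat=d):
        b = np.array(mask)
        prob = (q ** b.sum()) * (p ** (d - b.sum()))   # P(this mask)
        out = W2 @ ((b / q) * (W1 @ x))                # dropped-out forward pass
        expected += prob * np.sum((y - out) ** 2)

    # Closed form: plain loss plus the dropout-induced regularizer.
    plain = np.sum((y - W2 @ W1 @ x) ** 2)
    reg = (p / q) * sum(np.sum(W2[:, i] ** 2) * (W1[i] @ x) ** 2
                        for i in range(d))
    print(np.isclose(expected, plain + reg))  # -> True
    ```

    Minimizing that product-of-weights penalty over all factorizations of the end-to-end map is what connects this identity to the nuclear norm in the talk.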


    Bio:

    Ambar is a PhD student in the Computer Science Department at Johns Hopkins University. He is advised by René Vidal and affiliated with the Mathematical Institute for Data Science and the Vision Lab at JHU. Previously, he obtained his Bachelor's degree in Computer Science from IIIT Delhi. His current research interest lies in the theory of deep learning, specifically, in theoretically understanding the properties induced by common deep learning techniques on the optimization of deep architectures. He is currently working on understanding the regularization properties induced by common tricks used in training DNNs. His other interests include understanding adversarial examples generated for computer vision systems. Ambar is a MINDS Data Science Fellow, and his research is also supported by the IARPA DIVA and the NSF Optimization for Deep Learning grants.



  • Top Ten Challenges to Realizing Artificial Intelligence (AI)


    Speaker:

    Dr. Ravi Kothari, Ashoka University

    Date:2020-01-13
    Time:16:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    The many successes of deep-learning notwithstanding, there remain fundamental impediments to realizing true Artificial Intelligence (AI). In this widely accessible talk, we discuss the top ten challenges to realizing true AI and highlight some of the ongoing efforts towards resolving these challenges.


    Bio:

    Dr. Ravi Kothari started his professional career as an Assistant Professor in the Department of Electrical and Computer Engineering of the University of Cincinnati, OH, USA where he later became a tenured Associate Professor and Director of the Artificial Neural Systems Laboratory. After about 10 years in academia, he joined IBM Research and held several positions over the years including that of Chief Scientist of IBM Research, India and the Global Chief Architect of the IBM-Airtel outsourcing account where he introduced the first-ever stream based processing of telecom data (Airtel is one of the world's largest full service telecom providers and the Chief Architect is responsible for the IT strategy, design and realization across more than 15 countries). He also became IBM's first Distinguished Engineer from India. After about 15 years in IBM, he joined Accenture for a short stint as a Fellow, Technical Labs prior to returning to academia as a Professor of Computer Science at Ashoka University.

    Dr. Kothari's expertise lies in Machine Learning, Pattern Recognition, AI, Data Mining, Big Data and other data-driven initiatives. His present research focuses on multiple aspects of Artificial Intelligence including "creative" machines.

    Dr. Kothari has served as an Associate Editor of the IEEE Transactions on Neural Networks, IEEE Transactions on Knowledge and Data Engineering, Pattern Analysis and Applications (Springer) as well as on the program committees of various conferences. He has been an IEEE Distinguished Visitor for many years, a member of the IBM Academy of Technology and the IBM Industry Academy. He was a recipient of the Gerstner Award (IBM's highest team award), the Best of IBM award (IBM's highest individual award) and is a fellow of the Indian National Academy of Engineering.



  • Parameterized Complexity of Network Design Problems


    Speaker:

    Dr. Pranabendu Misra, postdoctoral fellow at the Max Planck Institute for Informatics, Saarbrucken, Germany

    Date:2020-01-06
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Network Design Problems, which concern designing minimum-cost networks that satisfy a given set of "connectivity constraints", are very well studied in computer science and combinatorial optimization. Almost all of these problems are NP-hard, and a number of results are known about them in the realm of approximation algorithms. Parameterized Complexity is a different framework for dealing with computational intractability, where we typically try to design fast algorithms that solve the problem on those instances which admit a "small cost solution". In this talk, we will look at some recent results on the parameterized complexity of network design problems, and future directions for research.
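    The "fast when the solution is small" idea can be illustrated with the textbook parameterized algorithm for Vertex Cover (a classical example chosen for illustration; it is not taken from the talk). Its bounded search tree runs in O(2^k · poly) time: exponential only in the parameter k, the budget for the solution, not in the size of the network.

    ```python
    # Classic FPT branching: every edge forces one of its endpoints into the
    # cover, so branch on both choices and decrement the budget k.
    def has_vertex_cover(edges, k):
        """Decide if the graph (as an edge list) has a vertex cover of size <= k."""
        if not edges:
            return True          # nothing left to cover
        if k == 0:
            return False         # edges remain but no budget left
        u, v = edges[0]          # one endpoint of this edge must be chosen
        return (has_vertex_cover([e for e in edges if u not in e], k - 1) or
                has_vertex_cover([e for e in edges if v not in e], k - 1))

    # A 4-cycle has a vertex cover of size 2 but not of size 1.
    cycle = [(1, 2), (2, 3), (3, 4), (4, 1)]
    print(has_vertex_cover(cycle, 2))  # -> True
    print(has_vertex_cover(cycle, 1))  # -> False
    ```

    The recursion tree has at most 2^k leaves regardless of how many vertices or edges the graph has, which is exactly the kind of guarantee parameterized algorithms for network design problems aim for.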


    Bio:
    Pranabendu Misra is a postdoctoral fellow at the Max Planck Institute for Informatics, Saarbrucken, Germany.
    Before that, he was a researcher at the Department of Informatics, University of Bergen, Norway.
    He obtained his PhD in Computer Science from the Institute of Mathematical Sciences, Chennai, India, advised by Saket Saurabh.
    He did his undergraduate studies at the Chennai Mathematical Institute in Mathematics and Computer Science. 
    His primary research interests lie in graph theory and algorithms focusing on Parameterized Complexity. 

    He has also worked on problems in approximation algorithms, matroids, fault-tolerant graphs, matching under preferences, and de-randomization.



  • Congruence Closure Algorithms for Uninterpreted and Interpreted Symbols


    Speaker:

    Prof. Deepak Kapur

    Date:2020-01-02
    Time:15:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Algorithms for computing congruence closure and conditional congruence closure of conditional ground equations over uninterpreted and interpreted symbols are presented. The algorithms are based on a recently developed framework for computing (conditional) congruence closure by abstracting nonflat terms by constants, as first proposed in Kapur's congruence closure algorithm (RTA 1997). This framework is general and flexible. It is also used to develop (conditional) congruence closure algorithms for cases when associative-commutative function symbols can have additional properties, including idempotency, nilpotency, and/or identities, as well as their various combinations. It also works for Horn properties, including extensionality of function symbols. The concept of a Horn closure of ground equations with uninterpreted as well as interpreted symbols is proposed to decide Horn equations directly.
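The flattening idea, abstracting every nonflat subterm by a fresh constant and then closing a union-find structure under congruence, can be sketched as follows. This is a naive fixpoint version for uninterpreted symbols only; the class and method names are mine, not from the talk:

```python
import itertools

class CongruenceClosure:
    """Naive congruence closure: flatten terms into constants, then close a
    union-find structure under congruence by fixpoint iteration."""

    def __init__(self):
        self.parent = {}        # union-find over constants
        self.apps = []          # flat definitions: (fname, arg_ids, result_id)
        self.intern_tbl = {}    # (fname, arg_ids) -> abstracting constant

    def _add(self, x):
        self.parent.setdefault(x, x)

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

    def intern(self, t):
        """Abstract each nonflat subterm by a fresh constant (flattening)."""
        if isinstance(t, str):          # a constant
            self._add(t)
            return t
        f, *args = t                    # a term ('f', arg1, arg2, ...)
        ids = tuple(self.intern(a) for a in args)
        key = (f, ids)
        if key not in self.intern_tbl:
            c = f'_c{len(self.intern_tbl)}'
            self._add(c)
            self.intern_tbl[key] = c
            self.apps.append((f, ids, c))
        return self.intern_tbl[key]

    def _close(self):
        """Merge results of congruent applications until fixpoint."""
        changed = True
        while changed:
            changed = False
            for (f, a, r1), (g, b, r2) in itertools.combinations(self.apps, 2):
                if (f == g and len(a) == len(b)
                        and all(self.find(x) == self.find(y) for x, y in zip(a, b))
                        and self.find(r1) != self.find(r2)):
                    self.union(r1, r2)
                    changed = True

    def assert_eq(self, s, t):
        self.union(self.intern(s), self.intern(t))
        self._close()

    def equal(self, s, t):
        a, b = self.intern(s), self.intern(t)
        self._close()
        return self.find(a) == self.find(b)
```

The classic sanity check: from f(f(a)) = a and f(f(f(a))) = a, congruence closure derives f(a) = a.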


    Bio:

    Professor, Department of Computer Science, The University of New Mexico, Albuquerque, NM, USA



  • Novel and Efficient Techniques for Guaranteeing Routing and Protocol Level Deadlock Freedom in Interconnection Networks


    Speaker:

    Mayank Parasar (Georgia Tech)

    Date:2020-01-02
    Time:12:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    Interconnection networks are the communication backbone for any system. They occur at various scales: from on-chip networks between processing cores, to supercomputers between compute nodes, to data centers between high-end servers. One of the most fundamental challenges in an interconnection network is that of deadlocks. Deadlocks can be of two types: routing level deadlocks and protocol level deadlocks. Routing level deadlocks occur because of a cyclic dependency between packets trying to acquire buffers, whereas protocol level deadlocks occur because a response message is stuck indefinitely behind a queue of request messages. Both kinds of deadlock render the forward movement of packets impossible, leading to complete system failure.

    Prior work either restricts the path that packets take in the network or provisions an extra set of buffers to resolve routing level deadlocks. For protocol level deadlocks, separate sets of buffers are reserved at every router for each message class. Naturally, proposed solutions either restrict the packet movement resulting in lower performance or require higher area and power.

    In my work, I propose a new set of efficient techniques for providing both routing and protocol level deadlock freedom. My techniques provide periodic forced movement to packets in the network, which breaks any cyclic dependency of packets. Breaking this cyclic dependency results in resolving routing level deadlocks. Moreover, because of periodic forced movement, the response message is never stuck indefinitely behind the queue of request messages; therefore, my techniques also resolve protocol level deadlocks.


    Bio:

    Mayank Parasar is a fifth-year Ph.D. candidate in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. He received an M.S. in Electrical and Computer Engineering from Georgia Tech in 2017 and a B.Tech. in Electrical Engineering from the Indian Institute of Technology (IIT) Kharagpur in 2013.

    He works in computer architecture with a research focus on proposing breakthrough solutions in the field of interconnection networks, memory systems, and system software/application layer co-design. His dissertation, titled Novel and Efficient Techniques for Guaranteeing Routing and Protocol Level Deadlock Freedom in Interconnection Networks, formulates techniques that guarantee deadlock freedom with a significant reduction in both area and power budget.

    He held the position of AMD Student Ambassador at Georgia Tech in the year 2018-19. He received the Otto & Jenny Krauss Fellow award in the year 2015-16.



  • Groebner Bases: Universality, Parametricity and Canonicity


    Speaker:

    Prof. Deepak Kapur,

    Date:2020-01-01
    Time:11:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    The talk will integrate the concept of a universal Groebner basis, which serves as a Groebner basis for all admissible term orderings, with that of a parametric (more popularly called comprehensive) Groebner basis, which serves as a Groebner basis for all possible specializations of parameters. Three different but related approaches will be presented. The first one extends Kapur's algorithm for computing a parametric Groebner basis, in which, along with branching based on whether the head coefficient is nonzero, branching on ordering constraints is also done in order to first choose a term that could serve as the head term. The second one is based on Kapur, Sun and Wang's algorithm for computing a comprehensive Groebner basis and system, but uses a reduced universal Groebner basis to generate a universal parametric Groebner basis. The third one starts with a reduced Groebner basis using one arbitrary ordering and then generates a universal comprehensive Groebner basis by incrementally changing the orderings along with partitioning specializations. The result of these algorithms is a mega Groebner basis that works for every admissible ordering as well as for any specialization of parameters.


    Bio:

    Professor, Department of Computer Science, The University of New Mexico, Albuquerque, NM, USA




2019 talks

  • Randomized Algorithms in Graph Connectivity: New Takes on Old Ideas


    Speaker:

    Prof. Debmalya Panigrahi (Duke University)

    Date:2019-12-10
    Time:14:30:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    In this talk, I will describe new interpretations and extensions of classic randomized algorithms in graph connectivity from the 1990s. These new insights lead to better algorithmic and combinatorial results in graph and hypergraph connectivity, including min-cut algorithms, sparsification techniques, and reliability analysis. The talk will be self-contained.
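One such 1990s classic is Karger's randomized contraction algorithm for global min-cut (a canonical example of this line of work; the choice of example and the code are mine, not from the talk). A compact sketch using the random-permutation implementation of contraction:

```python
import random

def karger_min_cut(edges, n, trials=200):
    """Karger's contraction algorithm: repeatedly contract a random edge
    until two super-nodes remain; the edges crossing them form a cut.
    Each trial finds a minimum cut with probability >= 1/C(n,2), so we
    repeat and keep the best.  Contraction is implemented by processing
    edges in a uniformly random order with union-find."""
    best = len(edges) + 1
    for _ in range(trials):
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        comps, pool = n, edges[:]
        random.shuffle(pool)
        for u, v in pool:
            if comps == 2:
                break
            ru, rv = find(u), find(v)
            if ru != rv:                        # contract edge (u, v)
                parent[ru] = rv
                comps -= 1
        # edges whose endpoints ended up in different super-nodes form the cut
        cut = sum(1 for u, v in edges if find(u) != find(v))
        best = min(best, cut)
    return best
```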


    Bio:

    Prof. Debmalya Panigrahi (Duke University)



  • O(d)-competitive Convex Body Chasing


    Speaker:

    Prof. Anupam Gupta (Carnegie Mellon University)

    Date:2019-12-10
    Time:15:45:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    We study the problem of chasing convex bodies online: given a sequence of convex bodies K_t in R^d, the algorithm must respond with points x_t in K_t in an online fashion (i.e., x_t is chosen before K_{t+1} is revealed). The objective is to minimize the total distance between successive points in this sequence. We discuss how this problem generalizes well-studied problems in online algorithms, and give an algorithm that is O(d)-competitive.
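The online protocol can be made concrete with axis-aligned boxes, for which the nearest point of K_t is a coordinate-wise clamp. The greedy move-to-nearest-point strategy below only illustrates the setting; it is not the O(d)-competitive algorithm of the talk, and the function names are mine:

```python
import numpy as np

def chase_greedy(boxes, x0):
    """Greedy chasing heuristic for axis-aligned boxes.

    boxes: list of (lo, hi) coordinate-bound arrays revealed online.
    At each step, move to the nearest point of the current body (here a
    coordinate-wise clamp) and pay the distance moved.  Greedy is known
    to be non-competitive in general; it only demonstrates the protocol.
    """
    x = np.asarray(x0, dtype=float)
    total, pts = 0.0, []
    for lo, hi in boxes:
        y = np.clip(x, lo, hi)           # nearest point of the box to x
        total += np.linalg.norm(y - x)   # movement cost for this round
        x = y
        pts.append(x)
    return pts, total
```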


    Bio:

    Prof. Anupam Gupta (Carnegie Mellon University)



  • Polynomial Pass Lower Bounds for Graph Streaming Algorithms


    Speaker:

    Prof. Sanjeev Khanna (Univ. of Pennsylvania)

    Date:2019-12-10
    Time:16:45:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    We present new lower bounds that show that a polynomial number of passes are necessary for solving some fundamental graph problems in the streaming model of computation. For instance, we show that any streaming algorithm that finds a minimum s-t cut in a weighted undirected graph needs essentially quadratic space unless it makes a polynomial number of passes over the graph stream.

    To prove our lower bounds, we introduce and analyze a new four-player communication problem that we refer to as the hidden-pointer chasing problem. The hidden-pointer chasing problem appears flexible enough to find other applications and is therefore interesting in its own right. To showcase this, we present an interesting application of this problem beyond streaming algorithms. Using a reduction from hidden-pointer chasing, we prove that any algorithm for submodular function minimization needs to make essentially a quadratic number of value queries to the function unless it has a polynomial degree of adaptivity.

    This is joint work with Sepehr Assadi and Yu Chen.


    Bio:

    Prof. Sanjeev Khanna (Univ. of Pennsylvania)



  • Applying Predicate Detection to Discrete Optimization Problems


    Speaker:

    Prof. Vijay Garg, Department of Electrical & Computer Engineering , The University of Texas, Austin

    Date:2019-11-15
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    We present a method to design parallel algorithms for constrained combinatorial optimization problems. Our method solves and generalizes many classical combinatorial optimization problems, including the stable marriage problem, the shortest path problem and the market clearing price problem. These three problems are solved in the literature using the Gale-Shapley algorithm, Dijkstra's algorithm, and the Demange, Gale, Sotomayor algorithm. Our method solves all these problems by casting them as searching for an element that satisfies an appropriate predicate in a distributive lattice. Moreover, it solves generalizations of all these problems --- namely, finding the optimal solution satisfying additional constraints called lattice-linear predicates. For stable marriage problems, an example of such a constraint is that Peter's regret is less than that of Paul. Our algorithm, called Lattice-Linear Predicate Detection (LLP), can be implemented in parallel with lock-free synchronization. It just assumes atomicity of reads and writes. In addition to finding the optimal solution, our method is useful in enumerating all constrained stable matchings, and all constrained market clearing price vectors.
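As a small illustration of the lattice-linear style (my own toy rendering of the idea, not code from the talk): for single-source shortest paths with strictly positive edge weights, take G to be a vector of tentative costs; an index v is "forbidden" while G[v] undershoots every incoming option, and advancing forbidden indices in any order reaches the least vector satisfying the predicate:

```python
import math

def llp_shortest_paths(n, edges, src):
    """LLP-style sketch for single-source shortest paths, assuming strictly
    positive edge weights.  edges: list of (u, v, w) directed edges.
    Index v is forbidden while G[v] < min over incoming (u, w) of G[u] + w;
    advancing forbidden indices converges to the distance vector."""
    inc = [[] for _ in range(n)]
    for u, v, w in edges:
        inc[v].append((u, w))
    G = [0.0] * n                    # start at the bottom of the lattice
    changed = True
    while changed:
        changed = False
        for v in range(n):
            if v == src:
                continue
            best = min((G[u] + w for u, w in inc[v]), default=math.inf)
            if G[v] < best:          # v is forbidden: advance it
                G[v] = best
                changed = True
    return G
```

Note that the indices can be scanned in any order, or advanced concurrently, which is the point of the lattice-linear framework.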


    Bio:

    Vijay Garg is a Cullen Trust Endowed Professor in the Department of Electrical & Computer Engineering at The University of Texas at Austin. He received his Ph.D. in Computer Science at the University of California at Berkeley and B. Tech. in computer science at IIT, Kanpur. His research interests are in distributed computing, discrete event systems and lattice theory. He is the author of "Elements of Distributed Computing" (Wiley, 2002), "Introduction to Lattice Theory with Computer Science Applications" (Wiley, 2015), and "Modeling and Control of Logical Discrete Event Systems" (Springer, 2012). He is an IEEE Fellow.



  • Synthesis and Validation of Robust Cyber Physical Systems


    Speaker:

    Prof. Soumyajit Dey (http://cse.iitkgp.ac.in/~soumya/)

    Date:2019-11-14
    Time:10:30:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Cyber Physical Systems (CPS) define the complex interaction of sensing, control law computation and actuation in real time for complex networks of physical `plants'. While control theorists have dealt with CPS modeling from a theoretical point of view, platform non-idealities play their role in determining the actual control performance. Also, platform vulnerabilities and communication interfaces can be exploited by attackers to degrade the control performance of such systems. We explore the usefulness of Formal Methods tools in computing reliability guarantees for such systems. We also explore how intelligent scheduling of workloads can affect CPS performance in the automotive context.


    Bio:

    Soumyajit Dey joined the Dept. of CSE, IIT Kharagpur in May 2013. He worked at IIT Patna as an assistant professor in the CSE dept. from the beginning of Spring 2012 to the end of Spring 2013. He received a B.E. degree in Electronics and Telecommunication Engg. from Jadavpur University, Kolkata in 2004. He received an M.S. followed by a PhD degree in Computer Science from the Indian Institute of Technology, Kharagpur in 2007 and 2011 respectively. His research interests include 1) synthesis and verification of safe, secure and intelligent Cyber Physical Systems, and 2) runtime systems for heterogeneous platforms.



  • Investigations on Adaptive Invisible Watermarking in Transform Domain


    Speaker:

    Dr. Prasanth Vaidya Sanivarapu

    Date:2019-11-11
    Time:11:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    In this talk, I will highlight my research work, which focused on copyright protection and ownership identification of multimedia data. Watermarking is the best approach to overcome the challenges faced during the distribution of digital data. Digital watermarking is an effective technology for embedding data (a watermark signal) into multimedia data. The authentic owner can exhibit ownership by inserting a robust watermark in an image and later extracting it for verification. Since spatial-domain methods are fragile and cannot withstand attacks, the transform domain is utilized for embedding the watermark to provide robustness. Also, factor values are calculated dynamically in most of the methods, which makes the schemes fragile. To overcome these drawbacks, we have proposed adaptive blind, semi-blind and non-blind watermarking schemes with robustness, imperceptibility, security and fidelity. A summary of the proposed watermarking schemes is given below:

    1. A blind watermarking scheme based on Binary Robust Invariant Scalable Keypoints (BRISK) features, contourlet transform (CT) and QR decomposition is proposed.
    2. An adaptive non-blind watermarking in wavelet domain is proposed for gray-scale images where scaling and embedding factors are calculated using the Bhattacharyya distance and kurtosis.
    3. By extending the previous work, a robust blind watermarking scheme on color images is designed. To make the scheme more robust against attacks, Bhattacharyya distance is used to calculate the embedding factor values adaptively. A bit manipulation gives the scheme an additional advantage to protect the watermark from attacks.
    4. A semi-blind watermarking scheme using multiple decompositions is designed. To provide security to the watermark, Arnold transform is utilized in watermark encryption by generating a secret key. The embedding is done by applying Wavelet Transform, CT, Schur decomposition and SVD in sequence and vice-versa for extraction.
    5. A robust non-blind watermarking method with game-theory techniques is designed. In this method, optimal solution is provided to the decision makers. To attain the better trade-off between the embedder and the extractor an optimal solution is calculated by using Iterative Elimination of Strictly Dominant Strategies (IESDS).
    6. A multipurpose color image semi-blind watermarking scheme in the wavelet domain using multiple decomposition techniques is designed; three gray-scale watermarks are embedded in a single color image to provide better authenticity.
    7. Extending the image watermarking to video watermarking, a robust and blind watermarking for color videos using redundant wavelet transform (RDWT) and SVD is proposed.
    8. In medical watermarking, patient data is hidden in an ECG signal using transform-domain watermarking.
    Finally, watermarking technology can provide security and authentication for multimedia data in different sectors.
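A minimal transform-domain example (my own simplified additive Haar-DWT scheme, far simpler than the BRISK/CT/QR, SVD and Schur schemes above): embed a watermark into the LL subband of a one-level wavelet decomposition and extract it non-blindly using the original cover image:

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform; returns (LL, LH, HL, HH) subbands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return (a+b+c+d)/4, (a-b+c-d)/4, (a+b-c-d)/4, (a-b-c+d)/4

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    h, w = LL.shape
    img = np.empty((2*h, 2*w))
    img[0::2, 0::2] = LL + LH + HL + HH
    img[0::2, 1::2] = LL - LH + HL - HH
    img[1::2, 0::2] = LL + LH - HL - HH
    img[1::2, 1::2] = LL - LH - HL + HH
    return img

def embed(cover, wm, alpha=0.05):
    """Additive embedding in the LL subband; small alpha keeps it imperceptible."""
    LL, LH, HL, HH = haar2d(cover)
    return ihaar2d(LL + alpha * wm, LH, HL, HH)

def extract(marked, cover, alpha=0.05):
    """Non-blind extraction: compare LL subbands of marked and original cover."""
    return (haar2d(marked)[0] - haar2d(cover)[0]) / alpha
```

The watermark must match the LL subband's shape (half the cover's dimensions); robustness to attacks would require the adaptive factor selection discussed above.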


    Bio:

    Dr. Prasanth Vaidya Sanivarapu



  • Learning the Koopman Operator for Dynamic Data


    Speaker:

    Prof. Chandrajit Bajaj

    Date:2019-11-11
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Recent work in the study of dynamic systems has focused on data-driven decomposition techniques that approximate the action of the Koopman operator on observable functions of the underlying phenomena.
    In particular, the data-driven method of dynamic mode decomposition (DMD) has been explored, with multiple variants of the algorithm in existence, including extended DMD, DMD in reproducing kernel Hilbert spaces, a Bayesian framework, a variant for stochastic dynamical systems, and a variant that uses deep neural networks. In this talk, I shall briefly summarize the large body of existing work on data-driven learning of Koopman operator models, and then describe new sampling-based sketching approaches (SketchyCoreSVD, SketchyCoreTucker), together with matrix-valued kernels, to achieve a deep learning architecture for accelerated Koopman operator approximations of dynamic observable data. Examples are drawn from remote sensing, biomedical cardiac magnetic resonance video, and time-series reactive flow simulations of a single ejector combustion process.
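The core of plain (exact) DMD is a rank-r regression of time-shifted snapshots via the SVD; a minimal numpy sketch (function name mine, shown only to ground the terminology above):

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD: given snapshot matrices with Y ~ A X column-wise, return
    the r leading eigenvalues and (unscaled) modes of the best-fit linear
    operator A, without ever forming A explicitly."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r, :]             # rank-r truncation
    Atilde = U.conj().T @ Y @ Vt.conj().T / s         # projected r x r operator
    eigvals, W = np.linalg.eig(Atilde)                # DMD eigenvalues
    modes = (Y @ Vt.conj().T / s) @ W                 # exact DMD modes (up to scaling)
    return eigvals, modes
```

The DMD eigenvalues approximate eigenvalues of the Koopman operator restricted to the span of the observables; the sketching methods mentioned above accelerate precisely this SVD step.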


    Bio:

    Prof. Chandrajit Bajaj is a professor at the Department of Computer Science and Oden Institute of Engineering and Sciences, University of Texas at Austin.  He is a Distinguished Alumnus of IIT Delhi, and a very well-known and respected researcher in computer graphics and vision and applications in medicine, etc.

    http://www.cs.utexas.edu/~bajaj



  • "Solar + Storage + LED + IoT = Trillion"


    Speaker:

    Srinivasan Keshav, University of Cambridge

    Date:2019-11-01
    Time:16:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Recent technological advances in the areas of solar photovoltaics, Lithium-Ion-based energy storage, light-emitting diodes, and the Internet of Things will substantially change the energy, electrical grid, building, and transportation sectors. Some experts estimate that these will result in economic activity of about Trillion over the next two decades. In this talk, I will touch upon the recent advances in these technical areas and speculate on their economic impacts. I will also present some of my recent results that attempt to address these changes.


    Bio:

    Srinivasan Keshav is the Robert Sansom Professor of Computer Science at the University of Cambridge. He received a B.Tech in Computer Science and Engineering from IIT Delhi in 1986 and Ph.D. in Computer Science from the University of California, Berkeley in 1991. He was subsequently an MTS at AT&T Bell Labs and an Associate Professor at Cornell. In 1999 he left academia to co-found Ensim Corporation and GreenBorder Technologies Inc. He has been at the University of Waterloo since 2003, holding first a Canada Research Chair and then the Cisco Chair in Smart Grid. His honours include being named an ACM and IEEE Fellow, two Test of Time awards from ACM SIGCOMM, and best paper awards at ACM SIGCOMM, MOBICOM, and eEnergy. He is the author of two graduate textbooks on computer networking and has served as Chair of ACM SIGCOMM and as Associate Dean of Graduate Studies in the Faculty of Mathematics, University of Waterloo.



  • Google Research India: A Peek at Research Activities and Plans


    Speaker:

    Dr Manish Gupta

    Date:2019-10-31
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    We provide an overview of research activities underway and planned at the newly established AI research lab by Google in India. Some of the fundamental problems in machine learning (ML) we are investigating include a better understanding of the limitations of deep learning systems and improving their robustness. We describe plans to conduct research at the intersection of software engineering, programming languages, and ML to make software systems built using ML methodology more robust, secure, and fair. We describe Google products like GPay and Assistant, which are being used by tens of millions of users in India, and plans to apply ML in areas like natural language understanding and social networks, both to address the unique challenges in the Indian context (e.g., code mixing, diversity of languages, dialects and accents) and to improve their overall capabilities (e.g., improving the ability of the Assistant to handle conversations that require a combination of knowledge, reasoning and personalization, and improving fraud detection in GPay). We describe a critical need and an opportunity to improve health outcomes globally at lower costs, and specific challenges in countries like India with an acute shortage of doctors. We describe a convolutional neural network based solution developed by our researchers for more effective screening of diabetic retinopathy, which is being deployed at hospitals in India. Finally, we describe our AI for Social Good program, through which we are addressing issues like public health, education and wildlife conservation in partnership with NGOs.


    Bio:

    Dr. Manish Gupta is the Director of Google Research India, a new AI research lab recently announced by Google. He holds an additional appointment as Infosys Foundation Chair Professor at IIIT Bangalore.  Previously, Manish has led VideoKen, a video technology startup, and the research centers for Xerox and IBM in India. As a Senior Manager at the IBM T.J. Watson Research Center in Yorktown Heights, New York,  Manish led the team developing system software for the Blue Gene/L supercomputer. IBM was awarded a National Medal of Technology and Innovation for Blue Gene by US President Barack Obama in 2009. Manish holds a Ph.D. in Computer Science from the University of Illinois at Urbana Champaign. He has co-authored about 75 papers, with more than 7,000 citations in Google Scholar, and has been granted 19 US patents. While at IBM, Manish received two Outstanding Technical Achievement Awards, an Outstanding Innovation Award and the Lou Gerstner Team Award for Client Excellence. Manish is a Fellow of ACM and the Indian National Academy of Engineering, and a recipient of a Distinguished Alumnus Award from IIT Delhi.



  • Are you Happy, Angry or Sad: Identifying Emotions from Walking Using Affective and Deep Features


    Speaker:

    Aniket Bera, University of Maryland

    Date:2019-10-31
    Time:16:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    We present a new data-driven model and algorithm to identify the perceived emotions of individuals based on their walking styles. Given an RGB video of an individual walking, we extract his/her walking gait as a series of 3D poses. Our goal is to exploit the gait features to classify the emotional state of the human into one of four emotions: happy, sad, angry, or neutral. Our perceived-emotion recognition approach uses deep features learned via an LSTM on labeled emotion datasets. Furthermore, we combine these features with affective features computed from gaits using posture and movement cues. We show that our mapping between the combined feature space and the perceived emotional state provides 80.07% accuracy in identifying the perceived emotions. In addition to classifying discrete categories of emotions, our algorithm also predicts the values of perceived valence and arousal from gaits.


    Bio:

    Dr. Aniket Bera is an Assistant Research Professor at the Department of Computer Science and the University of Maryland Institute for Advanced Computer Studies (UMIACS). Prior to this, he was a Research Assistant Professor at the University of North Carolina at Chapel Hill. He received his Ph.D. in 2017 from the University of North Carolina at Chapel Hill. His core research interests are in Computer Graphics, AI, Social Robotics, Data-Driven Crowd Simulations, Cognitive modeling: Knowledge, reasoning, and planning for intelligent characters. He is working with the Geometric Algorithms for Modeling, Motion, and Animation (GAMMA) group. His current research focuses on the social perception of intelligent agents and robotics.
    Webpage: https://www.cs.umd.edu/~ab/



  • An Information Session on Working with Indian Political Data by TCPD


    Speaker:

    Mohit Kumar (Data Scientist), Priyamvada Trivedi (Associate Director, TCPD)

    Date:2019-10-18
    Time:15:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    The Trivedi Centre for Political Data (TCPD) is the product of a joint venture between the Political Science and Computer Science departments at Ashoka University. In this session, drawing from projects currently underway, we will discuss the data extraction, mining and warehousing techniques that are used to get, clean and store data on Indian politics in a scientific form. Using two web applications developed at the Centre - SURF, an entity-resolution tool for Indian names that resolves the names of candidates in order to create a unique identifier for each candidate, and LokDhaba, a portal for both Vidhan Sabha and Lok Sabha election results post-1962 - we will show how our data is disseminated in the public domain. Lastly, we will show how this data can be used to analyze and understand spatial and temporal patterns of change in Indian politics. We will end the session with a short discussion on some new projects and their associated challenges.


    Bio:

    Mohit Kumar is a data scientist and GIS engineer. He completed his undergraduate and MS by Research from the International Institute of Information Technology (IIIT) Hyderabad in computer science engineering with specialization in spatial informatics. He has been working with TCPD since July 2017 and loves trekking. He can be contacted at mohit.kumar@ashoka.edu.in.

    Priyamvada Trivedi is the Associate Director of TCPD and a Visiting Faculty in the Department of Political Science at Ashoka. She completed her undergraduate in electrical engineering from Purdue University and her PhD in political science from the University of Michigan. She has been working with TCPD since June 2018 and loves trees. She can be contacted at priyamvada.trivedi@ashoka.edu.in.



  • Fault-Tolerant Reachability and Strong-connectivity Structures


    Speaker:

    Keerti Choudhary,  Weizmann Institute of Science

    Date:2019-10-09
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Reachability and strong-connectivity are fundamental graph problems. In the static setting, both problems can be solved in O(m+n) time for any directed graph G with n vertices and m edges. However, most real-world networks are prone to failures. The failures are transient and occur for a small duration of time. In such a setting, we would like to preprocess the graph to compute data structures that achieve robustness by remaining functional even after the occurrence of failures. In recent years, researchers have studied various questions pertaining to connectivity, distances, routing, min-cuts, etc. in the domain of fault-tolerant networks.

    In this talk, I will discuss the following questions:

    1. Can we compute a reachability tree in just linear time, after the occurrence of a small number of failures?
    2. How fast can we compute the strongly connected components, again on the occurrence of failures?
    3. Can we find a certificate for these structures that remains valid even after a bounded number of failures?

    Our solutions will employ some extremely basic tools like max-flows, min-cuts, and heavy-path-decomposition.


    Bio:

    Weizmann Institute of Science



  • Streaming Algorithms for Matching Problems


    Speaker:

    Dr. Sagar Kale

    Date:2019-09-30
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    The advent of big data necessitated the design of algorithms that could cope with it. Streaming algorithms are viable for big data because they process their input sequentially and use a small amount of working memory. In this talk, I will present my research on streaming algorithms for matching problems. Matching problems have played a significant historical role not just in combinatorial optimization but in computer science at large, and are our benchmark problems for the development and understanding of a computational model. Indeed, studying them in the streaming model has led me to the fastest algorithms for some combinatorial optimization problems---streaming or otherwise.

    In the maximum matching problem, edges of the input graph arrive one-by-one in the stream and the goal is to compute a matching of large size.  Weighted matching is a well-studied generalization of maximum matching where the graph is edge-weighted, and, in a further generalization, a submodular function is defined on the edges.  Submodular functions have received a lot of attention in theoretical computer science as well as, more recently, in machine learning.

    I will discuss a reduction from submodular matching to weighted matching that also extends to streaming algorithms for very general constraints such as matroids.  Then I will give an overview of how to obtain almost optimal weighted matchings in constant number of passes over the stream and also in the MapReduce framework.  I will conclude with future research directions.
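The entry point to this line of work is the classic one-pass greedy algorithm (background, not one of the talk's new results): keep each streamed edge iff both endpoints are still free, which yields a maximal, and hence 1/2-approximate, matching in O(n) space:

```python
def greedy_streaming_matching(edge_stream):
    """One-pass streaming greedy matching.

    Keeps an edge iff both endpoints are currently unmatched.  The result
    is a maximal matching, so its size is at least half the maximum
    matching; space usage is O(n), independent of the stream length.
    """
    matched, M = set(), []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matched.update((u, v))
            M.append((u, v))
    return M
```

The 1/2 factor is tight for greedy, which is why improving the approximation (and handling weights or submodular objectives) requires the more careful multi-pass algorithms discussed above.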


    Bio:

    Sagar Kale did his Ph.D. at Dartmouth College where his advisor was Prof. Amit Chakrabarti.  He is currently a postdoc at EPFL where his host is Prof. Ola Svensson.  His research is mainly on streaming algorithms for combinatorial optimization problems related to matchings and matroids---which are fundamental structures in computer science.



  • A hacker's journey of exploiting your IoT Cloud Applications


    Speaker:

    Mr Ananda Krishna (Co-founder & CTO of Astra Security)

    Date:2019-09-24
    Time:17:15:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    IoT devices are everywhere. From cars and ACs to monitoring devices in factories, objects around us are increasingly being Internet-enabled. Similar to the early days of the Internet, everyone has rushed into building IoT solutions but often neglected security. Since IoT devices are connected to the Internet, they are vulnerable to hacking just like any other device or server on the Internet. In this session, we will deconstruct how hackers hack IoT solutions, common security vulnerabilities (OWASP Top 10), testing methodology, attack surface area, and what can be done to prevent attacks.


    Bio:

    Co-founder & CTO of Astra Security



  • Speed Scaling for Parallel and Tandem Servers


    Speaker:

    Rahul Vaze, TIFR

    Date:2019-09-23
    Time:16:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Scheduling jobs on servers whose speeds can be adjusted, at a corresponding power cost, is referred to as the speed scaling problem. Speed scaling with multiple parallel servers has attracted considerable interest; the tandem setting, however, has been relatively less explored. In this talk, we will discuss both the parallel and the tandem speed scaling problems, in the context of the competitive ratio of online algorithms with worst-case guarantees.

    In the parallel setting, we consider the popular shortest remaining processing time (SRPT) algorithm and show that it can achieve a constant competitive ratio. In prior work, more complicated algorithms were shown to be constant competitive.

    In the tandem setting, there is a series of servers and each job has to be processed by each of the servers in sequence; as far as we know, no work has been reported on bounding the competitive ratio in the worst-case setting. When all job sizes are equal, we propose an algorithm that is shown to have a constant competitive ratio. The proposed algorithm, at all times, uses the same speed on all active servers, chosen such that the total power consumption equals the total number of outstanding jobs. The general problem remains open.
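    The speed rule of the proposed equal-size algorithm is simple to state in code. A minimal sketch, assuming the standard polynomial power function speed**alpha per server (the exact power function is not specified in the abstract):

```python
def tandem_speed(outstanding_jobs, active_servers, alpha=3.0):
    """Common speed for every active server, chosen so that total power,
    active_servers * speed**alpha, equals the number of outstanding jobs.

    The power model speed**alpha (alpha > 1) is an assumption of this
    sketch; it is the standard choice in the speed-scaling literature."""
    if active_servers == 0:
        return 0.0
    return (outstanding_jobs / active_servers) ** (1.0 / alpha)

# e.g. 8 outstanding jobs on a single active server with alpha = 3
# gives speed 2, since 1 * 2**3 = 8
```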


    Bio:

    Rahul Vaze is a Reader at the School of Technology and Computer Science at the Tata Institute of Fundamental Research, Mumbai. His research interests are in stochastic geometry, wireless networks, and combinatorial optimization, and he is the author of the book Random Wireless Networks (Cambridge University Press, 2015).



  • Faster algorithms for p-norm regression


    Speaker:

    Sushant Sachdeva, U. Toronto

    Date:2019-09-03
    Time:16:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Linear regression in L_p-norm is a canonical optimization problem that arises in several applications, including sparse recovery, semi-supervised learning, and signal processing. Standard linear regression corresponds to p=2, and p=1 or infinity is equivalent to linear programming.

    In this talk, we will discuss new algorithms for obtaining high-accuracy (1+1/poly(n))-approximate solutions to such problems. Our algorithms are based on an ‘iterative refinement’ approach for p-norms, where an algorithm for computing an approximate solution can be converted into a high-accuracy algorithm. Previously, iterative refinement was known only for L_2-regression (or equivalently, solving linear systems).

    Based on this approach, we give several new high-accuracy algorithms, including 1) an algorithm for p-norm regression by solving only m^{1/3} linear systems, 2) a provably convergent and fast iterative reweighted least squares (IRLS) algorithm for p-norm regression, and 3) an algorithm for p-norm flows in undirected unweighted graphs in almost-linear time.
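    For context, the plain IRLS scheme that the provably convergent variant improves upon can be sketched in a few lines of NumPy (an illustrative textbook sketch for p >= 2, not the algorithm from the talk; naive IRLS can stagnate or oscillate, which is exactly what the new analysis repairs):

```python
import numpy as np

def irls_pnorm(A, b, p=4, iters=50, eps=1e-8):
    """Naive IRLS for min_x ||Ax - b||_p with p >= 2: each step solves a
    least-squares problem reweighted by |residual|**(p-2)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # start from the L2 solution
    for _ in range(iters):
        r = A @ x - b
        w = np.maximum(np.abs(r), eps) ** (p - 2)   # per-row weights
        W = A.T * w                                 # A^T diag(w)
        x = np.linalg.solve(W @ A, W @ b)           # weighted normal equations
    return x

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])
x = irls_pnorm(A, b, p=4)
```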

    This talk will be based on joint works with Deeksha Adil, Richard Peng, Rasmus Kyng, and Di Wang.


    Bio:

    Sushant Sachdeva is a faculty member at the CS department of the University of Toronto. He is interested in algorithms and their connections to optimization, machine learning, and statistics. His recent research focus has been the design of fast algorithms for graph problems.

    Before joining UofT, he was a research scientist at Google. He completed his postdoc at Yale with Dan Spielman (2016), his PhD from Princeton (2013) under Sanjeev Arora, and his BTech from IIT Bombay (2008). He is the recipient of a Google Faculty Research Award (2017), a Simons-Berkeley Research Fellowship (2013), and the IITB President of India Gold Medal (2008).



  • An introduction to Text Simplification


    Speaker:

    Horacio Saggion

    Date:2019-08-30
    Time:14:15:00 (IST)
    Venue:SIT Building #001
    Abstract:

    In this presentation, we aim to provide an extensive overview of the automatic text simplification systems proposed so far and the methods they use, and to discuss the strengths and shortcomings of each of them, providing direct comparisons of their outputs. We aim to break some common misconceptions about what text simplification is and what it is not. We believe that an understanding of the initial motivations and an in-depth analysis of existing TS methods will help researchers new to ATS propose novel systems, bringing fresh ideas from other related NLP areas. We will describe and explain some of the most influential methods used for automatic simplification of texts so far, with emphasis on their strengths and weaknesses, sometimes showing system outputs. We will present some of the existing resources for TS in various languages, including parallel manually produced TS corpora, comparable automatically aligned TS corpora, paraphrase and synonym resources, TS-specific sentence-alignment tools, and several TS evaluation resources. Finally, we will discuss the existing evaluation methodologies for TS and the necessary conditions for using each of them.


    Bio:

    Horacio Saggion is an Associate Professor at the Department of Information and Communication Technologies, Universitat Pompeu Fabra (UPF), Barcelona. He is the head of the Large Scale Text Understanding Systems Lab, associated with the Natural Language Processing group (TALN), where he works on automatic text summarization, text simplification, information extraction, sentiment analysis and related topics. Horacio obtained his PhD in Computer Science from Universite de Montreal, Canada, in 2000. He obtained his BSc in Computer Science from Universidad de Buenos Aires in Argentina, and his MSc in Computer Science from UNICAMP in Brazil. He was the Principal Investigator for UPF in the EU projects Dr Inventor and Able-to-Include and is currently Principal Investigator of the national project TUNER and the Maria de Maeztu project Mining the Knowledge of Scientific Publications. Horacio has published over 150 works in leading scientific journals, conferences, and books in the field of human language technology. He organized four international workshops in the areas of text summarization and information extraction and was Scientific Co-chair of STIL 2009 and Scientific Chair of SEPLN 2014. He is a regular programme committee member for international conferences such as ACL, EACL, COLING, EMNLP, IJCNLP and IJCAI, and is an active reviewer for international journals in computer science, information processing, and human language technology. Horacio has given courses, tutorials, and invited talks at a number of international events including LREC, ESSLLI, IJCNLP, NLDB, and RuSSIR.



  • Mining and Enriching Multilingual Scientific Text Collections


    Speaker:

    Prof. Horacio Saggion from Spain

    Date:2019-08-29
    Time:12:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    Scientists worldwide are confronted with an exponential growth in the number of scientific documents being made available. For example, Elsevier publishes over 250K scientific articles per year (one every two minutes) and has over 7 million publications; MedLine, the most important source in biomedical research, contains 21 million scientific references; and the World Intellectual Property Organization (WIPO) contains some 70 million records. This unprecedented volume of information complicates the task of researchers, who face the pressure of keeping up-to-date with discoveries in their own disciplines and the challenge of searching for innovation: new interesting problems to solve, checks on already solved problems or hypotheses, and information on past and currently available methods, solutions or techniques. At the same time, with the rise of open science initiatives and social media, research is more connected and open, creating new opportunities but also challenges for the scientific community.

    In this scenario of scientific information overload, natural language processing has a key role to play. Over the past few years we have seen the development of a number of tools for analyzing the structure of scientific documents (e.g., transforming PDF to XML), methods for extracting keywords, and classifiers that sort sentences into argumentative categories. However, deep analyses of scientific documents, such as finding key claims, assessing the argumentative quality and strength of the research, or summarizing the key contributions of a piece of work, are less common. Moreover, most research in scientific text processing is carried out for the English language, neglecting both the share of scientific information available in other languages and the fact that scientific publications are often bilingual.

    In this talk, I will present work carried out in our laboratory towards the development of a system for "deep" analysis and annotation of scientific text collections. Originally developed for English, it has now been adapted to Spanish. After a brief overview of the system and its main components, I will present a bilingual (Spanish and English) fully annotated text resource in the field of natural language processing that we have created with our system, together with a faceted-search and visualization tool for exploring it.

    With this scenario in mind, I will speculate on the challenges and opportunities that the scientific field brings to our community, not only in terms of language but also from the point of view of social media and science education.


    Bio:

    Horacio Saggion is an Associate Professor at the Department of Information and Communication Technologies, Universitat Pompeu Fabra (UPF), Barcelona. He is the head of the Large Scale Text Understanding Systems Lab, associated to the Natural Language Processing group (TALN) where he works on automatic text summarization, text simplification,  information extraction,  sentiment analysis and related topics.  Horacio obtained his PhD in Computer Science from Universite de Montreal, Canada in 2000. He obtained his BSc  in Computer Science from Universidad de Buenos Aires in Argentina, and his MSc in Computer Science  from UNICAMP in Brazil.  He was the Principal Investigator for UPF  in the EU projects Dr Inventor and Able-to-Include and is currently principal investigator of  the national project TUNER and the Maria de Maeztu project Mining the Knowledge of Scientific Publications.  Horacio has published over 150 works in leading scientific journals, conferences, and books in the field of human language technology.  He organized four international workshops in the areas of text summarization and information extraction and was scientific Co-chair of STIL 2009 and scientific Chair of SEPLN 2014. He is a regular programme committee member for international conferences such as ACL, EACL, COLING, EMNLP, IJCNLP,  IJCAI and is an active reviewer for international journals in computer science, information processing,  and human language technology.  Horacio has given courses, tutorials, and invited talks at a number of international events including LREC, ESSLLI, IJCNLP, NLDB, and RuSSIR.



  • Algorithms for Robust Clustering


    Speaker:

    Ravishankar Krishnaswamy, Microsoft Research India, Bangalore

    Date:2019-08-28
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Clustering is one of the fundamental problems in optimization: given a set of points P in a metric space (i.e., with any distance function which satisfies the triangle inequality d(u,v) + d(v,w) >= d(u,w)), partition them into k clusters to minimize some objective function (either the sum of distances of each point to its cluster center, which gives the k-median problem, or the sum of squared distances, which gives the k-means problem, etc.). While these problems are classical and widely studied, they all suffer from the drawback of not being "robust" to outliers in the data: one can add a few outlier points to the data set and completely change the structure of the optimal clustering.

    In this talk, we will describe some of our recent works on designing algorithms for "robust clustering", i.e., given a set of points P, and two targets k and m, identify m points to throw out as "outliers", and cluster the remaining points to minimize the clustering objective. We present the first non-trivial approximation algorithms for these problems.  We also consider robust versions of another classical clustering problem called correlation clustering.  Based on joint works with Deeparnab Chakrabarty, Devvrit, Shi Li, Nived Rajaraman, and Sai Sandeep.
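    To make the problem statement concrete, a simple Lloyd-style heuristic for k-means with m outliers alternates assignment, outlier removal, and center updates (an illustrative heuristic sketch only; the talk presents approximation algorithms with provable guarantees):

```python
import numpy as np

def kmeans_with_outliers(P, k, m, iters=20):
    """Heuristic k-means with m discarded outliers.

    Each round: assign points to the nearest center, mark the m points
    farthest from their centers as outliers, and recompute centers from
    the inliers only.  Centers are seeded with the first k points for
    determinism (k-means++ seeding would be better in practice)."""
    centers = P[:k].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(P[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)                 # nearest center per point
        dist = d[np.arange(len(P)), assign]
        outliers = np.argsort(dist)[-m:]          # m farthest points
        inlier = np.ones(len(P), dtype=bool)
        inlier[outliers] = False
        for c in range(k):
            pts = P[inlier & (assign == c)]
            if len(pts):
                centers[c] = pts.mean(axis=0)
    return centers, outliers

# two tight clusters plus one far-away point: the far point is discarded
P = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
              [10.0, 10.0], [10.0, 11.0], [11.0, 10.0],
              [100.0, 100.0]])
centers, outliers = kmeans_with_outliers(P, k=2, m=1)
# outliers contains index 6, the point (100, 100)
```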


    Bio:

    Ravishankar Krishnaswamy, Microsoft Research India, Bangalore



  • Can machine learning be leveraged to develop combinatorial optimization algorithms?


    Speaker:

    Dr. Deepak Ajwani

    Date:2019-08-23
    Time:16:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    Combinatorial optimization problems are ubiquitous across a range of industries. Many of these problems are NP-hard, and it is difficult to create practical solutions that can exactly solve even input instances with a few thousand elements. Considerable time and effort is usually required to develop efficient algorithms for these problems, so a natural question to ask is: can we automate the process of developing efficient optimization solutions? In the past, this question has led to the development of nature-inspired meta-heuristics such as genetic algorithms, tabu search, and ant colony optimization. Recently, machine learning techniques such as reinforcement learning and deep learning have been explored to learn the output directly from the features of the problem instances. In this talk, I will discuss some of the limitations of these approaches and propose a new framework for leveraging machine learning to scale up optimization algorithms. In contrast to the existing deep-learning based approaches, I will argue that the proposed framework has the potential to automatically develop simple, intuitive optimization algorithms with local features and interpretable models, which can potentially be analyzed for worst-case guarantees on specific instance classes.


    Bio:

    Dr. Deepak Ajwani is an Assistant Professor at the School of Computer Science, University College Dublin. His research is at the confluence of algorithms and data structures (with a focus on scalable graph algorithms), algorithm engineering, combinatorial optimization, semantic analysis and machine learning.

    Prior to this appointment, he worked as a research scientist at Nokia Bell Labs, where he co-led the design and development of a cognitive computing tool for interpreting, organizing and navigating unstructured content. He received his Ph.D. from the Max Planck Institute for Informatics in 2008. He is an alumnus of IIT Delhi, having obtained his B.Tech + M.Tech degrees in Computer Science and Engineering in 2003.



  • Optimization on manifolds and its application to learning multilingual word embeddings


    Speaker:

    Pratik Jawanpuria

    Date:2019-08-19
    Time:11:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    Riemannian geometry is a generalization of Euclidean geometry. It includes several non-Euclidean spaces such as the set of symmetric positive-definite matrices and the set of orthogonal matrices, to name a few. Numerous machine learning problems can be cast as optimization problems on Riemannian manifolds. Examples include principal component analysis, matrix/tensor completion, and taxonomy learning, among others.

    In this talk, I will discuss our recent work on cross-lingual mapping of word embeddings in the supervised setting. Our approach decouples the source-to-target language transformation into various geometric components, all residing on Riemannian manifolds. Our framework also allows seamless generalization from the bilingual to the multilingual setting. Empirical results illustrate the efficacy of the proposed approach on bilingual lexicon induction tasks. The talk includes a brief introduction to optimization on manifolds.

    Link to the work: https://www.mitpressjournals.org/doi/full/10.1162/tacl_a_00257
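    As a minimal illustration of the geometric viewpoint, the simplest orthogonality-constrained version of cross-lingual mapping is the orthogonal Procrustes problem min_W ||XW - Y||_F over orthogonal W, which has a closed-form solution on the manifold of orthogonal matrices via an SVD. This is only the textbook special case; the paper's framework decouples the transformation into several such geometric components:

```python
import numpy as np

def procrustes_orthogonal(X, Y):
    """Closed-form solution of min_W ||X W - Y||_F over orthogonal W:
    if X^T Y = U S V^T, then W = U V^T."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# recover a known rotation from noiseless "embeddings"
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))   # a random orthogonal map
X = rng.standard_normal((100, 5))                  # source-language vectors
Y = X @ Q                                          # target-language vectors
W = procrustes_orthogonal(X, Y)                    # W recovers Q
```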


    Bio:

    Pratik Jawanpuria received the B.Tech. and Ph.D. degrees in Computer Science from the Indian Institute of Technology Bombay, in 2009 and 2014, respectively. He was a postdoctoral researcher at the Saarland University, Germany, in 2015. He is currently a senior applied scientist at Microsoft, India. Prior to this, he worked as an applied scientist in the India Machine Learning Team at Amazon. His primary research interests span optimization for machine learning applications.  

    Homepage: https://pratikjawanpuria.com/



  • A quantum wave in computing


    Speaker:

    Prof. Umesh Vazirani

    Date:2019-08-19
    Time:15:30:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    The field of quantum computation is entering an exciting new period, with small to medium scale machines around the corner. The potential impact of fully scaled quantum computers includes revolutionary new ways of designing new materials, quantum chemistry, optimization and cryptography. Before this can happen, great challenges in quantum algorithms, error correction and quantum science have to be overcome. In this talk I will give a survey of both the potential impact and the challenges.


    Bio:

    Umesh Vazirani is the Strauch Distinguished Professor of Computer Science at the University of California, Berkeley, and director of the Berkeley Quantum Information and Computation Center. He has wide interests in the algorithmic foundations of quantum computing, and his 1993 paper with Ethan Bernstein laid the foundations of quantum complexity theory. Besides quantum computation, he has worked on the computational foundations of randomness and on the theory of classical algorithms: he co-invented, with Mehta, Saberi and his brother Vijay Vazirani, the bid scaling algorithm for the AdWords auction, which is widely used by Internet search companies, and in 2012 he won the Fulkerson Prize with Arora and Rao for an algorithm for graph partitioning. Vazirani is a member of the National Academy of Sciences and the author of two books: An Introduction to Computational Learning Theory with Michael Kearns (MIT Press) and Algorithms with Sanjoy Dasgupta and Christos Papadimitriou (McGraw Hill).



  • Understanding Visual Appearance from Micro Scale to Global Scale


    Speaker:

    Kavita Bala, Chair of the Computer Science Department at Cornell University

    Date:2019-08-13
    Time:16:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Mixed reality environments require understanding scenes, and then seamlessly blending the rich visual appearance of real and virtual materials to create a compelling user experience. In this talk I will describe our work on modeling and recognizing complex materials, and fine-grained recognition. Using these algorithms as core building blocks we can further understand appearance at a global scale by mining social media to discover patterns across geography and time.


    Bio:

    Kavita Bala is the Chair of the Computer Science Department at Cornell University. She received her S.M. and Ph.D. from the Massachusetts Institute of Technology (MIT), and her B.Tech. from the Indian Institute of Technology (IIT, Bombay). She co-founded GrokStyle (acquired by Facebook), and is a faculty Fellow with the Atkinson Center for a Sustainable Future.

    Bala specializes in computer vision and computer graphics, leading research in recognition and visual search; material modeling and acquisition using physics and learning; and material and lighting perception. Bala's work on scalable rendering, Lightcuts, is the core production rendering engine in Autodesk's cloud renderer; and her instance recognition research was the core technology of GrokStyle's visual search engine. Bala has co-authored the graduate-level textbook "Advanced Global Illumination".

    Bala has served as the Editor-in-Chief of Transactions on Graphics (TOG), Papers Chair of SIGGRAPH Asia 2011, and is on the Papers Advisory Group for SIGGRAPH.



  • Deceptive Media Manipulation, an arms race: Learning to defend & deceive


    Speaker:

    Dr. Maneesh Kumar Singh, Head of R&D, Verisk AI, and Director of the Human and Computation Intelligence Lab

    Date:2019-08-13
    Time:14:15:00 (IST)
    Venue:CSE Department, Bharti Building - 501
    Abstract:

     As digital manipulation techniques get better, it is becoming increasingly difficult to distinguish between real and fake data. Being able to model the real world and generate realistic data has many exciting applications. However, it poses a serious challenge when such data is generated by malicious actors with an intent to deceive the user (or the computer system) into believing that the fake data being generated is real. While such attacks on images have garnered much attention, they span across media types including speech & audio, text and, finally, fake news. To counter these attacks, intensive efforts have been undertaken by the ML community to generate effective defense strategies which malicious adversaries, in turn, seek to circumvent. This has led to a virtual arms race.

    In this talk, I will provide an overview of the recent efforts in my group at Verisk. We have focused on developing algorithms to defend against expert manipulation of images as well as adversarial attacks that can bypass such defenses. Most of this work is hot off the press and under review at various conferences.
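    As a concrete example of the attack side of this arms race, the classical fast gradient sign method (FGSM) perturbs an input by a small step in the sign of the loss gradient. The sketch below applies it to a toy linear score; it illustrates the generic attack family, not a specific method from the talk:

```python
import numpy as np

def fgsm_perturb(x, grad_loss_x, eps=0.1):
    """FGSM step: move every coordinate of x by eps in the direction
    that increases the loss (the sign of the loss gradient)."""
    return x + eps * np.sign(grad_loss_x)

# toy linear classifier: score = w @ x, true label +1
w = np.array([1.0, -2.0])
x = np.array([0.5, -0.5])        # w @ x = 1.5, confidently positive
grad = -w                        # loss gradient w.r.t. x for label +1 (up to a positive factor)
x_adv = fgsm_perturb(x, grad, eps=0.3)
# the margin shrinks: w @ x_adv = 0.6 < 1.5
```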


    Bio:

    Dr. Maneesh Kumar Singh is Head R&D, Verisk AI and the Director of the Human and Computation Intelligence Lab. His lab is working on creating collaborative (HMC, MMC) systems for efficient extraction of information & knowledge from unstructured data, capable of closed-loop & explainable reasoning and continuous learning, with applications in vision, NLP, speech and fintech. Verisk Analytics delivers data analytic tools and services for risk assessment, risk forecasting and decision analytics in a variety of sectors including insurance, financial services, energy, government and human resources.

    Dr. Singh has over 15 years of customer-focused industrial R&D experience, with stints at Siemens CT and SRI International building and delivering image and video analytics technologies in the areas of multi-camera security and surveillance, aerial surveillance, advanced driver assistance and intelligent traffic control; industrial inspection; and medical image processing and patient diagnostics systems. Dr. Singh received his Ph.D. in Electrical and Computer Engineering from the University of Illinois in 2003. He has authored over 35 publications (one best paper award) and holds 15+ U.S. and international patents.



  • AI For Societally Relevant Problems: Influence Maximization in an Uncertain World


    Speaker:

    Amulya Yadav, Assistant Professor, Penn State University

    Date:2019-06-03
    Time:10:30:00 (IST)
    Venue:SIT Building #001
    Abstract:

    The potential of Artificial Intelligence to tackle challenging problems that afflict society is enormous, particularly in the areas of healthcare, conservation and public safety and security. Many problems in these domains involve harnessing social networks of under-served communities to enable positive change, e.g., using social networks of homeless youth to raise awareness about HIV (and other STDs). Unfortunately, most of these real-world problems are characterized by uncertainties about social network structure and influence models, and previous research in AI fails to sufficiently address these uncertainties, as they make several unrealistic simplifying assumptions for these domains. In this talk, I will describe my research on algorithmic interventions in social networks.

    In the first part of my talk, I will describe my work on developing new influence maximization algorithms which can handle various uncertainties in social network structure, influence models, etc., that commonly exist in real-world social networks. I will discuss how my algorithms utilize techniques from sequential planning problems and computational game theory to develop new kinds of algorithms in the sub-fields of multi-agent systems and reasoning under uncertainty.

    In the second part of my talk, I will discuss the real-world deployment of my algorithms to spread awareness about HIV among homeless youth in Los Angeles. This represents one of the first-ever deployments of computer science based influence maximization algorithms in this domain. I will discuss the challenges that I faced, and the lessons that can be gleaned for future deployment of AI systems.

    Finally, I will also talk about other kinds of societally relevant problems that I have worked on, e.g., raising grievances of low-literate farmers to government agencies in emerging market countries, fake news detection, etc. All these problems illustrate the enormous potential of AI in addressing societally relevant problems.
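    For readers unfamiliar with the baseline, the classical greedy algorithm of Kempe, Kleinberg and Tardos repeatedly adds the node with the largest Monte-Carlo-estimated marginal spread under the independent cascade model; the work described in the talk extends this setting to uncertain networks and influence models. A minimal sketch:

```python
import random

def simulate_ic(graph, seeds, p, rng):
    """One independent-cascade run: each newly active node activates each
    inactive neighbour independently with probability p.  Returns the
    number of nodes activated."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_im(graph, k, trials=300, p=0.5, seed=0):
    """Greedy influence maximization: k rounds, each adding the node with
    the largest estimated spread when joined to the current seed set."""
    rng = random.Random(seed)
    chosen = []
    for _ in range(k):
        best, best_spread = None, -1.0
        for v in graph:
            if v in chosen:
                continue
            spread = sum(simulate_ic(graph, chosen + [v], p, rng)
                         for _ in range(trials)) / trials
            if spread > best_spread:
                best, best_spread = v, spread
        chosen.append(best)
    return chosen

# on a star graph the hub is the natural single seed
star = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]}
seeds = greedy_im(star, k=1)
```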


    Bio:

    Amulya Yadav is an Assistant Professor in the College of Information Sciences and Technology at Penn State University. He also has an affiliate faculty appointment with the USC Center for Artificial Intelligence in Society. His research interests include Artificial Intelligence, Multi-Agent Systems, Computational Game-Theory and Applied Machine Learning. His work in the field of Artificial Intelligence for Social Good focuses on developing theoretically grounded approaches to real-world problems that can have an impact in the field. His algorithms have been deployed in the real-world, particularly in the field of public health and wildlife protection. Amulya is a recipient of the AAMAS 2016 Best Student Paper Award, the AAAI 2017 Best Video and Best Student Video Award, the IDEAS 2016 Most Visionary Paper Award, and the AAMAS 2017 Best Paper Award nomination. His work has also been highlighted by Mashable.com as one of 26 incredible innovations that improved the world in 2015.

    Amulya holds a Ph.D. in Computer Science from the University of Southern California, and a B. Tech. in Computer Science & Engineering from Indian Institute of Technology (IIT), Patna.



  • Spatial and spatio-temporal data analytics and their applications in urban cities


    Speaker:

    Prof. Cheng Long from NTU

    Date:2019-05-24
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    It is expected that by 2050, more than 2.5 billion people will reside in cities. While urbanization has modernized many people's lives, it causes big challenges such as traffic congestion, air pollution, energy consumption, etc. As part of the effort to solve these problems, people have been collecting and analyzing the data generated in the urban space, e.g., traffic data, mobility data, POI data, etc., to find insights into the problems and/or to support better decision making for citizens. Since the majority of this data involves spatial and/or temporal dimensions, techniques for spatial and spatio-temporal data analytics play a critical role. In this talk, we will overview this data-driven process, introduce some of its interesting applications, and also present some of our recent work on spatial and spatio-temporal data analytics, including online bipartite matching, co-location pattern mining, and spatio-temporal representation learning.
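    As a small, self-contained example from this toolbox, online bipartite matching (think of arriving ride requests being assigned to free drivers) has a one-pass greedy baseline that already guarantees at least half the optimal matching size; this is an illustrative sketch, not the algorithm from the talk:

```python
def online_greedy_matching(arrivals):
    """Greedy online bipartite matching: each arriving request is matched
    to the first of its still-free offline resources (drivers), if any.

    'arrivals' is a list of (request, [candidate resources]) pairs in
    arrival order; the returned dict maps matched requests to resources."""
    taken = set()
    matching = {}
    for request, candidates in arrivals:
        for resource in candidates:
            if resource not in taken:
                matching[request] = resource
                taken.add(resource)
                break
    return matching

# request 'b' stays unmatched because driver 1 was taken by 'a'
arrivals = [('a', [1, 2]), ('b', [1]), ('c', [2, 3])]
print(online_greedy_matching(arrivals))  # -> {'a': 1, 'c': 2}
```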


    Bio:

    LONG Cheng is currently an Assistant Professor at the School of Computer Science and Engineering (SCSE), Nanyang Technological University (NTU). From 2016 to 2018, he worked as a Lecturer (Assistant Professor) at Queen's University Belfast, UK. He received his PhD from the Department of Computer Science and Engineering, The Hong Kong University of Science and Technology (HKUST) in 2015. His research interests are broadly in data management and data mining, with the vision of achieving scalable spatial computing, making sense of urban-related data for smarter cities, and managing and analyzing emerging big data such as IoT data for richer knowledge. His research has been recognized with a "Best Research Award" from ACM-Hong Kong, a "Fulbright-RGC Research Award" from the Research Grants Council (Hong Kong), two "PG Paper Contest Awards" from IEEE-HK, and an "Overseas Research Award" from HKUST. He has served as a Program Committee member/referee for several top data management and data mining conferences/journals (TODS, VLDBJ, TKDE, ICDM, CIKM, etc.). He is a member of ACM and IEEE.



  • Resource-Aware Session Types for Digital Contracts


    Speaker:

    Dr. Ankush Das, CMU


    Date:2019-05-21
    Time:00:00:00 (IST)
    Venue:CSE Seminar Room, Bharti 501
    Abstract:

    While there exist several successful techniques for supporting programmers in deriving static resource bounds for sequential code, analyzing the resource usage of message-passing concurrent processes poses additional challenges. To meet these challenges, this talk presents two analyses for statically deriving worst-case bounds on the sequential (work) and parallel (span) complexity of message-passing processes, respectively. The work analysis is based on novel resource-aware session types that describe resource contracts by allowing both messages and processes to carry potential and amortize cost. The span analysis enriches session types with temporal modalities to additionally prescribe the timing of the exchanged messages in a way that is precise yet flexible.

    The talk finally applies session types to implement digital contracts. Digital contracts are protocols that describe and enforce execution of a contract, often among mutually distrusting parties. Programming digital contracts comes with its unique challenges, which include describing and enforcing protocols of interaction, analyzing resource usage and tracking linear assets. The talk presents the type-theoretic foundations of Nomos, a programming language for digital contracts whose type system addresses the aforementioned domain-specific requirements. To express and enforce protocols, Nomos is based on shared binary session types rooted in linear logic. To control resource usage, Nomos uses resource-aware session types and automatic amortized resource analysis, a type-based technique for inferring resource bounds. To track assets, Nomos employs a linear type system that prevents assets from being duplicated or discarded.

    The talk concludes with future work directions on designing an efficient implementation for Nomos and making it robust to attacks from malicious entities.
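    The amortization idea behind resource-aware session types (potential stored now pays for cost later, here attached to a data structure rather than to messages) can be seen in the textbook binary-counter example. A toy Python illustration of the potential method, unrelated to Nomos syntax:

```python
def increment_with_potential(bits):
    """Increment a binary counter stored least-significant-bit first.

    Returns (actual_cost, new_potential), where actual cost counts bit
    flips and the potential is the number of 1-bits.  Amortized cost,
    actual + (new potential - old potential), is always at most 2:
    every flipped 1 releases one unit of stored potential."""
    cost = 0
    i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0                # each trailing 1 flips to 0
        cost += 1
        i += 1
    if i < len(bits):
        bits[i] = 1                # one 0 flips to 1 (stores 1 unit of potential)
        cost += 1
    return cost, sum(bits)

bits = [1, 1, 0, 1]                # value 11, potential 3
cost, potential = increment_with_potential(bits)
# bits == [0, 0, 1, 1]: actual cost 3, new potential 2,
# amortized cost 3 + (2 - 3) = 2
```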


    Bio:

    Ankush Das is a fourth-year PhD student at Carnegie Mellon University, advised by Prof. Jan Hoffmann. He is broadly interested in programming languages, with a specific focus on analysis of the resource consumption of programs. In the past, he has worked jointly with Prof. Frank Pfenning and his advisor on designing resource-aware session types for parallel complexity analysis of concurrent programs. Recently, he has been working with Stephanie Balzer, along with Prof. Frank Pfenning and Prof. Jan Hoffmann, on designing Nomos, a type-safe domain-specific programming language based on resource-aware session types for implementing smart contracts.

    Before joining CMU, he worked as a Research Fellow at Microsoft Research, India with Akash Lal where he developed an efficient method to perform precise alias analysis for C and C++ programs for Windows driver modules to automatically infer safe null pointer dereferences.

    He completed his undergraduate studies at IIT Bombay, India in 2015, where he worked with Prof. Supratik Chakraborty and Prof. Akshay S on deciding termination of linear loop programs.



  • Debugging and Optimizing Code in a world of Software Frameworks


    Speaker:

    Dr. Abhilash Jindal

    Date:2019-05-20
    Time:00:00:00 (IST)
    Venue:IIA-501 Bharti
    Abstract:

    Modern programmers stand on the "shoulders of giant" software frameworks: they can build complicated applications fairly quickly by stringing together many intricate framework APIs. But, with only a limited understanding of the underlying frameworks, programmers often make mistakes in using these APIs. Moreover, this limited understanding has made debugging software systems and improving their performance significantly harder.

    In this talk, we first take a close look at wakelock APIs, a set of APIs introduced by the Android framework for explicitly managing the power states of hardware components. I will present the complexity of correctly using wakelock APIs and give a taxonomy of the most prevalent energy bugs in the Android ecosystem: sleep disorder bugs, which arise from incorrect usage of wakelock APIs. I will then present KLOCK, which uses a set of static analyses to systematically identify sleep-induced time bugs, a subclass of sleep disorder bugs.

    In the second part, we study performance optimization when programming against high-level framework APIs. The state of the art for finding general optimizations is profiling, which helps developers identify hotspots, i.e., method calls that contribute a large portion of the resource consumption of their source code. In Android apps, the hotspots identified by the profiler are almost always framework API calls. Since developers did not author the framework APIs, even after being presented with the API hotspots they typically have no guidance on how to proceed with the remaining optimization process: (1) Is there a more efficient way of using framework APIs to implement the same app task? (2) How to come up with a more efficient implementation? To help developers tackle these challenges, we developed a new optimization methodology called differential profiling that automatically uncovers more efficient API usage by leveraging existing implementations of similar apps, which are bountiful in the app marketplace.
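A minimal sketch of the differential-profiling idea (not DIFFPROF's actual tree-matching algorithm; the profile data and the 2x threshold are invented for illustration): align the energy-profile call trees of two similar apps by API name and flag matched calls whose costs differ sharply.

```python
# Hedged sketch of differential profiling: given energy-profile call trees
# of two similar apps, align nodes by API name and flag calls where one
# app's implementation is markedly cheaper, suggesting a more efficient
# API usage exists.

def diff_profiles(tree_a, tree_b, path=""):
    """Each tree maps API name -> (cost, subtree). Yields (path, cost_a,
    cost_b) for matched APIs whose costs differ by more than 2x."""
    for api, (cost_a, sub_a) in tree_a.items():
        if api in tree_b:
            cost_b, sub_b = tree_b[api]
            here = f"{path}/{api}"
            if cost_a > 2 * cost_b or cost_b > 2 * cost_a:
                yield (here, cost_a, cost_b)
            yield from diff_profiles(sub_a, sub_b, here)

# Hypothetical profiles of two similar photo apps (costs in mJ).
app_a = {"loadImage": (900, {"decodeFull": (850, {})})}
app_b = {"loadImage": (300, {"decodeThumbnail": (250, {})})}
findings = list(diff_profiles(app_a, app_b))
# findings points at loadImage: app_b's thumbnail-based usage is ~3x cheaper.
```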


    Bio:

    Abhilash Jindal received B.Tech. in Electrical Engineering from IIT Kanpur. He received his Ph.D. in Computer Science from Purdue University where he researched software-based approaches for extending mobile battery life. He has published papers in top system conferences including OSDI, ATC, Eurosys, Mobisys, Sigmetrics, and Mobicom. Currently, he is commercializing his research work serving as the CTO of Mobile Enerlytics, a silicon valley startup. His research interests include mobile systems, software engineering, and operating systems.



  • Computing in the age of low memory


    Speaker:

    Dr. Amitabh Trehan

    Date:2019-04-26
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    We look at the problem of computing in large networks of low-memory nodes, as may arise in settings such as the Internet of Things. We discover a connection between distributed computing and streaming, and formalise it by introducing the Compact Local Streaming model: nodes in an arbitrary network have low memory (at most O(log^2 n)) even though they may have O(n) neighbours, and computation happens by `local streaming' at nodes.
    We look at some simple variants of the model and then propose the first self-healing compact routing protocol. Our scheme, based on the Thorup-Zwick [SPAA 2001] tree routing scheme and the ForgivingTree [PODC 2008] self-healing scheme, is a compact scheme (using at most O(log^2 n) memory per node) that functions despite node failures with only a small additional overhead.
    This research was supported by the EPSRC first grant COSHER: Compact Self-Healing Routing

    References:
    [1] Armando Castañeda, Jonas Lefèvre, Amitabh Trehan. Some Problems in Compact Message Passing. https://arxiv.org/abs/1803.03042
    [2] Armando Castañeda, Danny Dolev, Amitabh Trehan. Compact routing messages in self-healing trees. Theoretical Computer Science, 2018. https://www.sciencedirect.com/science/article/pii/S0304397516306818?via%3Dihub


    Bio:

    Dr. Amitabh Trehan is an assistant professor of Computer Science at Loughborough University, specialising in algorithms, in particular distributed and reliable (self-healing) algorithms. He received his PhD from the University of New Mexico, USA, followed by postdocs in Canada (Victoria) and Israel (Technion, Hebrew University). Before going to the US, he did a BSc at Punjab University and Master's degrees at IGNOU and IIT Delhi.




  • Opinion Dynamics in Social Networks: Modeling and Inference


    Speaker:

    Dr. Abir De

    Date:2019-04-25
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Social media and social networking sites have become a global pinboard for the exposition and discussion of news, topics, and ideas, where users often update their opinions about a particular topic by learning from the opinions shared by their friends. Understanding such a networked opinion formation process has a large spectrum of applications, for example poll prediction and brand estimation. Beyond prediction, news and information are often curated by editorial boards and professional journalists, who post feeds on users' walls in order to steer their opinion to a desired state. Therefore, the underlying opinion dynamical process also involves the influence of external sources, which are difficult to identify. In this talk, I shall present challenging tasks related to opinion dynamics from both perspectives: (a) learning a data-driven model of spontaneous opinion exchange; and (b) demarcating endogenous and exogenous opinion diffusion processes in social networks. Finally, I will present some interesting directions for future research.


    Bio:

    Dr. Abir De, Professor



  • Designing intelligent machines via reactive systems


    Speaker:

    Suguman Bansal, Rice University, Houston

    Date:2019-04-24
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Intelligent machines, such as IoT devices, are fundamentally reactive systems that interact with the outer physical environment to achieve a goal. Asynchronous interactions are ubiquitous in reactive systems and complicate their design and programming. Automatic construction of asynchronous reactive programs from specifications ("synthesis") could ease the difficulty, but known methods are complex and intractable in practice.
    In this talk, I will formulate a direct, exponentially more compact automaton construction for the reduction of asynchronous to synchronous synthesis. Experiments with a prototype implementation of the new method demonstrate practical feasibility. Furthermore, it is shown that for several useful classes of temporal properties, automaton-based methods can be avoided altogether and replaced with simpler Boolean constraint solving.


    Bio:

    Suguman Bansal is a PhD candidate in the Department of Computer Science at Rice University. Her research interest spans formal methods, specifically the application of quantitative verification and reactive synthesis. She was a visiting graduate student at the Simons Institute Spring Program on "Real-Time Decision Making" at the University of California, Berkeley in 2018, and has spent the summers of 2017 and 2018 interning at Nokia Bell Labs. She is the recipient of the Andrew Ladd Fellowship, and a Gold Medal at the Association for Computing Machinery (ACM) Student Research Competition at POPL.



  • Static Analysis and Automated Verification for Replicated Systems


    Speaker:

    Dr. Kartik Nagar, Purdue University

    Date:2019-04-15
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Replicated systems provide a number of desirable properties such as scalability, always-on availability, fault tolerance and low latency in a geo-distributed setting. However, writing correct applications for such systems is inherently hard due to their non-sequential nature and unbounded concurrency. To make such systems easier to program, a number of so-called weakly consistent replicated data stores have emerged in the last few years, providing a tradeoff between consistency and performance. It is still a monumental task for programmers to find the weakest consistency level under which their applications can run correctly. In this talk, I will present some of my recent work which addresses this problem by proposing static analysis techniques to reason about the correctness of programs in replicated systems. Notably, the proposed techniques are parametric in the weak consistency model, and hence given an application and a specification of the replicated store, they can automatically find either a correctness bug if it exists, or verify that the application will run correctly. Different techniques are proposed targeting different correctness criteria used by programmers, such as preservation of state-based invariants, serializability for database applications and convergence for high-level data types.
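One of the correctness criteria mentioned, convergence for high-level data types, can be made concrete with a toy grow-only counter (an illustrative sketch, unrelated to the talk's analysis machinery): its merge is commutative, associative and idempotent, so replicas that eventually see the same updates agree regardless of delivery order.

```python
# Toy grow-only counter: each replica tracks its own increment count, and
# merge takes the per-replica maximum. Because merge is commutative,
# associative and idempotent, replicas converge once they have exchanged
# their states, which is the convergence property the analysis checks.

def g_increment(state, replica):
    state = dict(state)
    state[replica] = state.get(replica, 0) + 1
    return state

def g_merge(a, b):
    return {r: max(a.get(r, 0), b.get(r, 0)) for r in set(a) | set(b)}

def g_value(state):
    return sum(state.values())

r1 = g_increment(g_increment({}, "r1"), "r1")   # replica 1: two increments
r2 = g_increment({}, "r2")                      # replica 2: one increment
assert g_merge(r1, r2) == g_merge(r2, r1)       # order-independent merge
assert g_value(g_merge(r1, r2)) == 3            # all increments counted once
```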


    Bio:

    Kartik Nagar is a postdoctoral research associate at Purdue University working with Prof. Suresh Jagannathan. His research interests are in program analysis and automated verification. He completed his PhD at the Indian Institute of Science under the guidance of Prof. Y N Srikant; his PhD thesis was on static timing analysis of programs for real-time systems.



  • On optimal ε-nets for set systems


    Speaker:

    Dr. Kunal Dutta

    Date:2019-04-03
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    This talk shall have three main parts. First, I shall give a brief overview of my research areas and contributions. Next, I shall present some recent results on optimal bounds for ε-nets for set systems in terms of their shallow cell complexity, using the shallow packing lemma of D.-Ezra-Ghosh and Mustafa. Finally, I shall conclude with some slides on my research and teaching interests.

    ε-nets, introduced by Haussler and Welzl (1987), are compact representations of set systems. In the three decades since their introduction, they have found applications in several areas, ranging from statistical inference and machine learning to algorithms, databases and combinatorial optimization. Finding optimal-sized ε-nets for various set systems has therefore been one of the most widely investigated areas in computational geometry. A culminating point in this direction are the recent results of several authors bounding the size of ε-nets for set systems in terms of their shallow cell complexity. In a related direction, the celebrated packing lemma of Haussler (1991) states that, given a set system of bounded VC dimension, any sub-collection of sets with large pairwise symmetric difference cannot have too many sets. Recently it was generalised by D.-Ezra-Ghosh (2015) and by Mustafa (2016) to the shallow packing lemma.

    I shall present a very short proof of optimal ε-nets for set systems in terms of their shallow cell complexity, using the shallow packing lemma. This implies all known cases of results on unweighted nets studied for the past 30 years, starting from the result of Matoušek, Seidel and Welzl (1990) to that of Clarkson and Varadarajan (2007), to those of Varadarajan (2010) and Chan, Grant, Könemann and Sharpe (2012) for the unweighted case, as well as the technical and intricate paper of Aronov, Ezra and Sharir (2010). Based on joint works with Arijit Ghosh, Nabil Mustafa and Esther Ezra.
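For concreteness, the definition of an ε-net can be checked directly for a simple set system, points on a line with intervals as ranges (a toy sketch, independent of the talk's proof technique): a set N is an ε-net if every range containing at least an ε fraction of the points also contains a point of N.

```python
# Worked toy example of the eps-net definition: for n points on a line and
# ranges = intervals, N is an eps-net if every interval containing at
# least eps*n of the points also contains a point of N.

def is_eps_net(points, net, intervals, eps):
    n = len(points)
    for lo, hi in intervals:
        inside = [p for p in points if lo <= p <= hi]
        # A "heavy" interval missed by the net is a witness of failure.
        if len(inside) >= eps * n and not any(lo <= q <= hi for q in net):
            return False
    return True

points = list(range(100))                     # 100 points: 0..99
net = [10, 30, 50, 70, 90]                    # candidate net of size 5
intervals = [(0, 19), (25, 44), (60, 99)]     # each holds >= 20 points
assert is_eps_net(points, net, intervals, 0.2)
```

For intervals, small nets always exist; the talk's results concern how the achievable net size depends on the shallow cell complexity of more general set systems.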


    Bio:

    -



  • "Dimensional reduction and molecular signature extraction from large-scale genomics datasets"


    Speaker:

    Dr. Carl Herrmann, University Clinics Heidelberg

    Date:2019-03-29
    Time:16:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Recent advances in high-throughput sequencing have led to a spectacular increase in the amount of molecular sequencing data available. These techniques make it possible to track genome-wide gene expression (RNA-seq) as well as sequence features (whole genome sequencing) or epigenetic aspects such as histone modifications or DNA methylation. Together with phenotypic or clinical information, these data represent a comprehensive picture of the cellular processes, for example in disease. However, there are numerous computational challenges related to the dimension of these datasets and to the diversity of the molecular readouts. Hence, there is a challenge of (1) dimensionality reduction, in order to capture essential aspects and visualize them; and (2) data integration, in order to combine these distinct views into a unified representation. The goal is to extract lower-dimensional "molecular signatures" which might correspond, for example, to specific subtypes in a disease cohort.


    Bio:

    Professor, University Clinics Heidelberg



  • Opportunities and Challenges in Middleware Systems and Mobile Applications


    Speaker:

    Dr. Pushpendra Singh

    Date:2019-03-25
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    With the increasing use of mobile phones and applications by every section of society, the mobile phone has emerged as a truly ubiquitous computing platform. Similarly, with the emerging IoT devices, we are entering the era of ubiquitous computing and sensing.  Today, the data collected/generated from the sensors, IoT devices, and mobile phones is being used in different domains e.g. energy, environment, transport, and health.  To collect, manage, and analyze this data, we need middleware solutions. Similarly, mobile applications, supported by middleware, have a huge potential in filling the technology gap that is prevalent in rural India especially in the areas of public health and education.  I will provide a brief overview of Middleware research in my group in the area of Energy. Then, I will discuss one project from middleware and mobile crowdsensing aspect and another project, a mobile phone-based platform,  for training community health workers.
     
    Our mobile crowdsensing work provides deep insight into the impact of battery consumption pattern on task allocation algorithms. We show that different battery usage patterns affect the performance of the task allocation algorithms. Our work provides an important insight into factors affecting the performance of allocation algorithms and advocates incorporating battery usage patterns for future development of these algorithms. None of the existing algorithms currently incorporate battery usage patterns.  I will also discuss future research directions in this area.
     
    Inadequate training of community health workers (CHWs) is known to affect the outcomes of health interventions. Given the increasingly important role of CHWs in rural areas, there is a need for a low-cost and scalable technology platform for training them while compensating for the lack of instructors/trainers. Our platform, Sangoshthi, provides a technical platform to support large-scale training of CHWs using a combination of feature phones and a single smartphone. The system has been deployed and is being used for training CHWs. The training has also resulted in statistically significant gains in knowledge. This research is now funded by the Bill & Melinda Gates Foundation. I will present the system and also explain future research plans in this area. I will conclude with other technology-led initiatives in the area of public health.


    Bio:

    Pushpendra Singh is an associate professor in the dept. of Computer Science and Engineering and the dept. of Human-Centered Design. His research interests are in the areas of mobile middleware systems and HCI for development.




  • "Congestion Control for Networks with Link and Traffic Variability"


    Speaker:

    Prateesh Goyal

    Date:2019-03-20
    Time:15:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    Achieving high throughput and low delay has been a primary motivation for congestion control researchers for decades. Unlike wired links, wireless links exhibit variations in available capacity making this task particularly challenging. This talk will present  Accel-Brake Control (ABC), a simple and deployable explicit congestion control protocol for network paths with time-varying wireless links. ABC routers mark each packet with an "accelerate" or "brake", which causes senders to slightly increase or decrease their congestion windows. Routers use this feedback to quickly guide senders towards a desired target rate.
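The accelerate/brake feedback can be sketched as a toy simulation (an illustrative model of the rule described above, not the deployed protocol; the router's marking policy here is a simplification): each "accelerate" mark grows the sender's window by one packet, each "brake" shrinks it by one, and the router chooses the split of marks so the window heads toward a target.

```python
# Toy model of ABC's per-RTT feedback: the router marks `accel` of the
# window's packets "accelerate" (+1 to the window each) and the rest
# "brake" (-1 each), choosing the split so the next window moves toward
# the target set by the router's rate computation.

def abc_rtt(window, target):
    accel = max(1, min(window, target // 2))   # simplified marking policy
    brake = window - accel
    return window + accel - brake              # == 2 * accel

w = 1
for _ in range(8):
    w = abc_rtt(w, target=20)
# The window doubles while far below target, then locks onto the target,
# illustrating the fast convergence the explicit feedback enables.
```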

    The talk will also describe Nimbus, a congestion control system for wide-area networks with traffic variability. Nimbus enables a sender to reduce queuing delays without compromising fairness with cross traffic. Nimbus employs a novel technique to detect whether the cross traffic competing with a flow is elastic or not, and uses the elasticity detector to improve congestion control by explicitly switching operating modes.


    Bio:

    Prateesh Goyal is a PhD student in the Networks and Mobile Systems group at MIT CSAIL, advised by Hari Balakrishnan and Mohammad Alizadeh. He is broadly interested in improving quality of service for Internet users. He and his colleagues created the ABC and Nimbus protocols for better congestion control over wireless (Wi-Fi and cellular) and wide-area networks respectively. He has also worked on adaptive bitrate algorithms for video streaming (Minerva), implementing congestion control outside the data plane (CCP), and monitoring network performance (Marple). He received the SIGCOMM Best Paper Award, and the Institute Silver Medal and Shankar Dayal Gold Medal at IIT Bombay.



  • Challenges and Opportunities in Motion and Eye Tracking


    Speaker:

    Prof. Eakta Jain, University of Florida

    Date:2019-03-19
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    We are on the cusp of ubiquitous human tracking. With Internet of Things and pervasive sensing, we can expect that our motion will be tracked, our faces will be recorded and interpreted, and our bio-measurements will be routinely monitored. My research considers two aspects of ubiquitous human tracking: motion tracking and eye tracking. In this talk, I will discuss one project from each aspect.

    First: While large databases of adult motion are commoditized today, the same type of data is hard to collect for young children. However, this data is extremely valuable for developing motion models for motion tracking and synthetic motion generation. Given the difficulty of motion-capturing children, I will discuss our research organized around two questions: (a) can people perceive whether a motion came from a child or an adult in the absence of appearance information? (b) can we algorithmically transform adult motion capture data to look child-like?

    Second: Large databases of eye-tracking data have been collected and used to generate computational models of visual saliency. While streaming data from bio-sensors such as EDA, HR, BVP have been used for performance analysis and affective analysis, eye-tracking data has been largely ignored for these applications. As eye-tracking becomes a built-in service for virtual and augmented reality headsets, we ask the question: can we extract an index of affect from eye-tracking data? I will discuss our research aimed at leveraging the pupil as an index of user engagement.


    Bio:

    Eakta Jain is an Assistant Professor of Computer and Information Science and Engineering at the University of Florida. She received her PhD and MS degrees in Robotics from Carnegie Mellon University and her B.Tech. degree from IIT Kanpur. She has worked in industrial research at Texas Instruments R&D labs, Disney Research Pittsburgh, and the Walt Disney Animation Studios. Her research group at the University of Florida is funded through faculty research awards from Facebook/Oculus and Google/YouTube, federal funding from the National Science Foundation, and state funding from the Florida Department of Transportation.



  • Policy Enforcement Across Web Applications


    Speaker:

    Abhishek Bichhawat, Carnegie Mellon University, USA.

    Date:2019-03-14
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Web applications routinely access sensitive and confidential data through remote APIs. The privacy of such data is governed by complex policies implemented as inline checks scattered across application code and database queries. However, the complexity of the policies and the code often results in missed policy checks that cause unauthorized information leaks. We present Estrela, a framework for building Web applications that allows rich and expressive policies to be specified separately from the code and enforces them on the server side based on what data is being accessed and how. On the client side, we build an information-flow-tracking policy specification and enforcement mechanism into Web browsers for fine-grained enforcement of developer-specified policies.


    Bio:

    Abhishek Bichhawat is a postdoctoral fellow in the School of Computer Science at Carnegie Mellon University, USA. His research focuses on language-based security, program analysis and verification, and building secure-by-design systems. He received a PhD from Saarland University where he was affiliated with CISPA and the International Max Planck Research School for Computer Science.



  • Amplification Theorems for Differentially Private Machine Learning


    Speaker:

    Kunal Talwar (Google)

    Date:2019-03-13
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    A rigorous foundational approach to private data analysis has emerged in theoretical computer science in the last decade, with differential privacy and its close variants playing a central role. Recent works have shown that complex machine learning models can be trained with little accuracy loss while giving strong differential privacy guarantees. The analyses of these algorithms rely on a class of results known as privacy amplification theorems. In this talk, I will describe two recent results of this kind.
    (Joint works with Ulfar Erlingsson, Vitaly Feldman, Ilya Mironov, Ananth Raghunathan and  Abhradeep Thakurta)
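
    As a flavor of what a privacy amplification theorem says (a standard textbook example shown here for illustration; the talk's two results may differ), amplification by subsampling states that running an eps-DP mechanism on a uniformly random q-fraction subsample of the dataset is eps'-DP with eps' = log(1 + q(e^eps - 1)), which is roughly q * eps for small eps. A minimal sketch:

```python
import math

def amplified_epsilon(eps, q):
    """Privacy amplification by subsampling: an eps-DP mechanism applied
    to a uniformly random q-fraction subsample of the data satisfies
    eps'-DP with eps' = log(1 + q*(e^eps - 1)).  For small eps this is
    approximately q * eps, i.e. subsampling buys privacy."""
    return math.log1p(q * math.expm1(eps))

# A 1-DP mechanism run on a 1% subsample is roughly 0.017-DP.
eps_prime = amplified_epsilon(1.0, 0.01)
```

    Note that with q = 1 (no subsampling) the bound collapses back to eps, as it should.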


    Bio:

    Kunal Talwar (Google)



  • Relaxed Memory Concurrency and Compiler Correctness


    Speaker:

    Soham Chakraborty, MPI-SWS

    Date:2019-03-07
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    In most analysis/verification techniques, the behavior of concurrent programs is understood in terms of thread interleavings, a model formally known as sequential consistency (SC). For performance reasons, modern hardware implementations allow some executions which cannot be explained by thread interleavings. Similarly, standard compiler optimizations can affect the outcomes of concurrent programs in ways that cannot be explained by SC. Therefore, to program concurrent systems correctly and effectively, it is important to come up with a programming language semantics that accounts for all the non-SC behaviors that hardware and compilers produce. Defining such a "relaxed" memory model is challenging due to conflicting requirements: the model should not only enable efficient compilation, but also provide programmability guarantees.
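
    The gap between SC and real hardware can be seen on the classic store-buffering litmus test. The sketch below (illustrative only, not tied to the talk's formal model) enumerates every thread interleaving and shows that SC can never produce r1 = r2 = 0, even though x86 store buffers and common compiler optimizations can:

```python
from itertools import permutations

# Store-buffering litmus test, with x = y = 0 initially:
#   Thread 1: x = 1; r1 = y        Thread 2: y = 1; r2 = x
T1 = [("store", "x", None), ("load", "y", "r1")]
T2 = [("store", "y", None), ("load", "x", "r2")]

def sc_outcomes():
    """All (r1, r2) outcomes reachable under sequential consistency,
    i.e. by interleaving the four operations of the two threads."""
    outcomes = set()
    for order in set(permutations([0, 0, 1, 1])):  # which thread steps next
        mem = {"x": 0, "y": 0}
        regs, idx = {}, [0, 0]
        for t in order:
            kind, var, reg = (T1, T2)[t][idx[t]]
            idx[t] += 1
            if kind == "store":
                mem[var] = 1
            else:
                regs[reg] = mem[var]
        outcomes.add((regs["r1"], regs["r2"]))
    return outcomes
```

    Under SC, sc_outcomes() contains (0, 1), (1, 0) and (1, 1) but never (0, 0); a relaxed model must additionally admit (0, 0) to be efficiently implementable on mainstream hardware.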

    In this talk I will introduce relaxed memory consistency and present our work on formally defining a good concurrency model for C/C++. As an application of our model, I will also present a translation validator for the optimizations of LLVM, a state-of-the-art C/C++ compiler. The validator has exposed bugs in LLVM concerning the compilation of concurrent C/C++ programs and has revealed interesting differences between the C/C++ and LLVM concurrency semantics.


    Bio:

    Soham Chakraborty is a PhD candidate at the Max Planck Institute for Software Systems (MPI-SWS), Germany. His main research interests are relaxed memory concurrency and compiler correctness. He received his BE in Computer Science and Engineering from Vidyasagar University and his MS in Computer Science and Engineering from IIT Kharagpur. Before starting his PhD, he worked in industrial research and development positions for five years.



  • Common Sense Knowledge for Natural Language Understanding


    Speaker:

    Dr Ashutosh Modi,  Disney Research LA

    Date:2019-03-06
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Humans have always envisioned creating machines that could assist in daily chores. With this vision in mind, the field of Artificial Intelligence (AI) was introduced in 1956 by John McCarthy and colleagues. Given that the natural mode of communication for humans is the language they speak or write (a.k.a. natural language), it is desirable to have machines that can communicate in the same way. However, developing computer models of natural language is far from trivial. Language comes with its own complexities and inherent ambiguities. In this talk, I will show how one can leverage common sense knowledge about the world to make the task of natural language understanding (NLU) easier.

    I will present research on modeling common sense knowledge and its integration into NLU systems, together with empirical results showing the contribution of common sense knowledge to language comprehension in humans. The latter part of the talk will show how social common sense in the form of affect (e.g., emotions) can be useful for NLU, and how affective information can be integrated into a conversational system to make it more engaging for humans.


    Bio:

    Dr. Ashutosh Modi is a postdoctoral researcher at Disney Research Los Angeles. His research focuses on developing models of human language in order to bridge communication between humans and machines. In particular, he focuses on affective computing, conversational systems, and Natural Language Understanding (NLU). He received his doctorate from Saarland University, Germany, for a thesis on "Modeling Common Sense Knowledge via Scripts". His doctoral research focused on modeling common sense knowledge about prototypical activities like "making a coffee" or "going to a restaurant". In the past, he worked at Siemens Research on a biomedical question answering system. Prior to that, he worked at Symantec Corporation in the area of email security, with a focus on applications of machine learning techniques. Ashutosh obtained a Masters degree from IIT Delhi (Indian Institute of Technology, Delhi) with a specialization in digital communication and signal processing.



  • Efficient Optimization for Machine Learning


    Speaker:

    Elad Hazan, Princeton

    Date:2019-02-27
    Time:16:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    In this talk we will describe recent advances in optimization that gave rise to state-of-the-art algorithms in machine learning. While stochastic gradient descent is the workhorse behind the recent deep learning revolution, improving its performance both theoretically and in practice has proven challenging. We will explore recent innovations in stochastic second order methods and adaptive regularization that yield some of the fastest algorithms known.
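
    As a flavor of what "adaptive regularization" means in this line of work (a minimal AdaGrad-style sketch under my own assumptions, not the specific algorithms of the talk), each coordinate gets its own step size that shrinks with the squared gradients accumulated so far, which helps on badly scaled problems:

```python
import math

def adagrad(grad, x0, lr=0.5, steps=200, eps=1e-8):
    """AdaGrad-style update: per-coordinate learning rates scaled by the
    inverse square root of the accumulated squared gradients."""
    x, g2 = list(x0), [0.0] * len(x0)
    for _ in range(steps):
        g = grad(x)
        for i in range(len(x)):
            g2[i] += g[i] * g[i]
            x[i] -= lr * g[i] / (math.sqrt(g2[i]) + eps)
    return x

# Badly scaled quadratic f(x) = x0^2 + 10 * x1^2; gradient is (2*x0, 20*x1).
x_star = adagrad(lambda x: [2 * x[0], 20 * x[1]], [1.0, 1.0])
```

    Because the accumulated squared gradient normalizes out the per-coordinate scale, both coordinates converge at a similar rate despite the 10x difference in curvature.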


    Bio:

    Professor, CSE department, Princeton University



  • Optimization-Based Model Validation: A Mathematical Technology of High Impact


    Speaker:

    Prof. Dr. Ekaterina Kostina

    Date:2019-02-25
    Time:16:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Validated, dependable mathematical models are of ever-growing importance in science and engineering, today also driven by serious demand from industry. Apart from providing scientific insight into complex nonlinear processes, mathematical models are fundamental for process simulation, optimization and control. In this context, the new paradigmatic concept of "digital twins" is worth mentioning; it is becoming more and more attractive among researchers and industrial companies, and is advocated by NASA, GE, Siemens and others. The idea is that every new system, product, or physical or economic process should be accompanied by a "digital twin", which comprises an ensemble of mathematical models and algorithms. This collection of models, analysis and solution algorithms (a virtual avatar, or virtual double) would ideally accompany the process from its origin throughout its lifetime, "grow" together with the process, and be used, among other things, to analyze data, predict malfunctioning and perform optimal operation. However, the application of mathematical models for the simulation, and even more for the optimization and control, of complex engineering processes requires their thorough validation and calibration by parameter and state estimation based on process data, which should preferably be obtained from optimized experiments and optimized measurement designs. For the latter, very efficient numerical methods have been developed for the complex, non-standard optimal control problem of optimal experimental design for dynamic processes; these have proven to be a powerful mathematical technology of high economic impact in industrial practice.

    The talk will address new developments in optimization methods for validation and calibration of models, in particular the design of robust optimal experiments based on a second order sensitivity analysis of parameter estimates and design of optimal experiments for  PDE models.


    Bio:

    Prof. Dr. Ekaterina Kostina, Institute for Applied Mathematics, Heidelberg University



  • "Towards an Intelligence Architecture for Human-Robot Teaming"


    Speaker:

    Rohan Paul

    Date:2019-02-20
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Advances in autonomy are enabling intelligent robotic systems to enter human-centric environments like factories, homes and workplaces. To be effective as a teammate, we expect robots to accomplish more than simplistic repetitive tasks; they must perceive, reason, and perform cognitive tasks in a human-like way. A system's ability to act intelligently is fundamentally tied to its understanding of the world around it. In this talk, I will present recent work on constructing an intelligence architecture centered around a semantic world model that can be learned from observations, background knowledge and human guidance, enabling the robot to follow complex natural language commands. I will discuss how an intelligent agent can interact with the environment to fill gaps in its knowledge about the world. Finally, I will draw connections with prior work in embodied assistive devices for persons with blindness.


    Bio:

    Rohan is a Postdoctoral Associate at the Computer Science and Artificial Intelligence Laboratory at MIT. His research focuses on grounding natural language instructions and on concept acquisition from language and vision, aimed at capable service robots that interact seamlessly with humans. He pursued a D.Phil. degree at the University of Oxford as a Rhodes Scholar from India; his doctoral research contributed Bayesian models for large-scale semantic mapping and navigation. Rohan obtained a Dual Degree in Computer Science and Engineering and served as a Postdoctoral Fellow at IIT Delhi (2012-2015), contributing to the development of affordable assistive devices for persons with visual impairment, particularly aimed at mobility and education. Rohan's research has received Best Paper Awards at RSS 2016, IROS 2013, TRANSED 2012/2010 and ICRA 2010 (Best Vision Paper Finalist). He is a recipient of the Bracken Bequest Graduate Research Prize at Oxford University, the INAE National Award, and the BOSS and FITT Awards at IIT Delhi. He was a member of the team that received the National Award for Outstanding Applied Research Improving the Life of Persons with Disabilities conferred by the Govt. of India, and was named one of "35 Global Innovators Under 35" by MIT Technology Review in 2016.




  • Cyber-Security Engineering via the Lens of Decision Science - The Case of Social-Networked Distributed Systems


    Speaker:

    Ranjan Pal

    Date:2019-02-19
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    It is widely known in the security research community that the science and engineering of cyber-security is not solely a one-dimensional technical exercise, especially in settings with distributed networked entities. Strengthening cyber-security is a challenging multi-stakeholder, multi-dimensional problem, where the "optimal" approach to modern security systems design requires technology to be aptly complemented with ideas from decision-scientific disciplines such as game theory, AI/learning, and computational economics (both mathematical and behavioral), to name a few. In this talk, I will touch upon the idea of cyber-insurance driven security engineering, an inter-disciplinary decision-scientific engineering approach to improving the security of networked distributed systems. In this context, I will focus on the design of (a) an AI-driven, computationally efficient cyber-risk estimation methodology, (b) market-efficient insurance solutions that improve network security while mutually satisfying multiple conflicting stakeholders, and (c) a Stackelberg-game-driven smart security pricing framework for networks that efficiently alleviates the drawbacks of the aforementioned market solutions, resulting in a win-win state for all stakeholders.


    Bio:

    Ranjan Pal is a Visiting Fellow in the Department of Computer Science and Technology at the University of Cambridge, hosted by Prof. Jon Crowcroft, and a member of the Cambridge Trust and Technology Initiative. Prior to this, he was a Research Scientist at the University of Southern California (USC), where he was affiliated with both the Electrical Engineering and Computer Science departments. His primary research interests lie in the applications of decision science tools to problems in cyber-security, privacy, and resource management in distributed systems. Ranjan received his PhD in Computer Science from USC in 2014, and was the recipient of the Provost Fellowship throughout his PhD studies. During his graduate studies, Ranjan held visiting student researcher positions at Princeton University, Deutsche Telekom Research Laboratories (T-Labs) Berlin, and Aalborg University. Ranjan is an active industry consultant in cyber-security, and is currently a technical advisor to QxBranch. He is a member of the IEEE, ACM, the American Mathematical Society (AMS), and the Game Theory Society.





  • Fault-Tolerant Structures in Directed Graphs


    Speaker:

    Keerti Choudhary (Weizmann)

    Date:2019-02-05
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    In this talk, we look at the problem of single-source reachability (SSR) in the presence of failures of vertices and edges. In the static setting, SSR can be solved in O(m+n) time for any directed graph G on n vertices and m edges. To model the scenario of a faulty network, we associate a parameter k with the network such that at most k vertices (or edges) are failed at any stage. The goal is to preprocess the graph G in polynomial time and compute a compact data structure that, given any set F of at most k vertices (or edges), efficiently computes the vertices reachable from the source in the graph G \ F. We show that for any set F of size k, our algorithm takes O(2^k n) time. We also consider reachability questions over a set of vertex pairs in V x V. As an application, we obtain a fault-tolerant algorithm for computing the SCCs (strongly connected components) of a graph after k failures.
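
    For concreteness, the baseline that such a data structure competes against is simply rerunning a graph search on G with the failed set F removed at query time. A minimal sketch of that baseline (illustrative only; the talk's structure answers the same query from precomputed information instead of re-traversing the graph):

```python
from collections import deque

def reachable_after_failures(adj, source, failed):
    """Vertices reachable from `source` in the directed graph `adj`
    (dict: vertex -> list of out-neighbours) once the vertices in
    `failed` are removed.  This is the naive O(m+n)-per-query
    recomputation that a fault-tolerant structure aims to beat."""
    failed = set(failed)
    if source in failed:
        return set()
    seen = {source}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in failed and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

# Example: 0 -> {1, 2}, 1 -> 3, 2 -> 3, 3 -> 4.
adj = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
```

    For instance, failing vertex 1 still leaves 3 and 4 reachable through 2, while failing both 1 and 2 cuts everything beyond the source.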


    Bio:

    Keerti Choudhary is a postdoctoral fellow at the Weizmann Institute of Science, Israel, hosted by Prof. David Peleg. Prior to this, she completed her PhD in Computer Science at IIT Kanpur in August 2017, where she was advised by Prof. Surender Baswana. She is broadly interested in problems in graph theory and algorithms.



  • Prophet Problems and Posted Prices


    Speaker:

    Ashish Chiplunkar

    Date:2019-02-04
    Time:12:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    The prophet problem is one of the central problems in optimal stopping theory. This problem and its related problems model the task of selling items to an online sequence of buyers. In the prophet problem, we are given the distributions of a sequence of independent random variables. We are then presented the realizations of the random variables in an online manner, and we must select one of them as soon as it is presented. When the sequence of random variables is adversarial, a celebrated result called the prophet inequality states that an algorithm can get, in expectation, at least half the expected maximum of the random variables. In the prophet secretary problem, the arrival sequence is uniformly random (but the distributions are adversarial). I will present an algorithm for prophet secretary which, in expectation, obtains at least (1-1/e+1/400) times the expected maximum. This improves the previously known (1-1/e) factor, which was believed to be optimal for natural reasons.
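
    The factor-1/2 guarantee of the classic prophet inequality comes from a strikingly simple rule: compute T = E[max]/2 from the known distributions and accept the first realization that reaches T. The Monte Carlo sketch below illustrates that rule (the distributions and helper names are my own; the (1 - 1/e + 1/400) prophet-secretary algorithm of the talk is more involved):

```python
import random

def prophet_threshold(samplers, trials=20000, seed=0):
    """Empirically compare the single-threshold stopping rule against the
    prophet, who always takes the maximum.  `samplers` is a list of
    functions (one per independent random variable, in arrival order),
    each mapping an RNG to a sample."""
    rng = random.Random(seed)
    # Estimate E[max] by Monte Carlo, then set the classic threshold.
    est = sum(max(s(rng) for s in samplers) for _ in range(trials)) / trials
    threshold = est / 2  # accept the first X_i >= E[max]/2
    algo_total, prophet_total = 0.0, 0.0
    for _ in range(trials):
        xs = [s(rng) for s in samplers]
        prophet_total += max(xs)
        algo_total += next((x for x in xs if x >= threshold), 0.0)
    return algo_total / trials, prophet_total / trials

# Three independent uniforms with different ranges, arriving in order.
samplers = [lambda r: r.uniform(0, 1),
            lambda r: r.uniform(0, 2),
            lambda r: r.uniform(0, 3)]
algo, prophet = prophet_threshold(samplers)
```

    Over the seeded trials, the threshold rule's average stays comfortably above half the prophet's, matching the guarantee of the prophet inequality.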



    Bio:

    .



  • Universal Access to All Information


    Speaker:

    Carl Malamud

    Date:2019-01-31
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    For the past 30 years, Carl Malamud has helped place more than 5 crore pages of new information on the Internet. He'll discuss how he helped place large government databases, such as the U.S. Patent Database, on the Internet for free access over the objections of government. He'll also discuss his work in India over the past few years, running a collection of 4 lakh books called the Public Library of India and his efforts to make all 19,000 Indian Standards available. He will also discuss broader challenges, such as building a true public library of India and the moral imperative to make available the full scholarly corpus of journal articles for text and data mining. Carl will discuss universal access to information as the great promise of our times and the challenges we face in making that promise a reality.


    Bio:

    Carl Malamud is founder of Public Resource, a California-based nonprofit organization that does extensive work on the Internet. He is the author of 9 professional reference books about computer networks and databases. In 1993, he ran the first radio station on the Internet. Carl has been a Visiting Professor at the MIT Media Lab and is the recipient of the Pioneer Award from the Electronic Frontier Foundation (EFF) and the Berkman Award from Harvard University "for his outstanding contributions to the Internet's impact on society."

    Background Reading:

    1. Interview: 'This Little USB Holds 19,000 Indian Standards. Why Should it Not Be Made Public?' https://thewire.in/tech/interview-little-usb-holds-19000-indian-standards-not-made-public

    2. Op-Ed: Who May Swim in the Ocean of Knowledge? https://thewire.in/education/who-may-swim-in-the-ocean-of-knowledge



  • Extreme-Efficiency Computing


    Speaker:

    Raghav Pothukuchi

    Date:2019-01-23
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Computing is becoming ubiquitous and personal, powered by a variety of systems ranging from wearables to datacenters. However, this growth also leads us into an era of resource-constrained computing, where resources such as energy or storage often seem insufficient, and many constraints, such as temperature and power, are imposed on the computation. My vision is to build Extreme-Efficiency Computing (EEC) systems that deliver the desired Quality of Service (QoS), throughput and responsiveness while avoiding the undesirable consequences of inefficiency in this environment: loss of information, money and even human life. Currently, efficiency is sought through specialized processing, reconfigurable fabrics, and typically ad hoc control policies in the hardware, Operating System (OS) and application. However, results with ad hoc designs are often disappointing.

    In my talk, I will describe my research contributions towards EEC so far and future directions. This is interdisciplinary work spanning Computer System Architecture, Control Theory, Machine Learning and Distributed Systems (Cloud and High-Performance Computing) design. I use theoretically solid techniques from Control Theory and Machine Learning to address the pervasive inefficiency in multiple layers of the computing stack. Using experiences from prototypes in research and industrial settings, I will present the challenges in this field and the effectiveness of the systematic control methods I have been developing.


    Bio:

    Raghavendra (Raghav) is a PhD candidate at the University of Illinois, where he is advised by Prof. Josep Torrellas. His research is on building Extreme-Efficiency Computing systems. His work spans the areas of Computer Architecture, Control Theory, Machine Learning, OS, and Distributed System Design. He won the W. J. Poppelbaum Award from the Dept. of CS at Illinois for architecture design creativity, a Mavis Future Faculty Fellowship at Illinois, and an ACM Student Research Competition at PACT'17. He interned at AMD, where his modular control design is being considered for upcoming heterogeneous systems. He received his Masters in CS from Illinois in 2014 and worked at Nvidia before graduate school. He has a Bachelors (Hons.) in Electrical and Electronics Engineering from the Birla Institute of Technology & Science (BITS) Pilani, India, where he received the university gold medal on graduation.

    http://pothukuchi.web.illinois.edu/



  • ABATe: Automatic Behavioral Abstraction Technique to Detect Anomalies in Smart Cyber-Physical Systems


    Speaker:

    Sandeep Nair Narayanan

    Date:2019-01-22
    Time:14:00:00 (IST)
    Venue:CSE, Seminar Room
    Abstract:

    Detecting anomalies and attacks in smart cyber-physical systems is of paramount importance owing to their growing prominence in controlling critical systems. However, this is a challenging task due to the heterogeneity and variety of components of a CPS, and the complex relationships between sensed values and potential attacks or anomalies. Such complex relationships are the result of physical constraints and domain norms that exist in many CPS domains. In this work, we propose ABATe, an Automatic Behavioral Abstraction Technique based on neural networks for detecting anomalies in smart cyber-physical systems. Unlike traditional techniques, which abstract the statistical properties of different sensor values, ABATe learns complex relationships between event vectors from normal operational data, which is available in abundance in smart CPSs, and uses this abstracted model to detect anomalies. ABATe detected more than 88% of attacks in the publicly available SWaT dataset, featuring data from a scaled-down sewage water treatment plant, with a very low false positive rate of 1.3%. We also evaluated our technique's ability to capture domain semantics and multi-domain adaptability using a real-world automotive dataset, as well as a synthetic dataset.
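
    To make the contrast concrete, the sketch below shows the kind of per-sensor statistical abstraction that the traditional techniques mentioned above rely on: a mean/deviation profile learned from normal operational data, flagging events that deviate per coordinate. (All names and thresholds here are illustrative; ABATe itself instead learns cross-sensor relationships between event vectors with a neural network.)

```python
import math

def fit_normal_profile(events):
    """Per-dimension mean/std profile learned from normal operational
    data.  `events` is a list of equal-length numeric event vectors."""
    n, d = len(events), len(events[0])
    mean = [sum(e[i] for e in events) / n for i in range(d)]
    std = [math.sqrt(sum((e[i] - mean[i]) ** 2 for e in events) / n) or 1.0
           for i in range(d)]
    return mean, std

def is_anomalous(event, profile, z=3.0):
    """Flag an event if any coordinate deviates more than z standard
    deviations from its own mean (ignores cross-sensor relationships)."""
    mean, std = profile
    return any(abs(x - m) / s > z for x, m, s in zip(event, mean, std))

# Normal operational data: two sensors hovering around 1.0 and 10.0.
events = [[1.0, 10.0], [1.1, 9.9], [0.9, 10.1], [1.0, 10.0]]
profile = fit_normal_profile(events)
```

    Such a profile catches out-of-range values but, unlike a model of relationships between sensors, cannot catch jointly implausible combinations of individually plausible readings.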


    Bio:

    Sandeep Nair Narayanan,  Ebiquity Lab University of Maryland Baltimore County.



  • Extracting and Labeling Information Facets


    Speaker:

    Mouna Kacimi

    Date:2019-01-22
    Time:12:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    Extracting information can be complex in advanced domains such as news, where events typically come with different facets including reasons, consequences, purposes, involved parties, and related events. The main challenge consists in first finding the set of facets related to each fact, and second tagging those facets with the relevant category. In this talk, I will present StuffIE, a fine-grained, facet-centric information extraction approach. The proposed approach (1) breaks down triples with complex arguments into a set of facts and facets, (2) provides nested relations, and (3) performs semantic tagging of the extracted information using machine learning strategies. I will also present the motivations and applications of our work, to give an overall picture of my research interests.


    Bio:

    Mouna Kacimi is an assistant professor at UniBZ. She received her doctoral degree in computer science in 2007 from the University of Bourgogne, France. Afterwards, she spent three years as a postdoc in the Databases and Information Systems group led by Gerhard Weikum at the Max Planck Institute for Informatics. Her research interests focus on enhancing the search capabilities of information retrieval systems by developing new information extraction and text mining strategies.



  • Improving Conversational Interactions at Alexa


    Speaker:

    Rahul Goel, Amazon

    Date:2019-01-21
    Time:15:30:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Digital assistants are becoming commonplace. In this talk, we go over how these systems work and some of the challenges being solved to make them better. One component of such a system is the spoken language understanding (SLU) component. Typically, SLU systems factor the language understanding problem into intent classification and slot tagging. This has an inherent limitation for more complex linguistic phenomena like coordination, where the user expresses multiple intents in a single statement.
    In the latter half of the talk, we present two ways to add structure to SLU. First, we talk about incorporating an external parser to handle coordination when extending a legacy system. Second, we pose SLU as a shallow semantic parsing problem, which can also handle tasks like question answering. We also talk about addressing the data sparsity issue by transfer learning between domains and by techniques like delexicalization and copy mechanisms.
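
    A toy illustration of the intent/slot factorization, and of why coordination breaks a flat classifier that must pick exactly one intent per utterance. (All intent names, patterns and the split-on-"and" heuristic are invented for illustration; this is not Alexa's implementation.)

```python
def naive_slu(utterance):
    """Split a coordinated utterance into clauses, then produce one
    (intent, slots) frame per clause.  Without the split, a flat
    intent classifier would have to choose a single intent for
    'play jazz and set an alarm' and lose half the request."""
    frames = []
    for clause in utterance.lower().split(" and "):
        words = clause.split()
        if words and words[0] == "play":
            frames.append({"intent": "PlayMusic",
                           "slots": {"genre": " ".join(words[1:])}})
        elif words and words[0] == "set":
            frames.append({"intent": "SetAlarm", "slots": {}})
        else:
            frames.append({"intent": "Unknown", "slots": {}})
    return frames

frames = naive_slu("play jazz and set an alarm")
```

    Real systems replace the brittle keyword rules with learned classifiers and taggers, and (as in the talk) replace the naive split with a parser or a shallow semantic parse.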


    Bio:

    Rahul Goel is a machine learning scientist at Alexa AI where he works on improving spoken language understanding and dialog systems. Many of his contributions are currently deployed in Alexa. His research interests include dialog systems, language understanding, deep learning, and social computing. Before joining Amazon he was a graduate student at Georgia Tech working with Dr. Jacob Eisenstein on computational social science. Prior to that, he completed his bachelors from IIT, Delhi.



  • A Framework for Specifying and Reasoning about Computations


    Speaker:

    Prof. Gopalan Nadathur

    Date:2019-01-16
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    The computational structure of formal systems is often described through syntax-directed rules in the style of structural operational semantics. Such presentations are typically intended to help in prototyping and in proving properties of these systems. This talk will survey techniques that we have developed to provide rigorous, computer-based support for such activities. Specifically, I will discuss the higher-order abstract syntax approach to treating binding structure concisely and declaratively; the use of enriched recursive relational specifications to encode rule-based descriptions; a logic of definitions that includes (co-)induction principles for reasoning about such relational specifications; and a generic quantifier called nabla that allows for the treatment of names in specifications and of free variables that arise during computation. I will also outline a two-level logic approach to reasoning about specifications. These ideas have led to several actual computer systems, such as Teyjus, Bedwyr, and Abella, that will be touched upon during the talk.


    Bio:

    Prof. Computer Science and Engineering, University of Minnesota



  • "Complexity theory, cryptography and tests for quantumness"


    Speaker:

    Prof. Umesh Vazirani, UC Berkeley

    Date:2019-01-15
    Time:16:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    TBA


    Bio:

    Prof. Umesh Vazirani is a professor of computer science at UC Berkeley. He is a leading authority on complexity, algorithms, and quantum computing.



  • Explainable AI: Making Visual Question Answering Systems more Transparent


    Speaker:

    Prof. Raymond J. Mooney, University of Texas at Austin

    Date:2019-01-11
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Artificial Intelligence systems' ability to explain their conclusions is crucial to their utility and trustworthiness. Deep neural networks have enabled significant progress on many challenging problems such as visual question answering (VQA), the task of answering natural language questions about images. However, most of them are opaque black boxes with limited explanatory capability. The goal of Explainable AI is to increase the transparency of complex AI systems such as deep networks. We have developed a novel approach to XAI and used it to build a high-performing VQA system that can elucidate its answers with integrated textual and visual explanations that faithfully reflect important aspects of its underlying reasoning while capturing the style of comprehensible human explanations. Crowd-sourced human evaluation of these explanations demonstrates the advantages of our approach.


    Bio:

    Raymond J. Mooney is a Professor in the Department of Computer Science at the University of Texas at Austin. He received his Ph.D. in 1988 from the University of Illinois at Urbana/Champaign. He is an author of over 160 published research papers, primarily in the areas of machine learning and natural language processing. He was the President of the International Machine Learning Society from 2008-2011, program co-chair for AAAI 2006, general chair for HLT-EMNLP 2005, and co-chair for ICML 1990. He is a Fellow of the American Association for Artificial Intelligence, the Association for Computing Machinery, and the Association for Computational Linguistics and the recipient of best paper awards from AAAI-96, KDD-04, ICML-05 and ACL-07.



  • Collusion Resistant Traitor Tracing from Learning with Errors


    Speaker:

    Rishab Goyal (UT Austin)

    Date:2019-01-10
    Time:15:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    A long-standing open problem in cryptography has been to build traitor tracing systems with compact ciphertexts. In this work we provide a traitor tracing scheme in which the ciphertext size grows polynomially in log(N) where N is the number of users and prove it secure under the Learning with Errors (LWE) assumption. This is the first traitor tracing scheme with such parameters provably secure from a standard assumption.

    In a traitor tracing system, an authority runs a setup algorithm that takes as input the number, N, of users in the system. The setup outputs a public key PK, master secret key MSK, and N user keys (SK_1, ... , SK_N). The system has an encryption algorithm that uses the public key PK to create a ciphertext for a message M that is decryptable by any of the N user keys, but where the message will be hidden from any user who does not have access to the keys. Finally, suppose that some subset S subseteq [N] of users collude to create a pirate decoding box D which is capable of decrypting ciphertexts with some non-negligible probability. The tracing property of the system states that there exists a special algorithm Trace, which given the master secret key MSK and only oracle access to D, outputs a set of users T where T contains at least one user from the colluding set S and no users outside of S. The problem of traitor tracing was introduced by Chor, Fiat, and Naor in 1994.

    In the talk, I will discuss where the hardness of traitor tracing stems from, and describe our results, which center on a novel approach to building traitor tracing that starts with a new form of Functional Encryption that we call Mixed FE.

    I won't assume any prior knowledge of traitor tracing or related cryptographic primitives. This is joint work with Venkata Koppula and Brent Waters.
    Full paper: https://eprint.iacr.org/2018/346


    Bio:

    Rishab is a CSE, IIT Delhi graduate and is currently pursuing a PhD at the University of Texas at Austin (http://www.cs.utexas.edu/~rgoyal/).



  • Symbolic Polynomial Evaluation


    Speaker:

    Prof. Deepak Kapur

    Date:2019-01-09
    Time:15:30:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    -


    Bio:

    Deepak Kapur is an Honorary Professor in our Department, and a professor and former chair of the CS Department at the University of New Mexico, Albuquerque.  He is visiting us till this weekend.



  • Memory Defenses - the Elevation from Obscurity to Headlines


    Speaker:

    Prof. Rajeev Balasubramonian, University of Utah

    Date:2019-01-08
    Time:15:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    Recent attacks like Meltdown and Spectre have highlighted that modern processors are likely being shipped with latent vulnerabilities that are impossible to anticipate. Some of these vulnerabilities can be addressed with various hardware defenses.  Meltdown and Spectre may have finally pushed these defenses from the shadows of academia into possible commercial reality.

    This talk will describe three primary vulnerabilities in the memory system, and efficient hardware defenses to address each of these vulnerabilities. The first vulnerability is leakage of a program's memory intensity through memory controller timing channels.  The second is leakage of a program's memory access pattern through exposed DDR buses.  The third is a violation of memory integrity by a malicious cloud operator or malicious OS.  In fact, modern Intel SGX systems already have support for guaranteed memory integrity and we show how the performance of that solution can be improved by nearly 4X.


    Bio:

    Rajeev Balasubramonian is a Professor at the School of Computing, University of Utah.  He received his B.Tech in Computer Science and Engineering from the Indian Institute of Technology, Bombay in 1998. He received his MS (2000) and Ph.D. (2003) degrees from the University of Rochester.  His primary research interests include memory systems, security, and application-specific  architectures, and his work appears regularly at the top architecture conferences.  Prof. Balasubramonian is a recipient of a US National Science Foundation CAREER award, an IBM Faculty Partnership award, an HP Innovation Research Program award, an Intel Outstanding Research Award, various teaching awards at the University of Utah, and multiple best paper awards.



  • Coding Space Theory and Optimization of 3D Cameras


    Speaker:

    Prof. Mohit Gupta, University of Wisconsin-Madison

    Date:2019-01-04
    Time:12:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    Time-of-flight (ToF) cameras have fast emerged as the preferred 3D imaging technique in several scientific and consumer applications, including robot navigation, motion capture, and human computer interfaces. Although the spatial resolution of ToF cameras continues to rise with advances in sensor technology, the depth resolution is fundamentally limited by the camera's coding functions. Imagine a user wearing an augmented reality headset equipped with a low-power ToF depth camera. For her to achieve a realistic immersive experience, the camera must measure the 3D structure of the surroundings with extreme resolution.

    I will talk about our recent work on developing a formal coding theory of ToF cameras, and geometric optimization techniques for designing a new class of cameras with an order of magnitude higher depth resolution as compared to current cameras. These cameras can measure detailed 3D geometry (<100 microns) at long distances. Time permitting, I will also talk about cameras that can measure micro motions (e.g., detecting human pulse at long distances) using unconventional, but low-cost optics. These cameras can measure extremely subtle motions (< 10 microns), and, for the first time, track micro-motion for non-line-of-sight objects.


    Bio:

    Mohit Gupta is an Assistant Professor of Computer Sciences at the University of Wisconsin-Madison. His research interests are in computer vision and computational imaging. He received B. Tech in Computer Science from IIT-Delhi, Ph.D. from the Robotics Institute, Carnegie Mellon University, and was a postdoctoral research scientist at Columbia University.



  • Proof-of-Work without all the Work


    Speaker:

    Diksha Gupta

    Date:2019-01-04
    Time:12:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    The last decade has seen explosive growth in systems that are permissionless, in that participants are free to join and leave at will. All such systems are open to the well-known Sybil attack, in which an adversary uses a large number of forged IDs to take control of the system. One of the most popular tools for Sybil defense is proof-of-work (PoW): IDs must periodically solve computational puzzles, thereby limiting the number of forged IDs.

    Unfortunately, current PoW-based Sybil defenses suffer from a key weakness: the computational effort expended by the good IDs in solving puzzles must at least be equal to the computational effort of an attacker. We present the first algorithm to address this problem. Our algorithm is a Sybil defense that is asymmetric in the following sense: the good IDs spend at a rate that is asymptotically less than the attacker. In particular, if T is the rate of the attacker's spending, and J is the rate of joining good participants, then our algorithm spends at a rate O(J + √(JT) ).
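
    As a quick illustration of the asymmetry, the sketch below plugs sample numbers into the stated rates. The constant factor is a placeholder chosen for illustration, not taken from the paper:

```python
import math

def symmetric_rate(J, T):
    # Classical PoW defenses: good IDs must roughly match attacker spending.
    return J + T

def asymmetric_rate(J, T, c=1.0):
    # The talk's bound, O(J + sqrt(J*T)); c is a hypothetical constant.
    return c * (J + math.sqrt(J * T))

# As the attacker ramps up (T >> J), the asymmetric defense spends only on
# the order of sqrt(J*T) -- asymptotically less than the attacker's rate T.
J = 10
for T in (10, 1_000, 100_000):
    print(T, symmetric_rate(J, T), asymmetric_rate(J, T))
```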


    Bio:

    Diksha is currently pursuing a PhD at the University of New Mexico under the supervision of Prof. Jared Saia. Her research is in the area of Distributed Computing, with a primary focus on developing efficient algorithms to counter attacks on distributed systems. She is currently working on developing Proof-of-Work schemes where the amount of work required grows slowly as a function of the amount of work done by an attacker.



  • Regular abstractions with applications to Infinite state verification


    Speaker:

    Dr. Prakash Saivasan, TU Braunschweig

    Date:2019-01-03
    Time:12:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    Recursive programs (programs with the ability to make procedure calls), even over finite data domains, are infinite state due to the unboundedness of the call stack. While sequential recursive programs can be modelled and verified via pushdown systems, verification in the presence of concurrency (programs with the ability to communicate), which is undecidable in general, remains an interesting and challenging problem. The focus of my research so far has been to address this problem via different techniques: under-approximations, accelerations, and regular abstractions. In this talk I will present one of our results on regular abstractions.

    A regular abstraction is the approximation of an infinite state system as a finite automaton. For instance, one may approximate the behaviours (as a language) of a recursive program/pushdown system by its downward closure (i.e. the collection of all subwords of words in the language) and this is always a regular language. One may also disregard the order of letters in the words and consider the Parikh-image of the language. Again for recursive programs/ pushdown systems, this is representable by a finite state automaton.
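
    Both abstractions are easy to state on concrete words. The snippet below is a minimal illustration (not from the talk): the Parikh image as letter multiplicities, and scattered-subword membership, the relation underlying the downward closure:

```python
from collections import Counter

def parikh_image(word):
    # The Parikh image forgets letter order, keeping only multiplicities.
    return Counter(word)

def is_subword(u, w):
    # u is a (scattered) subword of w iff u is obtained by deleting letters
    # of w; the downward closure of a language collects all such u.
    it = iter(w)
    return all(ch in it for ch in u)

# Example on a word of the pushdown-style language a^n b^n:
assert parikh_image("aabb") == Counter({"a": 2, "b": 2})
assert is_subword("ab", "aabb")      # "ab" is in the downward closure
assert not is_subword("ba", "aab")   # order of remaining letters matters
```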

    I will explain the main ideas behind our results on computing regular abstractions for automata equipped with a counter. While such representations for pushdown systems involve an exponential blowup, we will see that the situation is significantly better for counter systems: it is polynomial for both upward and downward closures and quasi-polynomial for the Parikh-image abstraction.

    Time permitting, I will then show how to use the above result to carry out verification of quantitative properties for procedural programs. Our Quantitative logic provides the ability to express arithmetic constraints over the execution times of procedure invocations. In this logic one may express properties such as “within the execution of each invocation of a procedure P, the time spent in executing invocations of procedures Q and R is less than 15%”. I will also explain a second application of our result: in deciding the control state reachability problem for an under-approximation (bounded-stage runs) of concurrent recursive programs communicating via shared memory.




2018 talks

  • Making the Invisible, Visible


    Speaker:

    Prof. Chandrajit Bajaj, U Texas at Austin

    Date:2018-12-17
    Time:12:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    RGB multispectral video spanning the visible electromagnetic spectrum (400 nm - 700 nm) is now commonplace with the advent of color video cameras. Near-, mid-, and far-infrared (IR) hyperspectral imaging, which captures a portion of the spectrum invisible to humans (700 nm - 1 mm), has additionally proven indispensable in many scenarios such as night vision, video surveillance, terrain sensing via satellites, and early detection of cancer. Both multispectral and hyperspectral imagery are now seeing use for navigation and control in automobiles, transport vehicles, and even drones. Such spectral imagery enables the simultaneous prediction of the visible and invisible geometry and material properties of complex scenes. This talk shall dwell on the successes and current challenges of state-of-the-art machine learning algorithms for spectral de-noising, data fusion, latent feature discovery, and super-resolution.


    Bio:

    Prof. Chandrajit Bajaj is a Distinguished Alumnus of IIT Delhi. He is a Professor in the Department of Computer Science and the Institute for Computational Engineering and Sciences, Center for Computational Visualization, at the University of Texas at Austin. More information about Chandrajit may be found at http://www.cs.utexas.edu/~bajaj



  • Detection of Electricity Access and Reliability from Space


    Speaker:

    Brian Min, University of Michigan

    Date:2018-12-12
    Time:11:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    Satellite images of the earth at night provide unique insights into patterns of electricity access and reliability. I present new results using daily nighttime lights data from the DMSP-OLS and VIIRS satellite programs spanning 1993 to 2017. Leveraging this data platform against recent data products identifying all building footprints within a country, I introduce new High Resolution Electricity Access (HREA) indicators tracking access and reliability over time, down to the settlement level. The data provide new insights into how governments and their agents shape access to electricity, a basic public service. The HREA indicators are strongly correlated with existing measures of access, while offering insights at higher spatial resolution, frequency, and scale than previously possible.


    Bio:

    Brian Min is Associate Professor of Political Science and Research Associate Professor at the Institute for Social Research at the University of Michigan. He is the author of Power and the Vote: Elections and Electricity in the Developing World (Cambridge University Press, 2015). His research uses high resolution satellite imagery to study the politics of rural electrification across the developing world. He has collaborated closely with the World Bank to develop new methods using remote sensing and statistical algorithms to plan and monitor electrification projects in settings including India, Ghana, Senegal, Mali, Kenya, Pakistan, and Vietnam. His current research focuses on the political targeting of power outages using high frequency satellite data. He holds degrees from Cornell, Harvard Kennedy School, and UCLA.




  • Sublinear Algorithms for (Delta +1)-Coloring


    Speaker:

    Prof. Sanjeev Khanna (Univ. of Pennsylvania)

    Date:2018-12-10
    Time:12:00:00 (IST)
    Venue:Bharti Building #425
    Abstract:

    Any graph with maximum degree Delta admits a proper vertex coloring with (Delta + 1) colors that can be found via a simple sequential greedy algorithm in linear time and space. But can one find such a coloring via a sublinear algorithm? In this talk, we present new results that answer this question in the affirmative for several well-studied models of sublinear computation including graph streaming, sublinear time, and massively parallel computation (MPC). At the core of our results is a remarkably simple meta-algorithm for the (Delta + 1) coloring problem: Sample O(log n) colors for each vertex uniformly at random from the (Delta + 1) available colors and then find a proper coloring of the graph using the sampled colors. Our main technical result is that the sampled set of colors with high probability contains a proper coloring of the input graph. We then show that this result naturally leads to essentially optimal sublinear algorithms for the above-mentioned models of computation.
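
    The meta-algorithm can be sketched in a few lines. The toy below samples a small list of colors per vertex and then searches for a proper coloring within the sampled lists. It runs on a path, where lists of size 2 provably always suffice; the talk's theorem is what gives the with-high-probability guarantee for general graphs with lists of size O(log n):

```python
import random

def sample_palettes(n, delta, k, rng):
    # Step 1: each vertex samples k colors uniformly from {0, ..., delta}.
    return [rng.sample(range(delta + 1), k) for _ in range(n)]

def list_color(adj, lists):
    # Step 2: find a proper coloring in which each vertex uses only its own
    # sampled list (backtracking search over the lists).
    n, color = len(adj), [None] * len(adj)
    def go(v):
        if v == n:
            return True
        for c in lists[v]:
            if all(color[u] != c for u in adj[v]):
                color[v] = c
                if go(v + 1):
                    return True
                color[v] = None
        return False
    return color if go(0) else None

# Toy run on a path with n = 8 vertices (Delta = 2, palette of 3 colors).
rng = random.Random(0)
n = 8
adj = [[j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)]
lists = sample_palettes(n, 2, 2, rng)
coloring = list_color(adj, lists)
```

    On a path, each vertex seen left to right has at most one already-colored neighbor, which blocks at most one of its two sampled colors, so the search always succeeds here regardless of the random seed.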

    This is joint work with Sepehr Assadi and Yu Chen.


    Bio:

    Sanjeev Khanna is a Professor at the University of Pennsylvania.



  • Software and cloud security


    Speaker:

    Jethro G. Beekman, Fortanix

    Date:2018-12-05
    Time:10:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    "It's impossible to make absolutely secure systems" and "assume you're already hacked" are common phrases you'll hear from InfoSec professionals these days. While there is some truth to these statements, the actual truth is that most software in use today is laughably insecure. In this talk I'll explore how you to combine the age-old techniques of privilege separation, capabilities and cryptography with new developments in secure hardware (e.g. Intel SGX) and static analysis and programming languages (e.g. Rust) to write actually secure software.


    Bio:

    Jethro G. Beekman is working on next-generation cloud computing security at Fortanix. Jethro received his M.S. and Ph.D. degrees in Electrical Engineering and Computer Sciences from the University of California at Berkeley in 2014 and 2016, respectively. Before that, he received his B.Sc. degree in Electrical Engineering from the University of Twente, The Netherlands, in 2011. His current research interests include cloud security, secure enclaves, side-channel countermeasures, as well as network and hardware security.



  • From Reachability in Markov chains to termination on linear loop programs (via number theory)


    Speaker:

    S. Akshay, IIT Bombay

    Date:2018-11-28
    Time:12:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    In this talk, I will consider a problem on linear recurrence sequences that has widespread impact on many problems, from easy-sounding probabilistic reasoning questions to hard program termination problems. After building these connections, I will present some recent results on the complexity-theoretic hardness of the problem, as well as links to safety and verification problems for Markov decision processes. If time permits, we will also talk about other developments in this area.


    Bio:

    S. Akshay is a faculty member in the Department of Computer Science at IIT Bombay. He works in the broad area of formal methods with a focus on model checking, concurrency theory, timed systems and probabilistic computation. He is visiting IIT Delhi this semester, and will be here till the end of this month.



  • Quantum Computing and IBM Q: An Introduction


    Speaker:

    Dr. Moitreyee Mukherjee-Roy
    IBM Quantum Ambassador (IBM Corporate Head Quarters)

    Date:2018-11-15
    Time:09:30:00 (IST)
    Venue:SIT Seminar Room
    Abstract:

    Though early in its development, quantum computing is now available on real hardware via the cloud through IBM Q. This radically new kind of computing holds open the possibility of solving some problems that are now, and perhaps always will be, intractable for "classical" computers.

    In this talk we'll discuss the motivation for quantum computing and the types of problems to which it might be applied. We'll describe the basics of the technology and show where we are on the timeline toward reaching quantum advantage: the point where quantum computing shows demonstrable and significant advantage over classical computers and algorithms.

    We'll continue by describing the no-charge IBM Q Experience, where more than 90,000 people have used IBM's offerings to learn and experiment with quantum computing. Finally we'll discuss our program, the IBM Q Network, and give a glimpse of the new computation centers to come online over the next few years.


    Bio:

    Moitreyee Mukherjee-Roy has 22 years of experience in the semiconductor and microelectronics industry, including work in countries across the globe in a variety of technical and management positions. Her experience ranges from semiconductor R&D, microprocessor design and management, and research lab management to competitive analysis and IP monetization. She has led several large projects, JDs, and international research consortia; established new technical competencies from the ground up; and built teams to deliver on key responsibilities.
    She has served on the Technical Committees of SPIE Microlithography and ECS, held an adjunct professorship at Nanyang Technological University in Singapore, and authored/co-authored more than 40 journal and conference publications. She holds 11 US patents and several pending applications. Her academic qualifications include a Master's in Materials Science and Engineering and a Ph.D. in Electrical Engineering.



  • Algorithms for Machine Learning, Inspired from Physics


    Speaker:

    Prof. Nisheeth Vishnoi (EPFL)

    Date:2018-11-05
    Time:12:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    In understanding physical systems over hundreds of years, Physicists have developed a wealth of dynamics and viewpoints. Some of these methods, when abstracted appropriately, could lead to new algorithmic techniques with applications to machine learning and TCS. I will present a couple of recent examples from my own research on such interactions between Physics and Algorithms -- a Hamiltonian Dynamics inspired algorithm for sampling from continuous distributions and a Boltzmann's equation based algorithm for estimating the partition function for discrete distributions.


    Bio:

    Nisheeth Vishnoi is currently a professor in the School of Computer and Communication Sciences at Ecole Polytechnique Federale de Lausanne where he heads the theory of computation lab. Starting January 2019, he will be a professor of Computer Science at Yale. He is also an associate of the International Center for Theoretical Sciences, Bangalore, an adjunct faculty member of IIT Delhi and IIT Kanpur, and a co-founder of the Computation, Nature, and Society ThinkTank in Lausanne. His research focuses on foundational problems in algorithms, optimization, and statistics, and how tools from these areas can be used to address computational questions in society and other sciences. Topics from these areas that he is currently interested in include algorithmic bias and the emergence of intelligence. He is the recipient of the Best Paper Award at FOCS 2005, the IBM Research Pat Goldberg Memorial Award for 2006, the Indian National Science Academy Young Scientist Award for 2011 and the IIT Bombay Young Alumni Achievers Award for 2016.



  • Fairness in Machine Learning for Social Systems


    Speaker:

    Dr. Elisa Celis (EPFL)

    Date:2018-11-02
    Time:14:30:00 (IST)
    Venue:SIT Building #001
    Abstract:

    Social systems are now fueled by machine learning algorithms that facilitate and control connections and information. Simultaneously, computational systems are now fueled by people -- their interactions, data, and behavior. Consequently, there is a pressing need to design new algorithms that are socially responsible in how they learn, and socially optimal in the manner in which they use information. Recently, we have made initial progress in addressing such problems at this interface of social and computational systems. In this talk, we will first understand the emergence of bias in data and algorithmic decision making and present first steps towards developing a systematic framework to control biases in classical problems such as data summarization and personalization. This work leads to new algorithms that have the ability to alleviate bias and increase diversity while often simultaneously maintaining their theoretical or empirical performance with respect to the original metrics.


    Bio:

    Elisa Celis is a Senior Research Scientist at the School of Computer and Communication Sciences at EPFL. Starting January 2019, she will be an Assistant Professor in the Department of Statistics and Data Science at Yale University. Previously, she worked as a Research Scientist at Xerox Research where she was the worldwide head of the Crowdsourcing and Human Computation research thrust. She received a B.Sci. degree in Computer Science and Mathematics from Harvey Mudd College and a Ph.D. in Computer Science from the University of Washington. Her research focuses on studying social and economic questions that arise in the context of the Internet and her work spans multiple areas including fairness in AI/ML, social computing, online learning, network science, and mechanism design.



  • Canopus: Scalable Consensus for Permissioned Blockchains


    Speaker:

    Srinivasan Keshav

    Date:2018-11-02
    Time:12:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    A critical problem with the consensus protocols underlying blockchains is that they do not scale well. As the number of participants trying to achieve consensus increases, increasing network traffic from topology-oblivious broadcasts can quickly overwhelm the network, or, when using centralized consensus protocols, overwhelm the central coordinator. Thus, achieving strong consensus is typically restricted to a handful of participants, or systems must resort to weaker forms of consensus, such as those using proof of work. To address this problem, we propose Canopus, a highly-parallel consensus protocol that exploits modern data center network topology, parallelism, and consensus semantics to achieve scalability. Our key insight is to make network communication patterns topology-aware. In our prototype implementation, Canopus achieves rates as high as 5m linearizable transactions/second over 27 nodes distributed across 7 datacenters. I will also discuss a recent extension, Resilient Canopus, that makes Canopus Byzantine Fault Tolerant.

    (Joint work with Sajjad Rizvi and Bernard Wong at the University of Waterloo)


    Bio:

    Srinivasan Keshav is a Professor of Computer Science at the University of Waterloo. He received a B.Tech in Computer Science and Engineering from IIT Delhi in 1986 and Ph.D. in Computer Science from the University of California, Berkeley in 1991. He was subsequently an MTS at AT&T Bell Labs and an Associate Professor at Cornell. In 1999 he left academia to co-found Ensim Corporation and GreenBorder Technologies Inc. He has been at the University of Waterloo since 2003, holding first a Canada Research Chair and then the Cisco Chair in Smart Grid. His honours include being named ACM Fellow and IEEE Senior Member, two Test of Time awards from ACM SIGCOMM, and best paper awards at ACM SIGCOMM, MOBICOM, and eEnergy. He is the author of two graduate textbooks on computer networking and has served both as Chair of ACM SIGCOMM and as Associate Dean of Graduate Studies in the Faculty of Mathematics, University of Waterloo. 



  • Hyper Partial Order Logic: a local logic for Hyperproperties


    Speaker:

    Prof.  Loic Helouet, INRIA Rennes

    Date:2018-10-29
    Time:16:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    Safety of distributed systems often depends on the capacity of monitors to accurately detect failures and abnormal behaviours, and to take corrective measures to avoid a major fault of the system. Similarly, one can consider a system insecure if an intruder can infer sensitive information from what he can observe or do. A challenge in this context is to define logical frameworks that characterize systems which are secure, or which can be accurately monitored to ensure their safety. However, this question cannot be addressed as a standard model checking question. Indeed, logics such as LTL and CTL consider properties of a single run at a time, or properties of execution trees of systems. Our safety/security question requires addressing hyperproperties, i.e. properties of sets of runs.

    On the other hand, when reasoning about distributed systems, it seems reasonable to consider concurrency and causal dependencies among events in a run. This can be achieved with local logics, that consider runs as partial orders, and allow formal reasoning on these orders. Logics for hyperproperties have already been proposed, but all of them use interleaved representations of systems behaviors.

    In this talk, we define HyPOL, a local hyper logic for partial order models, expressing properties of sets of runs. These properties depict causal dependencies and concurrency in sets of executions, and equivalences among runs. We will show that satisfiability of HyPOL is undecidable in general. We will then address the question of model checking of HyPOL and show that, already for safe Petri nets, the problem is undecidable. Fortunately, sensible restrictions on observations and nets bring model checking of HyPOL back to a decidable setting.


    Bio:

    Prof. Loic Helouet, INRIA Rennes



  • Parse Condition: Symbolic Encoding of LL(1) Parsing


    Speaker:

    Dr. Subhajit, CSE Department, IIT Kanpur

    Date:2018-10-22
    Time:15:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    We propose the notion of a Parse Condition---a logical condition that is satisfiable if and only if a given string w can be successfully parsed using a grammar G. Further, we propose an algorithm for building an SMT encoding of such a parse condition for LL(1) grammars and demonstrate its utility by building two applications over it: automated repair of syntax errors in Tiger programs and automated synthesis of LL(1) parsers. We implement our ideas in a tool, Cyclops, that is able to successfully repair 80% of our benchmarks (675 buggy Tiger programs), clocking an average of 30 seconds per repair, and to synthesize parsers for interesting languages from examples. Just as verification conditions (which encode a program in logic) have found widespread application in program analysis, we believe that Parse Conditions can serve as a foundation for interesting applications in syntax analysis.
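    A Parse Condition encodes, in logic, the run of an LL(1) parser on a string. As a concrete (non-symbolic) point of reference, the following is a minimal table-driven LL(1) membership check in Python; the grammar and all names are illustrative and not taken from Cyclops or the paper:

```python
# Minimal table-driven LL(1) membership check. A Parse Condition encodes the
# run of this stack machine symbolically in SMT; here we execute it
# concretely. Illustrative grammar: S -> ( S ) S | epsilon (balanced parens).

TABLE = {                       # (nonterminal, lookahead) -> production body
    ("S", "("): ["(", "S", ")", "S"],
    ("S", ")"): [],             # epsilon
    ("S", "$"): [],             # epsilon
}

def ll1_parses(w):
    """Return True iff w is derivable from S under the LL(1) table above."""
    tokens = list(w) + ["$"]    # input with end-marker
    stack = ["$", "S"]          # parse stack, start symbol on top
    i = 0
    while stack:
        top = stack.pop()
        if top == "S":                        # nonterminal: consult the table
            body = TABLE.get((top, tokens[i]))
            if body is None:                  # no applicable production
                return False
            stack.extend(reversed(body))      # push body right-to-left
        else:                                 # terminal/end-marker: must match
            if top != tokens[i]:
                return False
            i += 1
    return i == len(tokens)
```

    The symbolic encoding replaces this concrete loop with constraints over an unknown string, so a solver can, for example, find a minimal edit that makes a buggy program parse.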


    Bio:

    Subhajit is a faculty member in the CSE department at IIT Kanpur with broad research interests in program verification and synthesis.



  • Data- and/or Compute- intensive workloads: Is NoC the solution?


    Speaker:

    Karthi Duraisamy, Synopsys

    Date:2018-10-12
    Time:11:30:00 (IST)
    Venue:SIT Building #001
    Abstract:

    With an unprecedented growth in computing technology, the twentieth century has paved the way for an information revolution. More than the high volumes of data that are stored in and retrieved from the cloud, the analysis that is happening around this data is the major contributor to the computational challenges associated with Big Data. Hence, achieving fast and energy-efficient Big Data analytics is vital to enable modern discoveries across numerous fields, including medicinal research, genetic analysis, financial market analysis and astrophysical research. Traditional Big Data analytics employ server farms, where racks of servers are interconnected through multiple layers of electrical switches. This traditional approach has serious limitations in terms of computation scalability, network reliability and power consumption.

    In an era when power constraints and data movement are proving to be significant barriers for high-end computing, manycore architectures offer a low power and highly scalable platform suitable for both data- and compute-intensive applications. Despite the promise and the recent advances in silicon integration technology, designing manycore platforms remains a highly challenging task. Tackling memory and power walls, establishing low-latency global interconnects and handling thermal hotspots are a few of the challenges involved in crafting high-performing manycore platforms. Towards addressing these challenges, this seminar is focused on designing latency- and energy-efficient Network-on-Chip (NoC) architectures for emerging applications. The contents of the seminar include:

    1. An extensive discussion on the on-chip traffic patterns induced by today's manycore applications
    2. Limitations of today's industry standard NoCs
    3. Designing scalable network topologies and developing suitable data transfer algorithms for NoCs
    4. Optimizing the power consumption of the NoCs


    Bio:

    Karthi Duraisamy is currently a senior Research and Development (R&D) Engineer at Synopsys Inc., California. At Synopsys, his R&D work is focused on the synthesis of digital circuits. Karthi graduated from Washington State University in December 2017 with a PhD degree in Computer Engineering. His PhD dissertation, titled "Collective Communication-Aware High-Performance Network-On-Chip Architectures for Big Data Processing", discusses the challenges associated with designing ultra-low latency on-chip networks and proposes efficient solutions. In May 2013, Karthi received his M.Tech in Electronics and Electrical Communication Engineering from IIT Kharagpur. He received his bachelor's in Electronics and Communication Engineering from College of Engineering Guindy in 2010. His research interests include the design of scalable high-performance manycore platforms and hardware accelerators to address the computational challenges of emerging applications, leveraging passive optical components to design high-bandwidth, low-power data center interconnects, and developing machine-learning-assisted methodologies for the synthesis of digital circuits.



  • Ballerina: A Modern Programming Language Focused on Integration


    Speaker:

    Dr Sanjiva Weerawarana

    Date:2018-10-09
    Time:12:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    Ballerina is a concurrent, transactional, statically typed programming language. It is a general purpose programming language that has been designed to support integration. It combines fundamental concepts and tools of distributed systems with direct support for network services, distributed transactions, reliable messaging, stream processing, security and workflows. It is intended for commercial adoption and provides familiarity for users of Java, C# and JavaScript. This talk will discuss the core principles behind Ballerina including the semantics of combining aspects of networking, security, transactions, concurrency and events within a single architecture.


    Bio:

    Sanjiva Weerawarana is the Founder, Chairman and Chief Architect of WSO2, where he leads the design, architecture and development of Ballerina. After starting WSO2 in 2005 and serving as its CEO for 12 years, Sanjiva led the creation of a complete set of middleware products before deciding to throw them all away and start again with a programming language approach. Prior to starting WSO2, he was at IBM Research, where he led the development of Web services standards and technologies. He is a long-time open-source developer and advocate, a Member of the Apache Software Foundation, an Emeritus Board Member of the Open Source Initiative, and Founder and Chief Scientist of the Lanka Software Foundation. He also volunteers in the Sri Lanka Army, where he serves as the IT advisor. He received a Ph.D. in Computer Science from Purdue University in 1994.



  • Biometric authentication using template protection schemes


    Speaker:

    Dr. Chang

    Date:2018-09-27
    Time:15:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    The widespread use of biometrics for authentication purposes emphasises the need for securing the biometric template that represents biometric data. Biometric template protection (BTP) schemes transform the original biometric templates into the protected templates that do not reveal any information about the original biometric templates. The popular BTP approaches are biocryptosystems and cancellable biometrics. Biocryptosystems further include fuzzy commitment and fuzzy vault. In this talk, he will discuss each of these approaches along with the advantages and limitations. In the end, he will share the scope of future research topics.
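    To make the fuzzy-commitment idea concrete, the following is a toy sketch in Python: a secret key is bound to a noisy binary template so that the stored helper data reveals neither, while a fresh reading close enough in Hamming distance recovers the key. A 3x repetition code stands in for a real error-correcting code; all parameters and names are illustrative, not from the talk:

```python
# Toy fuzzy commitment (one of the biocryptosystem approaches). Stored data:
# helper = ECC(key) XOR template, plus a hash commitment to the key. Small
# template noise is absorbed by the code's error correction.
import hashlib

def encode(bits):                 # repetition-3 encoder (stand-in for real ECC)
    return [b for b in bits for _ in range(3)]

def decode(bits):                 # majority-vote decoder
    return [1 if sum(bits[i:i + 3]) >= 2 else 0 for i in range(0, len(bits), 3)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def enroll(template, key):
    """Bind key to the enrolled template; both outputs are safe to store."""
    helper = xor(encode(key), template)
    commit = hashlib.sha256(bytes(key)).hexdigest()
    return helper, commit

def verify(template2, helper, commit):
    """Recover the key from a fresh (noisy) reading, or None on failure."""
    key2 = decode(xor(helper, template2))     # ECC corrects small noise
    ok = hashlib.sha256(bytes(key2)).hexdigest() == commit
    return key2 if ok else None
```

    With at most one flipped bit per 3-bit block the key is recovered; larger noise makes the hash check fail, so nothing about the key leaks from a wrong guess.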


    Bio:

    Dr. Chang completed his bachelor's (Mathematics), master's (Information Security), and Ph.D. (Information Management and Security) degrees at Korea University in 2001, 2003, and 2008, respectively. He was a postdoc in the CS department of Columbia University in 2008-2009 and a guest researcher in the computer security division of NIST, USA, in 2009-2012. He joined IIIT Delhi in June 2012 and is now a tenured associate professor in its CS department. His areas of expertise are the design, analysis and implementation of (1) cryptographic techniques, (2) secure biometric-based systems, and (3) blockchain-based systems.



  • CrystalBall: Gazing in the Black Box of SAT Solving


    Speaker:

    Kuldeep Meel (National University of Singapore)

    Date:2018-09-26
    Time:12:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    Boolean satisfiability is a fundamental problem in computer science with a wide range of applications including planning, configuration management, and the design and verification of software and hardware systems. The annual SAT competition continues to witness impressive improvements in the performance of the winning SAT solvers, largely thanks to the development of new heuristics arising out of intensive collaborative research in the SAT community. Modern SAT solvers achieve scalability and robustness with complex heuristics that are challenging to understand and explain. Consequently, the development of new algorithmic insights has been largely restricted to "expert intuitions", and evaluation of new insights has been restricted to performance measurement in terms of the runtime of solvers or a proxy for it. In this context, one may ask: is it possible to develop a framework that provides white-box access to the execution of a SAT solver and can aid the end user (i.e., a SAT solver developer) in synthesizing algorithmic heuristics for modern SAT solvers?

    The primary focus of our project is precisely such a framework, which we call CrystalBall. More precisely, we propose to view modern CDCL solvers as a composition of prediction engines for different tasks such as branching, clause memory management, and restarts. In this talk, I will introduce CrystalBall and some promising preliminary results that we have obtained.

    (Joint work with Mate Soos and Raghav Kulkarni)


    Bio:

    Kuldeep Meel is an Assistant Professor of Computer Science in School of Computing at the National University of Singapore where he holds the Sung Kah Kay Assistant Professorship. He received his Ph.D. (2017) and M.S. (2014) degree from Rice University, and B. Tech. (with Honors) degree (2012) in Computer Science and Engineering from Indian Institute of Technology, Bombay. His research interests lie at the intersection of Artificial Intelligence and Formal Methods. Meel has co-presented tutorials at top-tier AI conferences, IJCAI 2018, AAAI 2017, and UAI 2016. His work received the 2018 Ralph Budd Award for Best PhD Thesis in Engineering, 2014 Outstanding Masters Thesis Award from Vienna Center of Logic and Algorithms and Best Student Paper Award at CP 2015. He received the IBM Ph.D. Fellowship and the 2016-17 Lodieska Stockbridge Vaughn Fellowship for his work on constrained sampling and counting.



  • Polystore Systems for Query over Scientific Data-set Archives


    Speaker:

    Prof. Subhash Bhalla

    Date:2018-09-19
    Time:16:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    Scientific data-set archives store records of observations, as in astronomy, planetary sciences and earthquake monitoring. These also include data repositories on terminology and ontology. There are many instances of data archives in healthcare, medical science, biology, geo-sciences and material science. The data in records and repositories tends to have complex forms with images, signals and sensor data. Often the data-sets include time-domain components, and these may have a geographic data component. In most cases, end-users create ad hoc approaches to access the data and perform workflows related to the tasks at hand. Recent studies indicate that the support system for any data-set archive needs to address growth and scalability issues, in addition to supporting end-users with functioning workflow support systems. We describe state-of-the-art polystore database systems for creating meta-data. We also demonstrate an integrated approach to addressing the problems of growth and scalability.


    Bio:

    Subhash Bhalla joined the faculty of Jawaharlal Nehru University (JNU), New Delhi in 1986, at the School of Computer and Systems Sciences. He was a Visiting Scientist at the Sloan School of Management, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, USA (1987-88). He is a member of the Computer Society of IEEE and SIGMOD of ACM. He is with the Department of Computer Software at the University of Aizu. He has also toured and lectured at many industries to conduct feasibility studies and promote the adoption of modern techniques, and has received several grants for research projects. He is exploring database designs to support models for information interchange through the World Wide Web. He is working with a study team on creating user interfaces for web users and a transaction management system for mobile computing. He is studying transaction management and algorithmic designs for Distributed Real-Time Systems, and is also pursuing performance evaluation and modelling of distributed algorithms.



  • Addressing the multi-faceted challenges of image-text related tasks through multimodal solutions


    Speaker:

    Deepanwita Datta (PhD)

    Date:2018-09-18
    Time:12:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    It is not always easy for common users, who lack technical know-how about search systems, to express their information need in a crisp and clear fashion. Consequently, a gap arises between the user's need and the system's interpretation or search results. The advent of multimedia data has added further complexity to the issue. In multimodal information retrieval, one primary issue is to reduce the semantic gap between the high-level information need of users and commonly employed low-level features such as image or audio features; another is how to blend or fuse the data of different modalities. In light of these problems, in this talk I will discuss the effect of query reformulation, in particular query expansion (through different keyphrase extraction techniques), on multimodal information retrieval to bridge the semantic gap. The second part of the talk will cover various other aspects of multimedia search, including automatic image annotation and image classification in a multimodal context, and show how an insightful manifestation of these techniques can aid in solving various problems.


    Bio:

    Deepanwita Datta has been a Ph.D. student in the Department of Computer Science and Engineering at Indian Institute of Technology (BHU), Varanasi since December 2013. The topic of her thesis is "Multimodal Information Retrieval". She completed her M.Tech in Information Technology from National Institute of Technology, Durgapur in 2013. She obtained her Bachelor's degree from the West Bengal University of Technology, Kolkata in Electronics and Communication Engineering in 2009. Her research interests include Information Retrieval, Text Mining, Image Processing, Image Encryption, etc. Between her master's and Ph.D., she worked as a project engineer at Indian Institute of Technology, Kharagpur on a project involving Medical Image Processing.



  • Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks


    Speaker:

    Brendan Dolan-Gavitt

    Date:2018-08-31
    Time:11:30:00 (IST)
    Venue:SIT Building, #006
    Abstract:

    Deep neural networks (DNNs) provide excellent performance across a wide range of classification tasks, but their training requires high computational resources and is often outsourced to third parties. Recent work has shown that outsourced training introduces the risk that a malicious trainer will return a backdoored DNN that behaves normally on most inputs but causes targeted misclassifications or degrades the accuracy of the network when a trigger known only to the attacker is present. In this talk, we provide the first effective defenses against backdoor attacks on DNNs. We implement three backdoor attacks from prior work and use them to investigate two promising defenses, pruning and fine-tuning. We show that neither, by itself, is sufficient to defend against sophisticated attackers. We then evaluate fine-pruning, a combination of pruning and fine-tuning, and show that it successfully weakens or even eliminates the backdoors, i.e., in some cases reducing the attack success rate to 0% with only a 0.4% drop in accuracy for clean (non-triggering) inputs. Our work provides the first step toward defenses against backdoor attacks in deep neural networks.
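    The pruning half of this defense rests on a simple observation: backdoor behaviour tends to hide in neurons that stay dormant on clean inputs. The following is a minimal sketch of that step in plain Python; the layer, activation data, and threshold are illustrative, and real fine-pruning then fine-tunes the pruned network on clean data to recover accuracy:

```python
# Prune neurons that are (near-)inactive on clean validation inputs; such
# dormant neurons are candidate carriers of backdoor behaviour. Purely
# illustrative: one layer, activations given as plain Python lists.

def mean_activations(layer_outputs):
    """layer_outputs: per-input activation vectors for one layer."""
    n, width = len(layer_outputs), len(layer_outputs[0])
    return [sum(out[j] for out in layer_outputs) / n for j in range(width)]

def prune_mask(layer_outputs, threshold=0.01):
    """1 = keep neuron, 0 = prune (dormant on clean data)."""
    return [1 if a > threshold else 0 for a in mean_activations(layer_outputs)]

def apply_mask(activations, mask):
    """Zero out pruned neurons' outputs at inference time."""
    return [a * m for a, m in zip(activations, mask)]
```

    A trigger that routes through a pruned neuron is disabled, since that neuron's output is forced to zero on every input.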


    Bio:

    Brendan Dolan-Gavitt is an Assistant Professor in the Computer Science and Engineering Department at the NYU Tandon School of Engineering. He holds a Ph.D. in computer science from Georgia Tech (2014) and a BA in Math and Computer Science from Wesleyan University (2006). His research interests include software security, reverse engineering, and the security of deep learning.



  • Big data and AI (Artificial Intelligence) in Health: opportunities and risks


    Speaker:

    Professor Dave Parry, Department of Computer Science, AUT University, Auckland, NZ

    Date:2018-08-21
    Time:15:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    -


    Bio:

    Department of Computer Science ,AUT University, Auckland, NZ



  • Putting Networks on a Firm Footing: Revolutionizing Network Management (and More)


    Speaker:

    Aditya Akella, Professor of Computer Sciences and an H. I. Romnes Faculty Fellow at UW-Madison

    Date:2018-08-13
    Time:16:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    Network management plays a central role in keeping networks up and running. An important network management task is configuring a network's "control plane". The control plane governs how the network moves data around, and plays a critical role in imposing important security and compliance policies, such as who can/cannot communicate, which communications to prioritize or isolate, etc. Typical control planes are composed of several distributed routing protocol configurations spread across many hundreds of network devices. Unfortunately, this complexity often leads to configuration bugs that cause large-scale outages, blackholes, and isolation breaches.

    In this talk, I will survey recent advances that are transforming the field of network management from the ground up. These techniques, inspired by formal methods, aim to automate key network management tasks, and systematically improve resilience, and trustworthiness of networks. These techniques automatically verify whether networks satisfy important properties; synthesize networks with provably correct policies realizations; and repair broken networks with minimal or no human involvement. In the limit, these advances lead to "zero touch" networking.

    The impact of this confluence of formal methods and networking is transforming not just management but other big portions of the field of networking into a science. It is leading to new approaches to designing network hardware, topologies, protocols, and software from precise high-level specifications and with strong provable guarantees. I will survey emerging ideas in these directions as well.
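    To give a flavour of the properties such verifiers check, the following is a toy isolation check over a forwarding graph in Python. Real tools reason over routing-protocol configurations and all possible packet headers; the graph and policy below are an illustrative simplification:

```python
# Verify an isolation policy "traffic from src must never reach dst" on a
# network modelled, for one traffic class, as a plain forwarding adjacency
# map. BFS over forwarding edges decides reachability.
from collections import deque

def reachable(fwd, src, dst):
    """True iff packets forwarded from src can arrive at dst."""
    seen, q = {src}, deque([src])
    while q:
        node = q.popleft()
        if node == dst:
            return True
        for nxt in fwd.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                q.append(nxt)
    return False

def check_isolation(fwd, pairs):
    """Return the (src, dst) pairs that violate their isolation policy."""
    return [(s, d) for s, d in pairs if reachable(fwd, s, d)]
```

    The verification techniques in the talk answer the same kind of question, but for every packet header and every configuration-induced forwarding state, with provable coverage.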

     


    Bio:

    Aditya Akella is a Professor of Computer Sciences and an H. I. Romnes Faculty Fellow at UW-Madison. He received his B. Tech. in Computer Science from IIT Madras in 2000, and PhD in Computer Science from CMU in 2005. His research spans computer networks and systems, with a focus on network verification, synthesis, and repair, data center networking, software defined networks, and big data systems. Aditya has published over 100 papers, and has served as the program co-chair for several conferences including NSDI, SIGMETRICS, and HotNets. He is currently the Vice Chair for ACM SIGCOMM.  Aditya co-leads CloudLab (http://cloudlab.us), a large-scale testbed for foundational cloud research. Aditya has received several awards including the "UW-Madison CS Professor of the Year" Award (2017), Vilas Associates Award (2017), IRTF Applied Networking Research Prize (2015), ACM SIGCOMM Rising Star Award (2014), NSF CAREER Award (2008) and several best paper awards.



  • Pricing for Online Resource Allocation


    Speaker:

    Shuchi Chawla, Professor of Computer Sciences at the University of Wisconsin-Madison

    Date:2018-08-13
    Time:15:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    We study an online resource allocation problem where a seller has many items to offer and buyers are interested in buying bundles of items. Buyers have stochastic values and arrive online. Our goal is to design an allocation algorithm that maximizes the total value of the allocation, a.k.a. social welfare. Our algorithm is required to be online in that it makes irrevocable decisions about what to allocate to each buyer and how much to charge them without knowing who will arrive in the future; however, its performance is compared against the optimal allocation in hindsight. We show that in many settings simple pricing mechanisms obtain a near-optimal competitive ratio for this problem.
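    A minimal sketch of the kind of posted-price mechanism analysed here, for a single item type with k identical copies: a fixed price is set up front from the value distribution, buyers arrive online, and each buys iff her value clears the price. The numbers and the price rule are illustrative, not the talk's:

```python
# Posted-price online allocation, single item type with k copies. Decisions
# are irrevocable: each arriving buyer purchases iff value >= price and
# stock remains. Welfare = total value of served buyers, to be compared
# against the hindsight optimum (sum of the k largest values).

def posted_price_welfare(values, k, price):
    """Run the posted-price mechanism on an arrival sequence of values."""
    welfare, sold = 0.0, 0
    for v in values:               # buyers arrive in this (online) order
        if sold < k and v >= price:
            welfare += v
            sold += 1
    return welfare

def hindsight_optimum(values, k):
    """Best possible welfare with full knowledge of arrivals."""
    return float(sum(sorted(values, reverse=True)[:k]))
```

    The competitive-ratio results in the talk show how to pick the price (as a function of the value distribution) so the mechanism's welfare is provably close to the hindsight optimum.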


    Bio:

    Shuchi Chawla is a professor of Computer Sciences at the University of Wisconsin-Madison. She received her Ph.D. from Carnegie Mellon University and her B.Tech. from the Indian Institute of Technology, Delhi. She has held visiting positions at Microsoft Research Silicon Valley, Microsoft Research Redmond, and the University of Washington. Shuchi's work spans approximation and online algorithms, algorithmic economics, stochastic optimization, data privacy, and fairness. She is the recipient of an NSF Career award, a Sloan Foundation fellowship, and UW's Carolyn Rosner Excellence in Teaching award. Shuchi currently serves on the editorial boards of the SIAM Journal on Discrete Mathematics, the ACM Transactions on Algorithms, and the ACM Transactions on Economics and Computation. She also currently serves on the executive committee of the ACM SIGACT.



  • Summary of Research Activities at Oracle Labs, Brisbane


    Speaker:

    Paddy Krishnan 

    Date:2018-08-09
    Time:12:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    In this presentation, I will give an overview of our recent research activities including scalable and precise analysis for detecting vulnerabilities in web applications, an approach to security mutation testing for libraries and unsuccessful explorations into the use of machine learning techniques for bug detection.

    I can go into depth into some of these topics, depending on interest.


    Bio:

    Paddy Krishnan has been a Consulting Member of Technical Staff at Oracle Labs in Brisbane since February 2013. His current research interests are in the areas of program analysis, automatic test generation and software security. Prior to joining Oracle Labs he was an academic for over 20 years, with some industrial research experience. He is a Senior Member of both the ACM and the IEEE.



  • Ultra Low Power Computing in the IoT Era...(and a brief discussion of waferscale processors and self-driving cars)


    Speaker:

    Rakesh Kumar, University of Illinois at Urbana-Champaign

    Date:2018-06-19
    Time:11:30:00 (IST)
    Venue:SIT Building, # 113
    Abstract:

    Wearables, sensors, and the Internet of Things (IoT) arguably represent the next frontier of computing. They will be characterized by extremely low power and area requirements. In our recent research, we asked the question: are there opportunities for power and area reduction that are unique to these emerging computing platforms? We answered the question in the affirmative and developed several techniques that appear to be very effective. In this talk, I will focus on one such technique--symbolic hardware-software co-analysis--that is applicable over a wide class of applications. Through a novel symbolic execution-based approach, we can determine, for a given application, the gates in the hardware that the application is guaranteed to not touch. This information can then be used to determine application-specific Vmin, determine application-specific peak power, and build bespoke processors customized to a given application. If time permits, I will also discuss our efforts at building waferscale processors and data management for self-driving cars.
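    As a miniature stand-in for the co-analysis idea, the sketch below brute-forces a tiny combinational circuit over every input the "application" can produce and reports gates whose outputs never change; those are safe to power-gate or drop in a bespoke design. The real technique uses symbolic execution to get the same guarantee without enumeration, and the circuit and input constraint here are purely illustrative:

```python
# Find gates a given application is guaranteed never to toggle. We enumerate
# the application's possible inputs (symbolic execution avoids this blow-up
# in the real technique) and record each gate's observed output values.
from itertools import product

# gate name -> (boolean function, names of its input wires)
CIRCUIT = {
    "g_and": (lambda a, b: a & b, ("x", "y")),
    "g_or":  (lambda a, b: a | b, ("x", "y")),
    "g_sel": (lambda a, b: a & b, ("y", "z")),
}

def evaluate(inputs):
    """Output of every gate for one assignment of input wires."""
    return {g: fn(*(inputs[w] for w in wires))
            for g, (fn, wires) in CIRCUIT.items()}

def untouched_gates(app_inputs):
    """Gates whose output is constant over every input the app generates."""
    seen = {}
    for inp in app_inputs:
        for g, v in evaluate(inp).items():
            seen.setdefault(g, set()).add(v)
    return sorted(g for g, vals in seen.items() if len(vals) == 1)

# Suppose analysis shows this application always drives wire z to 0:
app_inputs = [dict(zip("xyz", bits)) for bits in product([0, 1], [0, 1], [0])]
```

    Here g_sel never toggles because one of its inputs is pinned to 0 by the application, so it could be excluded when computing Vmin or peak power for this workload.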


    Bio:

    Rakesh Kumar is an Associate Professor in the Electrical and Computer Engineering Department at the University of Illinois at Urbana-Champaign. He has made contributions in the areas of processor design and memory system design that have directly impacted industry and the state of the art. His current research interests are in computer architecture, low power and error resilient computer systems, and approximate computing. He has a B.Tech from IIT Kharagpur and a PhD from the University of California at San Diego. He is often seen at a restaurant or hanging out with his very active four-year-old.



  • Intelligent Environments using IoT Systems


    Speaker:

    Dr. Deepak Vasisht, MIT

    Date:2018-05-14
    Time:16:00:00 (IST)
    Venue:404, Bharti Building
    Abstract:

    Recent years have seen a great push for putting more and more devices on the Internet, leading to over 20 billion connected devices today. The push has been motivated by the need to create intelligent environments: homes that can react to the needs of the occupants, cities that can modulate traffic on the go, and industries that can optimize processes to maximize output and reduce waste. However, in spite of the massive boost in connected devices, three key barriers prevent the creation of smart environments today: power, communication and the need to deploy extensive infrastructure. IoT devices may be placed in inaccessible locations where both power and communication resources are unavailable. Furthermore, the proliferation of IoT devices creates additional challenges for managing the limited spectrum available for communication. Finally, the adoption of smart systems is inversely correlated with the need for extensive infrastructure deployment. In this talk, I will present two examples of my research that seek to address these challenges. The first example provides a positioning framework for smart homes and small businesses that enables decimeter-level positioning of connected devices without the need to deploy any additional infrastructure or perform war-driving (one-time fingerprinting of the environment). I will discuss how this framework enables applications in smart homes, robotics, and geofencing. The second example tackles these challenges in the industrial context to build an intelligent agriculture system, FarmBeats, that can provide actionable outputs to a farmer in spite of the lack of conventional power and communication resources on the farm. FarmBeats significantly reduces the need for dense sensor deployment on the farm by leveraging techniques from machine learning and computer vision. FarmBeats is currently being piloted across farms in ten countries and was recently highlighted by Satya Nadella as one of the ten inventions that inspired him in 2017.


    Bio:

    Deepak Vasisht is a PhD candidate in Electrical Engineering and Computer Science at MIT. His research is focused on building intelligent environments using internet-of-things (IoT) systems. He has designed, built and deployed IoT systems that deliver ubiquitous sensing, accurate indoor positioning, enhanced communication capabilities and new human computer interfaces. His research has been featured in the Economist, IEEE Spectrum, BBC, Daily Mail and CBC among others. He is the recipient of the ACM SIGCOMM Best Paper award, the Microsoft Research PhD Fellowship and the President's Gold Medal at IIT Delhi.



  • Dynamics of Information Diffusion on Online Social Networks


    Speaker:

    Yayati Gupta

    Date:2018-05-08
    Time:15:30:00 (IST)
    Venue:404, Bharti Building
    Abstract:

    Understanding and modelling diffusion on social networks is a widely studied research problem. This problem has been addressed by epidemiologists, economists, biologists, physicists, computer scientists as well as medical doctors. Diffusion models have evolved from simple epidemiological models to complex models based on network structure and the voluminous data available online. This talk focuses on information diffusion on Online Social Networks. The two major research works addressed by the talk are (i) better modelling of information diffusion in Online Social Networks (OSNs) and (ii) developing efficient algorithms for accelerating virality in OSNs. While the literature in this direction has appreciated the role of influential (core) nodes in making an Internet meme viral, the talk highlights the potential of some non-influential (pseudo-core) nodes for inducing virality in a meme. The last component of the talk establishes an analogy between a new Social Networking Site (SNS) on the web and an Internet meme, in terms of their diffusion over the online population. Despite certain analogies, it is seen that the competition between two SNSs is fiercer than that between two memes. Further, we address the popular debate on “Will Facebook ever be overthrown?” with the help of a game-theoretical model. The talk opens up new directions for exploration of diffusion in the online world. We believe that this line of research should continue to prosper with rich results. The work presented through the talk addresses just a few pieces of a big jigsaw puzzle.
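    A common baseline behind this line of work is the independent-cascade model, sketched below: seed nodes start "infected", and each newly infected node gets one chance to infect each neighbour with probability p. The graph and parameters are illustrative; the talk's models are considerably richer (core vs. pseudo-core nodes, competing SNSs, etc.):

```python
# Independent-cascade diffusion on a directed graph given as an adjacency
# map. Each newly activated node gets exactly one chance to activate each
# out-neighbour, succeeding independently with probability p.
import random

def independent_cascade(graph, seeds, p, seed=0):
    """Return the final set of activated nodes."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for node in frontier:
            for nb in graph.get(node, []):
                if nb not in active and rng.random() < p:
                    active.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return active
```

    Questions like "which seed set maximises expected spread" and "can pseudo-core seeds rival core seeds" are then asked over many randomised runs of exactly this kind of process.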


    Bio:

    Yayati Gupta received her Ph.D. in Computer Science from Indian Institute of Technology Ropar in November 2017. She has been engaged in research on social network analysis and complex networks. Her major research projects include modeling information diffusion and studying the virality of Internet memes. Apart from research, she likes recording lecture videos and has delivered a few weeks of lecture content for the NPTEL course on Social Networks. She has recently started working as a data analyst for NPTEL, IIT Madras, as a hobby project.



  • Energy and Thermal Management of CMPs by Dynamic Cache Reconfiguration


    Speaker:

    Shounak Chakraborty, IIT Guwahati

    Date:2018-05-08
    Time:16:15:00 (IST)
    Venue:404, Bharti Building
    Abstract:

    To address the high data demand of recent CMPs (Chip Multi-Processors), which are equipped with a number of cores, large on-chip LLCs (Last Level Caches) are attached. LLCs play a vital role in maintaining system performance, but large LLCs account for significant leakage energy consumption, which has a circular dependency on the effective temperature of the chip. In addition to curtailing circuit reliability, this increased chip temperature has enough potential to damage the on-chip circuitry permanently. In our work, we initially proposed a set of performance-constrained dynamic cache resizing techniques to reduce leakage in LLCs. Resizing is done by turning cache banks off/on, which can be implemented by power gating at the circuit level; the resizing decision is triggered by dynamic changes in system performance. We obtain 65% savings in leakage energy consumption. Apart from bank-level granularity, we have also proposed cache resizing at way-level granularity, where performance degradation is handled by incorporating DAM (Dynamic Associativity Management); this policy yields a 70% improvement in leakage energy. From a thermal-efficiency perspective, the turned-off cache portions are utilised as on-chip thermal buffers to reduce the effective chip temperature, especially in CMPs having larger LLCs. The energy and thermal benefits are studied (i) for a tiled CMP architecture and (ii) for a CMP having a centralised LLC. For the tiled CMP, we get a reduction of around 4◦C in average chip temperature, which is closer to 6◦C in CMPs having a centralised LLC.


    Bio:

    Shounak Chakraborty received his B.Tech. degree from MAKAUT (formerly known as WBUT), India, in 2009, and his M.E. degree from the College of Engineering Guindy, Anna University Chennai, India, in 2011, both in Computer Science and Engineering. On February 22, 2018, he completed his PhD in Computer Science and Engineering at IIT Guwahati, India. The broad research area of his PhD was Computer Architecture; specifically, his research interests include high-performance computer architectures, power-efficient memory systems, and thermal-aware architectures. During his PhD he published several research contributions in conferences such as ACM SAC, IPDPS, VLSI-SoC, VLSI Design, and GLS-VLSI, and presented parts of his dissertation at the PhD Forums of the VLSI-SoC and VLSI Design conferences. While attending conferences in India and abroad, he received travel awards/grants several times from ACM, IEEE, IFIP, etc. He has also published a couple of his PhD contributions in IEEE Transactions on Sustainable Computing and the Journal of Microprocessors and Microsystems (Elsevier). He is a reviewer for the Journal of Supercomputing (Springer).



  • The Billion Building Challenge


    Speaker:

    Dr Nipun Batra

    Date:2018-05-04
    Time:12:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    Buildings contribute roughly one-third of total energy consumption. Energy breakdown - providing energy consumption at a per-appliance level - can help occupants save up to 15% energy. In this talk, I will propose the billion building challenge - providing an energy breakdown to a billion buildings. Previous approaches required sensor hardware to be installed in each home and thus do not scale well. Towards the challenge, I will present my approach, called collaborative sensing, which provides an energy breakdown without installing any additional sensors. The key intuition is that the variation in energy consumption across buildings can be captured by a low-dimensional representation. Our approach leverages transfer learning techniques to scale energy breakdown to regions with little or no instrumentation. Finally, I will talk about providing actionable insights based on the energy breakdown data, and about making energy breakdown research comparable.
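    The low-rank intuition above can be illustrated with a toy example (an illustrative sketch only; the matrix sizes, rank, and data here are made up and this is not the speaker's actual method): if homes vary along only a few latent factors, a rank-k truncated SVD of the homes-by-appliances energy matrix reconstructs it well.

```python
import numpy as np

# Toy sketch of the low-rank intuition (hypothetical data, not the talk's method):
# rows are homes, columns are per-appliance energy readings.  A matrix generated
# from k latent factors (e.g. home size, occupancy) plus noise is recovered
# almost exactly by its rank-k truncated SVD.
rng = np.random.default_rng(0)
k = 2                                   # number of latent factors (assumed)
homes, appliances = 50, 8
W = rng.uniform(0.5, 2.0, size=(homes, k))
H = rng.uniform(1.0, 10.0, size=(k, appliances))
E = W @ H                               # exactly rank-k "energy" matrix
E_noisy = E + rng.normal(0, 0.1, E.shape)

U, s, Vt = np.linalg.svd(E_noisy, full_matrices=False)
E_hat = (U[:, :k] * s[:k]) @ Vt[:k]     # rank-k reconstruction

rel_err = np.linalg.norm(E_hat - E) / np.linalg.norm(E)
print(f"relative reconstruction error: {rel_err:.3f}")
```

If real per-appliance readings behaved this way, factors learned from a few instrumented homes would transfer to homes with only aggregate data, which is the essence of the scaling argument.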


    Bio:

    Nipun Batra is a CS postdoc at the University of Virginia. Previously, he completed his PhD from IIIT Delhi and his bachelors from Delhi College of Engineering. His work has received various awards such as best doctoral consortium presentation at SenSys 2015, best demo award at Buildsys 2014, finalist in the UChicago Urban Indian challenge, among others. He recently served as the TPC chair of the NILM 2018 workshop.



  • Technology/Research Trends and Exciting Career Options in an ever changing Digital World


    Speaker:

    Dr. Rajeev Shorey

    Date:2018-03-22
    Time:15:00:00 (IST)
    Venue:SIT Building, Seminar room
    Abstract:

    A variety of options are available after a B.Tech/M.Tech/Ph.D., and students are often confused about which path to choose. In this talk the speaker will share his experience, followed by questions.


    Bio:

    Dr. Rajeev Shorey is the Principal Scientist at the TCS Innovation Lab, Cincinnati, USA and Bangalore, India. Dr. Shorey received his Ph.D and MS (Engg) in Electrical Communication Engineering from the Indian Institute of Science (IISc), Bangalore, India in 1997 and 1991 respectively. He received his B.E degree in Computer Science and Engineering from IISc, Bangalore in 1987. Dr. Shorey’s career spans several reputed research labs – General Motors (GM) India Science Laboratory (ISL), IBM India Research Laboratory and SASKEN Technologies. He has been an adjunct faculty in the Computer Science & Engineering Dept at IIT, Delhi from 1998 to 2005 and again from 2017 onward. He was a faculty in the Computer Science Dept at the National University of Singapore from 2003 to 2004, while on leave from IBM Research Labs in New Delhi. Dr. Shorey served as the first President of NIIT University from 2009 to 2013 before joining the TCS Innovation Labs in 2014. Dr. Shorey’s work has resulted in more than 60 publications in international journals and conferences and several US patents, all in the area of wireless, wired networks, including wireless security. He has 12 issued US patents and several pending US patents to his credit. His areas of interest are Wireless Networks, Internet, Telecommunications, Telematics, Data Security and Data Analytics. Dr. Shorey has served on the Editorial boards of IEEE Transactions on Mobile Computing and is currently serving on the Editorial board of WINET (Wireless Networks Journal of Mobile Communication, Computation and Information) journal. He is the editor of the book titled “Mobile, Wireless and Sensor Networks: Technology, Applications and Future Directions” published by John Wiley, US in March 2006. Dr. Shorey has given numerous talks, tutorials and seminars in industry and academia all over the world. He is the founding member of the Communication Systems & Networks (COMSNETS) conference in India. 
For his contributions in the area of Communication Networks, Dr. Shorey was elected a Fellow of the Indian National Academy of Engineering in 2007. Dr. Shorey was recognized by ACM as a Distinguished Scientist in December 2014. He is a Fellow of the Institution of Electronics and Telecommunication Engineers, India and a Senior Member of IEEE.

    PS:
    This talk is a continuation of the panel discussion "Career options after a Ph.D." held in December; however, it is open to all B.Tech/M.Tech/Ph.D. students.



  • Computer Vision @ Facebook scale


    Speaker:

    Manohar Paluri, Facebook

    Date:2018-03-13
    Time:15:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    In this talk I will focus on the broad AI efforts happening at Facebook, specifically in the AI organization, and dive into how we are approaching Computer Vision problems. I will touch on scale, utility, and the unique challenges and opportunities; show demos of this technology in use; and describe platforms we built that currently run on billions of images and videos every day. I will then focus on three key directions: 1. Extreme Vision - where I will talk about how we are pushing the frontiers of computer vision, bridging the gap between human and machine vision and surpassing it in some places; 2. Video - where we are building models that understand appearance and motion effectively and bridge multiple modalities like visual, audio and text for a holistic understanding of video; 3. AI & Bias - where I will expose the limitations of current models, not only in terms of accuracy but in terms of the biases they learn, and the challenges we face as a technology community in improving these systems towards building axiological AI that is fair to all!


    Bio:

    Manohar Paluri is currently a Research Lead at Facebook, where he manages the Computer Vision team in the Applied Machine Learning organization. He is passionate about Computer Vision and the longer-term goal of building systems that can perceive the way humans do. Throughout his career he has spent considerable time on Computer Vision problems in industry and academia, motivated by the applied-research goal of pushing the frontiers of computer vision while bringing the technology to real-world use cases. He worked at renowned places like Google Research, IBM Watson Research Labs, and Stanford Research Institute before helping co-found Facebook AI Research, directed by Dr. Yann LeCun. Mr. Paluri spent his formative years at IIIT Hyderabad, where he finished his undergraduate studies with Honors in Computer Vision, and then joined Georgia Tech to pursue his Ph.D. For over a decade he has been working on various problems related to Computer Vision and perception in general, and has made various contributions through his publications at many top-tier conferences. He is passionate about building real-world systems that are used by billions of people; some of these systems are running at Facebook and already have tremendous impact on how people communicate using Facebook.



  • SGXBounds: Memory Safety for Shielded Execution


    Speaker:

    Pramod Bhatotia

    Date:2018-03-09
    Time:12:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    In this talk, I will present our work on how to efficiently ensure memory safety for shielded execution in the untrusted environment of the cloud.

    Software: https://github.com/tudinfse/sgxbounds


    Bio:

    Pramod Bhatotia (http://homepages.inf.ed.ac.uk/pbhatoti/) is a systems researcher and faculty member at the University of Edinburgh. He has worked extensively on efficient processing of big data using the novel idea of incremental computation on streaming data. His other focus area, and the subject of this talk, is using new hardware architectural extensions to enhance systems security.

    Pramod is seeking PhD students and research collaborators. Students interested in pursuing PhD at the intersection of systems/security or systems/data-science are highly encouraged to attend the talk.



  • TrustBase: Certificate-based Authentication as an Operating System Service


    Speaker:

    Dr. Kent Seamons, Director Internet Security Research Lab

    Date:2018-03-08
    Time:12:00:00 (IST)
    Venue:101, Bharti Building
    Abstract:

    In this talk, I will first give an overview of recent security research at BYU. This includes system designs and usability studies for secure email, secure messaging, and two-factor authentication. I will then describe TrustBase, a project to support TLS authentication as an operating system service. Our approach automatically repairs all of the broken TLS applications on a system. It also provides a platform for researchers to easily experiment with alternatives to the current certificate authority system.


    Bio:

    Dr. Kent Seamons is the Director of the Internet Security Research Lab in the Computer Science Department at BYU. His research interests are in usable security, privacy, authentication, and end-to-end encryption. He has published over 80 peer-reviewed papers, has been cited nearly 5,000 times, and has been awarded millions in funding from NSF, DHS, DARPA, and industry. He is also a co-inventor on four patents in the areas of automated trust negotiation, single sign-on, and security overlays. See https://isrl.byu.edu/.



  • Fingerprinting Latent Structure in Data


    Speaker:

    Ravindra Guntur

    Date:2018-03-07
    Time:12:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    As data-hungry algorithms find widespread applications, there is increased interest in exploring these algorithms in the context of small data. In many niche industrial applications, data is not only held in secrecy for reasons of privacy and competitive advantage, but is also limited in volume and variety. Under these constraints, data-driven algorithms are expected to exhibit little application misbehavior. If one can discard data that does not match the capabilities of the underlying algorithms, there is better control over how unexpected data can influence application behavior. In this talk, data fingerprinting techniques are presented in the context of small data and application behavior. Capturing and representing a latent structure in data as a fingerprint helps evolve algorithm complexity, thereby improving application reliability. As an illustration, problems involving question answering, cell structure detection, and recognition of classes of short textual messages will be discussed.


    Bio:

    Ravindra Guntur is the Head of Data Science at Talentica Software in Pune, where he develops data-driven algorithms for various startups. He graduated with his Masters (2000) and PhD (2006) degrees from the Indian Institute of Science and since then has been working on various problems involving image, video, text and data. In the past, he worked at Motorola Labs, the National University of Singapore, Samsung R&D Institute, and Symantec. His research interests are in the fields of mathematical optimization, pattern recognition, and machine learning algorithms. He has over 15 granted patents, an equal number of research publications, and is a senior member of the ACM.



  • Exploiting Word Properties for Sentiment Analysis


    Speaker:

    Raksha Sharma, TRDDC

    Date:2018-02-26
    Time:12:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    One of the main tasks that has lured businesses and customers alike is extracting the polarity of user-generated content available on the Web in the form of reviews on shopping or opinion sites, posts, blogs, or customer feedback. As many users do not explicitly indicate their sentiment polarity, it needs to be predicted from the text, which has led to a plethora of work in the field of Sentiment Analysis (SA). More and more sophisticated techniques have been built to tackle the problem. The evolution of methods has been along several dimensions, such as the complexity of the algorithm, the knowledge source used, and dealing with unlabeled datasets.

    Ideally, the Sentiment Analysis (SA) task is to predict the sentiment orientation of a text (document, paragraph or sentence) by analyzing the polarity (positive or negative) of the words present in the text. However, mere identification of a word as a polar word is inadequate for deep (fine-grained) sentiment analysis. In our work, we tackle some of the rare yet important problems in sentiment analysis, namely, finding properties of polar words. Identification of properties of polar words allows us to progress in the direction of fine-grained sentiment analysis, which goes beyond a binary zero (negative) / one (positive) polarity. In our work, we have explored four properties, viz., domain dependence for polarity, domain dependence for significance, intensity variation among words having the same semantic property, and intensity variation among words having the same sense (synonymous words). We incorporated these properties in various dimensions of sentiment analysis, viz., in-domain sentiment analysis, cross-domain sentiment analysis, star-rating prediction, etc.


    Bio:

    Raksha Sharma received the B.Tech. degree in Information Technology from RGPV Bhopal in 2006 and the M.Tech. degree in Information Technology from IIITM Gwalior in 2009. She received her Ph.D. in Computer Science and Engineering under Prof. Pushpak Bhattacharyya from IIT Bombay in 2017. She has been working at the TATA Research Development and Design Center as a Research Scientist since October 2017. Her areas of research include Natural Language Processing and Machine Learning. During her Ph.D., she worked on enriching sentiment analysis by exploring various properties of polar words.



  • Thermal-aware Application Mapping Strategy for Network-on-Chip based System Design


    Speaker:

    Kanchan Manna, IIT Patna

    Date:2018-02-23
    Time:12:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    I will introduce the basic concept of Network-on-Chip (NoC)-based multicore systems and the application mapping strategies used to control the temperature and performance of such systems. I will describe current techniques for controlling the performance and temperature of multicore systems. Finally, I will describe our recent algorithm, which carries out an efficient search-space analysis for large applications using a meta-heuristic approach (Manna et al., doi: 10.1109/TC.2017.2770130).
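    As a rough illustration of what a meta-heuristic search over the mapping space looks like (a hypothetical sketch; the actual algorithm and cost model of Manna et al. are in the cited paper), one can anneal over task-to-tile assignments on a 2D mesh, minimizing communication volume weighted by hop distance:

```python
import math, random

# Illustrative sketch only (not the algorithm of Manna et al.): mapping an
# application's task graph onto a 2D-mesh NoC with simulated annealing, a
# typical meta-heuristic for this search space.  Cost = sum over task pairs of
# communication volume times Manhattan hop distance between their tiles.
def anneal_mapping(volumes, mesh, iters=5000, seed=0):
    """volumes[(a, b)] = traffic between tasks a and b; mesh = (rows, cols)."""
    rng = random.Random(seed)
    rows, cols = mesh
    tiles = [(r, c) for r in range(rows) for c in range(cols)]
    n_tasks = max(max(e) for e in volumes) + 1
    mapping = list(range(n_tasks))          # task i -> tiles[mapping[i]]

    def cost(m):
        total = 0
        for (a, b), v in volumes.items():
            (r1, c1), (r2, c2) = tiles[m[a]], tiles[m[b]]
            total += v * (abs(r1 - r2) + abs(c1 - c2))
        return total

    cur = cost(mapping)
    temp = 10.0
    for _ in range(iters):
        i, j = rng.sample(range(n_tasks), 2)
        mapping[i], mapping[j] = mapping[j], mapping[i]      # try a tile swap
        new = cost(mapping)
        if new <= cur or rng.random() < math.exp((cur - new) / temp):
            cur = new                                        # accept move
        else:
            mapping[i], mapping[j] = mapping[j], mapping[i]  # undo
        temp *= 0.999                                        # cool down
    return mapping, cur

# 4 tasks on a 2x2 mesh: the heavy 0<->1 traffic should land on adjacent tiles
volumes = {(0, 1): 10, (1, 2): 1, (2, 3): 1}
mapping, c = anneal_mapping(volumes, (2, 2))
print(c)
```

A thermal-aware variant would add a temperature term to the cost function; the sketch keeps only the communication term for brevity.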


    Bio:

    Kanchan Manna received the Ph.D. degree in Computer Science and Engineering from the Indian Institute of Technology (IIT) Kharagpur, India. He has worked as a Post-Doctoral Scientist in the HPCAT Lab at George Washington University (GWU), Washington, DC, USA. Currently, he is a Visiting Assistant Professor in the Department of Computer Science and Engineering at IIT Patna, India. His current research interests include Network-on-Chip (NoC)-based multicore architecture design; performance and cost evaluation; and application mapping in 2D and 3D environments, including thermal safety, reliability, fault tolerance, and testing.



  • Inference Algorithms for MRF-MAP Problems in Computer Vision with Extremely Large Cliques


    Speaker:

    Chetan Arora, IIIT Delhi

    Date:2018-02-20
    Time:12:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    Several tasks in computer vision and machine learning can be modelled as MRF-MAP inference problems. Using higher order potentials to model complex dependencies can significantly improve performance. While the general MRF-MAP optimization problem is computationally hard, algorithms have been developed with acceptable computational complexity when the potentials are submodular. We have shown that this optimization problem can be solved by network flow techniques, which successfully pushed the tractability of the state of the art from cliques of size 4 to 16 for problems containing millions of cliques. However, the solution is practical only when potentials are defined over small cliques. When the cliques are large, we show a block coordinate descent based methodology which generalizes the standard Min Norm Point algorithm to handle very large cliques, of size even up to 1000. We also show a hybrid approach which combines our flow-based algorithm with the generalized Min Norm Point algorithm to handle problems containing millions of small and large cliques.
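    The network-flow connection is easiest to see in the classical pairwise (clique size 2) case. The sketch below is the textbook Greig-Porteous-Seheult min-cut construction for a binary MRF with Potts pairwise terms, not the higher-order algorithms from the talk:

```python
from collections import deque, defaultdict

# Exact MAP inference for a binary pairwise MRF with submodular (Potts) terms
# via min-cut/max-flow.  Cut value = MRF energy: cutting s->i pays the cost of
# labelling i as 1, cutting i->t pays the cost of label 0, and cutting an
# inter-pixel edge pays the Potts penalty lam for a label disagreement.
def mrf_map(unary, edges, lam):
    """unary[i] = (cost of label 0, cost of label 1); edges = [(i, j), ...]."""
    n = len(unary)
    s, t = n, n + 1
    cap = defaultdict(int)
    adj = defaultdict(set)

    def add(u, v, c):
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)

    for i, (c0, c1) in enumerate(unary):
        add(s, i, c1)          # cut when i takes label 1
        add(i, t, c0)          # cut when i takes label 0
    for i, j in edges:
        add(i, j, lam)         # cut when the two labels disagree
        add(j, i, lam)

    # Edmonds-Karp max-flow: repeatedly augment along shortest residual paths
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        f = min(cap[e] for e in path)
        for u, v in path:
            cap[(u, v)] -= f
            cap[(v, u)] += f

    # source side of the min cut -> label 0, sink side -> label 1
    side0, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in side0 and cap[(u, v)] > 0:
                side0.add(v)
                q.append(v)
    return [0 if i in side0 else 1 for i in range(n)]

# 3-pixel chain: ends pulled to opposite labels, weak smoothing
print(mrf_map([(0, 5), (2, 1), (5, 0)], [(0, 1), (1, 2)], lam=1))  # -> [0, 1, 1]
```

The talk's contribution starts where this construction stops: for cliques far larger than 2, the flow network above no longer exists in this simple form.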


    Bio:

    Chetan Arora received his B.Tech in Electrical Engineering and Ph.D. in Computer Science, both from IIT Delhi, in 1999 and 2012 respectively. He did his postdoctoral research with Prof. Shmuel Peleg from 2012 to 2014 at the Hebrew University, Israel. Since 2014, he has been an Assistant Professor in CS at IIIT Delhi. Chetan has spent over 10 years in industry, where he co-founded 3 startups, along with counterparts in Japan and Israel, all working on computer vision products built from the latest research ideas.



  • Looking at (Almost) Nothing, Understanding Everything: Deep Learning for Hand-drawn Sketches


    Speaker:

    Ravi Kiran Sarvadevabhatla

    Date:2018-02-19
    Time:12:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    Deep Learning-based object category understanding is an important and active area of research in Computer Vision. In the talk, we shall see deep-learning approaches for analysis of hand-drawn object sketches and modelling sketch-driven cognitive processes. On the analysis front, I propose a hierarchical approach for the challenging problem of sketch parsing (i.e. given an object sketch, determine fine-grained details -- pose, semantic parts etc.). As an example of modelling sketch-driven cognitive processes, I shall introduce the first computational model for Pictionary, the popular word-guessing social game. Towards the end of the talk, I shall also briefly overview deep-learning based approaches for other sketch-centric problems and discuss two useful perspectives for studying sketches in general.


    Bio:

    Ravi Kiran Sarvadevabhatla is a Research Associate at Qualcomm India. He recently submitted his Ph.D. thesis from Department of Computational and Data Sciences, Indian Institute of Science (IISc), Bangalore and was advised by Dr. R. Venkatesh Babu. Before joining IISc in 2014, he worked as a Research Engineer at Duos Technologies, Jacksonville, USA (2012-2013). Prior to that, he was a Research Engineer at Honda Research Institute USA, Mountain View, USA (2008-2012). He obtained his MS in Computer Science from University of Washington, Seattle, USA in 2008. He got an integrated MS/B.Tech degree (Honors) in Computer Science from International Institute of Information Technology, Hyderabad (IIIT-H) in 2004. His research interests include visual object category understanding, document image analysis, multi-modal human-robot and human-computer interaction. For additional info, visit his webpage at https://ravika.github.io/.



  • Relaxed Diophantine Equations for Geometric Primitives in Integer Space


    Speaker:

    Ranita Biswas, IIT Roorkee

    Date:2018-02-16
    Time:12:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    Diophantine equations are polynomial in nature, usually in two or more unknowns, seeking integer solutions. With these equations, one can define algebraic curves and surfaces and characterize lattice points on them. Geometric interpretations of Diophantine equations are found to enrich the "geometry of numbers" and, broadly speaking, to a great extent the upcoming subject of digital geometry. In this talk, the speaker will present certain novel theoretical findings. First, she will explain how, by relaxing a Diophantine equation, the corresponding set of lattice points gives rise to a specific topological model of the underlying geometric object in integer space. Next, she will show its merit in designing efficient integer algorithms for constructing that object in the discrete space. For an easy understanding, she will start from the basics of linear Diophantine equations, extend them to quadratic ones, and show their correspondence with different topological models of primitives like 3-dimensional planes, spheres, circles, and geodesics. In conclusion, several open problems will be mentioned.
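    As a flavour of the idea (a naive sketch; the talk's precise relaxations and their topological guarantees differ), relaxing the sphere equation x^2 + y^2 + z^2 = r^2 to a double inequality yields a set of lattice points forming a digital sphere:

```python
# Naive sketch: lattice points of a "digital sphere" of integer radius r,
# obtained by relaxing the Diophantine equation x^2 + y^2 + z^2 = r^2 to a
# band of width r.  This particular band is one common choice in digital
# geometry; the speaker's exact relaxation may differ.
def digital_sphere(r):
    points = []
    for x in range(-r, r + 1):
        for y in range(-r, r + 1):
            for z in range(-r, r + 1):
                s = x * x + y * y + z * z
                # relaxed equation: r^2 - r <= s < r^2 + r
                if r * r - r <= s < r * r + r:
                    points.append((x, y, z))
    return points

pts = digital_sphere(5)
print(len(pts))   # number of lattice points on the digital sphere of radius 5
# the band is centrally symmetric, so the point set is too:
print(all((-x, -y, -z) in set(pts) for (x, y, z) in pts))  # -> True
```

Exact integer arithmetic is the point here: membership is decided without any floating-point rounding, which is what makes efficient integer construction algorithms possible.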


    Bio:

    Dr. Ranita Biswas is an Assistant Professor in the Computer Science and Engineering Department, Indian Institute of Technology Roorkee, which she joined just after finishing her PhD in June 2016 from the Computer Science and Engineering Department, Indian Institute of Technology Kharagpur. She previously carried out research in the Computer Vision and Pattern Recognition Unit, Indian Statistical Institute, Kolkata. Her current research domains are Digital Geometry (theory) and Computer Graphics (practice).



  • AI, Society and Politics of the Future


    Speaker:

    Dr. Anupam Guha, University of Maryland

    Date:2018-02-09
    Time:12:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    At the dawn of the Fourth Industrial Revolution, this time centred around artificial intelligence, there is both great peril and great hope. Great peril because currently demonstrated technology, if there are no radical socio-economic changes, will kill a significant percentage of Indian jobs and create precarity for hundreds of millions of India’s workers, both formal and informal, from farmers to engineers. Great hope because the same technology could create a quantum leap in productivity and logistics, enabling the potential formation of a radical national policy which could perhaps, if backed with political imagination and wisdom, lead to an emancipation of Indian labour. This fork in the road cannot be avoided.

    There are reactionaries who will not look at AI critically and such a political dismissal of a titanic force will render us vulnerable, and there are also neoliberal technocrats who are enamoured with solutionism and who ignore the structural changes needed to make prosperity under AI possible. My talk will focus on, in the Indian context, the potentials of current AI, the need for a critical politically-educated look at it, the need for a radical rethink, beyond band-aid measures like UBI and robot taxes, towards the structural issues of work, wage, property, and public prosperity, and the possible futures of AI. In describing the above, my talk will also lay out the framework for a new kind of politics for the same.


    Bio:

    Anupam Guha is a computer scientist working on Artificial Intelligence and Computational Linguistics, based on the US east coast, with a PhD from the University of Maryland (batch of 2017) and an MS from Georgia Tech (batch of 2010).

    He is deeply interested in and does outreach on the future Social, Political, and Economic implications of AI, especially both its oppressive and emancipatory potentials for Labour. Dr. Guha advocates a better understanding of the political-economy of the fully automated future. He sees the question of AI intricately woven with the question of future labour, i.e. the student community. Thus, Dr. Guha engages in critical solidarity with progressive Indian student politics. Dr. Guha advocates a radical reinterpretation of wage, work, and public prosperity, as well as democratic ownership models of automation, in the light of imminent AI.
    Dr. Guha does advocacy for an increased communication between silos of academia, science informed policy, politics informed science, and supports free, universal, and quality higher education in India.

    Dr. Guha has written in Hindustan Times, Indian Express, and TheWire.in on the issues of AI, Labour, and the Politics of Science.



  • Decision Making in E-Markets Via POMDPs


    Speaker:

    Athirai Aravazahi Irissappane, Dimensional Mechanics, Seattle

    Date:2018-02-08
    Time:12:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    In open systems (dynamic, uncertain, insecure), users often need to rely on the abilities, competencies and knowledge of others to fulfill their own objectives. Making decisions about whom to interact with becomes cumbersome in these scenarios as the other users may be self-interested, diverse or deceptive. Further, these decisions should be made in an optimal manner, as every user strives towards maximizing its utility in the system. E-commerce, Crowd-Sourcing, Robotics and Healthcare are some of the domains where this issue of optimal decision-making under uncertainty has garnered attention.
    In this talk, I will describe how we use Partially Observable Markov Decision Processes (POMDPs) to address the problem of deciding a trustworthy interaction partner in e-markets. A technique to improve the scalability of POMDPs called Mixture of POMDP Experts (MOPE) will also be presented. MOPE exploits the inherent structure of trust-based domains by aggregating the solutions of smaller sub-POMDPs. Experiments show that using MOPE one can scale to millions of states and thousands of actions.
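    The core POMDP machinery the talk builds on is the Bayesian belief update. Below is a minimal sketch for a two-state "trustworthy vs. deceptive" seller (illustrative numbers only; this is not the MOPE algorithm):

```python
import numpy as np

# Minimal POMDP belief-update sketch (hypothetical model, not MOPE): two hidden
# states for a seller -- trustworthy or deceptive.  After each interaction the
# belief is updated by Bayes' rule:
#   b'(s') is proportional to O(o | s') * sum_s T(s' | s) * b(s)
T = np.array([[0.95, 0.05],     # T[s, s']: trustworthy sellers rarely turn deceptive
              [0.10, 0.90]])
O = np.array([[0.8, 0.2],       # O[s', o]: P(good review | trustworthy) = 0.8
              [0.3, 0.7]])      #           P(good review | deceptive)   = 0.3

def update(belief, obs):
    """obs: 0 = good review, 1 = bad review."""
    b = O[:, obs] * (T.T @ belief)   # predict with T, then weight by likelihood
    return b / b.sum()               # renormalize

b = np.array([0.5, 0.5])             # uniform prior over {trustworthy, deceptive}
for obs in [0, 0, 1, 0]:             # mostly good reviews
    b = update(b, obs)
print(b)                             # belief should now favour "trustworthy"
```

A full POMDP policy would choose actions by value iteration over such beliefs; MOPE's contribution, per the abstract, is scaling that step by aggregating solutions of smaller sub-POMDPs.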


    Bio:

    Athirai Aravazahi Irissappane is a Machine Learning Engineer at Dimensional Mechanics, Seattle, USA. She is also an Adjunct Lecturer at the University of Washington, USA. She obtained her Ph.D. from Nanyang Technological University, Singapore, where she also received an Honorable Mention for her outstanding thesis. Her research interests are in Machine Learning and Artificial Intelligence focusing on Fraud Detection, Planning algorithms (POMDPs) for decision making in uncertain environments, Deep Learning for image recognition and time series analysis, etc. For further details please visit https://sites.google.com/view/athirai/ .



  • Integrating technology in personalized Ayurvedic diagnosis


    Speaker:

    Aniruddha Joshi

    Date:2018-02-02
    Time:14:30:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    We see two major trends today. First, there is increasing awareness and realization by the common man of the long-term effects of personalized Ayurvedic traditional medicines. Secondly, patients are technology savvy and curious to know what is happening. Doctors need to transform not only the language but also the patient experience by integrating today's technologies with the traditional wisdom of these sciences.
    The talk will cover a few modern approaches that assist Ayurvedic diagnosis through signal processing, tongue analysis, voice analysis, face analysis, gait analysis and big data analytics.


    Bio:

    * Ph.D. (Computer Science, IIT Bombay), with specialization in data analysis and machine learning
    * CSIR research fellowship for 5 years
    Research associate at NCL Pune
    * Founder and CEO of Atreya Innovations Pvt. Ltd. We strongly believe in the concepts of Ayurveda and are focused on increasing its popularity and acceptance in today’s world by providing a new-age platform with technology gadgets and software solutions
    * Nadi Tarangini (NT) is the first-of-its-kind technology product in Ayurveda for the digitization of pulse-based diagnosis. Over the next 5 years, the plan is to replace most lengthy and costly medical tests and biopsies with one single NT report card
    * Director at Reliable Process Design Solutions
    * Consultant to 3 companies in Pune
    * 3 patents and more than 35 papers in international journals and conferences



  • The Storage Complexity of Packet Routing on Trees


    Speaker:

    Prof. Boaz Patt-Shamir, Tel Aviv University

    Date:2018-01-31
    Time:12:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    We study the size of buffer storage required by lossless packet routing algorithms. We assume that packet injection into the network is bounded such that in any window of t time units, at most t*rho + sigma packets are injected, for some given parameters sigma >= 0 and 0 <= rho <= 1. (This is the adversary of the Adversarial Queuing Theory.)

    We give an overview of a sequence of results about the buffer space required to avoid overflows in the special case that the network is a tree, and all packets are destined for the root. In particular, we give (1) an extremely simple centralized algorithm that requires O(sigma+rho) buffers (which is optimal), (2) a simple, but unintuitive local algorithm that requires O(sigma+log(n)) buffers (which is the best possible for local algorithms), and (3) an algorithm with locality O(log n) that requires the optimal O(sigma+rho) buffer space. (An algorithm is said to have locality d if an action at a node depends only on the current state of nodes at distance at most d.) We also prove a few lower bounds on the storage requirement in case of other topologies.
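    The (rho, sigma) injection bound from the abstract can be checked mechanically. The following is an illustrative brute-force admissibility test (quadratic in the trace length), not part of the talk's algorithms:

    ```python
    # Check the Adversarial Queuing Theory injection bound: in every window
    # of t time steps, at most rho*t + sigma packets may be injected.

    def is_admissible(injections, rho, sigma):
        """injections[i] = number of packets injected at time step i."""
        n = len(injections)
        for start in range(n):
            total = 0
            for t in range(1, n - start + 1):    # window [start, start+t)
                total += injections[start + t - 1]
                if total > rho * t + sigma:
                    return False
        return True

    # A one-off burst of sigma packets is fine; a sustained rate above rho is not.
    print(is_admissible([3, 0, 1, 0, 1], rho=0.5, sigma=3))  # → True
    print(is_admissible([2, 2, 2, 2, 2], rho=0.5, sigma=3))  # → False
    ```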


  • Throughput-Optimal Algorithms for Generalized Flow Problems


    Speaker:

    Abhishek Sinha, Qualcomm

    Date:2018-01-18
    Time:12:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    We consider the problem of throughput-optimal packet dissemination, in the presence of an arbitrary mix of unicast, broadcast, multicast and anycast traffic, in an arbitrary wireless network. We propose an efficient online dynamic policy, called Universal Max-Weight (UMW), which is provably optimal. To the best of our knowledge, UMW is the first known throughput-optimal policy of such versatility in the context of generalized network-flow. Conceptually, the UMW policy is derived by relaxing the precedence constraints associated with multi-hop routing and then solving a min-cost routing and max-weight scheduling problem on a virtual network of queues.
    When specialized to the unicast setting, the UMW policy yields a throughput-optimal cycle-free routing and scheduling policy. This is in contrast with the well-known Back-Pressure (BP) policy which allows for packet cycling, which, in turn, causes excessive delay. Time permitting, an extension of the UMW policy to the Network Utility Maximization (NUM) problem and the optimal wireless broadcasting problem with point-to-multipoint links will also be discussed. Extensive simulation results show that the proposed UMW policy incurs a substantially smaller delay as compared to the BP policy and its enhanced variants. The proof of throughput-optimality of the UMW policy combines ideas from the stochastic Lyapunov theory with a sample path argument from the adversarial queueing theory and may be of independent theoretical interest.
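    The max-weight scheduling step over virtual queues, which UMW combines with min-cost routing, can be sketched as follows. This is a brute-force illustration of generic max-weight scheduling under a pairwise interference model; the link names and conflicts are invented, and the actual UMW policy is considerably more involved.

    ```python
    # Max-weight scheduling: at each slot, activate the conflict-free set of
    # links that maximizes the total queue backlog served.
    from itertools import combinations

    def max_weight_schedule(queues, conflicts):
        """queues: link -> backlog; conflicts: pairs of interfering links."""
        links = list(queues)
        best, best_w = frozenset(), 0
        for r in range(1, len(links) + 1):
            for subset in combinations(links, r):
                if any({a, b} <= set(subset) for a, b in conflicts):
                    continue                    # two interfering links: skip
                w = sum(queues[l] for l in subset)
                if w > best_w:
                    best, best_w = frozenset(subset), w
        return best

    queues = {"e1": 5, "e2": 3, "e3": 4}
    conflicts = [("e1", "e2")]                  # e1 and e2 share a radio
    print(sorted(max_weight_schedule(queues, conflicts)))  # → ['e1', 'e3']
    ```

    The brute force is exponential in the number of links; in practice the scheduling subproblem is what makes wireless max-weight policies hard to implement exactly.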


    Bio:

    Abhishek Sinha is currently working as a senior engineer at Qualcomm Research, San Diego, CA. He obtained his Ph.D. from the Massachusetts Institute of Technology in June 2017, where he worked in the Laboratory for Information and Decision Systems (LIDS). Prior to joining MIT, he obtained his M.E. in Telecommunication Engg. from the Indian Institute of Science, Bangalore, and B.E. in Electronics and Telecommunication Engg. from the Jadavpur University, Kolkata, in the years 2012 and 2010 respectively. He is a recipient of several awards including the Best Paper Award in ACM MobiHoc 2016, Prof. Jnansaran Chatterjee memorial Gold medal and T.P. Saha Memorial Gold centered silver medal from Jadavpur University, and Jagadis Bose National Science Talent Search (JBNSTS) scholarship, Kolkata, India. His areas of interests include queuing theory, information theory, and network control.



  • Optimization for derandomization: polynomial identity testing, geodesically convex optimization and more


    Speaker:

    Ankit Garg, MSR New England

    Date:2018-01-12
    Time:12:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    Randomness plays an important role across scientific disciplines. In theoretical computer science, there is a large body of work trying to understand the role of randomness in computation. An important part in this quest is the polynomial identity testing question which asks if one can test identities deterministically. In this talk, I will talk about deterministic identity testing of a special class of polynomials using tools from optimization. The class of problems that we solve form an amazing web of connections across areas such as geodesically convex optimization, quantum information theory, representation theory, non-commutative algebra and functional analysis. I will also outline applications to these areas. The talk will be based on several joint works with Zeyuan Allen-Zhu, Peter Burgisser, Leonid Gurvits, Yuanzhi Li, Rafael Oliveira, Michael Walter and Avi Wigderson.
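    For context, the randomized test that this line of work seeks to derandomize is simple: by the Schwartz-Zippel lemma, a nonzero polynomial of degree d vanishes at a random point of a size-S field with probability at most d/S, so random evaluation catches non-identities with high probability. A minimal sketch (the polynomials and field are illustrative, and the talk's deterministic techniques are not shown):

    ```python
    # Randomized polynomial identity testing via random evaluation.
    import random

    P = 2**31 - 1                     # a Mersenne prime: arithmetic over a field

    def probably_zero(poly, nvars, trials=20):
        """poly: a black box taking nvars field elements, returning an int."""
        for _ in range(trials):
            point = [random.randrange(P) for _ in range(nvars)]
            if poly(*point) % P != 0:
                return False          # a witness: definitely not identically zero
        return True                   # identically zero with high probability

    identity = lambda x, y: (x + y) ** 2 - (x * x + 2 * x * y + y * y)
    non_identity = lambda x, y: (x + y) ** 2 - (x * x + y * y)

    print(probably_zero(identity, 2))       # → True
    print(probably_zero(non_identity, 2))   # → False (with high probability)
    ```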


    Bio:

    Ankit Garg is a postdoctoral researcher at Microsoft Research New England. He got his PhD in Computer Science from Princeton University, where he was advised by Mark Braverman. He did his undergraduate studies in the Department of Computer Science and Engineering at Indian Institute of Technology, Delhi. Ankit is interested in theoretical computer science, with a focus on communication complexity, information theory and algebraic complexity. He has also worked on quantum information theory and algorithmic problems at the intersection of quantum information and algebraic geometry. His recent interests are in combining ideas from databases, information retrieval and machine learning, with emphasis on large-scale graph management, graph analytics and visualization.



  • Combinatorial Bandits with Multinomial Logistic Feedback


    Speaker:

    Vineet Goyal, Columbia U.

    Date:2018-01-05
    Time:16:00:00 (IST)
    Venue:SIT Building, #006
    Abstract:

    We consider a combinatorial multi-armed bandit problem under multinomial logistic feedback, where at each time step the decision maker selects a subset of cardinality K from N possible items (arms), and observes (bandit) feedback in the form of the index of one of the items in the chosen subset, or none. Each item in the index set is ascribed a certain value (reward), and the feedback is governed by a Multinomial Logit (MNL) model whose parameters are unknown. The objective of the decision maker is to maximize the expected cumulative reward over a finite horizon T, or alternatively, to minimize the regret relative to an oracle that knows the MNL parameters. We refer to this as the MNL-Bandit problem. This problem is representative of a larger family of exploration-exploitation problems that involve a combinatorial objective and arise in several important application domains, including retailing and advertising, where user preferences govern the MNL bandit feedback.

    Existing methods for this problem follow an explore-then-exploit approach which initially explores to estimate the model parameters and then exploits with the optimal solution based on the computed estimates. Such an approach requires certain a priori knowledge of the degree of "separability" of the instance to determine the length of the exploration period. This knowledge of separability depends on the true parameters of the underlying model and is typically unavailable in most applications. In this work, we give an efficient algorithm that simultaneously explores and exploits, without a priori knowledge of any problem parameters, and achieves near-optimal regret both in the "well separated" case and in the general parameter setting where this separation need not hold. Furthermore, we present a Thompson sampling based algorithm with near-optimal worst-case performance in terms of regret and significantly better empirical performance.

    This is based on joint work with Shipra Agrawal, Vashist Avadhanula and Assaf Zeevi.
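    The MNL feedback model the bandit must learn is easy to state concretely. Below is a minimal sketch, with illustrative item values and rewards (the learner, of course, does not know v): offered a subset S, the user picks item i in S with probability v_i / (1 + sum over j in S of v_j), and picks nothing with the residual mass.

    ```python
    # Multinomial Logit (MNL) choice probabilities and the expected reward
    # of an offered assortment.

    def mnl_choice_probs(S, v):
        denom = 1.0 + sum(v[i] for i in S)      # the "1" is the no-purchase option
        probs = {i: v[i] / denom for i in S}
        probs[None] = 1.0 / denom               # probability of choosing nothing
        return probs

    def expected_reward(S, v, r):
        p = mnl_choice_probs(S, v)
        return sum(p[i] * r[i] for i in S)

    v = {"a": 1.0, "b": 0.5, "c": 2.0}          # item values, unknown to the learner
    r = {"a": 1.0, "b": 2.0, "c": 0.5}          # per-item rewards

    p = mnl_choice_probs({"a", "b"}, v)
    print(round(p["a"], 3), round(p[None], 3))          # → 0.4 0.4
    print(round(expected_reward({"a", "b"}, v, r), 2))  # → 0.8
    ```

    The bandit's combinatorial difficulty comes from having to learn v while repeatedly choosing which size-K subset to offer.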


    Bio:

    Vineet Goyal is Associate Professor in the Industrial Engineering and Operations Research Department at Columbia University where he joined in 2010. He received his Bachelor's degree in Computer Science from Indian Institute of Technology, Delhi in 2003 and his Ph.D. in Algorithms, Combinatorics and Optimization (ACO) from Carnegie Mellon University in 2008. Before coming to Columbia, he spent two years as a Postdoctoral Associate at the Operations Research Center at MIT. He is interested in the design of efficient and robust data-driven algorithms for large scale dynamic optimization problems with applications in revenue management and smart grid problems. His research has been continually supported by grants from NSF and industry. He received the NSF CAREER Award in 2014, Google Faculty Research Award in 2013, IBM Faculty Award in 2014 and Adobe Faculty Award in 2015.



  • Planar Graph Perfect Matching is in NC


    Speaker:

    Vijay V. Vazirani, University of California, Irvine.

    Date:2018-01-02
    Time:16:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    Is matching in NC, i.e., is there a deterministic fast parallel algorithm for it? This has been an outstanding open question in TCS for over three decades, ever since the discovery of Random NC matching algorithms. Within this question, the case of planar graphs has remained an enigma: On the one hand, counting the number of perfect matchings is far harder than finding one (the former is #P-complete and the latter is in P), and on the other, for planar graphs, counting has long been known to be in NC whereas finding one has resisted a solution!

    The case of bipartite planar graphs was solved by Miller and Naor in 1989 via a flow-based algorithm. In 2000, Mahajan and Varadarajan gave an elegant way of using the counting of matchings to find one, hence giving a different NC algorithm. However, non-bipartite planar graphs still did not yield, the stumbling block being odd tight cuts. Interestingly enough, these are also the key to the solution: a balanced odd tight cut leads to a straightforward divide-and-conquer NC algorithm. The remaining task is to find such a cut in NC. This requires several algorithmic ideas, such as finding a point in the interior of the minimum-weight face of the perfect matching polytope and uncrossing odd tight cuts.

    Paper available at: https://arxiv.org/pdf/1709.07822.pdf
    Joint work with Nima Anari.



2017 talks

  • Transitioning IoT Edge Networks to Smart Communities


    Speaker:

    Prof Anish Arora, Ohio State University

    Date:2017-12-19
    Time:11:00:00 (IST)
    Venue:SIT Building, Seminar room
    Abstract:

    This talk draws upon experiences in designing and deploying IoT based cyber-physical systems for anti-poaching and sound complaint mitigation to discuss aspects of robustness, management, scalability, and security of such systems. These aspects are essential to transitioning these systems, whether they be point solutions for individual enterprises or a part of open data systems that are emerging in the context of smart communities. Our discussion will focus on low power, resource constrained IoT devices, albeit the findings are relevant to more diverse IoT edge systems. The last part of the talk will describe Smart Columbus, a large effort aiming at providing diverse community services in Columbus OH, as a reference architecture for open system incorporation of IoT Edge Networks and other data sources.


    Bio:

    Anish Arora is Professor of Computer Science and Engineering at The Ohio State University, a faculty-in-residence in OSU's Translational Data Analytics Institute, and a co-founder of The Samraksh Company. Arora has engaged in research, development, and translation to practice of device networks and wireless sensor networks since 1999. He is an expert in the scalability and dependability (fault-tolerance, stabilization, security, and robustness) of these networked systems, and was made an IEEE Fellow in 2008 for contributions to this field. Arora has led the design and development of many fielded wireless networked systems across several application areas. Ongoing related efforts in the Smart and Connected Community context include sound complaint classification (SONYC in New York City), anti-poacher surveillance networks (HORNNET in South Africa and India), bicycle-pedestrian monitoring sensor networks (PedoCyc at OSU), and the Tuscarora software framework (open-sourced by DARPA) for supporting development of application-specific networks. He is currently a member of OSU's Smart Columbus core team, which is working on smart and connected community projects, including some in low-income areas.



  • Information-Centric Networking in Wireless and Mobile Networks


    Speaker:

    Prof. Torsten Braun, University of Bern, Switzerland

    Date:2017-12-15
    Time:10:15:00 (IST)
    Venue:SIT Committee Room, #113
    Abstract:

    In this talk we discuss challenges and opportunities of information-centric networking (ICN) in wireless and mobile networks. First, we describe NetCod, an approach using network coding in multi-path network environments to support efficient and reliable video streaming. We investigate the use of ICN in vehicular networks and present a solution for the Reverse Path Partitioning problem, which occurs when Data messages cannot be sent along the same path as established by Interest messages. We discuss a virtualized architecture for cellular networks, where edge network caches can be proactively filled with data dependent on users' mobility. This approach, however, achieves good performance only when users' mobility can be predicted accurately. Thus, we investigated proactive caching mechanisms, which are more robust to mobility prediction mistakes, by caching in nodes somewhat closer to the core of the network. Finally, we discuss our recent work on mobility prediction in cellular networks exploiting the combination of various machine learning mechanisms.

    CV: http://www.cds.unibe.ch/about_us/team/prof_dr_braun_torsten


  • Estimating Matching Size in Graph Streams


    Speaker:

    Prof. Sanjeev Khanna (Univ. of Pennsylvania)

    Date:2017-12-15
    Time:15:00:00 (IST)
    Venue:Room # 425, Bharti Building
    Abstract:

    We study the problem of estimating the maximum matching size in sublinear space when the edges of a graph are revealed in a streaming manner. In particular, we consider the following question: Is estimating the maximum matching size a distinctly easier task than finding an approximate matching in the streaming model of computation? Previously known results do not provide any separation in the space requirements for these two tasks.

    We establish new upper and lower bound results for tradeoff between space requirement and desired accuracy in estimating maximum matching size. Our results show that while the problem of matching size estimation is provably easier than the problem of finding an approximate matching, the space complexity of the two problems starts to converge together as the accuracy desired in the computation approaches near-optimality. A well-known connection between matching size estimation and computing rank of Tutte matrices allows us to further carry our lower bound results to the problem of estimating rank of a matrix in the streaming model, and we show that almost quadratic space is necessary to obtain a good approximation to matrix rank.

    Based on a joint work with Sepehr Assadi and Yang Li.
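    For orientation, the standard one-pass baseline (not an algorithm from this paper) is the greedy maximal matching: it maintains a matching over the stream and its size is within a factor 2 of the maximum, but it stores the matching itself. The talk asks how much less space suffices when only the size must be estimated.

    ```python
    # One-pass greedy maximal matching over an edge stream. The matching it
    # builds is maximal, so its size 2-approximates the maximum matching size.

    def greedy_matching_size(edge_stream):
        matched = set()
        size = 0
        for u, v in edge_stream:
            if u not in matched and v not in matched:
                matched.update((u, v))      # match this edge greedily
                size += 1
        return size

    # A path on 5 vertices streamed in order: greedy picks (1,2) and (3,4).
    print(greedy_matching_size([(1, 2), (2, 3), (3, 4), (4, 5)]))  # → 2
    ```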


  • Managing Systems Memory


    Speaker:

    Prof. K. Gopinath from IISc Bangalore

    Date:2017-11-22
    Time:11:30:00 (IST)
    Venue:SIT Committee Room, #113
    Abstract:

    Memory management has been an important problem since the dawn of the computing era. With larger memories and many specialized subsystems, the problem of managing memory is becoming more complex. In this talk, we give a perspective on the problem, including some results from recent research on memory-management interactions with RCU (read-copy-update) and huge pages.


  • Low-Latency Analytics on Colossal Data Streams with SummaryStore


    Speaker:

    Nitin Agrawal from Samsung research

    Date:2017-11-20
    Time:15:30:00 (IST)
    Venue:SIT Committee Room, #113
    Abstract:

    In this talk I will present SummaryStore, an approximate time-series storage system designed for real-time analytics and machine learning at unprecedented cost savings. SummaryStore is capable of storing large volumes of data streams (~petabyte on a single node) as highly compact summaries through a novel abstraction of time-decayed summaries; it can provide accurate query responses, and tightly-bound error estimates, on data approximated by factors of 10-100x. This work was recently presented at SOSP '17.
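    The time-decayed summary idea can be sketched with a toy aggregator: recent data is kept at full resolution while older data is merged into coarser windows. The merge rule below (always coalesce the two oldest windows) is a deliberate simplification invented for illustration, not SummaryStore's actual decay scheme.

    ```python
    # Keep per-window (count, sum) aggregates over a stream; when the window
    # budget is exceeded, coarsen the oldest data first.

    def decay_windows(values, max_windows=4):
        windows = []                            # each window: (count, total)
        for x in values:
            windows.append((1, x))              # newest item: exact window
            while len(windows) > max_windows:
                (c1, s1), (c2, s2) = windows[0], windows[1]
                windows[:2] = [(c1 + c2, s1 + s2)]   # merge the two oldest
        return windows

    w = decay_windows(range(10))
    print(w)                        # oldest window covers many items, newest one
    print(sum(c for c, _ in w))     # → 10 (no items lost, only resolution)
    ```

    Queries over old time ranges are then answered from the coarse aggregates with correspondingly wider error bounds, which is the trade-off the abstract quantifies.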


    Bio:

    Nitin Agrawal is a systems researcher currently at Samsung Research. His interests lie broadly in distributed, mobile, and storage systems. He has received best paper awards at USENIX FAST 2009, 2011, 2012 and his work on storage for smartphones was featured in the MIT Technology Review, Slashdot, and other popular press. He is currently serving as the program committee chair for FAST 2018 and previously chaired HotStorage 2016. He obtained a PhD from Wisconsin in 2009.



  • Challenges and Opportunities in Wearable Systems


    Speaker:

    Prof. David F. Kotz, Champion International Professor, Department of Computer Science, Dartmouth College, Hanover, NH.

    Date:2017-11-10
    Time:16:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    Wearable systems offer great promise in application domains as varied as healthcare, eldercare, augmented work, education, athletics, entertainment, parenting, travel, and personal productivity. In particular, I highlight two of our own wearable-mHealth projects. First, I introduce the Amulet, a wristband with the capability of running multiple mHealth apps with strong security and long battery life. Second, I introduce the Auracle, which aims to be an unobtrusive earpiece that can detect eating incidents in support of eating-behavior studies and interventions. I leverage these examples to outline some of the broader opportunities and challenges faced in developing wearable technology. In particular, we must address security and privacy challenges as we design and develop wearable devices, systems, and applications if we hope to see this technology widely accepted and adopted. Strong (and usable) security mechanisms are essential for safety-critical applications; meaningful privacy protections are essential for systems that accompany us through our private and public life. By raising these concerns today, I seek to ensure the wearable systems of tomorrow will have strong and usable security and privacy properties.


    Bio:

    David Kotz is the Champion International Professor in the Department of Computer Science at Dartmouth College. He served as Associate Dean of the Faculty for the Sciences for six years and as the Executive Director of the Institute for Security Technology Studies for four years. He served on the US Healthcare IT Policy Committee from 2013-17. His research interests include security and privacy, pervasive computing for healthcare, and wireless networks. He has published over 130 refereed journal and conference papers and obtained over m in grant funding. He is PI of a m grant from the NSF Secure and Trustworthy Cyberspace program and leads a five-university team investigating Trustworthy Health & Wellness technology (see thaw.org). He is an IEEE Fellow, a Senior Member of the ACM, a 2008 Fulbright Fellow to India, and an elected member of Phi Beta Kappa.
    After receiving his A.B. in Computer Science and Physics from Dartmouth in 1986, he completed his Ph.D in Computer Science from Duke University in 1991 and returned to Dartmouth to join the faculty. For more information see http://www.cs.dartmouth.edu/~dfk/.

    photo:
    http://www.cs.dartmouth.edu/~dfk/photos/Kotz2013/
    http://www.cs.dartmouth.edu/~dfk/photos/Kotz2013/Kotz-20130624-DSC03617-face.jpg



  • Make in India: Massively scalable WiFi Networks


    Speaker:

    Dr. Pravin Bhagwat, CTO and co-founder Mojo Networks www.mojonetworks.com

    Date:2017-10-30
    Time:14:00:00 (IST)
    Venue:101, Bharti Building
    Abstract:

    Mojo Networks is revolutionizing WiFi through the power of cloud and open standards, making WiFi networks remarkably easy to manage, reliable at massive scale, and incredibly cost-effective. The company started as a 'Make in India' experiment fourteen years ago. It has now grown into a provider of Cloud Managed Wi-Fi for the global market. In partnership with Reliance Jio, we are building the world's largest public WiFi network: ~100,000 access points have been deployed throughout the country, and the network is rapidly growing with a target of 1 million access points by next year. I'll present three key architecture innovations, (1) cloud management, (2) tri-radio WiFi access points and (3) open programmable APIs, which offer groundbreaking advantages over the controller-based WLAN architecture prevalent in large enterprises and higher-education campuses today. I'll also talk about new developments in the Open Compute Project (OCP) that are pushing technology ecosystem players towards open standards, more interoperability, and the disaggregation of software and hardware.


    Bio:

    Pravin Bhagwat is an entrepreneur and a wireless security researcher. He is Co-founder and CTO of Mojo Networks -- a provider of highly scalable & secure cloud-managed Wi-Fi solutions. Mojo Networks has built a world class engineering team in Pune which is successfully launching 'first-of-its-kind' Cloud managed Wi-Fi products in the global market.

    Prior to starting his entrepreneurial career, Pravin was a lead researcher at AT&T Research and the IBM Thomas J. Watson Research Center, New York, where he was a member of the MobileIP, WiFi and Bluetooth research teams. He also served as a visiting faculty member in the Computer Science department at IIT Kanpur. Pravin has 27 patents to his credit.

    For the past five years, Pravin has also been experimenting with a couple of non-tech ideas, such as leading a community level green/recycling initiative, jumpstarting an eco-restoration project near Pune and making a feature film length documentary. Pravin has B.Tech. in Computer Science from IIT Kanpur, India and MS/PhD in computer science from the University of Maryland, College Park, USA.



  • Wadhwani Institute for Artificial Intelligence: Independent Not-For-Profit Research Institute for Social Good.


    Speaker:

    Dr. P. Anandan , CEO Wadhwani Institute for Artificial Intelligence

    Date:2017-10-27
    Time:14:30:00 (IST)
    Venue:101, Bharti Building
    Abstract:

    WIAI is a new philanthropic effort by the Wadhwani brothers, Romesh Wadhwani and Sunil Wadhwani, entrepreneurs of Indian origin living in the US. Our mission is to do research in AI, ML, Data Science and related areas that will address societal challenges in a variety of domains including (but not limited to) Education, Health, Infrastructure, and Agriculture. We plan to build a strong team of researchers dedicated to this mission and to work with a variety of partners from academia, NGOs, government and international agencies, industry and others. We plan to take an approach that is strongly driven by societal use cases, keeping in mind the feasibility of deployment at scale as the ultimate measure of impact. The Institute will be headquartered in India and will begin its work by addressing opportunities for research on societal impact in India.


    Bio:

    CEO, Wadhwani Institute for Artificial Intelligence



  • SpanFS - Building a Distributed Filesystem with Strict Consistency.


    Speaker:

    Apurv Gupta

    Date:2017-10-26
    Time:16:15:00 (IST)
    Venue:404, Bharti building
    Abstract:

    This talk will cover the key challenges of marrying the strict consistency required by POSIX with building a scale-out filesystem that can span thousands of nodes. Traditionally, cloud-scale storage systems gave up some properties in order to achieve scalability (e.g., GFS/S3), and enterprise stacks (built on SMB/NFS) could not benefit from these new protocols. Cohesity's SpanFS solves these problems. We'll also touch upon newer filesystems like BTRFS and the similarities it shares with SpanFS.


    Bio:

    Apurv is the Chief Architect at Cohesity. His career spans Bell Labs, several startups, including GreenBorder (acquired by Google), and over 8 years at Google. He was also the Co-founder of ArkinNet (acquired by VMWare). He is passionate about building highly scalable systems - query engines, data warehousing systems and most recently, filesystems. Apurv holds a B.Tech. from Indian Institute of Technology, Kanpur.




  • Pfaffian Orientations and Conformal Minors


    Speaker:

    Nishad Kothari, U. of Waterloo

    Date:2017-09-28
    Time:16:00:00 (IST)
    Venue:Room # 425, Bharti Building
    Abstract:

    Valiant (1979) showed that unless P = NP, there is no polynomial-time algorithm to compute the number of perfect matchings of a given graph, even if the input graph is bipartite. Earlier, the physicist Kasteleyn (1963) introduced the notion of a special type of orientation of a graph, and he referred to graphs that admit such an orientation as Pfaffian graphs. Kasteleyn showed that the number of perfect matchings is easy to compute if the input graph is Pfaffian, and he also proved that every planar graph is Pfaffian. The complete bipartite graph K3,3 is the smallest graph that is not Pfaffian. In general, the problem of deciding whether a given graph is Pfaffian is not known to be polynomial-time solvable. Special types of minors, known as conformal minors, play a key role in the theory of Pfaffian orientations. In particular, a graph is Pfaffian if and only if each of its conformal minors is Pfaffian. Little (1975) showed that a bipartite graph G is Pfaffian if and only if G does not contain K3,3 as a conformal minor (in other words, if and only if G is K3,3-free); this places the problem of deciding whether a bipartite graph is Pfaffian in co-NP. Several years later, a structural characterization of K3,3-free bipartite graphs was obtained by McCuaig, Robertson, Seymour and Thomas (STOC 1997), and this led to a polynomial-time algorithm for deciding whether a given bipartite graph is Pfaffian. No such algorithm is known for general graphs. Norine and Thomas (2008) showed that, unlike the bipartite case, it is not possible to characterize all Pfaffian graphs by excluding a finite number of graphs as conformal minors.
    In light of everything that has been done so far, it would be interesting to even identify rich classes of Pfaffian graphs (that are nonplanar and nonbipartite). Inspired by a theorem of Lovasz (1983), we took on the task of characterizing graphs that do not contain K4 as a conformal minor, that is, K4-free graphs. In joint work with U. S. R. Murty (Journal of Graph Theory 2016), we provided a structural characterization of planar K4-free graphs. The problem of characterizing nonplanar K4-free graphs is much harder, and we have evidence to believe that it is related to the problem of recognizing Pfaffian graphs. In particular, we conjecture that every graph that is K4-free and K3,3-free is also Pfaffian.
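    The quantity at the heart of the theory can be made concrete: for a bipartite graph, the number of perfect matchings equals the permanent of its biadjacency matrix. The brute-force counter below (my illustration, not from the talk) is exponential; Kasteleyn's orientation turns the count into a polynomial-time determinant computation for Pfaffian graphs.

    ```python
    # Count perfect matchings of a bipartite graph by brute-forcing the
    # permanent of its 0/1 biadjacency matrix.
    from itertools import permutations

    def count_perfect_matchings(A):
        """A[i][j] = 1 iff left vertex i is adjacent to right vertex j."""
        n = len(A)
        return sum(all(A[i][p[i]] for i in range(n))
                   for p in permutations(range(n)))

    # K3,3, the smallest non-Pfaffian graph: every permutation is a matching.
    K33 = [[1, 1, 1]] * 3
    print(count_perfect_matchings(K33))   # → 6

    # A 6-cycle (bipartite and planar, hence Pfaffian) has exactly 2.
    C6 = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
    print(count_perfect_matchings(C6))    # → 2
    ```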


    Bio:

    Nishad Kothari is a Postdoctoral Fellow at the Department of Combinatorics and Optimization, University of Waterloo. His research is in structural and algorithmic graph theory with a focus on Matching Theory and Pfaffian Orientations. Nishad completed his PhD at Waterloo under the supervision of Joseph Cheriyan and U. S. R. Murty, and before that he completed a Masters degree in Computer Science at Georgia Tech. October 2017 onwards, Nishad will be a Postdoctoral Fellow at the Faculty of Mathematics, University of Vienna, where he plans to continue his work in Pfaffian orientations with Ilse Fischer.



  • On Demystifying CNF-XOR Formulas


    Speaker:

    Dr. Kuldeep Meel, professor at National University of Singapore.

    Date:2017-09-13
    Time:12:00:00 (IST)
    Venue:SIT , Seminar Room
    Abstract:

    The Boolean Satisfiability Problem (SAT) is fundamental in many diverse areas such as artificial intelligence, formal verification, and biology. A large body of work explains the runtime performance of modern SAT solvers through analysis of random k-CNF formulas. Recent universal-hashing based approaches to the problems of sampling and counting crucially depend on the runtime performance of specialized SAT solvers on formulas expressed as the conjunction of both k-CNF constraints and variable-width XOR constraints (known as CNF-XOR formulas), but random CNF-XOR formulas are unexplored in prior work.
    In this work, we present the first study of random CNF-XOR formulas. We show empirical evidence of a surprising phase-transition in the satisfiability of random CNF-XOR formulas that follows a linear trade-off between k-CNF and XOR constraints. Moreover, we prove that a phase-transition in the satisfiability of random CNF-XOR formulas exists for k=2 and (when the number of k-CNF constraints is small) for k>2. We empirically demonstrate that a state-of-the-art SAT solver scales exponentially on random CNF-XOR formulas across a wide range of XOR-clause densities, peaking around the empirical phase-transition location. Finally, we prove that the solution space of random CNF-XOR formulas 'shatters' at all nonzero XOR-clause densities into well-separated components.
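    A random CNF-XOR formula of the kind studied here is straightforward to sample: m random k-CNF clauses conjoined with mx random XOR constraints, each XOR including every variable independently with probability 1/2. The sketch below is an illustrative generator and checker (parameter names are mine, not from the paper):

    ```python
    # Sample and evaluate a random CNF-XOR formula.
    import random

    def random_cnf_xor(n, k, m, mx, seed=0):
        rng = random.Random(seed)
        # m clauses of k distinct variables, each negated with probability 1/2.
        cnf = [[(v + 1) * rng.choice((1, -1))
                for v in rng.sample(range(n), k)]
               for _ in range(m)]
        # mx XORs: each variable included w.p. 1/2, plus a random parity bit.
        xors = [([v + 1 for v in range(n) if rng.random() < 0.5],
                 rng.choice((0, 1)))
                for _ in range(mx)]
        return cnf, xors

    def satisfies(assign, cnf, xors):
        """assign: dict mapping variable (1..n) to 0/1."""
        ok_cnf = all(any((lit > 0) == bool(assign[abs(lit)]) for lit in c)
                     for c in cnf)
        ok_xor = all(sum(assign[v] for v in vs) % 2 == b for vs, b in xors)
        return ok_cnf and ok_xor

    cnf, xors = random_cnf_xor(n=6, k=3, m=4, mx=2)
    print(len(cnf), len(xors))   # → 4 2
    ```

    The phase-transition question is then: as the clause densities m/n and mx/n vary, where does the probability of satisfiability drop from near 1 to near 0?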


    Bio:

    Kuldeep Meel is starting as Assistant Professor at National University of Singapore. His research broadly lies at the intersection of artificial intelligence and formal methods. He is the recipient of a 2016-17 IBM PhD Fellowship, the 2016-17 Lodieska Stockbridge Vaughn Fellowship and the 2014 Vienna Center of Logic and Algorithms International Outstanding Masters thesis award.



  • Microsoft Research Fellows Program


    Speaker:

    Dr. Kalika Bali , Microsoft Research Bangalore.

    Date:2017-09-04
    Time:16:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    The Research Fellows program at Microsoft Research India (MSR India) exposes bright, young minds in India to world-class research and the state-of-the-art in technology. The program prepares students for careers in research, engineering, as well as entrepreneurship. MSR India has some of the best researchers pushing the frontiers of computer science and technology. Research Fellows have contributed to all aspects of the research lifecycle, spanning ideation, implementation, evaluation, and impact.

    Please note: For the next cycle of RF applications candidates should have completed BS/BE/BTech or MS/ME/MTech in Computer Science or related areas, graduating by summer 2018.


    Bio:

    Kalika Bali is a Researcher at Microsoft Research Lab India. She has worked for nearly two decades in the area of Speech and Language Technology, especially for resource-poor languages. The primary focus of her current research is on how Natural Language systems can help make Human-Machine Interaction more socio-culturally relevant and appropriate.



  • New Techniques for Power-Efficient CPU/CPU-GPU Processors


    Speaker:

    Dr. Kapil Dev, NVIDIA, Santa Clara, USA

    Date:2017-08-31
    Time:15:45:00 (IST)
    Venue:SIT Seminar Room
    Abstract:

    After the end of Dennard scaling, multi-core CPU and CPU-GPU processors have emerged as the primary computing devices for satisfying the high performance demands of modern applications. The performance of these devices is typically limited by the power density and thermal design power (TDP) of the chip. Specifically, power efficiency (performance/Watt) is the key factor in improving the performance of modern computing devices used in mobile, desktop and server computers.

    In this talk, I will highlight a couple of new techniques to build power-efficient CPU-GPU processors. First, I will present a framework to perform fine-grained power mapping and modeling of multi-core processors. The framework uses a high-end infrared camera to capture thermal images of the processor to generate accurate power models, which can potentially be used for better run-time power management of the processor. In the second half of my talk, I will present a machine learning-based technique to map workloads onto the appropriate device of a CPU-GPU processor under different static and time-varying workload/system conditions. Overall, the proposed techniques could be used to implement an efficient power management unit for CPU/CPU-GPU processors.


    Bio:

    Dr. Kapil Dev is a Senior Engineer at Nvidia, Santa Clara. He received his PhD from Brown University, Providence, Rhode Island in 2016. He has published 10 papers in peer-reviewed international conferences and journals and has co-authored 4 US patents. Further, he is an active reviewer for multiple journals, including the IEEE Transactions on Very Large Scale Integration Systems and Elsevier Integration, the VLSI Journal. He has also served on the program committees of conferences such as DAC, HPC, and ICCAD. His research interests are in the field of building energy- and power-efficient computing systems. He is particularly interested in applying machine learning techniques to improve design and run-time power methodologies for CPU-GPU processors.



  • Internet of Things: Information Analytics, Energy-Efficiency and Hardware Security


    Speaker:

    Prof. Keshab K. Parhi (http://www.ece.umn.edu/~parhi)

    Date:2017-08-25
    Time:11:00:00 (IST)
    Venue:SIT Building, Seminar room
    Abstract:

    This talk will address VLSI architectures for the emerging internet-of-things. Machine learning and information analytics are important components of all these things, and reducing energy consumption and silicon area is critical, as are enhancing security and preventing piracy. Almost all things should have embedded classifiers to make decisions on data; thus, reducing the energy consumption of features and classifiers is important. The first part of the talk will present energy reduction approaches for a medical internet-of-thing for monitoring EEG and predicting seizures. In the second part of the talk, I will address hardware security and present approaches to designing circuits that cannot be easily reverse engineered or pirated. To this end, authentication and obfuscation approaches will be presented.


    Bio:

    Keshab K. Parhi received the B.Tech. degree from the Indian Institute of Technology (IIT), Kharagpur, in 1982, the M.S.E.E. degree from the University of Pennsylvania, Philadelphia, in 1984, and the Ph.D. degree from the University of California, Berkeley, in 1988. He has been with the University of Minnesota, Minneapolis, since 1988, where he is currently Distinguished McKnight University Professor and Edgar F. Johnson Professor in the Department of Electrical and Computer Engineering. He has published over 600 papers, is the inventor of 29 patents, and has authored the textbook VLSI Digital Signal Processing Systems (Wiley, 1999) and coedited the reference book Digital Signal Processing for Multimedia Systems (Marcel Dekker, 1999). Dr. Parhi is widely recognized for his work on high-level transformations of iterative data-flow computations, for developing a formal theory of computing for design of digital signal processing systems, and for his contributions to multi-gigabit Ethernet systems on copper and fiber and for backplanes. His current research addresses VLSI architecture design of signal processing, communications and biomedical systems, error control coders and cryptography architectures, high-speed transceivers, stochastic computing, hardware security, and molecular computing.  He is also currently working on intelligent classification of biomedical signals and images, for applications such as seizure prediction and detection, schizophrenia classification, biomarkers for mental disorders, brain connectivity, and diabetic retinopathy screening.  Dr. Parhi is the recipient of numerous awards including the 2017 Mac Van Valkenburg award and the 2012 Charles A. Desoer Technical Achievement award from the IEEE Circuits and Systems Society, the 2004 F. E. Terman award from the American Society of Engineering Education, the 2003 IEEE Kiyo Tomiyasu Technical Field Award, the 2001 IEEE W. R. G. 
Baker prize paper award, and a Golden Jubilee medal from the IEEE Circuits and Systems Society in 2000. He was elected a Fellow of IEEE in 1996. He served as the Editor-in-Chief of the IEEE Trans. Circuits and Systems, Part I during 2004-2005 and as an elected member of the Board of Governors of the IEEE Circuits and Systems society from 2005 to 2007.



  • The Era of Accelerators


    Speaker:

    Viktor K. Prasanna, University of Southern California

    Date:2017-08-02
    Time:15:00:00 (IST)
    Venue:SIT Building, Seminar room
    Abstract:

    FPGAs have matured over the years and are being used along with multi-core and emerging memory technologies to realize advanced platforms to accelerate a variety of applications. This talk will review the promise of reconfigurable computing leading up to current trends in accelerators. We will illustrate FPGA-based parallel architectures and algorithms for a variety of data analytics kernels in advanced networking, streaming graph processing and machine learning. While demonstrating algorithm-architecture co-design methodology to realize high performance accelerators for deep packet inspection, regular expression matching, packet classification, traffic classification, heavy hitter detection, etc., we demonstrate the role of modeling and algorithmic optimizations to develop highly efficient IP cores. We also show high throughput and energy efficient accelerator designs for a class of graph analytics and machine learning kernels. Our approach is based on high level abstractions of the FPGA platforms and design of efficient data structures, algorithms and mapping methodologies. We illustrate the performance improvements such accelerators offer and demonstrate the suitability of FPGAs for these computations. We conclude by identifying opportunities and challenges in exploiting emerging heterogeneous architectures composed of multi-core processors, FPGAs, GPUs and coherent memory.


    Bio:

    Viktor K. Prasanna (ceng.usc.edu/~prasanna) is Charles Lee Powell Chair in Engineering in the Ming Hsieh Department of Electrical Engineering and Professor of Computer Science at the University of Southern California. He is the director of the Center for Energy Informatics at USC and leads the FPGA (fpga.usc.edu) and Data Science Labs (dslab.usc.edu). His research interests include parallel and distributed systems including networked sensor systems, embedded systems, configurable architectures and high performance computing. He served as the Editor-in-Chief of the IEEE Transactions on Computers during 2003-06 and is currently the Editor-in-Chief of the Journal of Parallel and Distributed Computing. Prasanna was the founding Chair of the IEEE Computer Society Technical Committee on Parallel Processing. He is the Steering Co-chair of the IEEE International Parallel and Distributed Processing Symposium (www.ipdps.org) and the Steering Chair of the IEEE International Conference on High Performance Computing (www.hipc.org). He is a Fellow of the IEEE, the ACM and the American Association for the Advancement of Science (AAAS). He is a recipient of the 2009 Outstanding Engineering Alumnus Award from the Pennsylvania State University. He received the 2015 W. Wallace McDowell award from the IEEE Computer Society for his contributions to reconfigurable computing.



  • Novel Markets on the Internet: Models and Algorithms


    Speaker:

    Prof Vijay Vazirani

    Date:2017-07-26
    Time:11:30:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    The Internet has revolutionized markets; perhaps the most impressive of these in terms of size, impact and novelty of mechanisms is Google's Adwords market. Google's initial mechanism for this market did not effectively serve small advertisers, who form the fat tail of the budget distribution, thereby losing a significant fraction of potential revenue. Building on the theory of online matching algorithms, we gave an optimal algorithm for the Adwords market and a formal framework that has proved useful in thinking about budgeted auctions more generally. These ideas have been widely adopted by Google and other search engine companies.

    The rapidly growing cloud computing market, which is projected to eclipse even the Adwords market, provides another opportunity for novel algorithms and practical impact. I will describe a first attempt in this direction: an equilibrium-based market model for pricing and allocating resources, and a polynomial time algorithm for computing these prices and allocations without relying on the proverbial ``invisible hand of the market''.


    Bio:

    Vijay Vazirani got his Bachelor's degree from MIT in 1979, his Ph.D. from U.C. Berkeley in 1983, and is currently Professor of Computer Science at Georgia Tech. His research career has centered around the design of algorithms, together with work on complexity theory, cryptography, coding theory, and game theory. He is best known for his work on efficient algorithms for the classical maximum matching problem (1980's), fundamental complexity-theoretic results obtained using randomization (1980's), approximation algorithms for basic NP-hard optimization problems (1990's), and efficient algorithms for computing market equilibria (current). In 2001 he published what is widely viewed as the definitive book on approximation algorithms; it has been translated into Japanese, French and Polish. In 2005 he initiated work on a comprehensive volume on algorithmic game theory; the co-edited volume appeared in 2007.



  • Faster test execution by improving cache locality.


    Speaker:

    Ajitha Rajan, University of Edinburgh

    Date:2017-07-25
    Time:12:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    With the growing complexity of software, the number of test cases needed for effective validation is extremely large. Executing these large test suites is expensive, both in terms of time and energy. Cache misses are known to be one of the main factors contributing to the execution time of software, and they are reduced by increasing the locality of memory references. In this talk, I will discuss a novel approach to improve instruction locality across test case runs. Our approach measures the distance between test case runs (the number of differing instructions) and then permutes the test cases for execution so that the distance between neighbouring test cases is minimised. We hypothesise that executing test cases in this new order, with improved instruction locality, will reduce the time consumed.
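    The reordering idea in the abstract can be sketched as follows, assuming each test's instruction coverage is available as a set. The greedy nearest-neighbour heuristic below is one simple way to keep neighbouring tests close; the talk's actual algorithm and data format may differ.

```python
def distance(a, b):
    """Number of instructions executed by one test run but not the other
    (size of the symmetric difference of their instruction sets)."""
    return len(a ^ b)

def order_tests(coverage):
    """Greedy nearest-neighbour permutation: start with the first test,
    then repeatedly schedule the remaining test whose instruction set is
    closest to the previously scheduled one, so that consecutive runs
    share as much cache content as possible."""
    names = list(coverage)
    order = [names.pop(0)]
    while names:
        prev = coverage[order[-1]]
        nxt = min(names, key=lambda t: distance(prev, coverage[t]))
        names.remove(nxt)
        order.append(nxt)
    return order
```

    Finding the truly optimal order is a travelling-salesman-style problem, which is why a cheap greedy heuristic is a natural starting point.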


    Bio:

    Ajitha Rajan is an assistant professor at the University of Edinburgh. Prior to that she was a postdoc at the University of Oxford and the Laboratoire d'Informatique de Grenoble. She graduated with a PhD from the University of Minnesota. Her research is in the area of software engineering, particularly software testing. She has served on the program committees of major international software engineering conferences like ASE, ICST and FMCAD.




  • Geometric Optimization Problems


    Speaker:

    Dr. Saurabh Ray

    Date:2017-07-21
    Time:12:00:00 (IST)
    Venue:SIT Building, #006
    Abstract:

    We consider classical combinatorial optimization problems (e.g. independent set, set cover, dominating set etc) in a geometric setting where the set system is induced by geometric objects. Even though these problems remain NP-hard, much better approximation algorithms are often possible due to the special structure in geometrically induced problems.

    In this talk we give an overview of broad techniques and results in this area, including some of our recent results. We also present a number of open problems.
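    As a concrete reference point for the set cover problem mentioned above, here is the classical greedy approximation (ln n ratio in general); this is the standard baseline that geometric techniques improve upon, not an algorithm from the talk.

```python
def greedy_set_cover(universe, sets):
    """Classic greedy set cover: repeatedly pick the set that covers the
    most still-uncovered elements. Gives a ln(n)-approximation in general;
    geometrically induced instances often admit much better ratios."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & s))
        if not (uncovered & best):
            raise ValueError("universe not coverable by the given sets")
        chosen.append(best)
        uncovered -= best
    return chosen
```

    For set systems induced by geometric objects (disks, rectangles, halfplanes), LP rounding and local-search techniques can beat this logarithmic guarantee, which is one theme of the area surveyed in the talk.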


    Bio:

    Dr. Saurabh Ray is currently on the Computer Science faculty at NYU Abu Dhabi.
    Dr. Ray completed his undergraduate studies in Computer Science and Engineering at the Indian Institute of Technology Guwahati. He received a Ph.D. in Computer Science from Universität des Saarlandes in Germany in 2009 and has since done postdoctoral research at the Max-Planck Institute for Informatics in Germany, École Polytechnique Fédérale de Lausanne in Switzerland and Ben Gurion University in Israel. As a postdoc, Saurabh has spent time in both Computer Science and Mathematics departments. His research interests are in discrete and computational geometry, extremal combinatorics, and approximation algorithms. He is particularly excited about novel applications of mathematics in Computer Science.



  • Is Interaction Necessary for Distributed Private Learning?


    Speaker:

    Jalaj Upadhyay, Penn State U.

    Date:2017-06-19
    Time:10:00:00 (IST)
    Venue:Room # 425, Bharti Building
    Abstract:

    Recent large-scale deployments of differentially private algorithms employ the local model for privacy (sometimes called PRAM or randomized response), where data are randomized on each individual's device before being sent to a server that computes approximate, aggregate statistics. The server need not be trusted for privacy, leaving data control in users' hands.

    For an important class of convex optimization problems (including logistic regression, support vector machines, and the Euclidean median), the best known locally differentially-private algorithms are highly interactive, requiring as many rounds of back and forth as there are users in the protocol.

    We ask: how much interaction is necessary to optimize convex functions in the local DP model? Existing lower bounds either do not apply to convex optimization, or say nothing about interaction.

    We provide new algorithms which are either noninteractive or use relatively few rounds of interaction. We also show lower bounds on the accuracy of an important class of noninteractive algorithms, suggesting a separation between what is possible with and without interaction.

    Based on a joint work with Adam Smith and Abhradeep Thakurta.
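    For orientation, the canonical mechanism of the local model mentioned above (randomized response) can be sketched as follows. This is standard background, not one of the paper's algorithms; the function names are illustrative.

```python
import math
import random

def randomized_response(bit, epsilon, rng):
    """Locally private release of one bit: report it truthfully with
    probability e^eps / (e^eps + 1), otherwise flip it. This satisfies
    epsilon-local differential privacy."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if rng.random() < p else 1 - bit

def estimate_mean(reports, epsilon):
    """Debias the noisy reports: an unbiased server-side estimate of the
    true fraction of ones among the users."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return (sum(reports) / len(reports) - (1 - p)) / (2 * p - 1)
```

    Note this mechanism is noninteractive: each user sends a single randomized message, which is exactly the regime whose power the talk investigates for convex optimization.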


  • Energy Efficiency and Reliability in Many-Core Embedded Systems


    Speaker:

    Amit Kumar Singh, Univ. of Southampton

    Date:2017-04-20
    Time:11:30:00 (IST)
    Venue:SIT SEMINAR ROOM #001
    Abstract:

    Modern embedded systems, e.g., smart phones, PDAs and tablet PCs, often need to support multiple embedded software applications, and this number is increasing faster than ever. In order to satisfy the high performance requirements of various applications, reliance on multi/many-core based systems is increasing. These systems require optimization of the relevant performance metrics in order to fulfill end-user demands. Energy consumption and reliability are two important metrics, as their optimization leads to increased battery life and better user experience, respectively. To fulfill these requirements, efficient run-time mapping methodologies are required that can map and execute applications efficiently on the many-core system resources.

    This talk will provide an overview of the run-time resource management activities carried out within the PRiME project to optimize energy consumption and reliability. Additionally, I will give details of some recently proposed run-time management methodologies, highlighting the potential bottlenecks of existing ones. These methodologies exploit application domain knowledge to perform compute-intensive design space exploration at design-time, so that only light-weight computations need to be performed at run-time. Results obtained by the proposed and relevant existing methodologies will be presented to show their advantages over existing approaches.


    Bio:

    Dr. Amit Kumar Singh received the B.Tech degree in Electronics Engineering from the Indian Institute of Technology (Indian School of Mines), Dhanbad, India, in 2006, and the Ph.D. degree from the School of Computer Engineering, Nanyang Technological University (NTU), Singapore, in 2013. He was with HCL Technologies, India for a year and a half before starting his PhD at NTU, Singapore, in 2008. He worked as a post-doctoral researcher at the National University of Singapore (NUS) from 2012 to 2014 and at the University of York, UK from 2014 to 2016. Currently, he is a senior research fellow at the University of Southampton, UK. His current research interests include system-level design-time and run-time optimization of 2D and 3D multi-core systems with a focus on performance, energy, temperature, and reliability. Dr. Singh was the recipient of the ISORC 2016 Best Paper Award, the PDP 2015 Best Paper Award, a HiPEAC Paper Award, and was a GLSVLSI 2014 Best Paper Candidate. He has served on the TPCs of IEEE/ACM conferences such as ISED, MES, NoCArc and ESTIMedia.



  • "Data-Driven Techniques Towards Performance Optimization in Wireless Networks"


    Speaker:

    Ayon Chakraborty, SUNY Stony Brook

    Date:2017-03-22
    Time:12:00:00 (IST)
    Venue:SIT Seminar Room
    Abstract:

    In this talk, I will introduce two emerging issues in wireless networks, at the two extreme ends of the network protocol stack. The first deals with managing quality-of-experience (QoE) for mobile applications, and the second with large-scale monitoring of the radio frequency spectrum. For both directions, we show how data-driven techniques can be leveraged to model the performance of complicated real-world systems and to apply the necessary optimization.
    Guaranteeing good QoE for the user is a challenge in mobile apps due to their diverse resource requirements and the resource-constrained, variable nature of wireless networks and mobile devices. A key question we ask is: how to select the best network for a given application, or how to optimize the overall user experience for multiple users and applications in a given network? To address such issues, we develop a network-side system called ExBox (Experience Middlebox) that uses a new experiential capacity-based formulation to aid admission control and network selection. ExBox can handle multiple different types of applications in the network and dynamic network conditions.
    The second issue relates to RF spectrum limitations in the face of exponentially growing demand for wireless network capacity. This has prompted new spectrum sharing regimes where a diverse set of wireless technologies, including broadcast television, radar and various forms of wireless communications, co-exist in the same spectrum band while respecting specific spectrum rights. Our key question is: how effectively can we monitor available spectrum opportunities across time and space? One key technology in this space is spectrum databases that store spectrum availability information. We augment existing spectrum database technologies to improve their accuracy in a cost-effective fashion. The idea is to supplement existing model-based techniques with actual spectrum measurements. We also make the spectrum measurements themselves scalable by developing techniques to perform spectrum sensing on mobile devices. These efforts culminate in a spectrum database system called SpecSense that can schedule and collect measurements from a distributed system of spectrum sensors in order to estimate spatio-temporal patterns in spectrum availability.


    Bio:

    Ayon Chakraborty is a final-year Ph.D. candidate at the State University of New York at Stony Brook, working with Prof. Samir Das. He is broadly interested in mobile systems and data-driven performance analysis/optimization of such systems functioning in constrained environments. Many papers resulting from his research have appeared in top venues including ACM SIGCOMM CoNEXT, IMC, MOBICOM and IEEE INFOCOM. His work on characterizing quality-of-experience (QoE) among mobile virtual network operators (MVNOs) was nominated for the Best Paper Award at ACM IMC 2014. Before joining Stony Brook, Ayon completed his B.E. from Jadavpur University (Calcutta) in 2011, where he won the Department Gold Medal. Ayon is also a recipient of a DAAD scholarship.



  • Fusing Physical and Social Sensors for Situation Awareness


    Speaker:

    Mohan Kankanhalli, School of Computing, National University of Singapore

    Date:2017-03-09
    Time:12:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    With the spread of physical sensors and social sensors, we are living in a world of big sensor data. Though they generate heterogeneous data, they often provide complementary information. Combining these two types of sensors together enables sensor-enhanced social media analysis which can lead to a better understanding of dynamically occurring situations.

    We utilize event-related information detected from physical sensors to filter and then mine geo-located social media data, to obtain high-level semantic information. Specifically, we apply a suite of visual concept detectors on video cameras to generate "camera tweets" and develop a novel multi-layer tweeting-cameras framework. We fuse "camera tweets" and social media tweets via a unified matrix factorization model: we perform matrix factorization on a spatio-temporal situation matrix formed from physical sensor signals, incorporating the surrounding social content to extract a set of latent topics that can provide explanations for the concept signal strengths. We have tested our method on large-scale real data including PSI station data, traffic CCTV camera images and tweets, for situation prediction as well as for filtering noise from events of diverse situations. The experimental results show that the proposed approach is effective.
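    To make the factorization step concrete, below is a toy rank-1 alternating-least-squares sketch on a small matrix. The actual model in the talk factorizes a spatio-temporal situation matrix jointly with social content, which this minimal example does not attempt; all names here are illustrative.

```python
def rank1_factorize(M, iters=100):
    """Alternating least squares for a rank-1 factorization M ~ u v^T:
    fix u and solve for v in closed form, then fix v and solve for u,
    and repeat. Higher ranks work the same way column by column."""
    rows, cols = len(M), len(M[0])
    u = [1.0] * rows
    v = [1.0] * cols
    for _ in range(iters):
        for j in range(cols):
            v[j] = sum(M[i][j] * u[i] for i in range(rows)) / sum(x * x for x in u)
        for i in range(rows):
            u[i] = sum(M[i][j] * v[j] for j in range(cols)) / sum(x * x for x in v)
    return u, v
```

    In the fused model, rows and columns would correspond to locations and time slots, and the latent factors play the role of the "latent topics" that explain the concept signal strengths.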


    Bio:

    Mohan Kankanhalli is Provost's Chair Professor of Computer Science at the National University of Singapore (NUS). He is also the Dean of NUS School of Computing. Before becoming the Dean in July 2016, he was the NUS Vice Provost (Graduate Education) during 2014-2016 and Associate Provost during 2011-2013. Mohan obtained his BTech from IIT Kharagpur and MS & PhD from the Rensselaer Polytechnic Institute.

    His current research interests are in Multimedia Computing, Information Security, Image/Video Processing and Social Media Analysis. He directs the SeSaMe (Sensor-enhanced Social Media) Centre which does fundamental exploration of social cyber-physical systems which has applications in social sensing, sensor analytics and smart systems. He is on the editorial boards of several journals including the ACM Transactions on Multimedia, Springer Multimedia Systems Journal, Pattern Recognition Journal and Springer Multimedia Tools & Applications Journal. He is a Fellow of IEEE.



  • Urban sustainability, preserving privacy


    Speaker:

    Rijurekha Sen, MPI-SWS 

    Date:2017-03-06
    Time:12:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Developing regions are moving towards rapid urbanization, with tremendous growth in the urban population attracted by the economic prospects of cities. The growth and management of urban infrastructure has to match this population growth, to make our cities efficient, sustainable and more livable. In this talk, I will give a brief overview of different projects where I have used Computer Science methodologies for applications in urban sustainability. I will describe one project in detail, where we monitored traffic queues at low latency for automated intersection management, though the traffic was non-laned and disorderly, as is common in developing countries.

    The second part of my talk will focus on another system, for privacy-preserving sensing, where image capture devices can automatically sense the privacy preferences of people in their vicinity and edit photographs to match these preferences. This aspect of security and privacy is important to consider, as gathering information ubiquitously for better urban management exposes citizens to risks of privacy violation.


    Bio:

    Rijurekha Sen is a Humboldt post-doctoral researcher at Max-Planck Institute for Software Systems. She works in the area of cyber-physical systems and uses inter-disciplinary Computer Science methods for applications at the intersection of information technology and society. She completed her PhD at IIT Bombay and has interned at different industrial labs and startups like Microsoft Research India and SMU Livelabs. Her dissertation on automated road traffic monitoring in developing regions was awarded the ACM India Doctoral Dissertation Award, 2014. 



  • Robust Image Feature Description, Matching and Applications


    Speaker:

    Shiv Ram Dubey, IIIT Chitoor

    Date:2017-03-03
    Time:12:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Most applications of computer vision, such as image correspondence, image retrieval, object recognition, texture recognition, face and facial expression recognition, 3D reconstruction, etc., require matching two images. In order to match two images, or in other words to find the similarity/dissimilarity between them, some description of the image is required, because matching the raw intensity values of two images is time consuming and is affected by even small variations in inherent properties such as brightness, orientation and scaling. Thus, images can be matched using a description derived from basic properties of the image such as color, texture and shape. This description is called the feature descriptor/signature of the image. The main objectives of any descriptor are 1) to capture the discriminative information of the image, 2) to provide invariance towards geometric and photometric changes, and 3) to reduce the dimension of the feature to be matched.

    We have proposed an interleaved intensity order based local descriptor (IOLD) for region-based image matching under various geometric and photometric transformation conditions. We have also proposed four grayscale local image descriptors, namely the local diagonal extrema pattern (LDEP), local bit-plane decoded pattern (LBDP), local bit-plane dissimilarity pattern (LBDISP), and local wavelet pattern (LWP), for biomedical image retrieval in MRI and CT databases. We have reported four color based local descriptors, namely the local color occurrence descriptor (LCOD), rotation and scale robust hybrid descriptor (RSHD), multichannel adder based local binary pattern (maLBP), and multichannel decoder based local binary pattern (mdLBP), for natural and texture image retrieval. An illumination compensation mechanism has also been introduced as a preferred pre-processing step for brightness-invariant image retrieval.
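    Several of the descriptors above build on the classic local binary pattern (LBP) family. For orientation, here is a minimal sketch of the basic 3x3 LBP code computation on a grayscale image stored as nested lists; it is background only, not any of the proposed descriptors.

```python
def lbp_codes(img):
    """Basic 3x3 local binary pattern: threshold each interior pixel's 8
    neighbours against the centre value and pack the resulting bits into
    one byte per pixel. Histograms of these codes form a texture feature."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(img), len(img[0])
    codes = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= c:
                    code |= 1 << bit
            codes.append(code)
    return codes
```

    The code depends only on the sign of neighbour-centre differences, which is what gives LBP-style descriptors their robustness to monotonic brightness changes.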


  • Adaptivity and Optimization for Physics-Based Animation


    Speaker:

    Rahul Narain, U. of Minnesota

    Date:2017-03-02
    Time:10:00:00 (IST)
    Venue:301, Bharti building
    Abstract:

    Many visually compelling natural phenomena, from the wrinkling and creasing of thin sheets to the collision-free flow of pedestrians in large crowds, are challenging to model computationally because they involve nonlinear interactions across a large number of degrees of freedom. For many applications that must be responsive to user input, such as computer animation, virtual environments, and computer-aided design, it is essential to obtain a sufficiently accurate solution under a very limited computational budget.

    In this talk, I will present my work on efficient computational methods for solving such simulation problems. First, I will describe an adaptive remeshing technique for simulating thin elastic sheets such as cloth. Through remeshing, one can dynamically adapt the resolution of the simulation mesh as the simulation proceeds, focusing computational effort on regions containing emerging detail such as wrinkles, creases, and fractures. Second, I will discuss optimization-based simulation techniques, in which the equations of motion are reformulated in terms of energy minimization. This approach enables robust and parallel time integration of complex nonlinear systems such as pedestrian crowds and nonlinear elastic materials. These techniques help make it possible to simulate large complex systems to high fidelity with a limited computational cost. 
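    The energy-minimization reformulation of time integration can be illustrated with a toy 1-D example: one implicit Euler step for a linear spring, computed by minimizing an incremental potential rather than solving the force balance directly. The gradient-descent solver and all parameters below are illustrative assumptions; real systems use far more capable minimizers.

```python
def implicit_euler_step(x, v, m, k, h, iters=100, lr=5e-3):
    """One optimization-based implicit Euler step for a 1-D spring with
    force -k*x: minimize the incremental potential
        f(y) = m/(2 h^2) * (y - (x + h*v))^2 + k * y^2 / 2
    whose minimizer is exactly the implicit Euler update. Here we use
    plain gradient descent; lr must satisfy lr < 2 / (m/h^2 + k)."""
    target = x + h * v          # inertial prediction
    y = target                  # warm start at the prediction
    for _ in range(iters):
        grad = m / h**2 * (y - target) + k * y
        y -= lr * grad
    v_new = (y - x) / h         # recover the updated velocity
    return y, v_new
```

    Casting the step as minimization is what makes robust, parallel solvers (line searches, block coordinate descent over particles or agents) applicable to stiff nonlinear systems such as cloth and crowds.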


    Bio:

    Rahul Narain is an assistant professor in the Department of Computer Science and Engineering at the University of Minnesota, Twin Cities. His research interests lie in numerical methods for computer graphics and animation, particularly efficient numerical optimization techniques for large-scale problems in physics-based animation and computational displays. He did his B.Tech. at IIT Delhi, and an M.S. and a Ph.D. at the University of North Carolina at Chapel Hill, advised by Prof. Ming Lin. He was a postdoc at the University of California, Berkeley, working with Prof. James O'Brien.



  • PinDrop: Networking Substrate for High-Quality Real-Time Streaming


    Speaker:

    Venkat Padmanabhan, from Microsoft Research

    Date:2017-02-21
    Time:15:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Applications such as Internet telephony (VoIP) and online gaming place stringent demands on network performance in terms of latency, jitter, and packet loss. The best-effort Internet often falls short of meeting these requirements. In the PinDrop project at Microsoft Research, we are developing the networking substrate for supporting high-quality real-time streaming. Our approach centers on three pillars: (1) leveraging data-driven insights, (2) exploiting network path redundancy, and (3) performing wireless-specific optimizations. After touching briefly on these, I will focus on Kwikr, which uses WiFi hints to improve bandwidth estimation for Skype calls. Kwikr uses a simple technique called ping-pair to estimate and attribute the queuing delay at the WiFi downlink, which enables more informed bandwidth adaptation than the state of the art.
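    As a rough, hypothetical illustration of the ping-pair idea (the real protocol in Kwikr involves 802.11 access categories and careful pairing of probes; all names and numbers below are made up):

```python
def downlink_queuing_delay(rtt_normal_ms, rtt_priority_ms):
    """Attribute queuing delay to the WiFi downlink by differencing the
    RTTs of a back-to-back ping pair: one ping queues behind pending
    downlink traffic, while the other is sent at a higher 802.11
    priority and largely bypasses the queue."""
    return max(0.0, rtt_normal_ms - rtt_priority_ms)

samples = [(48.0, 12.0), (30.5, 11.0), (12.0, 13.0)]  # hypothetical RTT pairs (ms)
delays = [downlink_queuing_delay(a, b) for a, b in samples]
```

    A rising sequence of such delay estimates signals WiFi downlink congestion, which is the hint the bandwidth-adaptation logic can act on.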


    Bio:

    Venkat Padmanabhan is a Principal Researcher at Microsoft Research India, where he founded the Mobility, Networks, and Systems group in 2007. He was previously with Microsoft Research Redmond, USA for nearly 9 years, and holds a B.Tech. from IIT Delhi and an M.S. and a Ph.D. from UC Berkeley, all in Computer Science. Venkat’s research interests are broadly in networked and mobile systems. He recently received the Shanti Swarup Bhatnagar Prize and the inaugural ACM SIGMOBILE Test-of-Time paper award. He has been elected a Fellow of the Indian National Academy of Engineering (INAE), the IEEE, and the ACM. He presently serves on the SIGCOMM Industrial Liaison Board. He can be reached online at http://research.microsoft.com/~padmanab/.



  • Multi-biometric Authentication Systems


    Speaker:

    Puneet Gupta, TCS Kolkata

    Date:2017-02-20
    Time:12:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Automatic authentication is an urgent need in several critical domains such as banking, forensics, and airport security. Traditional key- or password-based systems can easily be spoofed, lost, or stolen, and hence are being replaced by biometric systems, which rely on a user's physical characteristics for accurate and automatic authentication. Unfortunately, no single biometric trait provides all the desired characteristics, owing to sensor noise, user factors, and environmental conditions. Performance can be improved by fusing multiple traits; such a fused system is referred to as a multi-modal system.

    In this talk, several unimodal systems will be discussed, based on: (i) slap images, which contain all the fingerprints of a hand; (ii) finger veins; (iii) palm dorsal veins (i.e., the veins on the back of the hand); and (iv) infrared hand geometry. Finally, the fusion of these systems will be analyzed in terms of the performance improvement and the cost involved.
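    A common way to combine such unimodal systems is score-level fusion. Here is a minimal sketch with hypothetical match scores, weights, and threshold; the talk does not specify its exact fusion rule.

```python
def fuse_scores(scores, weights):
    """Weighted-sum score-level fusion of unimodal match scores in [0, 1].
    Weights reflect the relative reliability of each modality."""
    assert len(scores) == len(weights)
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

# Hypothetical match scores: slap fingerprints, finger veins,
# palm dorsal veins, infrared hand geometry (in that order).
score = fuse_scores([0.92, 0.85, 0.78, 0.60], weights=[0.4, 0.3, 0.2, 0.1])
accept = score >= 0.8        # hypothetical acceptance threshold
```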


    Bio:

    Puneet Gupta is presently a scientist at the TCS Innovation Lab, Kolkata. He earned his Ph.D. in Computer Science and Engineering in 2016 from IIT Kanpur, advised by Prof. Phalguni Gupta. His research interests lie in image processing and computer vision, in particular biometrics. During his Ph.D., he designed several multi-biometric systems for accurate human authentication.



  • On Programming Multiprocessors and Reviving Innovation in Hardware Design


    Speaker:

    Dr. Gagan Gupta, Microsoft Research

    Date:2017-02-17
    Time:12:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    In this talk I will address two topics. The first examines challenges posed by parallel programming. Conventional wisdom says that we must abandon ordering in programs to secure performance on multiprocessors. Unfortunately, the resulting nondeterminism complicates programming and, more generally, the use of multiprocessors.

    In contrast, microprocessors execute instructions in sequential programs concurrently while providing an ordered view of the execution. This "ordered" paradigm kept system use simple while improving performance, contributing to the phenomenal success of microprocessors. I explore whether an analogous approach can be applied to multiprocessors, and what its pros and cons might be.
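    The ordered paradigm can be mimicked at task granularity in software: execute concurrently, but observe results in program order, much as an out-of-order core retires instructions in order. A small illustrative sketch (the talk concerns hardware and runtime techniques, not this library):

```python
from concurrent.futures import ThreadPoolExecutor

def run_ordered(tasks):
    """Run tasks concurrently but deliver results in program order,
    the way an out-of-order processor retires instructions in order."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(t) for t in tasks]   # may finish out of order
        return [f.result() for f in futures]        # observed in order

results = run_ordered([lambda i=i: i * i for i in range(5)])
```

    The caller sees deterministic, sequential semantics even though execution was parallel; the open question the talk addresses is what this ordering costs.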

    In the second topic, I examine why innovation has stalled in the hardware industry. Drawing inspiration from the contribution of open-source technology to the success of the software industry, I explore whether a similar approach might benefit the hardware industry.


    Bio:

    Gagan Gupta has worked in the semiconductor industry for over two decades. Presently he is a researcher at Microsoft Research. He has led engineering and marketing teams to launch commercially successful processors at LSI Logic, Huawei, and ARC International, and influenced R&D strategies (e.g., at Intel and Sandisk). Gagan has a Ph.D. in Computer Sciences from the University of Wisconsin-Madison. His work has been recognized with awards at multiple industrial and academic fora, and is regularly cited in trade publications.



  • Joint Seat Allocations in IITs and Other CFTIs : Impact of an Algorithm


    Speaker:

    Surender Baswana (CSE, IIT Kanpur)

    Date:2017-02-17
    Time:17:00:00 (IST)
    Venue:LHC - 111
    Abstract:

    Until 2014, the admissions to the Indian Institutes of Technology (IITs) were conducted separately (based on JEE Advanced ranks) from the admissions to the non-IIT Centrally Funded Technical Institutes (CFTIs) (based on JEE Main ranks). The same set of candidates was eligible to apply for a seat in each of the two sets of institutes, and several hundred candidates would indeed receive two seats from the two different sets. Each such candidate could use at most one of the seats, leaving a vacancy in the other; this would be noticed much later, in many cases after classes began. Such seats would either remain vacant or would be reallocated at a later stage using locally organized spot rounds, which were inherently unfair and inefficient.

    In 2015 and 2016, a new combined seat allocation process was implemented to resolve some of these issues, especially the vacancies. The process brought all CFTIs under one umbrella for admissions. Each candidate submitted a single choice list over all available programs, and received no more than a single seat from the system, based on the choices and the ranks in the relevant merit lists. At the core of this seat allocation process lies an algorithm adapted elegantly to many business rules of admissions and engineered to handle many practical challenges.

    In this talk we shall discuss this algorithm, the challenges faced in adapting it to practice, and its positive impact in the last two years.
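    At the core of such mechanisms is the deferred-acceptance (Gale-Shapley) algorithm. Below is a minimal applicant-proposing sketch with a single merit list and a hypothetical toy instance; the real process layers many business rules (multiple merit lists, reservations, de-duplication) on top.

```python
def allocate(pref, rank, capacity):
    """Applicant-proposing deferred acceptance (Gale-Shapley).
    pref[c]     : candidate c's ordered choice list of programs
    rank[c]     : merit rank of c (lower is better), shared by all programs
    capacity[p] : number of seats in program p
    Returns {candidate: program} for seated candidates."""
    next_choice = {c: 0 for c in pref}
    held = {p: [] for p in capacity}          # tentatively admitted, per program
    free = list(pref)
    while free:
        c = free.pop()
        if next_choice[c] >= len(pref[c]):
            continue                          # choice list exhausted; unseated
        p = pref[c][next_choice[c]]
        next_choice[c] += 1
        held[p].append(c)
        held[p].sort(key=lambda x: rank[x])   # programs prefer better ranks
        if len(held[p]) > capacity[p]:
            free.append(held[p].pop())        # displace the worst-ranked holder
    return {c: p for p, cs in held.items() for c in cs}

# Hypothetical toy instance.
pref = {"a": ["CS", "EE"], "b": ["CS"], "c": ["CS", "EE"]}
rank = {"a": 2, "b": 1, "c": 3}
capacity = {"CS": 1, "EE": 1}
seated = allocate(pref, rank, capacity)
```

    Because admissions are only tentative until the algorithm terminates, no candidate can end up holding two seats, which is exactly the vacancy problem the combined process was designed to eliminate.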

    Important notes :
    1. At least 80-90% of the talk will be accessible even to a non-expert.
    2. For fans and researchers in algorithms: This talk might change (positively) the way you view algorithms and data structures.
    3. This talk will be interesting for all, especially to the BTech students who joined IITD in 2015 and 2016. They might be curious to know the technology that determined the unique program for them given two merit lists and a choice list spanning two sets of institutes.
    4. This talk will also have a few inspirational stories about the role played by the entire IIT fraternity (a student, an alumnus, a director, and a few faculty members) in this project.


  • Exploring the Role of Context and Social Media to Enhance the User-Experience in Photography and Image Tag Recommendation


    Speaker:

    Yogish Rawat, NUS Singapore

    Date:2017-02-15
    Time:12:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    The last decade has seen a significant improvement in the ease of capture of multimedia content. Cameras now have intelligent features, such as automatic focus, face detection, etc., to assist users in taking better photos. However, it still is a challenge to capture high-quality photographs. The complex nature of photography makes it difficult to provide real-time assistance to a user for improving the aesthetics of capture. The recent advancements in sensor technology, wireless networks, and social media provide us an opportunity to enhance the photography experience of users.

    Our work aims to provide real-time photography assistance to users by leveraging camera sensors and social media content. We focus on two different aspects of the user experience in photography: camera guidance and location recommendation. In addition, we explore the role of context and user preference in developing a deep network for image-tag recommendation. In this talk, we will give a brief overview of our work on camera guidance and mainly discuss the research on location and image-tag recommendation.

    In camera guidance, we focus on landmark photography, where feedback regarding scene composition and camera parameter settings is provided to the user while a photograph is being captured. We also address group photography, where we use ideas from the spring-electric graph model and color energy to provide real-time feedback about the arrangement of people and their position and relative size in the image frame. In location recommendation, we first present a viewpoint recommendation system that provides real-time guidance based on the preview on the user's camera, the current time, and the user's geo-location. Next, we introduce a photography-trip recommendation method that guides a user in exploring a tourist location from a photography perspective. We leverage ideas from behavioural science for a better photography experience; more specifically, we utilize optimal foraging theory to recommend a tour for efficient exploration of a tourist location.

    Social media users like to assign tags to captured images before sharing them with others. One of the biggest advantages of tagging photographs is that it makes them searchable and easily discoverable by other users. However, assigning tags is challenging when photographs are shared immediately after capture; in such a scenario, real-time tag prediction based on image content can be very useful. The tags a user assigns to an image depend not only on its visual content but also on the user's context, and user preference plays an important role as well. Motivated by this, we propose a convolutional deep neural network that integrates visual content with user preference and user context in a unified framework for tag prediction. We observe that integrating user preference and context significantly improves tag-prediction accuracy.
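    The fusion idea can be sketched as concatenating the three feature vectors before prediction. In this sketch a single random linear layer stands in for the proposed convolutional network, and all dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature dimensions and tag vocabulary size.
d_vis, d_ctx, d_pref, n_tags = 8, 4, 4, 5
W = rng.normal(size=(n_tags, d_vis + d_ctx + d_pref))

def predict_tags(visual, context, preference):
    """Score tags from the concatenation of visual-content,
    user-context, and user-preference feature vectors."""
    x = np.concatenate([visual, context, preference])
    logits = W @ x
    p = np.exp(logits - logits.max())          # numerically stable softmax
    return p / p.sum()

probs = predict_tags(rng.normal(size=d_vis),
                     rng.normal(size=d_ctx),
                     rng.normal(size=d_pref))
```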


  • Applications of Regular Quantifiers in Logics


    Speaker:

    A.V. Sreejith, U. Warsaw

    Date:2017-02-10
    Time:12:00:00 (IST)
    Venue:Bharti Building 501
    Abstract:

    In this talk, we look at various extensions of linear temporal logic (LTL) and first-order logic (FO), with particular interest in extensions using modulo-counting operators. First, we give a very brief introduction to logic and automata, paying special attention to linear temporal logic. LTL has found tremendous use in program verification, but it has some limitations: for example, it cannot express periodic properties such as "the bell rings every hour". We introduce a modulo-counting operator that addresses this weakness of LTL, and discuss the satisfiability and model-checking problems for the extended logic.

    In the second part of the talk, we address an open problem in descriptive complexity posed by Howard Straubing. The modulo-counting extensions of first-order logic (with the addition relation) are closely related to circuit complexity, yet it was not known whether there are regular languages that are not definable in the modulo-counting extension of first-order logic. We use a combination of combinatorics and semigroup theory to show that such regular languages do exist, thereby answering Straubing's question.
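    The periodic-property example can be made concrete with a finite-trace sketch of a modulo-counting assertion. This is illustrative semantics only, not the talk's formal definitions.

```python
def mod_count(trace, prop, m, r):
    """Finite-trace sketch of a modulo-counting assertion: the number
    of positions satisfying `prop` is congruent to r modulo m."""
    return sum(1 for state in trace if prop(state)) % m == r

# "The bell rings every hour": over a 3-hour, minute-granularity trace,
# the bell rings exactly at minutes divisible by 60 -- a periodic
# property that plain LTL cannot express.
trace = [{"ring": t % 60 == 0} for t in range(180)]
ok = mod_count(trace, lambda s: s["ring"], m=3, r=0)   # 3 rings; 3 mod 3 == 0
```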


  • Statistically Secure Additions and Restricted Multiplication of Secret Shares


    Speaker:

    Dor Bitan (Ben-Gurion University)

    Date:2017-02-06
    Time:15:30:00 (IST)
    Venue:SIT Seminar Room
    Abstract:

    One of the most interesting research topics in cryptography is finding schemes for efficient fully-homomorphic encryption (FHE), preferably information-theoretically secure schemes that are not based on unproven computational-hardness assumptions. The greatest breakthrough in the field of FHE schemes was made by Gentry in 2009, and since then there have been some interesting developments, e.g., by Boneh et al. and by Brakerski and Perlman. All currently known FHE schemes have computational security and are time-wise inefficient.
    We suggest an information-theoretically secure secret sharing scheme of restricted values that supports one multiplication of secrets and additions of, practically, any number of such multiplied secrets. Our scheme has perfect security against a single curious participant attack, and statistical security against an attack of a coalition of two curious participants. The scheme can be generalized to one with a larger number of participants, while remaining perfectly secure against a single curious participant attack.
    One application of our scheme is the secure homomorphic evaluation of multivariate quadratic functions and 2-CNF circuits.
    (This is a joint work with Shlomi Dolev and Daniel Berend)
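    For background, plain additive secret sharing already gives information-theoretically secure additions; the talk's contribution is to additionally support one multiplication of secrets. A minimal sketch of the additive part follows (the modulus and share count are illustrative choices, not the talk's parameters).

```python
import random

P = 2_147_483_647      # a prime modulus; the choice is illustrative

def share(secret, n=3):
    """Split `secret` into n additive shares over Z_P; any n-1 shares
    are uniformly random and reveal nothing about the secret."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % P)
    return parts

def reconstruct(shares):
    return sum(shares) % P

# Addition of secrets is share-wise addition -- no interaction needed.
a, b = share(1234), share(5678)
summed = [(x + y) % P for x, y in zip(a, b)]
```

    Supporting even one multiplication on top of this, without interaction or computational assumptions, is what makes the construction in the talk nontrivial.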


    Bio:

    Dor Bitan is a Ph.D. student in the Mathematics Department at Ben-Gurion University of the Negev in Israel.



  • Constrained Counting and Sampling: Bridging the Gap between Theory and Practice


    Speaker:

    Kuldeep Meel, Rice University

    Date:2017-01-18
    Time:16:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Constrained counting and sampling are two fundamental problems in computer science with numerous applications, including network reliability, decision making under uncertainty, probabilistic reasoning, and constrained-random verification. In constrained counting, the task is to compute the total weight, with respect to a given weighting function, of the set of solutions of the given constraints. In constrained sampling, the task is to sample randomly, with respect to a given weighting function, from the set of solutions of the given constraints.

    In this talk, I will introduce a novel algorithmic framework for constrained sampling and counting that combines the classical algorithmic technique of universal hashing with the dramatic progress made in Boolean reasoning over the past two decades. This has allowed us to obtain breakthrough results in constrained sampling and counting, providing a new algorithmic toolbox in design verification, machine learning, probabilistic reasoning, and the like. I will demonstrate the utility of the above techniques on various real applications including probabilistic inference, hardware verification, and our ongoing collaboration in estimating the reliability of critical infrastructure networks during natural disasters.
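    The universal-hashing idea can be sketched as conjoining random XOR (parity) constraints and scaling the surviving solution count. In this illustration, brute-force enumeration stands in for the SAT solver; the engineering of tools built on this idea (e.g., ApproxMC) is the hard part, and the formula below is a hypothetical toy.

```python
import random
from itertools import product

def approx_count(sat, n_vars, k):
    """Estimate the model count of `sat` by conjoining k random XOR
    (parity) constraints -- a 2-universal hash that cuts the solution
    space roughly in half per constraint -- and scaling the surviving
    count by 2**k. Brute force stands in for the SAT solver."""
    xors = [([random.random() < 0.5 for _ in range(n_vars)],
             random.random() < 0.5) for _ in range(k)]
    survivors = 0
    for bits in product([False, True], repeat=n_vars):
        if not sat(bits):
            continue
        if all(sum(1 for b, m in zip(bits, mask) if b and m) % 2 == int(tgt)
               for mask, tgt in xors):
            survivors += 1
    return survivors * 2 ** k

# Hypothetical formula: x0 OR x1, which has 3 models over 2 variables.
est = approx_count(lambda b: b[0] or b[1], n_vars=2, k=1)
```

    With k = 0 the estimate is exact; as k grows, a single solver call over a much smaller cell yields an estimate whose accuracy is controlled by repetition.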


    Bio:

    Kuldeep Meel is a final year Ph.D. candidate at Rice University working with Prof. Moshe Vardi and Prof. Supratik Chakraborty (IITB). He obtained a B.Tech. from IIT Bombay and an M.S. from Rice in 2012 and 2014 respectively. His research broadly lies at the intersection of artificial intelligence and formal methods. He is the recipient of the 2016-17 IBM Ph.D. Fellowship, the 2016-17 Lodieska Stockbridge Vaughn Fellowship, and the 2013-14 Andrew Ladd Fellowship. His research won the best student paper award at the International Conference on Constraint Programming 2015. He co-won the 2014 Vienna Center of Logic and Algorithms International Outstanding Masters Thesis Award.



  • Local Guarantees in Graph Cuts and Clustering


    Speaker:

    Neha Gupta, Stanford (B.Tech, CSE, IITD, 2015)

    Date:2017-01-11
    Time:16:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Correlation Clustering is an elegant model that captures fundamental graph cut problems such as Min s − t Cut, Multiway Cut, and Multicut, extensively studied in combinatorial optimization. Here, we are given a graph with edges labeled + or − and the goal is to produce a clustering that agrees with the labels as much as possible: + edges within clusters and − edges across clusters. The classical approach towards Correlation Clustering (and other graph cut problems) is to optimize a global objective. We depart from this and study local objectives: minimizing the maximum number of disagreements for edges incident on a single node. This naturally gives rise to a family of basic min-max graph cut problems. A prototypical representative is Min Max s − t Cut: find an s − t cut minimizing the largest number of cut edges incident on any node. We present the following results: (1) an O(√n)-approximation for the problem of minimizing the maximum total weight of disagreement edges incident on any node, and (2) a remarkably simple 7-approximation for minimizing local disagreements in complete graphs.

    This is joint work with Prof. Moses Charikar and Prof. Roy Schwartz.
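    For intuition, the Min Max s − t Cut objective can be stated as a brute-force search on small graphs. This exponential enumeration, on a hypothetical 4-node graph, is for illustration only; the talk is about approximation algorithms for the general problem.

```python
from itertools import combinations

def min_max_st_cut(nodes, edges, s, t):
    """Brute-force Min Max s-t Cut: over all s-t cuts, minimize the
    largest number of cut edges incident on any single node."""
    others = [v for v in nodes if v not in (s, t)]
    best = None
    for r in range(len(others) + 1):
        for side in combinations(others, r):
            S = {s, *side}                     # the s-side of the cut
            load = {v: 0 for v in nodes}       # cut edges incident per node
            for u, v in edges:
                if (u in S) != (v in S):
                    load[u] += 1
                    load[v] += 1
            worst = max(load.values())
            if best is None or worst < best:
                best = worst
    return best

# Hypothetical 4-node graph.
nodes = ["s", "a", "b", "t"]
edges = [("s", "a"), ("s", "b"), ("a", "b"), ("a", "t"), ("b", "t")]
val = min_max_st_cut(nodes, edges, "s", "t")
```

    Note how the objective differs from the classical Min s − t Cut: a cut with few total edges can still be bad if those edges concentrate on one node.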


  • Orchestrating NFV Symphony


    Speaker:

    Puneet Sharma, Distinguished Technologist at Hewlett Packard Labs

    Date:2017-01-10
    Time:15:00:00 (IST)
    Venue:SIT SEMINAR ROOM #001
    Abstract:

    Communications service providers are poised to embark on the Cloudification stage of their Network Functions Virtualization (NFV) transformation journey, after the Decoupling and Virtualization stages. There is an imminent need for automated resource-flexing technologies to realize NFV's promise of elasticity and agility. This talk discusses NFV orchestration challenges and articulates some architectural choices for service orchestration and resource allocation.


    Bio:

    Puneet Sharma is a Distinguished Technologist at Hewlett Packard Labs, where he conducts research on software-defined networking, network function virtualization, cloud and datacenter networks, mobility, and network monitoring. Prior to joining HP Labs, Puneet received a Ph.D. in Computer Science from the University of Southern California and a B.Tech. in Computer Science & Engineering from the Indian Institute of Technology, Delhi. He has delivered keynotes at various forums such as NFV World Congress 2016 and IEEE LANMAN 2014. He received the best paper award at IEEE NFVSDN 2015 for his work on VNF performance characterization.

    He has also contributed to various standardization efforts such as co-authoring UPnP's QoS Working Group's QoSv3 standard and the IETF RFCs on the multicast routing protocol PIM. Puneet has published 60+ research articles in various prestigious networking conferences and journals (Transactions on Networking, ACM SIGCOMM, ACM HotNets, USENIX NSDI, IEEE INFOCOM, etc.). His work on Mobile Collaborative Communities was featured in the New Scientist Magazine. He has been granted 30+ US patents.

    He is an IEEE Fellow and an ACM Distinguished Scientist.



  • Pervasive Computing and Communication Techniques for Smart Healthcare


    Speaker:

    Vaskar Roy Choudhury, IIT Roorkee

    Date:2017-01-09
    Time:12:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    The Pervasive Computing (PvC) paradigm emerged as a successor to Distributed and Mobile computing. Given the human-centric nature of PvC, intelligent healthcare has been identified as a major application domain. Furthermore, the increase in the elderly population across the globe requires researchers to develop round-the-clock remote health-monitoring systems for elderly and ailing individuals (using, e.g., blood-pressure and pulse sensors) and to inform caregivers in case of an emergency. We need to develop and use innovative approaches that support the much-needed transformation of healthcare from reactive and hospital-centered to preventive, proactive, evidence-based, person-centered, and focused on well-being rather than disease. We have been working on various aspects of intelligent healthcare, including Body Area Network (BAN) construction for the collection of vital signs, investigating appropriate networking solutions for transporting large amounts of healthcare data, intelligent decision making by processing healthcare data, and detecting epidemics over a large geographical area. In this talk, I shall present an overview of our work on intelligent healthcare and the associated open research challenges.


Copyright © 2020 Department of Computer Science and Engineering. All Rights Reserved.