  • Module 1: Human-AI Collaborative Generation: From Programming to Creativity

    In this report, we focus on a special topic in Human-AI Collaboration: Human-AI Collaborative Generation, covering Human-AI Programming and Human-AI Co-Creation. We aim to answer two questions: (1) To what extent do humans and AI systems actually collaborate in these tasks? (2) Does the AI or the human contribute more to the collaborative task? Through a thorough survey of related works, we find that humans currently play the leading role in both tasks, but that the level of collaboration differs between them. Future work needs to strengthen AI's abilities on generation tasks as well as improve the communication between humans and AI during collaboration.

  • Module 1: A study of Human-AI Collaboration in Healthcare

    In this report, we survey the impact of Human-AI Collaboration in the healthcare setting. We categorise the various studies in this domain by their major focus, be it model architecture, human trust, or virtual assistance. For each category, we analyse the varied work done in that sub-domain and identify its key takeaways. The survey also provides a bigger picture of current trends in human-AI collaboration in healthcare, answering some key questions about the role each stakeholder plays. In doing so, we identify some key challenges to further development in this field, and conclude with our view on the future of Human-AI Collaboration in healthcare.

  • Module 2: Human-AI Creation - Text-to-Image Generation

    Text-to-image generation has gained prominence recently with advances in generative models such as GANs, autoregressive models, and diffusion-based models, and there have been exciting papers on the task built on these ideas. One such paper is Imagen (7), recently released by Google Research. In this work, we survey methods used for text-to-image generation and compare them both qualitatively and quantitatively.

  • Module 2: Unsupervised approaches for latent space interpretation of GANs

    Since the first GAN model [1] proposed by Ian Goodfellow, there has been significant progress in GANs' ability to generate photorealistic images. However, understanding and controlling the images being generated has not been widely explored. Recently, a few different approaches have surfaced to interpret image generation in deep generative models. Such approaches can be broadly categorized into supervised and unsupervised approaches. Supervised approaches came first and have been analyzed extensively. In contrast, this survey focuses on summarizing very recent work on unsupervised interpretation of GANs. We discuss identifying and editing the human-interpretable concepts hidden in the image generation process of GANs, and briefly cover the unsupervised methods GANSpace [2], SeFa [3], and LatentCLR [4] to understand recent work in this domain.
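
    To make the unsupervised idea concrete, below is a minimal sketch in the spirit of GANSpace: sample latent codes, run PCA on the generator's intermediate latents, and treat the principal components as candidate edit directions. It assumes a hypothetical StyleGAN-like generator exposing G.mapping and G.synthesis, and is an illustration, not the authors' implementation.

    ```python
    import torch
    from sklearn.decomposition import PCA

    def find_latent_directions(G, latent_dim=512, n_samples=10000, n_dirs=10):
        z = torch.randn(n_samples, latent_dim)      # sample from the training prior
        with torch.no_grad():
            w = G.mapping(z)                        # intermediate latents, (n_samples, latent_dim)
        pca = PCA(n_components=n_dirs)
        pca.fit(w.cpu().numpy())
        # Principal components of the intermediate latent space act as candidate
        # interpretable edit directions (pose, lighting, zoom, ...).
        return torch.from_numpy(pca.components_).float()   # (n_dirs, latent_dim)

    def edit(G, w, direction, strength=3.0):
        # Move an intermediate latent along a direction and re-synthesize the image.
        with torch.no_grad():
            return G.synthesis(w + strength * direction)
    ```

    SeFa instead factorizes the generator's own projection weights in closed form, and LatentCLR learns directions with a contrastive objective, but all three produce a small set of directions that can be applied to a latent code roughly as above.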

  • Module 3 Survey: On the Emergence and Elimination of Biases in Artificial Intelligence

    While being applied to an ever broader range of scenarios, artificial intelligence algorithms can still be unhelpful or even potentially harmful in many respects, such as social equity or the reliability of essential services in our society. We therefore conduct a broad-ranging survey of existing analyses of these issues and try to compile the underlying ideas into a systematic understanding of the problem to facilitate future research.

  • Module 4: AI robustness - Benchmarking Adversarial Robustness for Image Classification

    The susceptibility of deep neural networks to adversarial examples has become one of the most significant research issues in deep learning. One of the most difficult aspects of benchmarking robustness is that its assessment is frequently prone to inaccuracy, resulting in robustness overestimation. Research on adversarial robustness also faces an arms race between attacks and defenses: defensive methods proposed to prevent existing attacks become outdated as new attacks emerge, so it is hard to truly understand the effects of these methods. In this paper, we investigate thorough, rigorous, and coherent benchmarks for evaluating adversarial robustness on image classification tasks.
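
    For concreteness, the sketch below shows a projected gradient descent (PGD) attack of the kind such benchmarks typically include; robust accuracy is then the model's accuracy on the perturbed inputs. It assumes a PyTorch image classifier with inputs scaled to [0, 1] and is only an illustrative baseline, not any specific benchmark's implementation.

    ```python
    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
        # Random start inside the L-infinity ball of radius eps around x.
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            # Take a signed gradient ascent step, then project back into the ball.
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
        return x_adv.detach()

    # Robust accuracy over a test set = accuracy of `model` on pgd_attack(model, x, y).
    ```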

  • Module 4: AI robustness for Object Detection

    Object detection is an important vision task and has emerged as an indispensable component in many vision systems, rendering its robustness an increasingly important performance factor for practical applications. However, object detection models have been demonstrated to be vulnerable to various types of attacks, so it is important to survey current research attempts toward robust object detection models.

  • Module 5, Topic: Data collection for Autonomous Driving

    Data collection for Autonomous Driving: A survey

  • Module 5, Topic: Interactive Object Instance Annotation

    Interactive Object Instance Annotation: A Survey on Toronto Annotation Suite

  • Module 6: Weak Supervision and Self Supervision: Representation Learning

    Weak-supervision and self-supervision algorithms have seen tremendous success recently. We briefly discuss key papers on both topics to give an overview of recent progress in the field.

  • Module 6: Weak supervision and self supervision - Semantic Segmentation with Scribbles

    In this survey report, we explore advances in semantic segmentation with scribbles. We focus on three major papers in this area, including the first major investigation of the topic as well as two more recent papers that represent current developments.

  • Module 6: Weak supervision and self supervision - Topic: Weakly Supervised Segmentation

    We survey weakly supervised semantic segmentation with image-level annotations. Six papers are introduced and compared to explore the common pipeline of weakly supervised semantic segmentation, sketched below.
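
    That common pipeline typically trains an image-level classifier, converts its class activation maps into pixel-wise pseudo-masks, and then trains an ordinary segmentation network on those pseudo-labels. The following is a hedged sketch of the pseudo-mask step only; compute_cam is a hypothetical helper returning a normalized (H, W) activation map for one class.

    ```python
    import numpy as np

    def make_pseudo_mask(image, present_classes, compute_cam, fg_thresh=0.3):
        # One normalized CAM per class listed in the image-level label.
        cams = np.stack([compute_cam(image, c) for c in present_classes])  # (K, H, W)
        conf = cams.max(axis=0)                       # per-pixel confidence
        best = cams.argmax(axis=0)                    # index of the winning class
        mask = np.zeros(conf.shape, dtype=np.int64)   # 0 = background
        for i, c in enumerate(present_classes):
            mask[(best == i) & (conf >= fg_thresh)] = c   # confident pixels get the class id
        return mask   # pseudo ground truth for training a segmentation network
    ```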

  • Module 7 - A Survey on Intervention-based Learning

    Intervention-based learning is a modern and rapidly progressing field that uses human interventions during the learning process to train an agent online. Traditionally, imitation learning and DAgger were used to instill human expert knowledge in the learner, but such methods require the human expert to tediously label large amounts of data. With this in mind, intervention-based learning has arisen to reduce the need for large numbers of expert queries by learning from periods of uninterrupted human expert control. This paper discusses three key intervention-based learning methods: HG-DAgger, Expert Intervention Learning (EIL), and Human-AI Co-pilot Optimization (HACO). We provide an in-depth explanation of how intervention-based learning improves upon imitation learning and DAgger, and we discuss how EIL and HACO, the current state of the art, improve upon HG-DAgger and outperform all other methods discussed while requiring significantly less human-annotated data.
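
    To make the contrast with vanilla DAgger concrete, the sketch below shows an HG-DAgger-style data-collection loop in which the learner drives and only the states visited during human takeovers are labeled. The env, policy, and expert interfaces are simplified assumptions for illustration, not code from any of the surveyed papers.

    ```python
    # Hedged sketch of an intervention-based data-collection loop.
    # Assumed interfaces: env.step(action) -> (next_state, done),
    # expert.wants_control(state) -> bool, expert.act / policy.act -> action.
    def collect_with_interventions(env, policy, expert, episodes=10):
        dataset = []                                  # (state, expert_action) pairs
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                if expert.wants_control(state):       # human takes over
                    action = expert.act(state)
                    dataset.append((state, action))   # label only intervention states
                else:
                    action = policy.act(state)        # learner in control, no label
                state, done = env.step(action)
        return dataset

    # After each round, `policy` is retrained (supervised) on `dataset`, so expert
    # labeling effort is concentrated on states where the learner actually fails.
    ```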

  • Module 7: Human-in-the-loop autonomy - Preference Based Reinforcement Learning

    In this survey, we discuss the evolution of human-preference-based reinforcement learning, illustrated with a number of representative examples.

  • Module 8: Superhuman AI and knowledge generation

    How do self-play models such as AlphaZero obtain knowledge? Can we uncover novel strategies from those models that surpass humans? In this survey, we present recent advances in the related fields.

  • Module 9: Explainable ML - Topic: Class Activation Mapping and its Variants

    While neural networks demonstrate impressive performance across tasks, they make predictions in a black-box way that is hard for humans to understand. To alleviate this issue, researchers have proposed several ways of interpreting the behavior of neural networks. In this survey, we focus on class activation mapping and its variants, which are popular model interpretation techniques that allow us to visualize the decision process of neural networks and ease debugging.
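
    As a concrete illustration, the original CAM for a CNN that ends with global average pooling and a single linear layer is simply the target class's classifier weights applied as a weighted sum over the last convolutional feature maps. The sketch below assumes hypothetical model.features and model.fc attributes and is illustrative, not any particular paper's code.

    ```python
    import torch
    import torch.nn.functional as F

    def class_activation_map(model, x, target_class):
        # `model.features` and `model.fc` are assumed names for the convolutional
        # backbone and the final linear classifier, respectively.
        with torch.no_grad():
            fmaps = model.features(x)                  # (1, C, h, w) last conv feature maps
            w = model.fc.weight[target_class]          # (C,) classifier weights for the class
            cam = torch.einsum('c,chw->hw', w, fmaps[0])
            cam = cam.clamp(min=0)                     # keep evidence *for* the class
            cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
        # Upsample to the input resolution so the map can be overlaid on the image.
        return F.interpolate(cam[None, None], size=x.shape[-2:], mode='bilinear',
                             align_corners=False)[0, 0]
    ```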

  • Module 9: Explainable ML - Topic: Exploring Powerful Interfaces for Explainable AI

    While neural networks demonstrate impressive performance across tasks, they make predictions in a black-box way that is hard for humans to understand. To alleviate this issue, researchers have proposed several ways of interpreting the behavior of neural networks, and visual inspection of these interpretations is equally important. In this context, powerful interpretable interfaces have been developed. In this survey, we focus on such interfaces to better understand what our models are learning and why.

  • Module 9: Explainable ML

    Explainable ML (or XAI) attempts to bridge the gap between the black-box nature of machine learning models and human understanding. The goal is to explain the behavior of models in a human-understandable manner. This is crucial for applying high-performing ML models to critical domains such as medicine or finance. It functions as a sanity check on whether models behave as we humans expect, helps gain the trust of human users, and helps debug models for underlying biases.

  • Module 10: Debate on Explainable ML

    Nowadays, deep neural networks are widely used to build machine learning models and AI, and their applications are common in daily life, including chatbots, object detection, etc. People want to open the black box of these models to understand what they learn and see. However, people tend to over-explain the association between a result and the model, or to over-rely on interpretation methods such as model properties or post-hoc interpretation techniques. In this survey, we focus on analyzing several feature-attribution-based interpretation methods. We discuss how people evaluate those methods and how those methods might mislead people.
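
    As one concrete example of the kind of feature attribution discussed here, the sketch below computes a simple gradient-times-input attribution map for a differentiable classifier; it is a hedged illustration of the general idea, not any surveyed method's implementation.

    ```python
    import torch

    def gradient_x_input(model, x, target_class):
        # `model` is any differentiable PyTorch classifier; x has shape (1, C, H, W).
        x = x.clone().requires_grad_(True)
        score = model(x)[0, target_class]             # scalar logit for the target class
        score.backward()
        # Per-pixel attribution: gradient of the score times the input value.
        return (x.grad * x).detach()[0]

    # One common evaluation removes the most-attributed pixels and checks whether the
    # model's confidence actually drops; attributions that fail such tests can mislead
    # users even when their heatmaps look plausible.
    ```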