Nowadays, I'm mostly working on creating new technologies in industry and doing science independently, but here are some thoughts and details about my time spent in AI research as a student:
Deep Learning Techniques for Automated Image Captioning; S. Srivastava, Y. Damania, Y. Chaudhari, P. Jadhav; Springer Lecture Notes in Networks and Systems, SmartCom 2021.
Neutrino Detectors and Computation; work with Akhil Nahar; white paper presented at NSSC '18, IIT Kharagpur.
My long-term goal is to understand the underlying mechanisms and methods that engender intelligence.
One of my major goals is to develop AI systems that go beyond the prevalent machine learning/deep learning algorithms, which require gigantic amounts of data -- in my opinion, still the biggest drawback of current AI systems.
I also think it's very important for machines to learn to understand; intelligence would be a direct byproduct of a good understanding mechanism.
Moving ahead, some specific areas I would like to work on, which I think could prove useful for the future of AI, are:
causal (and probabilistic) inference, a topic I'm very interested in exploring, since a robust framework of causality in AI systems could mean increased robustness to biases (causal features remain invariant when non-causal features vary) and a deeper understanding of the environment (knowing how features are actually related).
fairness and strategic interactions with AI systems (human-centric AI), and more broadly, how to ensure a safe future as our society starts interacting very closely with AI systems (especially in high-stakes situations), keeping strategic and adversarial behavior in mind.
multi-task learning, meta-learning, continual learning, and other techniques for building a common ground of knowledge that is easily transferable and adaptable between machines (across tasks in distinct domains). I'm also interested in exploring (and extending) how meta-learning algorithms behave from a Bayesian perspective, and I'm extremely interested in open-endedness: creating (or extending) algorithms, environments, and benchmarks to evaluate it.
moving beyond the i.i.d. assumption in AI, so that machines are not limited to learning and understanding from train/test data drawn from the same distribution (and can possibly use far less data, too).
Previously, my research revolved around some quantum mechanics, bioinformatics (some work on PAM and BLOSUM), and engineering problems for physics (e.g., neutrino detectors). Although I'm still working (independently) on a few problems at the intersection of physics and machine learning, I'm not as active on the physics front anymore (owing to my schedule). I'm still interested in learning and pursuing problems in physics again soon!
My research is also highly motivated by the potential applications of AI in virtually all domains and scientific disciplines -- advancing our knowledge and propelling humanity towards a better future. I work towards making safe AI systems for accelerating scientific discoveries and using AI for social good (some work at AlgoAsylum).
Moving ahead, I'm interested in investigating how AI can help in the following domains:
Physics, Education, Econometrics, Computational Molecular Biology, and Climate Change (and renewable energy systems), among many others.
NSSC, Oct 2018, Indian Institute of Technology, Kharagpur -- our work in the newspaper.