Shubhamkar Ayare


I am a graduate student in the Cognitive Science department at IIT Kanpur, India. As part of my master's thesis, I investigated the Visual Indexing Theory of Dr. Zenon Pylyshyn (also see Pylyshyn 2007). I worked on an index-less model of human multiple object tracking of visually identical objects under the guidance of Prof. Nisheeth Srivastava (Ayare and Srivastava 2023).

I did my undergraduate studies in Computer Science at IIT Bombay, where I had the chance to acquaint myself with Reinforcement Learning and do a project on Tangram Solving using Constraint Satisfaction under Prof. Shivaram Kalyanakrishnan.

I spend my free time developing and maintaining Common Lisp libraries. These include a library to call Python callables from Common Lisp and a (very primitive) library for high-performance numerical computing powered by SIMD operations. Common Lisp has its limitations, so while developing the numerical computing library I have also been drawn into extending Common Lisp's type system and dispatching over those extended types.
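To give a flavour of the Python-interop workflow, here is a minimal sketch; it assumes the py4cl2 library loaded via Quicklisp, and its pyeval and pycall entry points:

```lisp
;; Sketch: calling Python from Common Lisp.
;; Assumes py4cl2 is available through Quicklisp and that a Python
;; interpreter is on the PATH.
(ql:quickload "py4cl2")

;; Evaluate a Python expression and receive a Lisp value back:
(py4cl2:pyeval "1 + 2")

;; Call a Python callable by name with Lisp arguments:
(py4cl2:pycall "str.upper" "hello")
```

Values cross the language boundary as their natural Lisp counterparts, so the results above come back as an ordinary integer and string.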

My interest in Common Lisp stems from a search for a stable platform for long-term projects, potentially spanning decades. It is more flexible than Python while remaining stable, and at least one of its implementations (SBCL) is an optimizing compiler that generates native code whose performance approaches that of C. These days, various de facto standard libraries provide support for a number of modern features, including multithreading and Unicode among others. There are also efforts to bring the goodness of the Hindley-Milner type system to Common Lisp.
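One quick way to see the native compilation for yourself is the standard DISASSEMBLE function, which on SBCL prints the machine code generated for a function; the function below is only an illustrative example:

```lisp
;; With type declarations and high optimization settings, SBCL compiles
;; this down to a handful of native machine instructions.
(defun add-fixnums (x y)
  (declare (optimize (speed 3) (safety 0))
           (type fixnum x y))
  (the fixnum (+ x y)))

;; DISASSEMBLE is part of standard Common Lisp; on SBCL it prints the
;; x86-64 (or other native) instructions for the compiled function.
(disassemble #'add-fixnums)
```

The output is the actual code the processor runs, which is what makes C-like performance attainable when declarations are supplied.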

I fancy the development of machines with human-like general intelligence. But after an initial encounter with artificial intelligence and machine learning, I have ended up on a detour through Dreyfus and Heideggerian AI.

I now think research into causality (Gopnik 2022) and on enabling robots and machines to learn from their sociocultural environments (Colas, Karch, Moulin-Frier, et al. 2022) would be important precursors for general intelligence. Reinforcement learning with some augmentations (Colas, Karch, Sigaud, et al. 2022) looks like a useful framework for both these goals.

References

Ayare, Shubhamkar, and Nisheeth Srivastava. 2023. “Tracking Multiple Objects without Indexes.” In Proceedings of the Annual Meeting of the Cognitive Science Society. Vol. 45. https://escholarship.org/uc/item/29x6398w.
Colas, Cédric, Tristan Karch, Clément Moulin-Frier, and Pierre-Yves Oudeyer. 2022. “Language and Culture Internalisation for Human-Like Autotelic AI.” Nature Machine Intelligence 4. Nature Research: 1068–76. doi:10.1038/s42256-022-00591-4.
Colas, Cédric, Tristan Karch, Olivier Sigaud, and Pierre-Yves Oudeyer. 2022. “Autotelic Agents with Intrinsically Motivated Goal-Conditioned Reinforcement Learning: A Short Survey.” Journal of Artificial Intelligence Research 74 (July). Association for the Advancement of Artificial Intelligence. doi:10.1613/jair.1.13554.
Gopnik, Alison. 2022. “Causal Models and Cognitive Development.” Probabilistic and Causal Inference. https://api.semanticscholar.org/CorpusID:247230058.
Pylyshyn, Zenon W. 2007. Things and Places: How the Mind Connects with the World. The Jean Nicod Lectures 2004. Cambridge, Mass.: MIT Press.

Contact

Email: shubhamayare[at]yahoo[dot]co[dot]in.


Ideas are a dime a dozen. Tell me about your trials, what you learned, and the assumptions behind them.

Say not just what's wrong with things, but also how they can be made right.

Github | LinkedIn

(made with emacs org)