A brainwide “universal translator” for neural dynamics at single-cell, single-spike resolution
Columbia University

In this project, our primary goal is to develop a multimodal foundation model of the brain by combining large-scale, self-supervised learning with the IBL brainwide dataset. This model aims to serve as a “universal translator” that maps neural activity to a range of outputs: decoded behavior, brain location, predicted neural dynamics, and inferred information flow. To achieve this, we will leverage ACCESS computational resources for model training, fine-tuning, and testing. These resources will support the compute-intensive work of training large-scale deep learning models on distributed GPUs, as well as processing and analyzing the extensive dataset. We will also use deep-learning software frameworks to implement our algorithms and models. Ultimately, the project's outcome will be released as an open-source model, serving as a resource for global neuroscience research and the development of brain-computer interfaces. With ACCESS resources, we aim to accelerate the advancement of neuroscience and enable broader participation in brain-related research worldwide.
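As a concrete illustration of the kind of self-supervised objective such a model could use, the minimal PyTorch sketch below masks random time bins of binned spike counts and trains a transformer encoder to reconstruct them with a Poisson loss. The architecture, masking ratio, and loss are illustrative assumptions, not the project's actual design.

import torch
import torch.nn as nn

class MaskedSpikeModel(nn.Module):
    """Transformer encoder that reconstructs masked time bins of spike counts.
    Positional encodings are omitted here for brevity."""
    def __init__(self, n_units: int, d_model: int = 128, n_layers: int = 4):
        super().__init__()
        self.embed = nn.Linear(n_units, d_model)       # project each time bin's counts
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.readout = nn.Linear(d_model, n_units)     # predict log firing rate per unit

    def forward(self, spikes: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        x = spikes.clone()
        x[mask] = 0.0                                  # zero out the masked time bins
        h = self.encoder(self.embed(x))
        return self.readout(h)                         # log-rates, one per unit and bin

def masked_poisson_loss(log_rates, spikes, mask):
    # Poisson negative log-likelihood, scored only on the masked bins
    nll = torch.nn.functional.poisson_nll_loss(
        log_rates, spikes, log_input=True, reduction="none")
    return nll[mask].mean()

# Toy usage: a batch of 8 trials, 100 time bins, 300 recorded units
spikes = torch.poisson(torch.rand(8, 100, 300))
mask = torch.rand(8, 100) < 0.25                       # mask ~25% of time bins
model = MaskedSpikeModel(n_units=300)
loss = masked_poisson_loss(model(spikes, mask), spikes, mask)
loss.backward()

In a full-scale run, this training loop would be wrapped in a distributed data-parallel setup across the GPUs the abstract describes; the objective itself is unchanged.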

Status: In Review
AI for Business
San Diego State University

The research focus is to apply the pre-training techniques of large language models to the encoding process of the Code Search Project, improving the existing model and developing a new code-search model. The research assistant will explore transformer-based models (such as GPT-3.5) with fine-tuning, which have achieved state-of-the-art performance on NLP tasks. The research also aims to test and evaluate various state-of-the-art models to identify the most promising ones.
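To make the encoding idea concrete, the sketch below shows embedding-based code search with a pretrained transformer encoder, using microsoft/codebert-base via Hugging Face Transformers as an assumed stand-in; the project's actual model choice and fine-tuning setup may differ. Query and snippets are embedded with mean pooling and ranked by cosine similarity.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
encoder = AutoModel.from_pretrained("microsoft/codebert-base")

def embed(texts):
    """Mean-pool the encoder's last hidden states into one vector per input."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state   # (batch, tokens, dim)
    mask = batch["attention_mask"].unsqueeze(-1)      # exclude padding tokens
    return (hidden * mask).sum(1) / mask.sum(1)

# Rank candidate snippets against a natural-language query
query = embed(["read a file line by line in Python"])
snippets = ["for line in open(path): process(line)",
            "def add(a, b): return a + b"]
scores = torch.nn.functional.cosine_similarity(query, embed(snippets))
print(scores)  # higher score = better match; fine-tuning would sharpen the ranking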

Status: In Progress