Scientists at the Waisman Center recently published a paper in Nature Machine Intelligence describing their creation of a machine learning algorithm called JAMIE (Joint Variational Autoencoders for Multimodal Imputation and Embedding). JAMIE uses data from one modality, such as gene expression, to predict a missing modality, such as electrophysiology, a technique called cross-modal imputation. In addition to imputation, or estimating values for missing data, JAMIE can integrate a cell's different modalities for a more comprehensive understanding of its function.
Noah Cohen Kalafut, Computer Sciences PhD student and first author of the study, says, “In addition to the imputation, predicting one modality from another, we’re also doing integration, which means taking both of the modalities and then putting them together in such a way that they form common latent spaces, which can then be used for further downstream analysis.”
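The idea described above can be sketched in miniature. JAMIE itself uses joint variational autoencoders (neural networks), but the core logic of a shared latent space can be illustrated with simple linear maps: both modalities are generated from the same underlying cell state, an "encoder" for one modality and a "decoder" for the other are fit on paired cells, and imputation chains them together. All names, dimensions, and the linear least-squares fitting below are illustrative assumptions, not JAMIE's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 5             # cells, shared latent dimensions (hypothetical)
d_expr, d_ephys = 30, 8   # feature counts per modality (hypothetical)

# Both modalities are simulated from the same latent cell state,
# mirroring the assumption of a common latent space.
Z = rng.normal(size=(n, k))
W_expr = rng.normal(size=(k, d_expr))
W_ephys = rng.normal(size=(k, d_ephys))
X_expr = Z @ W_expr        # "gene expression" modality
Y_ephys = Z @ W_ephys      # "electrophysiology" modality

# Fit a linear "encoder" (expression -> latent) and "decoder"
# (latent -> electrophysiology) on paired training cells.
E_expr, *_ = np.linalg.lstsq(X_expr, Z, rcond=None)
D_ephys, *_ = np.linalg.lstsq(Z, Y_ephys, rcond=None)

# Cross-modal imputation: expression -> shared latent -> electrophysiology.
Y_imputed = (X_expr @ E_expr) @ D_ephys
rel_err = np.linalg.norm(Y_imputed - Y_ephys) / np.linalg.norm(Y_ephys)
print(f"relative imputation error: {rel_err:.2e}")
```

Because the toy data are noiseless and linear, the imputation is nearly exact; real single-cell data are noisy and nonlinear, which is why JAMIE uses variational autoencoders rather than linear projections.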
The scientists are also looking to apply JAMIE to single-cell data from disease samples, imputing single-cell features that are missing or difficult to observe in brain conditions such as neurodevelopmental and neurodegenerative disorders, in order to learn more about disease-specific single-cell features. Daifeng Wang, associate professor of biostatistics & medical informatics and of computer sciences, likens JAMIE to ChatGPT in the sense that you can give it some input and it can give you missing information. “JAMIE could function as a sort of neuronalGPT or brainGPT,” he says.
Read more in this Waisman Center post!