Talking with the Latents -- how to convert your LLM into an astronomer

This paper is a preprint and has not been certified by peer review.


Authors

Ilay Kamai, Marc Huertas-Company, Mike J. Smith, Hagai B. Perets

Abstract

Recent advances in Large Language Models (LLMs) offer unique opportunities for scientific tasks, yet their ability to reason over complex numerical data remains largely unexplored. We propose a simple mechanism to introduce domain-specific physical knowledge into LLMs by fusing pre-trained latent physical features with a pre-trained language model. Our method employs a teacher-student knowledge distillation framework where a large LLM (teacher) generates synthetic question-answer supervision to transfer physical reasoning to a smaller LLM (student). The student is conditioned on latent physical features and trained via a lightweight adapter and Low-Rank Adaptation (LoRA). We demonstrate that this approach, applied to models with 1B, 8B, and 32B parameters, enables effective reasoning over real scientific data. Our models substantially outperform strong baselines, such as Gemini 3 Pro, across multiple downstream tasks without task-specific fine-tuning. We show that the model combines latent information with general physical understanding to predict complex properties and can be "steered" by identifying physically meaningful directions in the latent space. This allows for explicit physical manipulation and natural language interpretation of latent structures. While our experiments focus on astrophysics, the framework is domain-agnostic and applicable to various scientific fields. Our main contribution is a general framework for using LLMs as interpretable interfaces to scientific latent spaces, enabling a single model to perform diverse tasks through natural language guidance. This work marks a step toward developing scientifically capable and useful LLMs.
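The abstract describes conditioning the student LLM on pre-trained latent physical features via a lightweight adapter, with the adapted latents fused into the language model's input. A minimal sketch of that fusion step is below, assuming a soft-token design in which an adapter projects each latent vector into a few pseudo-token embeddings prepended to the prompt; all dimensions, layer choices, and names (`LatentAdapter`, `n_tokens`) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class LatentAdapter(nn.Module):
    """Illustrative adapter: maps a frozen physical latent vector into
    a handful of soft tokens in the LLM's embedding space."""

    def __init__(self, latent_dim: int, llm_dim: int, n_tokens: int = 8):
        super().__init__()
        self.n_tokens = n_tokens
        self.llm_dim = llm_dim
        # A small MLP; the paper only specifies a "lightweight adapter",
        # so this two-layer projection is an assumption.
        self.proj = nn.Sequential(
            nn.Linear(latent_dim, llm_dim * n_tokens),
            nn.GELU(),
            nn.Linear(llm_dim * n_tokens, llm_dim * n_tokens),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, latent_dim) -> soft tokens: (batch, n_tokens, llm_dim)
        return self.proj(z).view(z.size(0), self.n_tokens, self.llm_dim)

# Fusion sketch: prepend adapted latents to the question's token embeddings,
# then feed the combined sequence to the (LoRA-tuned) student LLM.
adapter = LatentAdapter(latent_dim=128, llm_dim=512)
z = torch.randn(2, 128)               # latents from a frozen physical encoder
prompt_emb = torch.randn(2, 16, 512)  # stand-in for tokenized question embeddings
fused = torch.cat([adapter(z), prompt_emb], dim=1)
print(tuple(fused.shape))             # (2, 24, 512): 8 soft tokens + 16 prompt tokens
```

The same interface also suggests how "steering" could work: shifting `z` along a physically meaningful latent direction before the adapter changes the soft tokens, and hence the model's answer, without touching the text prompt.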
