Lucas Dionisopoulos
Currently working on RL + LLMs.

Research

How Reasoning Evolves from Post-Training Data in Sequential Domains
Outperformed state-of-the-art open-source reasoning models at chess with a 7B-parameter language model trained via SFT and RL. The central question was how supervised fine-tuning shapes post-RL reasoning, both quantitatively and qualitatively, studied with custom theory-inspired datasets.
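A toy sketch of the SFT-then-RL recipe, shrunk to a tabular softmax policy so it runs without any ML framework. The three-state "game", the expert moves, and all hyperparameters are invented for illustration; the real work uses a 7B language model, not a lookup table.

```python
# Illustrative SFT -> RL pipeline on a tabular softmax policy.
# The 3-state "game" and expert moves are hypothetical stand-ins.
import math, random

random.seed(0)
STATES, MOVES = 3, 3
# logits[s][a]: parameters of a softmax policy over moves in each state
logits = [[0.0] * MOVES for _ in range(STATES)]

def probs(s):
    z = [math.exp(l) for l in logits[s]]
    t = sum(z)
    return [x / t for x in z]

# --- Stage 1: SFT, cross-entropy on expert (state, move) pairs ---
expert = [(0, 1), (1, 2), (2, 0)]  # hypothetical "book" moves
for _ in range(200):
    for s, a in expert:
        p = probs(s)
        for m in range(MOVES):  # gradient of -log p(a | s)
            logits[s][m] -= 0.5 * (p[m] - (1.0 if m == a else 0.0))

# --- Stage 2: RL (REINFORCE), reward for playing the target line ---
def reward(s, a):
    return 1.0 if (s, a) in expert else -0.1

for _ in range(200):
    s = random.randrange(STATES)
    p = probs(s)
    a = random.choices(range(MOVES), weights=p)[0]
    r = reward(s, a)
    for m in range(MOVES):  # policy-gradient step
        logits[s][m] += 0.1 * r * ((1.0 if m == a else 0.0) - p[m])

greedy = [max(range(MOVES), key=lambda m: probs(s)[m]) for s in range(STATES)]
print(greedy)  # greedy policy after SFT + RL
```

The point of the two stages is separable credit: SFT fixes the policy's prior over moves, and RL then nudges it with a scalar reward, which is why the fine-tuning data can leave a visible imprint on post-RL behavior.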
A* Neural Guided Search on ARC-AGI
Trained an image embedding model from scratch (a modified iBOT) on data from a synthetic generation pipeline, then combined it with a probabilistic context-free grammar (PCFG) in a neurosymbolic program-search approach to ARC-AGI.
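A minimal sketch of A* search over programs drawn from a PCFG. A hand-written pixel-mismatch distance stands in for the learned embedding model, and the three-primitive grammar and 2x2 grid task are invented for illustration, not ARC-AGI itself.

```python
# A* over PCFG-generated programs; path cost is the program's -log prior,
# and a heuristic (here, pixel mismatch) stands in for embedding distance.
import heapq, itertools, math

OPS = {  # primitive grid ops with hypothetical PCFG probabilities
    "flip_h": (0.4, lambda g: [row[::-1] for row in g]),
    "flip_v": (0.4, lambda g: g[::-1]),
    "invert": (0.2, lambda g: [[1 - c for c in row] for row in g]),
}

def run(prog, grid):
    for name in prog:
        grid = OPS[name][1](grid)
    return grid

def heuristic(out, target):  # stand-in for learned embedding distance
    return sum(a != b for ra, rb in zip(out, target) for a, b in zip(ra, rb))

def astar(inp, target, max_len=4):
    tie = itertools.count()  # tie-breaker so heapq never compares tuples of ops
    frontier = [(heuristic(inp, target), next(tie), 0.0, ())]
    while frontier:
        _, _, cost, prog = heapq.heappop(frontier)
        out = run(prog, inp)
        if out == target:
            return list(prog)
        if len(prog) >= max_len:
            continue
        for name, (p, _) in OPS.items():
            new = prog + (name,)
            c = cost - math.log(p)  # accumulate -log prior under the PCFG
            h = heuristic(run(new, inp), target)
            heapq.heappush(frontier, (c + h, next(tie), c, new))
    return None

inp = [[1, 0], [0, 0]]
target = [[0, 0], [0, 1]]  # inp flipped horizontally then vertically
print(astar(inp, target))
```

The design choice A* captures: the grammar's prior keeps programs short and plausible, while the (learned, in the real system) heuristic orders the frontier by how close each partial program's output looks to the target.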
Philosophize That
Built a corpus of philosophical texts, applying several techniques to isolate signal from correlated noise, and trained a classifier over document-level embeddings to identify the philosophical basis of my own journal entries.
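A sketch of the classification step as nearest-centroid matching over document embeddings. Normalized bag-of-words vectors stand in for the learned embeddings, and the corpus snippets, labels, and journal entry are all invented for illustration.

```python
# Nearest-centroid classification over document-level embeddings.
# Bag-of-words vectors are a stand-in for learned embeddings; the
# corpus snippets and labels are hypothetical.
import math
from collections import Counter

corpus = {
    "stoicism": [
        "control what is within your power accept the rest",
        "virtue is the only good external things are indifferent",
    ],
    "existentialism": [
        "existence precedes essence we are condemned to be free",
        "man defines himself through his own free choices",
    ],
}

def embed(text):  # toy embedding: L2-normalized word counts
    c = Counter(text.split())
    n = math.sqrt(sum(v * v for v in c.values()))
    return {w: v / n for w, v in c.items()}

def cosine(a, b):
    return sum(a[w] * b.get(w, 0.0) for w in a)

def centroid(texts):  # one mean vector per philosophical school
    vecs = [embed(t) for t in texts]
    keys = {w for v in vecs for w in v}
    return {w: sum(v.get(w, 0.0) for v in vecs) / len(vecs) for w in keys}

centroids = {label: centroid(texts) for label, texts in corpus.items()}

def classify(entry):
    return max(centroids, key=lambda lab: cosine(embed(entry), centroids[lab]))

print(classify("today I tried to accept what I cannot control"))
```

A learned embedding model replaces `embed` in the real system; the rest of the pipeline (centroid per school, cosine ranking) keeps the same shape.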