Energy-Based Transformers are Scalable Learners and Thinkers Paper • 2507.02092 • Published Jul 2, 2025
When Models Lie, We Learn: Multilingual Span-Level Hallucination Detection with PsiloQA Paper • 2510.04849 • Published Oct 6, 2025
The Curious Case of Factual (Mis)Alignment between LLMs' Short- and Long-Form Answers Paper • 2510.11218 • Published Oct 13, 2025
How Much Do LLMs Hallucinate across Languages? On Multilingual Estimation of LLM Hallucination in the Wild Paper • 2502.12769 • Published Feb 18, 2025
Centurio: On Drivers of Multilingual Ability of Large Vision-Language Models Paper • 2501.05122 • Published Jan 9, 2025