Dataset schema (column name, type, observed range across rows):

| column | type | observed range |
|---|---|---|
| id | string | length 10 |
| url | string | length 42 |
| title | string | length 5-214 |
| average_rating | float64 | -1 to 8.5 |
| average_confidence | float64 | -1 to 5 |
| ratings | list | length 0-9 |
| confidences | list | length 0-9 |
| reviewers_num | int64 | 0-9 |
| keywords | list | length 1-42 |
| abstract | string | length 26-4.31k |
| tldr | string | length 0-250 |
| primary_area | string | 21 classes |
| pdf_url | string | length 40 |
| submission_date | timestamp[s] | 2025-09-01 19:59:51 to 2025-09-20 20:18:08 |
| total_reviews | int64 | 0-18 |
| reviews | list | length 0-9 |
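The derived columns in this schema (`average_rating`, `reviewers_num`) follow directly from the per-review lists. A minimal sketch with pandas, using two hypothetical in-memory rows rather than the actual dataset records:

```python
import pandas as pd

# Two hypothetical rows mirroring the schema above (not actual dataset records).
df = pd.DataFrame(
    {
        "id": ["aaaaaaaaaa", "bbbbbbbbbb"],
        "ratings": [[4, 6, 8, 4], [6, 4, 2]],
        "confidences": [[3, 3, 2, 4], [2, 3, 4]],
    }
)

# average_rating is the mean of per-reviewer ratings; an empty list maps to -1,
# consistent with the -1 lower bound the schema reports for unreviewed papers.
df["average_rating"] = df["ratings"].apply(lambda r: sum(r) / len(r) if r else -1)
df["reviewers_num"] = df["ratings"].apply(len)

print(df[["id", "average_rating", "reviewers_num"]])
```

The same pattern applies to `average_confidence` over the `confidences` column.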
**vxkzW4ljeX** · [A universal compression theory: Lottery ticket hypothesis and superpolynomial scaling laws](https://openreview.net/forum?id=vxkzW4ljeX)
- average_rating: 5.5 · average_confidence: 3 · ratings: [4, 6, 8, 4] · confidences: [3, 3, 2, 4] · reviewers_num: 4 · total_reviews: 4
- keywords: Neural scaling law; model compression; lottery ticket hypothesis; deep learning theory
- abstract: When training large-scale models, the performance typically scales with the number of parameters and the dataset size according to a slow power law. A fundamental theoretical and practical question is whether comparable performance can be achieved with significantly smaller models and substantially less data. In this w...
- tldr: We prove that permutation symmetry enables polylogarithmic compression of neural networks and datasets, thus establishing the dynamical lottery ticket hypothesis and boosting neural scaling laws
- primary_area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
- pdf_url: https://openreview.net/pdf?id=vxkzW4ljeX · submission_date: 2025-09-19T05:07:02
- first listed review (review_number 4): id vvIZ8RIzRX, Reviewer_YxjE, rating 4, confidence 3, soundness 3, contribution 2, presentation 3, summary: "This ...

**fwCoRzh0Dw** · [InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU](https://openreview.net/forum?id=fwCoRzh0Dw)
- average_rating: 4 · average_confidence: 3 · ratings: [6, 4, 2] · confidences: [2, 3, 4] · reviewers_num: 3 · total_reviews: 3
- keywords: Sparse Attention; Efficient Attention; Context Extrapolation; KV Cache Offloading
- abstract: In modern large language models (LLMs), handling very long context lengths presents significant challenges as it causes slower inference speeds and increased memory costs. Additionally, most existing pre-trained LLMs fail to generalize beyond their original training sequence lengths. To enable efficient and practical l...
- tldr: InfiniteHiP extends the servable model context length beyond VRAM and pretrained model context limitation.
- primary_area: infrastructure, software libraries, hardware, systems, etc.
- pdf_url: https://openreview.net/pdf?id=fwCoRzh0Dw · submission_date: 2025-09-17T09:29:23
- first listed review (review_number 3): id 1VQ0xZHvLL, Reviewer_SD7R, rating 6, confidence 2, soundness 3, contribution 3, presentation 2, summary: "Infini...

**5rjSeZCM6l** · [FedSumUp:Secure Federated Learning Without Client-Side Training for Resource-Constrained Edge Devices](https://openreview.net/forum?id=5rjSeZCM6l)
- average_rating: 3.5 · average_confidence: 3.25 · ratings: [4, 2, 4, 4] · confidences: [3, 3, 3, 4] · reviewers_num: 4 · total_reviews: 4
- keywords: Federated Learning; Data Condensation; Server-Side Optimization; Privacy-Preserving; Edge Devices; Variational Autoencoder
- abstract: Horizontal Federated Learning (HFL) enables multiple clients with private data to collaboratively train a global model without sharing their local data. As a research branch of HFL, Federated Data Condensation with Distribution Matching (FDCDM) introduces a novel collaborative paradigm where clients upload small synthe...
- tldr: (none)
- primary_area: alignment, fairness, safety, privacy, and societal considerations
- pdf_url: https://openreview.net/pdf?id=5rjSeZCM6l · submission_date: 2025-09-20T12:40:47
- first listed review (review_number 4): id GcXZTsH254, Reviewer_VTkQ, rating 4, confidence 3, soundness 3, contribution 3, presentation 3, summary: "This ...

**qN0Il4dtGg** · [HARMAP: Hierarchical Atomic Representation for Materials Property Prediction](https://openreview.net/forum?id=qN0Il4dtGg)
- average_rating: 3.5 · average_confidence: 3 · ratings: [2, 2, 4, 6] · confidences: [4, 3, 3, 2] · reviewers_num: 4 · total_reviews: 4
- keywords: AI for Materials; Atomic Representation; Material Property Prediction
- abstract: Accurate prediction of material properties is a key step toward rapid materials discovery and cost-effective exploration of vast chemical spaces. Recent advances in machine learning (ML) offer a data-driven alternative that enables fast and scalable property estimation. However, prevailing graph-based pipelines use one...
- tldr: A Hierarchical Atomic Representation for Materials Property prediction.
- primary_area: applications to physical sciences (physics, chemistry, biology, etc.)
- pdf_url: https://openreview.net/pdf?id=qN0Il4dtGg · submission_date: 2025-09-10T21:25:01
- first listed review (review_number 4): id Kr0LTtqs14, Reviewer_CDzq, rating 2, confidence 4, soundness 3, contribution 2, presentation 3, summary: "This p...

**0hLuQAT3fV** · [Universal Image Immunization against Diffusion-based Image Editing via Semantic Injection](https://openreview.net/forum?id=0hLuQAT3fV)
- average_rating: 5 · average_confidence: 3.5 · ratings: [4, 4, 4, 8] · confidences: [3, 4, 4, 3] · reviewers_num: 4 · total_reviews: 4
- keywords: Diffusion Model; AI Safety; Image Immunization; Adversarial Attack; Image Editing
- abstract: Recent advances in diffusion models have enabled powerful image editing capabilities guided by natural language prompts, unlocking new creative possibilities. However, they introduce significant ethical and legal risks, such as deepfakes and unauthorized use of copyrighted visual content. To address these risks, image ...
- tldr: (none)
- primary_area: alignment, fairness, safety, privacy, and societal considerations
- pdf_url: https://openreview.net/pdf?id=0hLuQAT3fV · submission_date: 2025-09-12T19:50:27
- first listed review (review_number 4): id Cp6SNqZd08, Reviewer_nGCo, rating 4, confidence 3, soundness 2, contribution 2, presentation 3, summary: "This p...

**3sJ4zKToW6** · [Consistent Low-Rank Approximation](https://openreview.net/forum?id=3sJ4zKToW6)
- average_rating: 6.666667 · average_confidence: 3.333333 · ratings: [4, 8, 8] · confidences: [3, 2, 5] · reviewers_num: 3 · total_reviews: 3
- keywords: low-rank approximation; online algorithms; consistency; recourse
- abstract: We introduce and study the problem of consistent low-rank approximation, in which rows of an input matrix $\mathbf{A}\in\mathbb{R}^{n\times d}$ arrive sequentially and the goal is to provide a sequence of subspaces that well-approximate the optimal rank-$k$ approximation to the submatrix $\mathbf{A}^{(t)}$ that has arr...
- tldr: (none)
- primary_area: optimization
- pdf_url: https://openreview.net/pdf?id=3sJ4zKToW6 · submission_date: 2025-09-19T05:52:21
- first listed review (review_number 3): id G9M6d2dYmo, Reviewer_ex4U, rating 4, confidence 3, soundness 3, contribution 2, presentation 2, summary: "This ...

**OyIJvyyB3R** · [LLM2Fx-Tools: Tool Calling for Music Post-Production](https://openreview.net/forum?id=OyIJvyyB3R)
- average_rating: 5.5 · average_confidence: 3.5 · ratings: [4, 8, 6, 4] · confidences: [3, 3, 4, 4] · reviewers_num: 4 · total_reviews: 4
- keywords: Music Post Production; Fx Chain Generation; Tool Calling
- abstract: This paper introduces LLM2Fx-Tools, a multimodal tool-calling framework that generates executable sequences of audio effects (Fx-chain) for music post-production. LLM2Fx-Tools uses a large language model (LLM) to understand audio inputs, select audio effects types, determine their order, and estimate parameters, guided...
- tldr: LLM2Fx-Tools is a framework that uses a multimodal LLM to automatically generate executable audio effect chains (as tools), chain-of-thought reasoning, and natural language responses.
- primary_area: applications to computer vision, audio, language, and other modalities
- pdf_url: https://openreview.net/pdf?id=OyIJvyyB3R · submission_date: 2025-09-19T13:42:11
- first listed review (review_number 4): id B7fQjc5nan, Reviewer_Rbd9, rating 4, confidence 3, soundness 3, contribution 2, presentation 3, summary: "This ...

**rcsZNV9A5j** · [Flash Multi-Head Feed-Forward Network](https://openreview.net/forum?id=rcsZNV9A5j)
- average_rating: 5 · average_confidence: 3.75 · ratings: [6, 4, 4, 6] · confidences: [3, 4, 4, 4] · reviewers_num: 4 · total_reviews: 4
- keywords: Machine Learning Systems; Machine Learning; Software-Hardware Codesign; Natural Language Processing; Transformer; Deep Learning; Model Architecture
- abstract: We explore Multi-Head FFN (MH-FFN) as a replacement of FFN in the Transformer architecture, motivated by the structural similarity between single-head attention and FFN. While multi-head mechanisms enhance expressivity in attention, naively applying them to FFNs faces two challenges: memory consumption scaling with the...
- tldr: We propose a novel multi-head FFN that achieves better transformer model performance while using 3-5x less memory and running 1.00-1.08x faster than standard SwiGLU FFNs.
- primary_area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
- pdf_url: https://openreview.net/pdf?id=rcsZNV9A5j · submission_date: 2025-09-16T16:13:44
- first listed review (review_number 4): id TygVX9zSRX, Reviewer_i2pJ, rating 6, confidence 3, soundness 3, contribution 3, presentation 2, summary: "This p...

**eS4MAmmCHy** · [PEL-NAS: Search Space Partitioned Architecture Prompt Co-evolutionary LLM-driven Hardware-Aware Neural Architecture Search](https://openreview.net/forum?id=eS4MAmmCHy)
- average_rating: 3.5 · average_confidence: 4 · ratings: [4, 4, 4, 2] · confidences: [4, 4, 4, 4] · reviewers_num: 4 · total_reviews: 4
- keywords: Large Language Model; Hardware-aware; Neural Architecture Search
- abstract: Hardware-Aware Neural Architecture Search (HW-NAS) requires joint optimization of accuracy and latency under device constraints. Traditional supernet-based methods require multiple GPU days per dataset. Large Language Model (LLM)-driven approaches avoid training a large supernet and can provide quick feedback, but we ...
- tldr: (none)
- primary_area: infrastructure, software libraries, hardware, systems, etc.
- pdf_url: https://openreview.net/pdf?id=eS4MAmmCHy · submission_date: 2025-09-18T03:16:21
- first listed review (review_number 4): id r5WN4tP0vh, Reviewer_ygWA, rating 4, confidence 4, soundness 3, contribution 2, presentation 2, summary: "This p...

**MgVNhx5uaa** · [ATOM-Bench: From Atoms to Conclusions in Objective Evaluation of Large Multimodal Models Reasoning](https://openreview.net/forum?id=MgVNhx5uaa)
- average_rating: 3 · average_confidence: 3.75 · ratings: [2, 4, 4, 2] · confidences: [4, 4, 3, 4] · reviewers_num: 4 · total_reviews: 4
- keywords: multimodal Large Language Models; benchmark; chain of thought
- abstract: Chain-of-Thought (CoT) reasoning has significantly enhanced the ability of Large Multimodal Models (LMMs) to tackle complex image–text tasks, establishing itself as a cornerstone of multimodal learning. Despite significant progress, the impact of CoT on LMMs still lacks objective evaluation and in-depth research. Curre...
- tldr: We introduce ATOM-Bench, a diagnostic benchmark for evaluating Chain-of-Thought reasoning in Large Multimodal Models via objective atomic questions, spanning 2,920 QAs over 570 real-world images, to address challenges of reasoning reliability.
- primary_area: datasets and benchmarks
- pdf_url: https://openreview.net/pdf?id=MgVNhx5uaa · submission_date: 2025-09-18T21:58:39
- first listed review (review_number 4): id qyea8A8FPG, Reviewer_sAqG, rating 2, confidence 4, soundness 2, contribution 2, presentation 3, summary: "The p...

**wztR0XcNW9** · [TopoCore: Unifying Topology Manifolds and Persistent Homology for Data Pruning](https://openreview.net/forum?id=wztR0XcNW9)
- average_rating: 4 · average_confidence: 3 · ratings: [4, 2, 6] · confidences: [3, 3, 3] · reviewers_num: 3 · total_reviews: 3
- keywords: Coreset Selection; Topological Data Analysis; Persistent Homology; Architectural Transferability; Data-Efficient Learning; Manifold Learning; Pretrained Models
- abstract: Geometric coreset selection methods, while practical for leveraging pretrained models, are fundamentally unstable. Their reliance on extrinsic geometric metrics makes them highly sensitive to variations in feature embeddings, leading to poor performance when transferring across different network architectures or when d...
- tldr: (none)
- primary_area: learning on graphs and other geometries & topologies
- pdf_url: https://openreview.net/pdf?id=wztR0XcNW9 · submission_date: 2025-09-18T02:54:05
- first listed review (review_number 3): id p1cclI53pH, Reviewer_Sq9q, rating 4, confidence 3, soundness 2, contribution 2, presentation 3, summary: "The pa...

**WnRzN4U8Y8** · [WIMFRIS: WIndow Mamba Fusion and Parameter Efficient Tuning for Referring Image Segmentation](https://openreview.net/forum?id=WnRzN4U8Y8)
- average_rating: 5 · average_confidence: 4.5 · ratings: [4, 6, 4, 6] · confidences: [5, 5, 3, 5] · reviewers_num: 4 · total_reviews: 4
- keywords: Referring image segmentation; parameter efficient tuning; computer vision
- abstract: Existing Parameter-Efficient Tuning (PET) methods for Referring Image Segmentation (RIS) primarily focus on layer-wise feature alignment, often neglecting the crucial role of a neck module for the intermediate fusion of aggregated multi-scale features, which creates a significant performance bottleneck. To address this...
- tldr: This paper introduces WIMFRIS, a framework that achieves state-of-the-art in referring image segmentation by proposing a novel HMF neck module to efficiently fuse text with visual features, overcoming a key performance bottleneck in prior methods.
- primary_area: applications to computer vision, audio, language, and other modalities
- pdf_url: https://openreview.net/pdf?id=WnRzN4U8Y8 · submission_date: 2025-09-20T14:00:25
- first listed review (review_number 4): id l3NeqmvthW, Reviewer_N61Y, rating 4, confidence 5, soundness 2, contribution 2, presentation 2, summary: "The p...

Downloads last month: 55