Collections
Discover the best community collections!
Collections including paper arxiv:2407.21018
-
SnapKV: LLM Knows What You are Looking for Before Generation
Paper • 2404.14469 • Published • 27 -
Finch: Prompt-guided Key-Value Cache Compression
Paper • 2408.00167 • Published • 18 -
Beyond RAG: Task-Aware KV Cache Compression for Comprehensive Knowledge Reasoning
Paper • 2503.04973 • Published • 26 -
A Simple and Effective L_2 Norm-Based Strategy for KV Cache Compression
Paper • 2406.11430 • Published • 25
-
RLHF Workflow: From Reward Modeling to Online RLHF
Paper • 2405.07863 • Published • 71 -
Chameleon: Mixed-Modal Early-Fusion Foundation Models
Paper • 2405.09818 • Published • 132 -
Meteor: Mamba-based Traversal of Rationale for Large Language and Vision Models
Paper • 2405.15574 • Published • 55 -
An Introduction to Vision-Language Modeling
Paper • 2405.17247 • Published • 90
-
The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits
Paper • 2402.17764 • Published • 626 -
BitNet: Scaling 1-bit Transformers for Large Language Models
Paper • 2310.11453 • Published • 105 -
Mixture-of-Depths: Dynamically allocating compute in transformer-based language models
Paper • 2404.02258 • Published • 107 -
TransformerFAM: Feedback attention is working memory
Paper • 2404.09173 • Published • 43
-
TroL: Traversal of Layers for Large Language and Vision Models
Paper • 2406.12246 • Published • 35 -
A Closer Look into Mixture-of-Experts in Large Language Models
Paper • 2406.18219 • Published • 17 -
ThinK: Thinner Key Cache by Query-Driven Pruning
Paper • 2407.21018 • Published • 32 -
Meltemi: The first open Large Language Model for Greek
Paper • 2407.20743 • Published • 68
-
SDXS: Real-Time One-Step Latent Diffusion Models with Image Conditions
Paper • 2403.16627 • Published • 22 -
Phased Consistency Model
Paper • 2405.18407 • Published • 48 -
Reducing Transformer Key-Value Cache Size with Cross-Layer Attention
Paper • 2405.12981 • Published • 33 -
Imp: Highly Capable Large Multimodal Models for Mobile Devices
Paper • 2405.12107 • Published • 29
-
Self-Rewarding Language Models
Paper • 2401.10020 • Published • 151 -
Orion-14B: Open-source Multilingual Large Language Models
Paper • 2401.12246 • Published • 14 -
MambaByte: Token-free Selective State Space Model
Paper • 2401.13660 • Published • 60 -
MM-LLMs: Recent Advances in MultiModal Large Language Models
Paper • 2401.13601 • Published • 48