Aligner: Achieving Efficient Alignment through Weak-to-Strong Correction Paper • 2402.02416 • Published Feb 4, 2024 • 4
Kimi k1.5: Scaling Reinforcement Learning with LLMs Paper • 2501.12599 • Published Jan 22, 2025 • 126
DeepSeek-V3.2: Pushing the Frontier of Open Large Language Models Paper • 2512.02556 • Published Dec 2, 2025 • 244
ProgressGym: Alignment with a Millennium of Moral Progress Paper • 2406.20087 • Published Jun 28, 2024 • 4
BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset Paper • 2307.04657 • Published Jul 10, 2023 • 6
Safe RLHF: Safe Reinforcement Learning from Human Feedback Paper • 2310.12773 • Published Oct 19, 2023 • 28