Do LLMs Estimate Uncertainty Well in Instruction-Following? (Apple Machine Learning Research)
This paper was accepted at the Safe Generative AI Workshop (SGAIW) at NeurIPS 2024.

Large language models (LLMs) could be valuable personal AI agents across various domains, provided they can precisely follow user instructions. However, recent studies have shown significant limitations in LLMs' instruction-following capabilities, …
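The excerpt above cuts off before the paper's method, so the following is only a minimal sketch of one common uncertainty-estimation baseline in this setting, not the paper's approach: ask the model itself whether a candidate response follows the instruction, and read off the probability it places on "Yes" versus "No" as a confidence score. The model name, prompt wording, and helper function below are illustrative assumptions.

```python
# Sketch of a self-evaluation confidence baseline for instruction-following
# (assumed setup, not the paper's method). Requires: torch, transformers, accelerate.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"  # assumption: any chat-style causal LM

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, device_map="auto"
)
model.eval()


def self_eval_confidence(instruction: str, response: str) -> float:
    """Return P('Yes') / (P('Yes') + P('No')) for a follows-the-instruction judgment."""
    prompt = (
        f"Instruction: {instruction}\n"
        f"Response: {response}\n"
        "Does the response follow the instruction? Answer Yes or No.\n"
        "Answer:"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # next-token logits after "Answer:"
    yes_id = tokenizer.encode(" Yes", add_special_tokens=False)[0]
    no_id = tokenizer.encode(" No", add_special_tokens=False)[0]
    probs = torch.softmax(logits[[yes_id, no_id]].float(), dim=-1)
    return probs[0].item()  # model's confidence that the instruction was followed


# A well-calibrated model should give low confidence when the response
# ignores a constraint stated in the instruction.
score = self_eval_confidence(
    "Summarize the article in exactly two sentences.",
    "Here is a five-sentence summary...",
)
print(f"self-reported confidence: {score:.2f}")
```

Whether such self-reported scores are actually calibrated on instruction-following tasks is precisely the kind of question the paper examines.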