Alignment post-training for LLMs: lessons learned making it work
Alignment post-training is the art of getting an AI model to complete tasks in a way that meets its users' expectations.
In this post, I document what I learned getting this process to work for a production-grade LLM.