LLMs can support safety-critical decision-making by evaluating structured assurance arguments, identifying gaps in evidence, surfacing doubts, and generating defeaters that counter confirmation bias and strengthen justified trust in the argument.
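As a minimal sketch of the defeater-generation step, the snippet below prompts an LLM to propose defeaters for one claim-evidence pair from an assurance case. It assumes an OpenAI-style chat API; the model name, the example claim, and the prompt wording are illustrative, not taken from the source.

```python
# Sketch: asking an LLM to generate defeaters for a single claim
# in a structured assurance argument. Assumptions (not from the
# source): an OpenAI-style chat API, the model name, and the claim.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A hypothetical claim-evidence pair from an assurance case.
claim = "The perception system detects pedestrians within 50 m in daylight."
evidence = "Track tests: 10,000 daylight scenarios, 99.7% detection rate."

prompt = (
    "You are reviewing a safety assurance case. For the claim and "
    "evidence below, list three defeaters: specific doubts, gaps, or "
    "conditions under which the claim could fail despite the evidence.\n\n"
    f"Claim: {claim}\nEvidence: {evidence}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)

# Candidate defeaters for a human reviewer to triage, not auto-accept.
print(response.choices[0].message.content)
```

In this framing the model's output is advisory: the generated defeaters feed a human review step rather than being accepted into the argument automatically.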