"you're absolutely right" is a crutch llms use to get around partial understanding of problems
jumping into 1M+ line codebases w/ 4k lines of context on a whim is impossible without assuming the user's prompt is correct
agent mode doesn't solve this. the llm's prior is "the user is right", which is hard to refute from inside the conversation
the next "leap" will be when ai can challenge our assumptions, but I don't think we're getting there with the current patterns.