I've actually had pretty good results from doing exactly that. There was one false positive when it tried to be Coverity and failed miserably, but the rest were of the form "you need to look at this bit more closely", and in most cases there was something there. Not necessarily a vuln, but places where the code could have been written more clearly. It was like having your fourth-grade English teacher looking over your shoulder and saying "you need to look at the grammar in this sentence more closely".
And using an LLM to audit your code isn't necessarily about producing perfect code; it's about keeping ahead of the other side, who are also using an LLM. You don't need to outrun the bear, just the other hikers.
This really is not the case.
You have freedom of methodology.
You can also ask it to enumerate various classes of risk and then find concrete evidence in the code for each of them.
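That two-pass approach can be sketched roughly like this. All the function names here are hypothetical, and `complete` stands in for whatever LLM client you actually use; this is just the structure, not any particular tool's API:

```python
# Two-pass LLM audit sketch: first enumerate candidate risks,
# then demand concrete evidence for each one before reporting it.
# `complete` is a placeholder for your LLM call: str -> str.

def enumerate_risks(complete, source: str) -> list[str]:
    prompt = (
        "List the distinct security or correctness risks you see in this "
        "code, one per line, no commentary:\n\n" + source
    )
    return [line.strip() for line in complete(prompt).splitlines() if line.strip()]

def find_evidence(complete, source: str, risk: str) -> str:
    prompt = (
        f"Risk under consideration: {risk}\n"
        "Quote the exact lines that demonstrate this risk, or reply "
        "'NO EVIDENCE' if you cannot find any:\n\n" + source
    )
    return complete(prompt)

def audit(complete, source: str) -> dict[str, str]:
    # The second pass drops risks the model cannot back up with actual
    # code, which is what keeps the false-positive rate manageable.
    return {
        risk: evidence
        for risk in enumerate_risks(complete, source)
        if (evidence := find_evidence(complete, source, risk)) != "NO EVIDENCE"
    }
```

The point of splitting it into two prompts is that the evidence pass acts as a filter on the enumeration pass: a risk with no quotable code behind it never reaches the report.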
Our LLM audits are certainly not just one prompt per file, so I have a hard time believing that best-in-class tools would work that way.