Is that the case?


LLMs give the most likely response to a prompt. So if you prompt one with "find security bugs in this code", it will respond with "This may be a security bug", and then you reply "you fucking donkey, this curl code has already been eyeballed by hundreds of people — do you really think a statistical model will find something new?"


