Hacker News | new | past | comments | ask | show | jobs | submit | firesaber's comments

Thanks! The last-mile injection idea is exactly how I think about it too.

I realized that for 90% of 'summarize this' or 'debug this' tasks, the LLM doesn't actually need the specific PII or sensitive values; it just needs to know that an entity exists at that position to understand the structure.

That's why I focused on the reversible mapping: we can re-inject the real data locally after the LLM has done the heavy lifting. Cool to hear you're using a similar pattern for credentials.
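In case it helps, here's a minimal sketch of the reversible-mapping pattern (illustrative only; the function and placeholder names are my own, not SafePrompt's actual implementation): replace each detected entity with a numbered placeholder, keep the mapping locally, and swap the real values back into the model's response.

```typescript
// A reversible entity mapping: placeholder -> original value.
type Mapping = Map<string, string>;

// Replace every match of `pattern` with a numbered placeholder like
// [EMAIL_1], recording the original value so it can be restored later.
function redact(
  text: string,
  pattern: RegExp,
  label: string
): { redacted: string; mapping: Mapping } {
  const mapping: Mapping = new Map();
  let i = 0;
  const redacted = text.replace(pattern, (match) => {
    const placeholder = `[${label}_${++i}]`;
    mapping.set(placeholder, match);
    return placeholder;
  });
  return { redacted, mapping };
}

// Swap the real values back into the LLM's output, entirely locally.
function reinject(text: string, mapping: Mapping): string {
  let out = text;
  for (const [placeholder, original] of mapping) {
    out = out.split(placeholder).join(original);
  }
  return out;
}

// Example: the email never leaves the machine in plain form.
const emailRe = /[\w.+-]+@[\w-]+\.[\w.]+/g;
const { redacted, mapping } = redact(
  "Contact alice@example.com",
  emailRe,
  "EMAIL"
);
// redacted === "Contact [EMAIL_1]"
const restored = reinject(redacted, mapping);
// restored === "Contact alice@example.com"
```

The key property is that the mapping never leaves the client, so the LLM only ever sees placeholders.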


Hey HN, I built SafePrompt because I wanted to paste sensitive docs into ChatGPT but didn't trust OpenAI with the raw data.

It runs 100% in the browser (Next.js + WebAssembly) and uses regex and deterministic logic (no AI) to strip names, emails, and SSNs before they leave your clipboard.
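To give a flavor of the regex-only approach, here's a hypothetical sketch (these patterns are illustrative, not SafePrompt's actual rules; real coverage needs many more patterns and care around false positives):

```typescript
// Illustrative regex detectors for a few PII types. Deterministic,
// no model calls, so everything can run client-side.
const PATTERNS: Record<string, RegExp> = {
  EMAIL: /[\w.+-]+@[\w-]+\.[\w.]+/g,
  SSN: /\b\d{3}-\d{2}-\d{4}\b/g,
};

// Replace every match with a labeled placeholder like [EMAIL].
function strip(text: string): string {
  let out = text;
  for (const [label, re] of Object.entries(PATTERNS)) {
    out = out.replace(re, `[${label}]`);
  }
  return out;
}

const cleaned = strip("Mail bob@test.io, SSN 123-45-6789");
// cleaned === "Mail [EMAIL], SSN [SSN]"
```

Names are the hard part, since they don't follow a fixed pattern the way emails and SSNs do; that's where the regex/logic trade-off really shows up.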

It's a simple MVP right now. Would love to know if this solves a real problem for you.


