
Can't you combine multiple 4k-token instances of GPT-3.5 to fake a longer context? Having one GPT context per code file, for instance, and maybe some sort of index?

I'm not super versed in LangChain, but that might be kind of what it solves...
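The "one context per file plus an index" idea could be sketched roughly like this. This is a minimal illustration, not LangChain's actual API: `PerFileContexts` and its keyword index are hypothetical names, the routing is a naive keyword overlap, and no model call is made.

```python
import re
from collections import defaultdict

class PerFileContexts:
    """Keep one chat context per code file, plus a keyword index
    that routes each question to the most relevant file's context,
    so no single prompt has to exceed a ~4k-token window."""

    def __init__(self):
        self.contexts = {}             # filename -> message list for that file
        self.index = defaultdict(set)  # keyword -> filenames containing it

    def add_file(self, filename, source):
        # Each file gets its own context seeded with the file's source.
        self.contexts[filename] = [
            {"role": "system", "content": f"File {filename}:\n{source}"}
        ]
        for word in set(re.findall(r"\w+", source.lower())):
            self.index[word].add(filename)

    def route(self, question):
        """Pick the file whose indexed keywords overlap the question most."""
        scores = defaultdict(int)
        for word in re.findall(r"\w+", question.lower()):
            for fname in self.index.get(word, ()):
                scores[fname] += 1
        return max(scores, key=scores.get) if scores else None

ctx = PerFileContexts()
ctx.add_file("parser.py", "def parse_tokens(stream): ...")
ctx.add_file("render.py", "def render_html(tree): ...")
print(ctx.route("why does parse_tokens fail on empty input?"))  # parser.py
```

In practice the routed file's context would then be sent to the model on its own, and a real system would use embeddings rather than keyword overlap for the index.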



LangChain-style context prompting can theoretically allow compression of a longer conversation, which will likely be the best business strategy.
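The compression idea amounts to summarizing older turns once the transcript nears the token budget. Here is a hedged sketch of that pattern; `summarize` is a hypothetical stand-in for an LLM call (LangChain ships summarization chains for this), and the token estimate is a crude characters-divided-by-four heuristic.

```python
def summarize(messages):
    """Stand-in for an LLM summarization call; here it just truncates."""
    joined = " ".join(m["content"] for m in messages)
    return "Summary of earlier conversation: " + joined[:80]

def estimated_tokens(messages):
    # Rough heuristic: ~4 characters per token.
    return sum(len(m["content"]) // 4 for m in messages)

def compress(history, budget=4000, keep_recent=4):
    """If the history exceeds the budget, fold the oldest messages
    into a single summary message and keep the recent turns verbatim."""
    if estimated_tokens(history) <= budget:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [{"role": "system", "content": summarize(old)}] + recent
```

Each call to the model then sees the summary plus the last few turns, so the effective conversation can grow well past a single 4k window at the cost of some fidelity in the summarized portion.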



