> and usually there are other ways to deal with it: using less or similar tools, or just running your massive single-line JSON thing through jq first
Older Unixen (SVR3-based, at least) had a tool called bfs - big file scanner. I used it some, e.g. when, as a system engineer, I helped the IT staff of one of our customers, a university processing exam results for tens of thousands of students from dozens of colleges affiliated to that university. IIRC its UI was something like a read-only ed (google "unix bfs command"). You used it to scan really large (for the time) files, e.g. input/output data files in large data-processing environments, for purposes like spot-checking that the files looked okay, with no major noticeable garbage in them.

Haven't checked whether it is present in modern Linuxes. I also don't remember if it could handle large files without newlines - likely not, if it was based on ed. I didn't have such files to work on then.
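For the same kind of spot-check today, standard tools can sample a huge file without reading it all. A minimal sketch, assuming a hypothetical file name (`big.txt` is generated here just for illustration):

```shell
# Create a sample "large" file: one 100 KB line with no newline (illustrative only).
printf 'x%.0s' $(seq 1 100000) > big.txt

# Peek at the first and last 64 bytes without loading the whole file:
head -c 64 big.txt
tail -c 64 big.txt

# Sample a 1 KB window from the middle with dd (skip counts blocks of bs bytes):
dd if=big.txt bs=1024 skip=50 count=1 2>/dev/null | head -c 64

# For newline-less files, fold inserts line breaks so ed-like tools and pagers can cope:
fold -w 80 big.txt | head -n 3
```

All of these read only the bytes they need (tail and dd seek), so they stay fast even on files far larger than memory.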