leftbrainstrain's comments

I thought Carpenter v. United States was that case, but apparently it wasn't. Terry stops by local officers, based on tips passed from regional Fusion Centers via WhatsApp, sound less unusual every day. Parallel construction has long since become an established technique.


Thank you for sharing this benchmark, and the library. I was expecting ideal Go performance to approach that of Java and C#/.NET for large files, which last time I checked (a while ago) was about half the throughput of C code using libxml2. Beating libxml2 by a significant margin is very impressive.
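For reference, here's a minimal sketch of how I'd measure a streaming-parse baseline with the standard library's encoding/xml; the file name is a placeholder, and this isn't the library or harness from the linked benchmark.

```go
// Rough baseline: stream-parse a large XML file with the standard
// library and report throughput. "large.xml" is a placeholder input;
// the library from the linked post would replace encoding/xml here.
package main

import (
	"encoding/xml"
	"fmt"
	"io"
	"os"
	"time"
)

func main() {
	f, err := os.Open("large.xml") // hypothetical input file
	if err != nil {
		panic(err)
	}
	defer f.Close()

	info, _ := f.Stat()
	dec := xml.NewDecoder(f)

	start := time.Now()
	var tokens int
	for {
		_, err := dec.Token()
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		tokens++
	}
	elapsed := time.Since(start)

	mb := float64(info.Size()) / (1 << 20)
	fmt.Printf("%d tokens, %.1f MB in %v (%.1f MB/s)\n",
		tokens, mb, elapsed, mb/elapsed.Seconds())
}
```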


Had a similar implementation been ported to C or C#, it would likely have ended up faster: the Go compiler is relatively weak, and Go the language lacks certain crucial performance primitives that C, C++, and C# (and Rust) have.


To back up the story regarding async a bit, at least on the front end: a long time ago, in the 2000s, we'd have a server farm of front-end systems to handle client connections, since we did all rendering on the server at the time. On those heavyweight front-end servers, we used threading, with one TCP connection assigned to each thread. Threading was also less efficient (in Linux, at least) than it is now, so a large number of clients necessitated a large number of servers.

When interfacing with external systems, standard protocols and/or file formats were preferred. Web services of some kind were starting to become popular, but usually only for interfacing with external systems, since they used XML (SOAP) at the time and processing XML is computationally expensive. This was before Google's V8 was released, so JavaScript was seen as sort of a (slow) toy language for minor DOM modifications, not for doing significant portions of the rendering. The general guidance was that anything like form validation done client-side in JS was only for slight efficiency gains, and all application logic had to be done on the server. The release of NGINX to address the C10K problem, of V8 to make JS run faster, and of Node.js to let front-end systems scale to large numbers of idle TCP connections all impacted this paradigm in the 2000s.
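As a rough illustration of that connection-per-handler shape, here's a minimal Go sketch; goroutines stand in for the dedicated OS threads we used back then, so it shows the structure, not the per-thread cost.

```go
// Connection-per-handler echo server. In the setup described above,
// each accepted connection would have been pinned to its own OS
// thread; Go's goroutines are far cheaper, but the shape is the same.
package main

import (
	"bufio"
	"log"
	"net"
)

func handle(conn net.Conn) {
	defer conn.Close()
	r := bufio.NewReader(conn)
	for {
		line, err := r.ReadString('\n')
		if err != nil {
			return // client went away
		}
		if _, err := conn.Write([]byte(line)); err != nil {
			return
		}
	}
}

func main() {
	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		go handle(conn) // one handler per connection
	}
}
```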

Internally, applications often used proprietary communication protocols, especially when interacting with internal queueing systems. For internal systems, businesses prefer that data be retained intact. At the time, clients still sometimes wanted systems to be able to participate in distributed two-phase commit (XA), but I think that preference has faded a bit. When writing a program that services queues, you didn't need to worry about having a large number of threads or TCP connections -- you just pulled a request message from the request queue, processed it, pushed a response onto the response queue, and moved on to the next request. I'd argue that the easing of the strong preference for transactional integrity, the removal of the need for internal services to care about the C10K problem (async), and the need to retain developers who want to work with recent "cool" technologies all reduced the drive for internal messaging solutions that guarantee the durability and integrity of messages.
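That loop is simple enough to sketch. Below, in-memory Go channels stand in for the request and response queues (the Message type and the "processing" step are made up for illustration), which is why there's no thread or connection management in sight.

```go
// Queue-servicing loop: take a request, process it, publish a response,
// repeat. Channels stand in for durable request/response queues.
package main

import (
	"fmt"
	"strings"
)

type Message struct {
	ID   int
	Body string
}

func serveQueue(requests <-chan Message, responses chan<- Message) {
	for req := range requests { // blocks until the next request arrives
		responses <- Message{
			ID:   req.ID,
			Body: strings.ToUpper(req.Body), // placeholder "business logic"
		}
	}
	close(responses)
}

func main() {
	requests := make(chan Message, 3)
	responses := make(chan Message, 3)

	go serveQueue(requests, responses)

	for i, body := range []string{"alpha", "beta", "gamma"} {
		requests <- Message{ID: i, Body: body}
	}
	close(requests)

	for resp := range responses {
		fmt.Printf("response %d: %s\n", resp.ID, resp.Body)
	}
}
```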

Also, AWS's certifications try to reflect how their services are actually used. The AWS Certified Developer - Associate exam still covers SQS, so people are still using it, even if it isn't cool. At my last job I saw applications using RabbitMQ, too.
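For what it's worth, a basic long-polling SQS consumer is still only a few lines with the AWS SDK for Go v2; the queue URL below is a placeholder, and error handling is kept to the bare minimum.

```go
// Long-polling SQS consumer using the AWS SDK for Go v2.
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sqs"
)

func main() {
	ctx := context.Background()
	// Placeholder queue URL for illustration only.
	queueURL := "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"

	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := sqs.NewFromConfig(cfg)

	for {
		out, err := client.ReceiveMessage(ctx, &sqs.ReceiveMessageInput{
			QueueUrl:            aws.String(queueURL),
			MaxNumberOfMessages: 10,
			WaitTimeSeconds:     20, // long polling keeps the loop cheap while idle
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, m := range out.Messages {
			log.Printf("got message: %s", aws.ToString(m.Body))

			// Delete only after processing succeeds, so a crash here
			// just means the message gets redelivered later.
			if _, err := client.DeleteMessage(ctx, &sqs.DeleteMessageInput{
				QueueUrl:      aws.String(queueURL),
				ReceiptHandle: m.ReceiptHandle,
			}); err != nil {
				log.Fatal(err)
			}
		}
	}
}
```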

