Hacker News | new | past | comments | ask | show | jobs | submit | apimade's comments

Total cost of ownership.

I’d give my entire family these ahead of Windows laptops any day.


> Total cost of ownership.

Mister Gates, is that you?


Likely aimed at classified/defence environments. In those spaces, hardware typically takes 18–36 months after commercial deployment before it’s approved—due to firmware vetting, side-channel analysis, crypto validation, and similar processes.

Meanwhile, commercial operators have already deployed their hardware for public workloads. Existing Blackwell capacity won’t just be shifted into classified environments—governments don’t repurpose hardware from unclassified infrastructure for secret/TS systems. That deployed stock will stay in the private sector for hosted AI workloads.

For many high-security use cases, new Blackwell systems may effectively be the only viable option, especially given the slow review cycles around new firmware and GPU software stacks. Newer chipsets will also be prioritized for training due to performance gains.

Oracle likely recognizes this dynamic and is betting competitors may eventually need to deploy in their data centers. Governments haven’t historically deployed GPU capacity at this scale, beyond ASIC/FPGA crypto workloads, and likely don’t have large pools of pristine Blackwell hardware available.

They’re also purchasing late in the cycle, which may work in their favour.


Contract-first. API-first. Domain-driven. Platform-driven. Microservice-driven.

Tech loves making something a top priority (and forgetting about it several years later); AI is the first one that is applicable to the masses.

.. Well, maybe not User-first. But that was even less clear than AI-first.


Isn’t this trivial? How is it not completely automated at this point?


Warning: in Enterprise (Grid) your account will likely be flagged as hijacked, and all of your sessions will be killed.

Slack implemented session hijacking detection a while ago, and using LLMs without throttling will very likely result in alerts. If you’re on Enterprise, I’d suggest re-slopping a re-implementation of this with ghost Chrome puppeteer.


I ended up vibe coding a script that uses the Slack token from the browser to download my messages locally. It hasn't been flagged yet. But my account got flagged when I tried slackdump.
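A minimal sketch of the pacing half of that approach, assuming the usual Slack cursor-paginated response shape (`messages` plus `response_metadata.next_cursor`); `fetch_page` is a stand-in for however you issue the authenticated request:

```python
import time

def paginate(fetch_page, pause=1.5):
    """Drain a cursor-paginated Slack endpoint without bursting.

    fetch_page(cursor) must return a dict shaped like Slack's
    conversations.history response:
    {"messages": [...], "response_metadata": {"next_cursor": "..."}}.
    """
    messages, cursor = [], None
    while True:
        page = fetch_page(cursor)
        messages.extend(page.get("messages", []))
        cursor = (page.get("response_metadata") or {}).get("next_cursor")
        if not cursor:
            return messages
        # Steady pacing: rapid bursts are what tend to look like hijacking.
        time.sleep(pause)
```

In practice you'd wire `fetch_page` to a GET against `https://slack.com/api/conversations.history` with the browser token in an `Authorization: Bearer` header (browser `xoxc` tokens generally also need the session cookie sent along) — those header details are an assumption about the setup described above, not something the commenter confirmed.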


I don't use Slack Grid, but if you open an issue I'm more than happy to work with you on it!


I spent the better part of my first professional decade writing RESTful abstractions over SOAP services and XML RPC monstrosities. I’ve done it for probably upwards of 200 or 300 systems (not interfaces, systems).

There’s one improvement XML had over JSON, and that’s comments.
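The difference is easy to demonstrate with the standard library: an XML parser silently skips `<!-- -->` annotations, while strict JSON has no comment syntax at all, so anything resembling one is a parse error (the config payload below is a made-up example):

```python
import json
import xml.etree.ElementTree as ET

# XML: the comment is tolerated and simply skipped by the parser.
xml_payload = "<config><!-- retries tuned down last quarter --><retries>2</retries></config>"
retries = ET.fromstring(xml_payload).find("retries").text

def json_allows_comments(payload: str) -> bool:
    """True if the payload parses as strict JSON, False on a parse error."""
    try:
        json.loads(payload)
        return True
    except json.JSONDecodeError:
        return False
```

`json_allows_comments('{"retries": 2 /* tuned down */}')` returns `False`: the stray `/*` is a hard error, which is exactly why people end up abusing keys like `"_comment"` in JSON configs.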

The author laments features and functionality that were largely broken, or implemented in ways that contradicted their documentation. There were very few industries that actually wrote good interfaces and ensured documentation matched implementation, and the people who did were nearly always electrical engineers who’d re-trained as software engineers through the early to late 90s.

Generally speaking, namespaces were a frequent source of bugs and convoluted codepaths. Schemas, much like WSDLs or docs, were largely unimplemented or ultimately dropped to allow for faster service changes. They’re from the bygone era of waterfall development, and they’re most definitely not coming back.
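A tiny illustration of the classic namespace trap, using Python's ElementTree and a SOAP-style namespace URI as a placeholder:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
doc = ET.fromstring(f'<root xmlns:s="{SOAP_NS}"><s:Body>ok</s:Body></root>')

# The prefix "s" is local to the document, not part of the element's name.
# A naive lookup by local name finds nothing; you need the full URI.
naive = doc.find("Body")                      # None
qualified = doc.find(f"{{{SOAP_NS}}}Body")    # the element
```

Every consumer of the service has to know (and hard-code, or map) that URI, and the moment a vendor tweaks it between versions, every one of those lookups silently returns `None` — which is how "frequent source of bugs" plays out in practice.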

Then there’s the insane XML external entity (XXE) import functionality, and recursive entity expansion, which even today result in legacy systems being breached.
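A defensive sketch of the standard mitigation, using Python's stdlib SAX parser: disable external general entities so a hostile `DOCTYPE` can't pull in `file://` or `http://` content. (Recent CPython already defaults this off; it's older stacks and other languages' default-on parsers where this bites. `defusedxml` is the commonly recommended drop-in for broader coverage.)

```python
import xml.sax
import xml.sax.handler

# Harden a SAX parser against the classic XXE vector: external general
# entities declared in a DOCTYPE and expanded into the document body.
parser = xml.sax.make_parser()
parser.setFeature(xml.sax.handler.feature_external_ges, False)
```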

Then again, I said “author” at the start of this comment, but it’s probably disingenuous to call an LLM an author. This is the 2026 equivalent of blogspam, but even HN seems to be falling for it these days.

The AI also seems to be missing one of the most important points: the migration to smaller interfaces, more meaningful data models, and services that were actually built to be used by engineers - not just a necessary deliverable as part of the original system implementation. API specs in the early 2000s were a fucking mess of bloated, Rube-Goldbergesque interdependent specs, often ready to return validation errors with no meaningful explanation.

The implementation of XML was such a mess it spawned an entire ecosystem of tooling to support it: SoapUI, parsers like Jackson and SAX (and later StAX), LINQ to XML, xmlstarlet, Jing, Saxon..

Was some of this hugely effective and useful? Yes. Was it mostly an unhinged level of abstraction, or a resulting implementation by engineers who themselves didn’t understand the overly complex features? The majority of the time.


They are for large infrastructure projects, especially at large organisations.

It takes companies 3-5 years to migrate these products, and the work is rarely CapEx funded, so it gets minimal resourcing without prioritisation by leadership.


Also, for anything with a 3-5 year implementation period, a longer contract aligns incentives.

The vendor isn't incentivized to fuck you over on renewal pricing as soon as the implementation is complete.

And because of the size of the contract, the customer has more leverage at renewal time.


The same way you write malware without it being detected by EDR/antivirus.

Bit by bit.

Over the past six weeks, I’ve been using AI to support penetration testing, vulnerability discovery, reverse engineering, and bug bounty research. What began as a collection of small, ad-hoc tools has evolved into a structured framework: a set of pipelines for decompiling, deconstructing, deobfuscating, and analyzing binaries, JavaScript, Java bytecode, and more, alongside utility scripts that automate discovery and validation workflows.

I primarily use ChatGPT Pro and Gemini. Claude is effective for software development tasks, but its usage limits make it impractical for day-to-day work. From my perspective, Anthropic subsidizes high-intensity users far less than its competitors, which affects how far one can push its models. That said, it's becoming more economical across their models recently, and I'd shift to them completely purely because of the performance of their models and infrastructure.

Having said all that, I’ve never had issues with providers regarding this type of work. While my activity is likely monitored for patterns associated with state-aligned actors (similar to recent news reports you may have read), I operate under my real identity and company account. Technically, some of this usage may sit outside standard Terms of Service, but in practice I’m not aware of any penetration testers who have faced repercussions -- and I'd quite happily take the L if I fall afoul of some automated policy, because competitors will quite happily take advantage of that situation. Larger vuln research/pentest firms may deploy private infrastructure for client-side analysis, but most research and development still takes place on commercial AI platforms -- and as far as I'm aware, I've never heard of a single instance of Google, Microsoft, OpenAI or Anthropic shutting down legitimate research use.


I've been using AIs for RE work extensively, and I concur.

The worst AI when it comes to the "safety guardrails" in my experience is ChatGPT. It's far too "safety-pilled" - it brings up "safety" and "legality" in unrelated topics, and that makes it require coaxing for some of my tasks. It does weird shit like see a security vulnerability and actively tell me that it's not really a security vulnerability, because admitting that an exploitable bug exists is too much for it. Combined with atrocious personality tuning? I really want to avoid it. I know it's capable in some areas, but I only turn to it if I've maxed out another AI.

Claude is sharp, doesn't give a fuck, and will dig through questionable disassembled code all day long. I just wish it was cheaper in API and had higher usage limits. And, also that CBRN filter seriously needs to die. That one time I had a medical device and was trying to figure out its business logic? The CBRN filter just kept killing my queries. I pity the fools who work in biotech and got Claude as their corporate LLM of choice.

Gemini is quite decent, but long context gives it brainrot. Far more so than other models - instruction following ability decays too fast, and it favors earlier instructions over later ones or just gets too loopy.


I’d be really interested to see what you’ve been working on :) are you selling anything? Are you open sourcing? Do you have any GitHub links or write ups?


I’ve got about ten halfway-finished write-ups on projects I’ve done over the past few years. My whole “thing” is systemising exploitation.

One day I’ll publish something..


https://addons.mozilla.org/en-US/firefox/addon/container-pro...

You can default-route domains through a VPN using a Firefox tab container; you don’t need a separate browser instance running!


You can use the official add-on for that: https://addons.mozilla.org/en-US/firefox/addon/multi-account... On the surface, the proxy option looks like it only supports their own VPN service, but you can set up your own too.


Wow thanks for this, was using the above linked addon myself until I read your comment.


I live in Merri-bek. 50%. 3km north of Melbourne CBD.

I can drive to an ED within 3-5 minutes.

This report doesn’t make me feel good.

