Hacker News | kondbg's comments

Using debit cards means that you need to keep a sufficient balance on a zero interest checking account in order to make transactions.

Using credit cards allows you to keep close to a zero checking account balance and manage your own cash flow, since credit card bill dates are deterministic.

Why would anybody want to keep _any_ amount of money in a non-interest-bearing checking account right now, especially when the risk-free rate of interest (US Treasury bills or equivalent money market funds invested in US treasuries) yields 4.00%+ APY?
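For concreteness, a back-of-envelope sketch of the opportunity cost (the balance below is a hypothetical figure; the 4.00% APY is from the comment above):

```python
# Back-of-envelope opportunity cost of an idle checking balance.
# The balance is hypothetical; the rate is the ~4.00% APY cited above.
balance = 5_000.00   # average idle checking balance, USD (illustrative)
apy = 0.04           # risk-free yield on T-bills / treasury money market funds

lost_interest_per_year = balance * apy
print(f"${lost_interest_per_year:.2f} per year left on the table")
# prints: $200.00 per year left on the table
```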


> Using debit cards means that you need to keep a sufficient balance on a zero interest checking account in order to make transactions.

Many banks allow linking a savings account to a checking account as a backup funding source.

> Why would anybody want to keep _any_ amount of money in a non-interest bearing checking account right now

Conversely, why would any credit card issuer give you an interest-free loan for a month in a world of 4.00%+ risk-free APYs?

Leaving aside all concerns of repayment risk, somebody is paying for your interest-free loan already.

Depending on how you view it, that's either you (via the 2-3% of credit card fees baked into all retail prices), other credit card users who don't pay their bills in full every month, or a combination of both.


> Many banks allow linking a savings account to a checking account as a backup funding source.

And the average American can't handle a $400 emergency. They don't have a savings account with thousands of dollars in it.


True, but how high is the chance that somebody who can't handle a $400 emergency today will be able to pay back $400 of credit card debt next month? And if they don't, they usually end up with more problems down the road.

Also, there is nothing that says you couldn't use a debit card for normal spending and resort to using a credit card only for when you actually need credit (assuming that the rewards inefficiency gets fixed).


I was answering in the context of the difference between chargebacks on credit cards vs debit cards. You aren't out of your money while waiting on a chargeback to be processed when using a credit card.


You're out of $400 of liquidity either way (until the issuer provisionally credits your account, which it is required to do for credit and debit cards alike).

The only difference is the lost interest for those ~5 business days (which the issuer might even have to reimburse as well; I'm not too sure about that though), plus not being able to pay for cash-only expenses using the money in your bank account.

Also, nobody is saying that people can only have a single bank account, with a no-spending-limit debit card linked to it.


Few people with decent credit have a card that has a credit limit that comes anywhere near their monthly spending requirements.


Using Cloudflare to proxy B2 content seems like it directly violates Cloudflare's ToS.

https://www.cloudflare.com/terms/

> 2.8 Limitation on Serving Non-HTML Content

> The Services are offered primarily as a platform to cache and serve web pages and websites. Unless explicitly included as part of a Paid Service purchased by you, you agree to use the Services solely for the purpose of (i) serving web pages as viewed through a web browser or other functionally equivalent applications, including rendering Hypertext Markup Language (HTML) or other functional equivalents, and (ii) serving web APIs subject to the restrictions set forth in this Section 2.8. Use of the Services for serving video or a disproportionate percentage of pictures, audio files, or other non-HTML content is prohibited, unless purchased separately as part of a Paid Service or expressly allowed under our Supplemental Terms for a specific Service. If we determine you have breached this Section 2.8, we may immediately suspend or restrict your use of the Services, or limit End User access to certain of your resources through the Services.

If this was truly acceptable and not in some grey area, why doesn't Backblaze simply route all downloads through Cloudflare by default, rather than having each individual customer go through the hassle of setting this up?


Backblaze is part of the "Bandwidth Alliance", different rules apply:

https://www.cloudflare.com/en-gb/bandwidth-alliance/

With this said, I believe there are restrictions for certain types of content (e.g. video). Cloudflare needs to be clearer here to avoid confusion.


The page says that the Bandwidth Alliance means that partners will charge less or no egress to Cloudflare.

I'm not seeing it say anything about different Cloudflare ToS rules, such as "2.8 Limitation on Serving Non-HTML Content", applying to Bandwidth Alliance sources.

But has that been said somewhere I'm not seeing? Would love to see it!


Doesn't this violate net neutrality ... ?


If you only use this for "standard" CDN assets (like pictures that are part of your website styling rather than as an image host) and you also host your website on Cloudflare, I think it should be ok.


> If this was truly acceptable and not in some grey area, why doesn't Backblaze simply route all downloads through Cloudflare by default, rather than having each individual customer go through the hassle of setting this up?

Because Backblaze makes more money charging you for repeated downloads?


The git repository stores only the packaging related items (the specfile, the custom patches, etc.). The actual source is stored as a binary artifact that is downloaded by a `get_sources.sh` script.

The build process is documented here: https://wiki.centos.org/action/show/Sources?action=show&redi...


That did it, thanks. From a cursory glance, it looks like it indeed fetches the RHEL (rather than CentOS) sources. The main question for the clone builders will be how RH is going to provide RHEL code drops for point releases (and updates) in the future, since right now there are separate 8 and Stream branches, but presumably the 8 branch will be discontinued at some point?


Devil's advocate: why should I choose this yet-to-exist distribution over something already existing, such as Oracle Linux?

The most common argument (Oracle is evil and litigious. Therefore, using Oracle Linux will result in me being sued) honestly seems like FUD.

All RHEL downstream distributions rebuild the same SRPMs that RHEL provides. Doing a quick comparison over some common packages (kernel, httpd, openssl, etc.) between CentOS 8.3 (https://vault.centos.org/8.3.2011/BaseOS/Source/SPackages/) and Oracle Linux 8.3 (https://yum.oracle.com/repo/OracleLinux/OL8/baseos/latest/x8...) shows that they are indeed byte identical (with the exception of certain spec files including debranding patches).
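For anyone who wants to repeat the comparison, a sketch of the checksum step in Python (the SRPM file names in the comments are illustrative, not exact):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 so large SRPMs don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical local copies of the same SRPM downloaded from each repo:
# centos = sha256_of("httpd-x.y.z.el8.src.rpm")  # from vault.centos.org
# oracle = sha256_of("httpd-x.y.z.el8.src.rpm")  # from yum.oracle.com
# print("byte identical" if centos == oracle else "differs")
```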

What is the value of having a separate RHEL derivative? It isn't as if the "community" can propose/submit any changes, since any changes will cease to make the downstream distribution a "bug for bug" compatible RHEL derivative. If I actually wanted to participate in the larger RHEL-derivative community, I would need to actually submit my changes to the CentOS stream project.


> Devil's advocate: why should I choose this yet-to-exist

Devil's response: nobody cares if you do. A lot of people know why they want it; the answer will in many cases be that it will fill the same niche and not be controlled by a shitty company. (If you think calling Oracle shitty is FUD, unprofessional or similar, that's fine: see 'Devil's response', above.)

It will stand or fall on its own, as a result of many different people's choices. For now, it is enough that something is growing in the niche from which CentOS was uprooted.


> Devil's advocate: why should I choose this yet-to-exist distribution over something already existing, such as Oracle Linux?

Because there's a whole ecosystem (HPC and Scientific computing to be exact) which depends on CentOS (not RHEL, not Oracle, not Ubuntu, not Debian) primarily. A CentOS compatible distribution is not some FOSS pride thing.

IBM and RH really landed a sucker punch in this regard.


When you say that they depend on CentOS, are they using something CentOS-specific? CentOS is supposed to be compatible with RHEL (minus the logos/trademarks) and shouldn't have additional fixes or features ("bug for bug, feature for feature", per CentOS's own wording :)). No?


CentOS doesn't have to have a specific feature to be preferred over RH. Being free as in both beer and speech is important enough. People (incl. us) install 1000+ server clusters with CentOS. The absence of licensing fees allows us to buy more servers, and it allows "small researchers" to have a verified platform to work with. If you don't have a verified platform, you cannot trust your results.

CentOS carries a legacy from Scientific Linux (which was RH-compatible too) and has a lot of software packages developed for/on it. The software might ship as a regular .tar.gz or RPM, but it's validated and certified on CentOS. This is enough. Some middleware used in collaborative projects (intentionally or unintentionally) searches for a CentOS signature; otherwise, installations fail spectacularly (or annoyingly, depending).

I have to run my own application on every platform with a relatively simple test suite which checks results against 32-significant-digit ground truth values. If these tests fail for any reason, then I can't trust my application's results for a particular problem. My code runs fast and is relatively simple (since it's young), but some software packages' tests can run for days. It's not feasible to re-validate software every time it's compiled against a different set of libraries, etc. CentOS provides this foundation for free.


Thanks for your explanation.

I think I understand your point of view a little better. CentOS became so important to the HPC community that most software is now validated against it. So even if RHEL itself were to become free (as in beer), people wouldn't switch to it (or would at least be reluctant to).


Exactly.

All my personal systems are Debian; however, when I install something research-related, it's always CentOS. There's no question. I even manage a couple of research servers at my former university. They're CentOS as well.

Moreover, service (web, git, documentation, etc.) servers are CentOS too, to keep systems uniform even where there's no requirement. So it powers the whole ecosystem, not just the compute foundation. That's a big iceberg.


In 2020, why aren't you packaging your apps as containers? Yeah, it sucks that IBM killed CentOS, but depending on some single distro's version of libm or libc or whatever is not their fault, it's yours. Doing your job properly in this case means shipping your deps with your application, and the easiest way to do that these days is with containers.

Christ; it's either the 90s or kindergarten.


Assuming based on GP that this is in a HPC environment, there is often a delineation between the people writing HPC software and the people maintaining the clusters and the software installed on them. Telling a brand-new graduate student with zero software development experience to just throw everything into a container results in running code that is not optimized for the hardware it's running on, which in turn negatively impacts the other users competing for compute time on HPC clusters.

There is a movement to incorporate technologies like Singularity into the HPC workflow but for established projects, it often looks like a lot of bikeshedding for negative results compared to just running the code on bare metal.


Because a cluster doesn't work like a normal computer.

Your users don't see the nodes. They submit jobs and wait for their turn in the cluster. A sophisticated resource planner / job scheduler tries to empty the queue while optimizing job placement so the system usage can be maximized as much as possible.

Also, users' jobs run under their own user accounts, and you need to isolate them. Giving them access to Docker or any root-level container engine completely removes the UNIX user security and isolation model and runs the system in Windows 95 mode. It also compromises system security, since everyone is practically root at that point. Singularity is user-mode and its usage is increasing, but then comes the next point.

Performance and hardware access are critical in HPC. GPUs and special HBAs like InfiniBand require direct access from processes to reach their maximum performance, or to work at all. GPU access is much more important than containerizing workloads. Docker GPU support exists because NVIDIA wanted to containerize AI workloads on DGX/HGX systems. These technologies are only now maturing in HPC.

On the performance front, consider the following: if the main loop of your computation loses a second to these abstractions, and this loop runs thousands of times per core across many nodes, the lost productivity is eye-watering. My simple application computes 1.7 million integrations per second per core, so for long-running problems, keeping this number high is critical.
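Rough scaling arithmetic makes the point concrete (only the 1.7 million integrations/s/core figure comes from the comment above; the per-call overhead and the core/node counts are hypothetical):

```python
# Rough scaling arithmetic for per-call overhead on a cluster.
# 1.7M integrations/s/core is from the comment above; the overhead,
# core count, and node count are hypothetical illustrations.
integrations_per_sec_per_core = 1_700_000
overhead_ns_per_call = 50            # hypothetical per-call abstraction cost
cores_per_node = 64
nodes = 100

# Fraction of every wall-clock second burned by the overhead, per core:
wasted_sec_per_core = integrations_per_sec_per_core * overhead_ns_per_call * 1e-9

print(f"{wasted_sec_per_core:.1%} of each core-second lost "
      f"across {cores_per_node * nodes} cores")
# prints: 8.5% of each core-second lost across 6400 cores
```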

Last but not least, some of the applications running on these systems have been developed for 20 years now, so they are not simple, extremely tidy code bases. You can't know or guess how these applications will behave inside a container before running them. As I've said, you need to be able to trust what you have, too. So we scientists and HPC administrators tend to walk slowly, but surely.

Doing my job properly on the HPC side means my cluster works with utmost efficiency and bulletproof user isolation so people can trust the validity of their results and integrity of their privacy. Doing my job properly on the development side means that my code builds with minimum effort and with maximum performance on systems I support. HPC software is not a single service which works like a normal container workload. We need to evolve our software to run with minimum problems with containers and containers should evolve to accommodate our workloads, workflows and meet our other needs.

Cutting-edge technology doesn't solve every problem with the same elegance. And we're not a bunch of lazy academics or sysadmins just because our systems work more traditionally.


I find it interesting that the argument that "X is FUD" is supposed to carry weight.

It's a bit like if I'm in a party, and I briskly walk up to five people and each time I hit them in the face, and then it's your turn and you move away, and I say "what? the idea that I would hit you is FUD".

It's not FUD. It's a pattern of behaviour.

Avoiding overly litigious companies - where other as-good or better choices exist - is not overly cautious, it's just good sense. Where other as-good choices do not exist, it seems perfectly reasonable (depending on your risk profile) to work with others to create the better choice.

Of course, I say all this as someone who has worked in massive multinational corporations and now works in small startups. I'm likely never going to use Rocky Linux, for exactly the reason you've hinted at: in effect, it is not a use case either of us cares about. But for those people who do need this, I'm very happy that someone has championed the cause.


I haven't seen "using Oracle Linux will result in me being sued".

What I've seen is "Oracle is evil", "don't trust Oracle", and something like "my prior history around Oracle has left such a lasting bad taste that I throw up a little in my mouth every time I touch something with Oracle in it, so I'd rather do almost anything but use something from Oracle, since using it on the daily would inevitably lead to permanent esophagus damage."

I mean... Oracle buying up MySQL was enough for MariaDB to be created and move to being the default. (well, and some of what Oracle did right afterwards).


In an earlier thread, some Oracle guy (not on the Oracle Linux team) mentioned that Oracle Linux 8 actually builds from CentOS 8, rather than RHEL 8. I was a bit skeptical, since OL 8 releases usually ship much earlier than the corresponding CentOS 8 ones, but I couldn't verify things either way. Someone else mentioned that RH actually only releases the RHEL 8 sources through the CentOS 8 source repositories. Again, I don't know how to verify this, but if true, it raises a lot of new questions about Oracle Linux 8 given the recent CentOS 8 announcement.


RHEL sources can be retrieved in four ways:

1. On an entitled system, enable the source repos and download the packages.

2. In your account online, you can download the SRPMs for individual packages.

3. In your account online you can download a minor version release iso of the SRPMs.

4. You can use https://git.centos.org to clone the actual RPM patches/spec files, and use the get_sources.sh script from the centos-git-common repo to pull the package source tarballs from dist-git (useful for projects like the kernel that don't use the actual upstream as their source).

With CentOS stream (particularly C9S that will be launching mid 2021) and the switch over to GitLab which will happen in the future, everything will be out in the open in git form.


RHEL is open source and always will be.


Oracle Linux might change the rules in the future. It kind of just happened with CentOS :)


Well, the same thing might happen with Rocky Linux as well :)


“Anything can happen” is not an argument. It’s about quantified risk.


Yes, and the two parties have very different motives.


Yes, but then somebody would make a Rocky 2.


It's unlikely they would sabotage their only competitive advantage though, whereas Oracle has lots of reasons to maintain an enterprise linux distribution besides just succeeding CentOS.


> Devil's advocate: why should I choose this yet-to-exist distribution over something already existing, such as Oracle Linux?

Because you want what CentOS was, and this is basically going to be what CentOS was. Different name, different people, but the same principle.


Theoretically Oracle Linux and CentOS are identical except the branding, except CentOS has been abandoned and Oracle is just getting started with OL.


But Oracle are the company we probably trust the least with matters like this.


But Oracle Linux has a different principle really, no? I never think of Oracle and think "Making the paid software free".


It just seems weird to willingly associate with a company that is trying to outlaw the very practice that gave birth to GNU/Linux in the first place.


Why not use SUSE then? Oracle is the last option I would ever trust. They always have something up their sleeve.


OpenSSH's SFTP server, without the HPN patches (which upstream refuses to merge), is significantly slower than FTP over WireGuard (or FTP over TLS) on connections with >100 ms of latency.

FTP supports sendfile for data transfers, since there is no framing on the data connection. (OpenSSL 3.0 adds sendfile support, so FTP over TLS would benefit as well.)
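A minimal sketch of what sendfile(2) buys: the kernel shuttles bytes between file descriptors with no userspace copy. A real FTP server would pass the data-connection socket as out_fd; a regular file is used here so the sketch runs as-is on Linux (kernels >= 2.6.33 allow a regular file as out_fd):

```python
import os

# Minimal sketch of sendfile(2): the kernel moves bytes between file
# descriptors without copying them through userspace. An FTP server would
# pass the data connection's socket fd as out_fd; a plain file is used
# here so the sketch is runnable without a network peer (Linux-only).
def zero_copy_transfer(src_path: str, dst_path: str) -> int:
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        size = os.fstat(src.fileno()).st_size
        sent = 0
        while sent < size:
            # sendfile may transfer fewer bytes than requested; loop until done.
            sent += os.sendfile(dst.fileno(), src.fileno(), sent, size - sent)
        return sent
```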


If you want to serve private content, you have to encrypt stuff and authenticate users.

If you want to serve public content, you still need to encrypt it to avoid a MITM attack.

SSH solves the first problem trivially, but is slower. HTTPS solves the second problem trivially, and the first problem with some work (login flow or client certificates).

I only see a case for FTP on underpowered hardware like an older RPi serving stuff locally. But I suspect the bottleneck would be the USB-connected NIC, not even the CPU.


Are you really moving so many files over SFTP that it's a concern? Why not use rsync over SSH?


This isn't a CA certificate -- it is missing the CA basic constraints as well as missing the "certificate sign" key usage from X509v3, so most TLS libraries will not validate a chain that is signed by this certificate.
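To see what a chain-building library checks, you can generate a throwaway non-CA certificate and inspect those two extensions. This sketch shells out to openssl (assumed to be on PATH and >= 1.1.1 for `-addext`); all names and paths are placeholders:

```python
import subprocess

# Generate a throwaway leaf certificate that is explicitly NOT a CA, then
# dump it as text so the Basic Constraints and Key Usage extensions can be
# inspected. Assumes openssl >= 1.1.1 is on PATH; subject name is a placeholder.
def make_non_ca_cert(cert_path: str, key_path: str) -> str:
    subprocess.run(
        ["openssl", "req", "-x509", "-newkey", "rsa:2048", "-nodes",
         "-days", "1", "-subj", "/CN=demo.localhost",
         "-addext", "basicConstraints=critical,CA:FALSE",
         "-addext", "keyUsage=digitalSignature,keyEncipherment",
         "-keyout", key_path, "-out", cert_path],
        check=True, capture_output=True)
    # A compliant TLS library refuses to build a chain through a cert whose
    # Basic Constraints say CA:FALSE or whose Key Usage lacks "Certificate Sign".
    return subprocess.run(
        ["openssl", "x509", "-in", cert_path, "-noout", "-text"],
        check=True, capture_output=True, text=True).stdout
```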


"most TLS libraries" is vague. Microsoft and Apple each include one with their OS. The Microsoft one definitely accepts total garbage as a valid CA root. I know because my employer pushed such a root to enable their MitM proxy and it worked fine... in Windows (and thus IE/ Edge). They had to replace it because Firefox and other systems threw a fit.

I'm happy to be proved wrong about this, but my experience tells me "most TLS libraries" is misleading even if technically true.


If you add a non-CA certificate to the trust store and a TLS library decides to trust it to sign a cert chain, that TLS library is horribly broken and needs a critical CVE. File a bug and earn a $10k bounty; but I'm guessing this is FUD and Blizzard did nothing wrong here and exposed exactly no one to any kind of risk.

In fact it appears they did exactly the right thing to get https working correctly in their mixed-mode (localhost + outside world) environment.


The fact you think this makes Microsoft's TLS library (SChannel) "horribly broken" doesn't magically mean I get a $10k bounty award. Microsoft considers that if you put a cert which lacks CA:TRUE into your local trust store you must know what you're doing and want to trust it as a CA anyway. They're entitled to whatever opinion they want, and don't have to pay third parties just because somebody on HN disagrees.

Now, if you want, you can argue that Blizzard weren't to know this would happen, and that, depending on what else they've done, this might be safe anyway. But I wasn't commenting on either of those, only pointing out that SChannel doesn't care about basic constraints on trusted roots.


Good catch.


Running Linux and i3wm. After updating to 59, Chrome didn't seem to automatically detect my DPI settings, so I had to manually specify the scale with

   --force-device-scale-factor
Previously did not have to do this.


> “The origin of the infection is not confirmed at the moment, but sources close to the company point out that it is being treated as an attack originating in China,” El Mundo writes.

It's amazing how any organization can get away with poor security and backup practices by blaming either Russia or China, without showing any evidence to back their claim.


To my mind, there's a difference between "blaming Russia or China" and saying the attacks originated in Russia or China. The former is a reference to the nation state itself (i.e. state sponsored cyber attacks), while the latter is broader and can also mean private individuals within those respective countries.


And, in fact, given that by now news reports are indicating that the greatest number of affected machines in this wave of attacks is in Russia itself, it's probably private criminal groups.


Well, we have always been at war with Eastasia...


I also built a Ryzen machine for development. It's great when it works, but I've found that Ryzen is unstable on Linux (Ubuntu 16.04). Every once in a while, I get kernel errors like

   NMI watchdog: BUG: soft lockup - CPU#9 stuck for 23s!
which requires a hard reset. This behavior doesn't occur on Windows, though, so if you use Windows for development, you should be good.


Did you try a later kernel? Out of the box, Ubuntu 16.04 is on the longterm 4.4 kernel. It looks like some Ryzen features and patches were added in 4.10, and they probably were not backported to the longterm kernels.


Not OP, but I just built a Ryzen 1600 box. I had _more_ instability when running 17.04 and settled on 16.04, which has been mostly fine, but hard crashes occasionally.


> hard crashes occasionally

This sounds disturbingly unacceptable yet accepted


I custom built a machine last week for the first time in over a decade and from a fairly new processor and a just released video card. If I had more confidence in my PC building I guess I'd be more upset, but there are a lot of variables here and I'm still in a honeymoon phase.

I installed the official AMD RX580 drivers and it's been stable since.


Nothing out of the ordinary with a new platform. While Linux often takes longer to work this stuff out, Windows often has had similar issues in the past as well.


Not sure if intentional, but this is an AMD issue.

Having experienced AMD driver instability first-hand on both Windows and Linux, AMD have lost my custom for the next 10 years.

Multiple recurring driver crashes using their main graphics card product line (RX 380) on Windows 10. So, pretty mainstream, and yet drivers crashing (even when doing non-intensive tasks, e.g. web browsing).

For the record, I'm not so sure Nvidia is any more stable either. The only (constantly) stable graphics provider over the years has been Intel's on-board graphics.


You can use 16.04 with a newer kernel. There is even a deb package for 4.8.


The official 16.04 hwe "edge" kernel is currently at 4.10.x: http://packages.ubuntu.com/xenial/kernel/linux-generic-hwe-1...


Yes, that log entry was from running 4.10.11.


I recently built a system with R7 1800X and Arch is randomly resetting when running its 4.10.* kernels. Fedora, with 4.11 rc builds (the real thing is out now btw) has been rock solid with weeks of uptime, running games & browsers & development stuff.


Good to know Fedora is worth a try, I'm a Xubuntu user and have Ryzen parts coming, won't need it til June so I'm hoping Ubuntu gets it sorted before then but if not I can live with Fedora for a while.


I experience the same issue on my Skylake i5 :-)

IIRC in my case it goes away once I install bumblebee. Apparently something to do with switching between internal Intel graphics and the dedicated Nvidia chip, and not at all related to your issue, except for the symptom.

It's comedic because after bootup I have about 90s until lockup, so I have to be lightning-fast typing the commands to install bumblebee. If I'm too slow: reset machine, try again.

FWIW, this is on a notebook, running Ubuntu 16.04.


Try adding it to a startup script.


Thanks for the suggestion.

I was unclear, this only happens after a fresh install, before I've installed Bumblebee and its dependencies. Once that is out of the way, I don't need to worry about it any more.


I also have been experiencing lock ups with my Ryzen system. I thought it was my RX580, but now I'm thinking it might be the CPU/mobo. Did you go with a B350?


Yes, MSI B350 Tomahawk


Hrm... same here.


Open a bug report?


You might want to try disabling SMT and see if that helps. No promises, could be many other things, but it might be worth a shot.

SMT support on Ryzen is really flaky; it's almost always a wash, can often hurt performance, and I wouldn't be surprised to see it cause this kind of bug as well (especially in these early days of kernel support).


The authors of the original paper [1] identified that the set of client cipher suites advertised by each browser can be used to fingerprint and identify a browser.

Caddy records the cipher suites advertised by the client during the TLS handshake and later examines the client's user agent. Using the fingerprinting techniques from the paper, Caddy then determines whether the advertised user agent is consistent with the one it inferred from the client cipher suites.

TLS interception proxies establish their own TLS connection to the server. Depending on what underlying TLS library the proxy uses, it also has its own unique fingerprint. When the TLS proxy forwards the user's request, Caddy detects the mismatch and flags it as a MITM.

[1] https://jhalderm.com/pub/papers/interception-ndss17.pdf
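A toy sketch of the mismatch check described above (the cipher lists and the handshake-to-browser mapping below are made up for illustration; real implementations fingerprint much more of the ClientHello than the cipher list alone):

```python
# Toy sketch of the heuristic: the ordered cipher-suite list offered in the
# ClientHello is matched against the list each browser family is known to
# send; a User-Agent claiming a different family suggests interception.
# The cipher lists and mapping below are invented for illustration.
KNOWN_CLIENT_HELLOS = {
    ("TLS_AES_128_GCM_SHA256", "TLS_CHACHA20_POLY1305_SHA256"): "Firefox",
    ("TLS_AES_128_GCM_SHA256", "TLS_AES_256_GCM_SHA384"): "Chrome",
}

def looks_intercepted(offered_ciphers: tuple, ua_family: str) -> bool:
    inferred = KNOWN_CLIENT_HELLOS.get(offered_ciphers)
    # An unknown handshake, or a handshake/UA mismatch, both smell like a proxy.
    return inferred is None or inferred != ua_family

# A Firefox UA arriving over a Chrome-shaped handshake gets flagged:
print(looks_intercepted(
    ("TLS_AES_128_GCM_SHA256", "TLS_AES_256_GCM_SHA384"), "Firefox"))  # → True
```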

