
Apple’s system mostly works. If you need to reissue an eSIM on T-Mobile without being able to transfer from an existing device, though, you need to either call in and give the IMEI, or get past the chatbot; there’s a page text support can give you to enter the details. I was never able to successfully use the shit T-Life app’s manage-eSIM option.

> When you request a QR code, even though you provide the EID, they will ask for an IMEI number.

Everything else you say is accurate, but they do not require this: T-Mobile is the only major carrier in the US that doesn’t match EID to IMEI. I know because I use a removable eSIM (esim.me) eUICC with multiple phones. I have to read the super long EID off to support to activate it. I cannot activate service on this card with Verizon or AT&T, since its EID doesn’t match an IMEI on their side.


I think you're misunderstanding. I'm not saying T-Mobile locks the EID and IMEI together. I'm saying their tech support will completely ignore any EID you send them and instead look up an EID in their database based on the IMEI you send them. If you manage to convince the tech support to actually listen to you and use the correct EID then yes, everything will work out fine and you'll be able to move the card across devices.

I was also using a removable esim (from jmp.chat) and they did this to me three times. Each time it went like this:

> Me: Please send me a QR code to download my esim. My EID is XXXXXXXX

> Them: Thanks for providing your EID, please send me your IMEI (the first time this was just a plain message, the 2nd and 3rd time they sent me a link to a form to submit my IMEI to them)

> Me: <sends them my IMEI>

<at this point, the first two representatives initiated a transfer through their app and told me to wait 2 hours and then the transfer would finish. I told them whatever automatic transfer they just initiated will not work and they _need_ to send me the QR code.>

> Them: What is your e-mail address

> Me: My e-mail address is XXXXX@XXXX.XXXX

and then they'd send me a QR code. I'd then attempt to download it to my jmp.chat esim and I'd get an error that the EID was incorrect. Then, I'd try using the QR code to activate the built-in eSIM on the phone with the IMEI that I sent them, and it would work, proving that they were looking up the EID for the IMEI that I sent them rather than paying attention to the EID that I started the chat with.

The 4th and final time, I sent them my Librem 5's IMEI which had never been on T-Mobile and does not support eSIM. They told me that the phone was carrier locked, I assured them it wasn't and explicitly told them "it is important the QR code is for the EID I provided you. The past representatives have ignored that, leading to the error message <pasted the error from EasyLPAC's logs that was something like EID is incorrect>". THAT time they finally listened and sent a QR code for the correct EID, which let me download the eSIM to my jmp.chat card. At that point I was able to move the card across devices without issue.


Good recommendation; also, shout out to the Casio Lineage (solar, atomic, sapphire crystal, titanium case/band). I got mine for sub-$200. This one: https://www.casio.com/europe/watches/casio/product.LCW-M100T...

> Author’s argument is those hardware improvements could have been had for free with X11 upgrades.

I do NOT miss having tearing all the time with X11. There were always kludgy workarounds. Even if you stopped and said, OK, let's not run Nvidia, let's do Intel since they have great FOSS driver support, look back at X11 2D acceleration history: XAA, EXA, UXA, SNA? Oh right, all replaced with Glamor. OK, run the modesetting driver; right, we still need a compositor on top of our window manager because we don't get vsync without one.

Do you have monitors with different refresh rates? Do you have muxes with different cards driving different outputs? X11 sucks at all of this. OK, the turd has been polished well now after decades: it doesn't need to run as root/suid anymore and doesn't listen for connections on your network, but the security model still sucks compared to Wayland, and once you mix multiple video cards all bets are off.

But yeah, clipboard works reliably, big W for X11.


Makes sense, it’s high in protein.

The sheer arrogance of thinking that someone who has been successfully manipulated will just re-think the situation and ask their friends/family. The naivety of assuming all scammers are impulsive fools and don't do this for a living, as their primary line of work.

So Google's going to add some nonsense abstraction layer, and when this fails to curb the problem, the 24-hour wait will be extended, maybe to a week, and more information will have to be collected to release it. We all know how this goes.


Phosphors and capacitors mask that, as does high-frequency switching way above this rate…

Anyway, an old HN submission I still use when buying light bulbs: https://hackertimes.com/item?id=14023196


> The catting issue might be more an implementation problem of the bzip program than of the algorithm (it could expect an array of compressed files). That would only be impossible if the program cannot reason about the length of the data from the file header, which again is technically not about the compression algo but rather the file format it's carried through.

Long comment to just say: ‘I have no idea what I’m writing about’

These compression algorithms do not have anything to do with filesystem structure. Anyway, the reason you can’t cat together parts of bzip2 but you can with zstd (and gzip) is that zstd does everything in frames and everything in those frames can be decompressed separately (so you can seek and decompress parts). Bzip2 doesn’t do that.

So, like, another place bzip2 sucks ass is working with large archives, because you need to read through the entire archive before you can decompress it, which makes data loss of the whole archive far more likely in situations without parity data. Really, don’t use it unless you have a super specific use case and know the tradeoffs; for the average person it was great back when we would spend the time compressing to save the time sending over dialup.
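The cat-ability claim for gzip is easy to verify with nothing but the Python standard library (the sample strings here are made up, everything else is stdlib): two independently compressed members, concatenated byte-for-byte, decompress back to the concatenated input in one call.

```python
import gzip

# Compress two chunks independently, as if they were separate .gz files.
part1 = gzip.compress(b"hello ")
part2 = gzip.compress(b"world")

# "cat part1.gz part2.gz" is just byte concatenation of the members.
combined = part1 + part2

# A conforming gzip decoder reads members back-to-back, so the
# concatenation decompresses to the concatenation of the inputs.
print(gzip.decompress(combined))  # b'hello world'
```

The same round-trip works for multi-frame zstd files, though zstd only landed in the stdlib in Python 3.14 (`compression.zstd`), so gzip makes for the more portable demo.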


> zstd does everything in frames and everything in those frames can be decompressed separately (so you can seek and decompress parts). Bzip2 doesn’t do that.

This isn't accurate.

1) Most zstd streams consist of a single frame. The compressor only creates multiple frames if specifically directed to do so.

2) bzip2 blocks, by contrast, are fully independent - by default, the compressor works on 900 kB blocks of input, and each one is stored with no interdependencies between blocks. (However, software support for seeking within the archive is practically nonexistent.)
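For what it's worth, whether catting works for bzip2 is tool-dependent rather than forbidden by the format: Python's stdlib bz2 module, for instance, documents multi-stream decompression (since 3.3), so two independently compressed streams concatenated together round-trip fine there. A quick sketch (sample bytes are made up):

```python
import bz2

# Two independently produced bzip2 streams, concatenated as "cat" would.
combined = bz2.compress(b"foo") + bz2.compress(b"bar")

# bz2.decompress accepts multi-stream input since Python 3.3,
# so the concatenation round-trips cleanly.
print(bz2.decompress(combined))  # b'foobar'
```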


So... it's actually a reasonable objection to bzip2? I mean, you explained why it does not work with bzip2.

I think their argument is sound, and it makes bzip2 less useful in certain situations. I was once saved while resolving a problem when I figured out that concatenating gzipped files just works out of the box. If it hadn't, it would have meant a bit more code, lots of additional testing, etc.


Totally agree with the statement, though I feel it's not an objection to bzip2 itself so much as to how it was implemented in the programs that apply it. But I'm not really 100% sure, since admittedly I did not personally reverse engineer bzip-capable programs to see the current state of affairs. I am simply going by descriptions posted in comments and general system knowledge.

How to compress data has little to no relation to how that compression can be implemented in programs. How it's implemented will reflect on how the quality of the algorithm is perceived, because the two are not separate from a user perspective.


You misread my comment. I implied exactly that the catting issue is related to FS structure, and hence is not an issue with the bzip algo. Sorry if I was unclear.

Just use zstd unless you absolutely need to save a tiny bit more space. bzip2 and xz are extremely slow to compress.

> bzip2 and xz are extremely slow to compress

This depends on the setting. At -19 (not even using --long or other tuning), zstd is 10x slower to compress than bzip2 and 20x slower than xz, and it still gets a worse compression ratio for anything that vaguely looks like text!

But I agree if you look at the decompression side of things: bzip2 and xz are just no competition for zstd or the gzip family (but then gzip and friends have worse ratios again, so we're left with zstd). Overall I agree with your point ("just use zstd"), but not because of fast compression speed, at least if you care somewhat about ratios.


In the LZ high compression regime where LZ can compete in terms of ratio, BWT compressors are faster to compress and slower to decompress than LZ codecs. BWT compressors are also more amenable to parallelization (check bsc and kanzi for modern implementations besides bzip3).

I'd argue it's more workload dependent, and everything is a tradeoff.

In my own testing of compressing internal generic json blobs, I found brotli a clear winner when comparing space and time.

If I want higher compatibility and fast speeds, I'd probably just reach for gzip.

zstd is good for many use cases, too, perhaps even most...but I think just telling everyone to always use it isn't necessarily the best advice.


> If I want higher compatibility and fast speeds, I'd probably just reach for gzip.

It’s slower and compresses less than zstd. gzip should only be reached for as a compatibility option; that’s the only place it wins, because it’s everywhere.

EDIT: If you must use it, use the modern implementation, https://www.zlib.net/pigz/


Any claims about compression programs are extremely data-dependent, so any general claim will be false for certain test cases.

I do a lot of data compression and decompression, and I would very much have liked to find a magic algorithm that works better than all the others, to simplify my work.

After extensive tests I have found no such algorithm. Depending on the input files and depending on the compromise between compression ratio and execution times that is desired, I must use various algorithms, including zstd and xz, but also bzip2, bzip3 and even gzip.

I quite frequently use gzip (executed after lrzip preprocessing) for some very large files, where it provides better compression at a given execution time, or faster execution at a given compression ratio, than zstd with any options.

Of course for other kinds of files, zstd wins, but all the claims that zstd should be used for ALL applications are extremely wrong.

Whenever you must compress or decompress frequently, you should test all available algorithms with various parameters to determine what works best for you. For big files, something like lrzip preprocessing should also be tested, as it can change the performance of a compression algorithm a lot.
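The "test it on your own data" advice is cheap to act on. A minimal sketch covering only the stdlib codecs (gzip, bzip2, and xz via lzma; the sample payload is made up, and real measurements should use your actual files):

```python
import bz2
import gzip
import lzma
import time

# Toy input; substitute a representative sample of your real data.
data = b"some repetitive structured payload, " * 2000

codecs = {
    "gzip":  lambda d: gzip.compress(d, compresslevel=9),
    "bzip2": lambda d: bz2.compress(d, compresslevel=9),
    "xz":    lambda d: lzma.compress(d, preset=9),
}

for name, compress in codecs.items():
    start = time.perf_counter()
    out = compress(data)
    elapsed = time.perf_counter() - start
    print(f"{name:5s} {len(out):8d} bytes "
          f"({len(out) / len(data):.1%} of original, {elapsed:.3f}s)")
```

Ranking by ratio or by time will flip depending on the input, which is exactly the parent's point.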


Why would one even care about compression speed on a Minecraft ComputerCraft machine?

Size and decompression speed are the main limitations.


I do wonder if those animals have things like valves in their veins. As I understand it, if the circulatory system weren't as complex as it is, the heart would have to pump a lot harder to move the volume it does. This isn't an area I know much about; I just know veins have valves and can expand and contract in response to different stimuli, much as a heart can... so even though mammals have one heart, it's not like the rest of the system is static and not helping to pump blood.
