Hacker News | gmokki's comments

I would recommend using PTP on all clouds. The accuracy is more than 10x better than NTP's, it consumes less CPU, and it uses no network traffic, so it cannot be attacked even if UDP is open to the internet or the network stack is under DoS.

All clouds except AWS are easy: just `modprobe ptp_kvm` and point chrony at /dev/ptp0.
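For the non-AWS case, the setup can be sketched roughly like this (the `refclock PHC` line follows chrony's documented syntax; the poll interval is just a reasonable default, not a tuned value):

```shell
# Load the KVM PTP driver; it exposes the hypervisor's clock as /dev/ptp0
sudo modprobe ptp_kvm
# Load it on every boot as well
echo ptp_kvm | sudo tee /etc/modules-load.d/ptp_kvm.conf

# Point chrony at the PTP clock by adding to /etc/chrony.conf:
#   refclock PHC /dev/ptp0 poll 2
# then restart chronyd and check the source is selected:
sudo systemctl restart chronyd
chronyc sources
```

The hypervisor keeps its own clock in sync (via GPS, network PTP, or NTP); the guest just reads it through /dev/ptp0, which is why no guest network traffic is involved.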

On AWS it depends on the instance type: some older instances do not support it at all, some support it via the network driver, and some via the KVM PTP driver.


Wikipedia says PTP uses UDP, and that is network traffic, no? Or did you mean internet traffic?

https://en.wikipedia.org/wiki/Precision_Time_Protocol

Edit: ah, you are talking about virtual PTP, which is presumably PTP to the hypervisor instead of network servers. The hypervisor would then keep its own time in sync using GPS, network PTP or NTP.

https://kimmo.suominen.com/blog/2022/09/virtual-ptp-hardware...


When I pull the trigger and the bullet kills another person, it is just how the technology works. Why would I be responsible for whether I choose to use it or not?

I'm going to need a copy of your latest bank statement before I can accurately answer that.

https://www.phoronix.com/news/SDL-Lands-Wayland-Pointer-Warp

Wayland and SDL got support this summer.

And Xwayland has had support for the past 10 years: https://www.phoronix.com/news/XWayland-Pointer-Confinement


Don't you always need a database after reading events from Kafka, for deduplication?

So the competing solutions are: PostgreSQL or Kafka+PostgreSQL

Kafka does provide some extras there: handling load spikes, more clients than PG can handle natively, and resilience to some DB downtime. But is it worth the complexity? In most cases, no.


Actually you can avoid having a separate DB! You can build a materialized view of the data using [KTables](https://developer.confluent.io/courses/kafka-streams/ktable/) or use [interactive queries](https://developer.confluent.io/courses/kafka-streams/interac...). The "table" is built up from a backing Kafka topic, so you don't need to maintain another datastore if the data view you want is entirely derived from one or more Kafka topics.


My understanding is this is pretty niche and can be complex.


I think he was trying to say that phone theft can pay off the same way credit card theft does. The thief uses the phone to buy stuff before the user reports it stolen; in this case the stuff being bought is mobile services billed at, for example, 100€ per SMS message. The victim's mobile subscription gets the bill and the thief's associates get the money.


Yes, exactly, and the mobile operator takes a cut. I'm afraid I don't have references, I know this because it happened to a friend of mine.


Right. I guess the thing is I don't know how one spends 100s on SMS messages.

I guess that also means you either need the SIM card or an unlocked phone?


Premium SMS messages (to a thief-controlled destination) cost 2-3 (or whatever) each, and they send hundreds of them as soon as they can.

Yes, they need an unlocked phone, that's why they grab the phone while you're using it.

Android recently added protection to auto-lock if it detects sudden acceleration.


Ah, makes sense. Thanks!


I have bookmarked the Play Store update view as a separate icon: long-press the Play Store icon, then long-press/drag the "My apps" section out as its own "app".

That way I can skip the store garbage and go straight to the "update all apps" button.

I just tried on an Apple device a few weeks ago and it took me many minutes to find the listing where I can update installed apps, and it was missing an "update all" button...


Imagine there were no sprained-ankle diagnostics and doctors just told you to ignore not feeling well: jump and run around as normal, there is nothing seriously wrong.

And doctors would only react once you could no longer use your legs for a year, at which point they must be amputated.

Or would you rather have an earlier diagnosis, with instructions to reduce extreme loads and take it easy, and a check-up again in a week?


I've used https://instaguide.io/info.html?type=c5a.24xlarge#tab=lstopo to browse the info. It is getting a bit old though.


That is nice. But there is no detail after gen 5, so it's mostly of historical interest.


I've been thinking of using a git filter to split the huge asset files (which are internally just collections of assets bundled into 200 MB-1 GB files) into smaller ones. That way, when an artist modifies one sub-asset in a huge file, only the small change is recorded in history. There is an example filter for doing this with zip files.

The above should work. But does git support multiple filters for a file? For example, first the asset-split filter above, and then storing the files in LFS, which is another filter.
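For reference, a filter driver like this would be wired up roughly as below. The filter name "assetsplit" and the zip-split-clean/zip-split-smudge scripts are placeholders, not real tools. As far as I know, gitattributes applies only a single filter driver per path (a later matching `filter` attribute overrides rather than chains), so to combine splitting with LFS the clean script itself would have to pipe its output through `git lfs clean`:

```shell
# Register a hypothetical "assetsplit" filter driver (names are placeholders)
git config filter.assetsplit.clean  'zip-split-clean %f'
git config filter.assetsplit.smudge 'zip-split-smudge %f'
# Refuse checkout if the filter is missing, so assets are never half-converted
git config filter.assetsplit.required true

# .gitattributes: only one "filter" value applies per path, so this
# replaces (not stacks with) "filter=lfs" for matching files:
#   *.pak filter=assetsplit
```

LFS itself is implemented as exactly this kind of clean/smudge filter (named "lfs"), which is why the two can't simply be listed one after the other.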


I mean it might work, but you'll still get pull timeouts constantly with LFS. It's expensive to wait two or three days before you can start working on a project. Go away for two weeks and it will take a day before you are pulled up to date.

I hope this "new" system works but I think Perforce is safe for now.


There are occasionally 61 seconds in a minute when they insert leap seconds - or did they stop that to avoid crashing millions of computer systems?

