I've heard it feels a lot bigger once you're in freefall. Imagine if you could use all of your room's surfaces as floor space. I would think your room would feel a lot bigger.
You don't get it. Your 400 sqft apartment would need to shrink by a factor of 6 to have the same floor area as Orion (400/6 ≈ 67 sqft, roughly an 8x8 foot square). Try living in an 8x8 foot square for a couple of weeks.
Not in a storm you can't! Granted, I didn't do ten days. But I was with two other people for close to a week and it was... fine. We're old friends. There were moments it got annoying. But it was never boring or restrictive. We just played games, drank, looked out of the portholes, cursed hangovers, and talked down the one person who occasionally wanted to call it quits.
Starlink uses phased arrays pointed at the ground but lasers between satellites. So it wouldn’t be impossible to spin one around and have it bounce traffic to earth through the swarm pointing down.
But these satellites are very close to Earth compared to the moon, so relaying through them would only save ~0.3% transmit power versus just sending straight to the surface. And it's very unlikely the consumer antennas could manage hitting an Earth-orbiting satellite from the moon.
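Back-of-the-envelope (assuming free-space path loss ∝ d², a ~550 km LEO shell, and round-number distances, so just a sketch):

```python
# Rough estimate of the transmit-power saving from aiming a moon-based
# terminal at a LEO satellite instead of the ground. Free-space path
# loss scales with distance squared, so the saving is 1 - (d_leo/d_gnd)^2.
# Distances here are round-number assumptions.
d_ground = 384_400e3         # moon to Earth's surface, meters (approx.)
d_leo = d_ground - 550e3     # moon to a ~550 km LEO satellite, meters

saving = 1 - (d_leo / d_ground) ** 2
print(f"power saving: {saving:.2%}")  # ~0.29%, i.e. basically nothing
```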
> To meet these extreme requirements, CERN has deliberately moved away from conventional GPU or TPU-based artificial intelligence architectures.
This isn't quite right either: CERN is using more GPUs than ever. The data processing has quite a few steps and physicists are more than happy to just buy COTS GPUs and CPUs when they work.
- All the experiments use GPUs which come straight from the vendors.
- Most of the computing isn't even on site, it's distributed around the world in various computing centers. Yes they also overflow into cloud computing but various publicly funded datacenters tend to be cheaper (or effectively "free" because they were allocated to CERN experiments).
Some very specific elements (those in the detector) need to be radiation hard and to operate with O(microsecond) latency. These custom electronics are built all over the world by contributing national labs and universities.
CERN builds next to nothing anymore. Half a century ago they really did do RF cavities, cooling, electronics, etc. Not anymore. It is either COTS (Dell, Altera, etc.) or chiefly vendor bidding for custom parts. Much like what NASA (from Rocketdyne and TRW to Boeing and SpaceX) or copycat ESA (Airbus, DLR, BAE's suppliers) does today.
It is a project bureau. Everything is essentially outsourced, leaving a management-shell institute to parade for VIPs. They are close to completely forgetting what they once knew in the hard-sciences domain.
Everyone needs to agree on a place to put the LHC, and a lot of the accelerator team is on site and probably should be paid by CERN, but they have a clear set of KPIs for that: they need to get the machine up to design energy and luminosity and hold it there. The CERN accelerator and civil engineering teams are pretty impressive and have mostly done their job.
The rest of the scientific community can (and does) organize into pseudo-autonomous collaborations that draft proposals for what to do with the real-estate around the collision points and beam dumps. The vast majority of these people don't work for CERN.
Because every principal investigator in academia works in sales.
Some tried to hold out and keep calling it "ML" or just "neural networks", but eventually their colleagues started asking them why they weren't doing any AI research like the other people they read about. For a while some would say "I just say AI for the grant proposals", but it's hard to avoid buzzwords when you're writing them three times a day, I guess.
Although note that the paper doesn't say "AI". The buzzword there is "anomaly detection" which is even weirder: somehow in collider physics it's now the preferred word for "autoencoder", even though the experiments have always thrown out 99.998% of their data with "classical" algorithms.
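For anyone outside the field, here's roughly what "anomaly detection" cashes out to in these papers: train an autoencoder on ordinary (background) events and keep whatever it reconstructs badly. The architecture, features, and threshold below are made up for illustration, not what any experiment actually runs:

```python
# Minimal autoencoder anomaly detector (illustrative only): train on
# "background" events, then flag events the model reconstructs poorly.
import torch
import torch.nn as nn

n_features = 20  # e.g. jet kinematics; purely an assumption

model = nn.Sequential(          # encoder: 20 -> 4, decoder: 4 -> 20
    nn.Linear(n_features, 32), nn.ReLU(),
    nn.Linear(32, 4), nn.ReLU(),
    nn.Linear(4, 32), nn.ReLU(),
    nn.Linear(32, n_features),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

background = torch.randn(10_000, n_features)  # stand-in for real events
for _ in range(50):                           # full-batch training loop
    opt.zero_grad()
    loss = loss_fn(model(background), background)
    loss.backward()
    opt.step()

# Score new events by reconstruction error; large error = "anomalous".
events = torch.randn(100, n_features)
with torch.no_grad():
    err = ((model(events) - events) ** 2).mean(dim=1)
anomalies = err > err.quantile(0.998)  # keep only the 0.2% weirdest events
```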
The paper [1] referenced in your link follows the legacy of the paper on the HIGGS dataset, and does not report quantities like accuracy and/or perplexity. The HIGGS dataset paper provided area under the ROC curve, from which one had to approximate accuracy. I used the accuracy from the ADMM paper [2] to compare my results with. As I checked later, the area under ROC in [1] mostly agrees with the SGD training results on HIGGS in [2].
I think the perplexity measure is appropriate in [1] because we need to discern between three outcomes. This calls for softmax, and for perplexity as a standard measure.
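(For concreteness, perplexity here is just exp of the mean cross-entropy over the three softmax outputs; toy numbers below, not from the dataset:)

```python
# Perplexity over a 3-class softmax: exp(mean cross-entropy).
# Probabilities and labels are made up for illustration.
import numpy as np

probs = np.array([[0.7, 0.2, 0.1],    # model's softmax output per event
                  [0.1, 0.8, 0.1],
                  [0.3, 0.3, 0.4]])
labels = np.array([0, 1, 2])          # true class per event

cross_entropy = -np.log(probs[np.arange(len(labels)), labels]).mean()
perplexity = np.exp(cross_entropy)    # 1 = certain, 3 = random guessing over 3 classes
print(perplexity)
```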
So, my questions are: 1) what perplexity should I target when dealing with the "mc-flavtag-ttbar-small" dataset? And 2) what is the train/validate/test split ratio there?
For better or worse the people working on this don't really use perplexity or accuracy to evaluate models. The target is whatever you'd get for those metrics if you used the discriminants that were provided in the dataset (i.e. the GN2v01 values).
As for why accuracy and perplexity aren't reported: the experiments generally choose a threshold to consider something a "b-hadron" (basically picking a point along the ROC curve) and quantify the TPR and FPR at that point. There are reasons for this, mostly that picking a standard point lets them verify that the simulation actually reflects data. See, for example, the FPR [1] and TPR [2] "calibrations".
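If it helps, the bookkeeping at a "working point" looks something like this sketch (random stand-in scores, with a 70% signal-efficiency cut as an assumed example; real analyses use calibrated discriminants like the GN2v01 ones):

```python
# Pick a fixed threshold on a discriminant (a "working point") and
# report TPR / FPR there, instead of a full ROC curve or accuracy.
import numpy as np

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(1.0, 1.0, 5_000),     # b-jets (signal)
                         rng.normal(-1.0, 1.0, 50_000)])  # light jets (background)
labels = np.concatenate([np.ones(5_000), np.zeros(50_000)]).astype(bool)

# Choose the cut that keeps 70% of signal (a typical b-tagging efficiency).
cut = np.quantile(scores[labels], 1 - 0.70)
passed = scores > cut

tpr = passed[labels].mean()    # signal efficiency at the working point
fpr = passed[~labels].mean()   # background mistag rate at the same cut
print(f"TPR={tpr:.3f}, FPR={fpr:.4f}")
```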
It's a good point, though, the physicists should probably try harder to report standard metrics that the rest of the ML community uses.
Perplexity, aka measuring how sure a network is about its answer. Which might be wrong. It would not pass peer review at any particle physics journal. (Real) science is about being right, not about being sure of itself.
FUN FACT: Aviation rules require that any plane carrying a parachute must have at least one for every person on board. Hopefully the reason is obvious.
Now given that, do you really want to pay the extra cost of flying with 300 parachutes just so mr-full-volume-phone can have one?
That is an incredibly fun fact. Does this only apply to commercial flights, or also to a little Cessna? Presumably there is no actual enforcement on private planes.
I made it too fun: what I said was at best an over-generalization. The actual rules [1] apply to aerobatics and say that parachutes are required for everyone when a non-crew passenger is on the plane:
> Unless each occupant of the aircraft is wearing an approved parachute, no pilot of a civil aircraft carrying any person (other than a crewmember) may execute any intentional [aerobatic] maneuver...
So without the passenger no one needs a parachute, with them everyone does.
It's perfectly legal for a 787 to carry a few parachutes just for the full-volume passengers.
- The prizes are accessible to young scientists who actually need the career boost from the publicity (as opposed to established scientists who are mostly boosting the prestige of the prize)
- They promote awareness of how diverse and awesome science is.
I've almost caved and bought Bluetooth because most stores stopped stocking wired headphones above crap grade. But maybe I can just wait this out, if wired really is making a comeback.