There are several different classes of ballistic missiles.
ICBMs, for which the GBI is intended, are the most challenging to defend against and show the least interceptor success.
In contrast, we do have some pretty definitive evidence that theater and "lower" MRBM/IRBM ballistic missiles can be intercepted successfully. If you define "effective defense" as "most missiles that would cause damage are intercepted", then it is clearly possible with current technology. If you define "effective defense" as "all missiles are intercepted", then it remains beyond current technology.
If you define "effective" in terms of cost ratios: R = (cost of defense system + cost of damage from failed intercepts) / (cost of attack system)
then R < 100 is well beyond current technology, regardless of whether the defense system is perfect or non-existent.
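To put rough illustrative numbers on it (every figure below is invented purely for the arithmetic, not a real program cost): suppose an attacker fires 10 missiles at $1M each, the defender fields a $1B system plus 40 interceptors at $10M each, and 2 missiles leak through, causing $100M of damage apiece. Then:

    attack cost  = 10 x $1M        = $10M
    defense cost = $1B + 40 x $10M = $1.4B
    leakage cost = 2 x $100M       = $0.2B
    R = ($1.4B + $0.2B) / $10M     = 160

Even with generous assumptions about intercept rates, R lands well above 100; driving that ratio down, not hitting any particular missile, is the part beyond current technology.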
There's no magic Pareto-optimal point where investing the right amount in missile defense means that starting a war against a medium-sized country makes economic sense. Russia figured this out in Ukraine, and the US figured it out in Iran.
Israel's genocide worked pretty well tactically, but is a long-term strategic disaster. If the US continues to be a democracy, polls say that it will cause us to withdraw support sometime this decade. Also, it only works if you have an incredibly asymmetric fight.
In reality the "swiss cheese" holes for major accidents often turn out to be large holes that were thought to be small at the time.
> [Fukushima] No small alignment of circumstances needed.
The tsunami is what initiated the accident, but the consequences were so severe precisely because of decades of bad decisions, many of which would have been assumed to be minor decisions at the time they were made. E.g.
- Setting the design earthquake and tsunami threat too low
- Not reassessing the design earthquake and tsunami threat in light of experience
- At a national level, not identifying that different plants were being built to different design tsunami threats (an otherwise similar plant avoided damage by virtue of its taller seawall)
- At a national level, having too much trust in nuclear power industry companies, and not reconsidering that confidence after a number of serious incidents
- Design locations of emergency equipment in the plant complex (e.g. putting pumps and generators needed for emergency cooling in areas that would flood)
- Not reassessing the locations and types of emergency equipment in the plant (i.e. failing to identify that a flood of the complex could disable emergency cooling systems)
- At a company and national level, not having emergency plans to provide backup power and cooling flow to a damaged power plant
- At a company and national level, not having a clear hierarchy of control and objective during serious emergencies (e.g. not making/being able to make the prompt decision to start emergency cooling with sea water)
Many or all of these failures were necessary in combination for the accident to become the disaster it was. Remove just a few of those failures and the accident is prevented entirely (e.g. a taller seawall is built or retrofitted) or greatly reduced (e.g. the plant is still rendered inoperable but without multiple meltdowns and with minimal radioactive release).
To be blunt: that isn't an appropriate application of the swiss cheese model to Fukushima. It isn't a swiss cheese failure if it was hit by an out-of-design-spec event. Risk models won't help there. Every engineered system has design tolerances, and every such system will eventually be hit by a situation outside those tolerances and fail. Risk models aren't there to overcome that reality - they are one of a number of tools for making sure that systems can tolerate the situations they were designed for.
If Japan gets traumatised and changes their risk tolerance in response then sure, that is something they could do. But from an engineering perspective it isn't a series of small circumstances leading to a failure - it is a single event that the design was never built to tolerate leading to a failure. There is a lot to learn, but there isn't a chain of small defence failures leading to an unexpected outcome. By choice, they never built defences against this so the defences aren't there to fail.
> Many or all of these failures were necessary in combination for the accident to become the disaster it was.
Most of those items on your list aren't even mistakes. Japan could reasonably re-do everything the same way, just as they could simply rebuild all the other buildings that were destroyed much as they were the first time. They probably won't, but it is a perfectly reasonable option.
Again I'm going from memory with the numbers but doubling the cost of a rare disaster in a way that injures ... pretty much nobody ... is a great trade for cheap secure energy. It isn't a clear case that anything needs to change or even went wrong in the design process. Massive earthquakes and tsunamis aren't easy to deal with.
> It isn't a swiss cheese failure if it was hit by an out-of-design-spec event
First of all, the design basis accident is a design choice by the developers of the plant and regulators. The decision process that produced that DBA was clearly faulty - the economic and social costs of the disaster have so clearly exceeded those of building to a more serious DBA.
> Again I'm going from memory with the numbers but doubling the cost of a rare disaster in a way that injures ... pretty much nobody ... is a great trade for cheap secure energy. It isn't a clear case that anything needs to change or even went wrong in the design process. Massive earthquakes and tsunamis aren't easy to deal with.
This is absolute nonsense. For the cost of maybe tens of millions at most in additional concrete to build the seawall a few meters higher, the entire disaster would have been avoided entirely (i.e. the plant restored to operation). With backup cooling that could have survived the tsunami (a lower expense than building a higher seawall), all that would have happened at Fukushima Daiichi is what happened at its neighbor Fukushima Daini (plant rendered inoperable, no meltdown, no significant radioactive release). Instead, we are talking about a disaster that will cost a (current) estimated $180 billion USD to clean up (and there is no way this estimate is realistic, when the methods required to perform the cleanup barely exist yet).
> The decision process that produced that DBA was clearly faulty - the economic and social costs of the disaster have so clearly exceeded those of building to a more serious DBA.
That isn't clear at all. We're effectively sampling from the entire globe and we've had 2-3 bad nuclear disasters since the 70s. Our safety standards appear to be overcautious given the relatively small amount of damage done vs ... pretty much every alternative. The designs seem to be fine. I'm still waiting to see the justification for the evacuations from Fukushima; they seemed excessive. People died.
> For the cost of maybe tens of millions at most...
You haven't thought for long enough before typing that. For this particular disaster, sure. But hardening against all the possible disasters is what needs to happen when you become less risk tolerant. It is the millions of dollars to protect against this disaster multiplied by the number of potential disasters that you have to consider. Safety is expensive.
The numbers aren't small; safety of that magnitude might not even be economically feasible. To say nothing of whether it is actually sensible. And once you get into one-in-500 or one-in-a-thousand-year events, some really catastrophic stuff starts happening that just can't be reasonably defended against. San Francisco and its fault springs to mind; I forget what sort of event that is, but it is probably once a millennium or more often.
Fukushima was designed to be constructed on a hill 30-35 meters above the ocean, but someone decided it would be cheaper to build it at sea level to reduce water-pumping costs, others approved this, and much later, a decade before the disaster, when all the reactors in Japan were asked to reinforce their safety measures, those in charge of Fukushima decided to ignore the request, pushing for extensions year after year until it all blew up. Decades of bad decisions with a strong smell of corruption.
I mean, ok. So say they build the plant 35m higher up, then get hit by a tsunami that is 36 meters higher [0] than the one that caused the Fukushima disaster? If we're going to start worrying about events outside the design spec we may as well talk about that one. If they're designing to tolerate an event, we can pretty reliably imagine a much worse event that will happen sooner or later and take the plant out. That is the nature of engineering. Eventually everything fails; time is generally against a design engineer.
Caveating that I'm not really sure it was even an out-of-design event, but if it was then it is case closed and the swiss cheese model is an inappropriate choice of model to understand the failure. If you hit a design with things it wasn't designed to handle then it may reasonably fail because of that.
[0] https://en.wikipedia.org/wiki/Megatsunami homework for the interested, it is cool stuff. Japan has seen some quite large waves, 57 meters seems to be the record in recent history.
In Japan they have the "Tsunami Stones" [0] across the coast, memorials to remind future generations of the highest point the water reached.
It was negligent to construct a nuclear plant at sea level, it was just a plant waiting to be flooded, and for such a case they had ten years to design protections after being asked to reinforce measures (along with the other Japanese plants). But I can imagine that those who should have put up the money were not very collaborative (I even doubt whether those responsible learned the lesson).
Whether or not it was a swiss cheese failure I won't get into (note that the parent of the parent and I are different users); their negligence breaks all the logic we could apply without introducing the corruption variable behind those decades of bad decisions.
> It was negligent to construct a nuclear plant at sea level, it was just a plant waiting to be flooded,
So why did they build it there? It isn't a gentleman in a clown hat hitting himself on the head with a rubber mallet, they had a reason. These things are always trade-offs.
Maybe if they'd built it up on the hill there'd have been an earthquake, then a landslide, and the plant slides into the sea and gets waterlogged. I dunno. If we're talking about things without clearly defined bounds of risk tolerance, that is the sort of scenario that can be brought up. You're talking about negligence, but you aren't saying what tolerances this plant was built with, what you want it to be built to, or what trade-offs you want made. Once you start getting into those details it becomes a lot less obvious that Fukushima is even a bad thing (it probably is; the tech is pretty old and we wouldn't build a plant that way any more, is my understanding). It isn't possible to just demand that engineers prevent all bad outcomes; reality is too messy. The theoretical point I'm bringing up is that it isn't negligent if there are reasonable design constraints and then something outside the design considerations happens and causes a failure. It is just bad luck.
The whole affair seems pretty responsible from where I sit a long way away. Fukushima is possibly the gentlest engineering disaster to ever enter the canon. It is much better than a major dam or bridge failure for example, and again, assuming the event that caused the whole thing was unexpected, not even evidence of bad management. Most engineering failures involve a chain of horrific choices that leave the reader with tears in their eyes, not just a fairly mild "well we were hit with a wild tsunami and doubled the nominal price tag of the cleanup with no obvious loss of life or limb". And bear in mind we're scouring the world for the worst nuclear disaster in the 21st century.
> "well we were hit with a wild tsunami and doubled the nominal price tag of the cleanup with no obvious loss of life or limb"
This is a bit of a wild understatement. (1) the tsunami was by no means wild, as multiple posts here have referenced, and (2) the incident resulted in a number of significant injuries, not including the deaths involved in the evacuation. And those deaths very much count - you can't hand-wave away the consequences of the evacuation on the basis of hindsight that the evacuation was larger than the final outcome necessitated.
> And those deaths very much count - you can't hand-wave away the consequences
I don't. If it is what it looks like, the government officials that ordered/organised the evacuations should be harshly censured and the next time evacuation orders should be more risk-based and executed in a safer way. What little I've gleaned suggests an appalling situation where a bunch of presumably old people were forced from their homes to their deaths. The main thing keeping me quiet on the topic is I don't speak Japanese and I don't really know what happened in detail there.
<< The Fukushima Daiichi Nuclear Power Plant construction was based on the seismological knowledge of more than 40 years ago. As research continued over the years, researchers repeatedly pointed out the high possibility of tsunami levels reaching beyond the assumptions made at the time of construction, as well as the possibility of reactor core damage in the case of such a tsunami. However, TEPCO downplayed this danger. Their countermeasures were insufficient, with no safety margin.>>
<< By 2006, NISA and TEPCO shared information on the possibility of a station blackout occurring at the Fukushima Daiichi plant should tsunami levels reach the site. They also shared an awareness of the risk of potential reactor core damage from a breakdown of sea water pumps if the magnitude of a tsunami striking the plant turned out to be greater than the assessment made by the Japan Society of Civil Engineers.>>
Even leaving aside that they ignored the original placement in order to reduce costs, using conveniently biased seismological reports, TEPCO knew the plant was at risk; they were warned repeatedly that it was at risk. And the supposed regulator NISA [0] conveniently closed its eyes (conveniently for some).
<< TEPCO was clearly aware of the danger of an accident. It was pointed out to them many times since 2002 that there was a high possibility that a tsunami would be larger than had been postulated, and that such a tsunami would easily cause core damage.>>
From the other url I put (I updated it with a cached url; I didn't notice the article was deleted):
<< there appear to have been deficiencies in tsunami modeling procedures, resulting in an insufficient margin of safety at Fukushima Daiichi. A nuclear power plant built on a slope by the sea must be designed so that it is not damaged as a tsunami runs up the slope.>>
EU raised the maximum permitted levels of radioactive contamination for imported food following Fukushima; that is not the gentlest gesture towards Europeans. Japanese citizens also received their dose, and at the time the more vulnerable ones were recruited by the Yakuza to clean up the zone.
No, I'm just trusting that you'll be honest about what it is saying. I don't need to read a report to persuade myself that a 40 year old plant was designed based on the best available knowledge of 40 years ago. That seems like something of a given. I'm just not sure where you are going with that, it doesn't obviously suggest negligence to me.
You're not saying what tolerances you want them to design to. We both agree that there are scenarios that can and might happen. Obviously it is possible for a tsunami to take out buildings built near the shore in Japan, so it doesn't surprise me that people raised it as a risk. A lot of buildings got taken out that day. That doesn't obviously suggest negligence to me; obviously a lot of people were happy living with the risk.
> EU raised the maximum permitted levels of radioactive contamination for imported food following Fukushima
Oh well then. I had no idea. I thought the consequences were minor and now I have learned ... there you go, I suppose. I'm not really sure what to do with this new information.
> I'm just not sure where you are going with that, it doesn't obviously suggest negligence to me.
You didn't read the report or search for information about the matter, but I have no problem repeating it for you:
General Electric's design originally placed the plant 30-35 meters above the ocean. Instead, TEPCO modified that design and constructed it (almost) at sea level, resorting to studies convenient to their purpose because it was cheaper, in one of the most tsunami-prone countries, with a history of waves reaching 20-30 meters. When those conveniently chosen studies were no longer justifiable, as deeper studies finally refuted them, they decided to just keep ignoring all the warnings and requests to reinforce safety. They knew the nuclear plant was in danger, they always knew it; General Electric didn't design for 30-35 meters above the ocean by coincidence. And this happened with a supposed regulator conveniently closing its eyes across those years, ignoring even pipes with fissures.
Well, this obviously suggests negligence to me. Decades of bad decisions with a strong smell of corruption.
> You're not saying what tolerances you want them to design to.
What about enough tolerance to avoid a meltdown of the core, especially under the two events, an earthquake and a tsunami, which is exactly what happened after the warnings and requests to reinforce safety were ignored.
> Oh well then. I had no idea. I thought the consequences were minor and now I have learned ... there you go, I suppose. I'm not really sure what to do with this new information.
Keep the sarcasm for other places, if you don't mind. It is not a mere gentlest engineering disaster when it reached the whole planet, which ate TEPCO's cesium-137, especially the Japanese. And it is not a mere gentlest engineering disaster when you have to force vulnerable people to go to ground zero to move contaminated land and water.
> What about enough tolerance to avoid a meltdown of the core, especially under the two events, an earthquake and a tsunami, which is exactly what happened after the warnings and requests to reinforce safety were ignored.
I wasn't going to reply but that seems like it moves the conversation forward; so why not?
It seems to me your design goal is fundamentally incompatible with a lot of the specific complaints of negligence. If you want a design that doesn't melt down when there is an earthquake and a tsunami, then moving the reactor to higher ground isn't helpful because it won't achieve the design goal. The design is still fundamentally vulnerable. Moving the reactor up 35m still leaves it vulnerable to a large enough tsunami and a big enough earthquake.
If your solution is moving the site uphill, then your design goal should be talking in terms of a 1 in X year event. If you want the risk completely mitigated then in this case it isn't relevant where the site is since the obvious way to achieve that design goal is just build something that doesn't fail when flooded. Coincidentally that seems to be the approach that the newer generation designs use - change how the cooling works so that it can't melt down in any reasonable circumstances, tsunami or otherwise.
I will note that there is a reading of your comment where you want the design to be able to tolerate this specific event. I'm ignoring that reading as unreasonable since it requires hindsight, but in the unlikely event that is what you meant then just pretend I didn't reply.
> Keep the sarcasm for other places, if you don't mind. It is not a mere gentlest engineering disaster when it reached the whole planet, which ate TEPCO's cesium-137, especially the Japanese. And it is not a mere gentlest engineering disaster when you have to force vulnerable people to go to ground zero to move contaminated land and water.
Which one do you think was gentler and a story of similar popularity as Fukushima? It is pretty usual to have multiple people actually die and it be the engineer's responsibility once something becomes international news. Even something as basic as a port explosion usually has a number of missing people in addition to a chunk of city being taken out. To anchor this in reality, Fukushima at a class 7 meltdown might have done less damage than a coal plant in normal operation. Coal plants aren't pretty places and air pollution is nasty, nasty stuff.
> It seems to me your design goal is fundamentally incompatible with a lot of the specific complaints of negligence. If you want a design that doesn't melt down when there is an earthquake and a tsunami, then moving the reactor to higher ground isn't helpful because it won't achieve the design goal.
My goal? My solution? My design!? You must be kidding now:
- GE original design: 30-35 meters above the sea.
- Warnings to reinforce safety over a decade.
- Tsunami at Fukushima's nuclear plant: 15 meters above the sea.
> I wasn't going to reply but that seems like it moves the conversation forward; so why not?
Forward to... nothing, it seems. You just replied with hypotheticals as if the event hadn't happened, and as if such an event would have been impossible to avoid, with some kind of dissociative reflections that surpass cynicism. I'm the one who is not going to reply.
> Caveating that I'm not really sure it was even an out-of-design event but if it was then it is case closed and the swiss cheese model is an inappropriate choice of model to understand the failure.
This is not how safe systems are designed and operated. Safety is not a one-time item, it is a process. All safety-critical systems receive attention throughout their operating lives to identify and mitigate potential safety risks. Throughout history, many safety-critical systems have received significant changes during their operating lives as a result of newly-discovered threats or recognition that threats identified during the initial design were not adequately addressed. Many (if not most) commercial aircraft have required significant modifications to address problems that were not understood at the time they were initially built and certified. Likewise, nuclear power plants in many countries have received major modifications over the years to address potential safety issues that were not understood or properly modeled at the time of their design. Sometimes, this process determines that there is no safe way to continue operation - usually that there is no economically viable way to mitigate the potential failure mode - and the system is simply shut down. This has happened to a few aircraft over the years, as well as several nuclear power plants (in many cases justified, in others not so much).
Fukushima existed in just such a system, and the disaster that occurred was the result of failures throughout the system, not a one-off failure at the design stage.
> I mean, ok. So say they build the plant 35m higher up, then get hit by a tsunami that is 36 meters higher [0] than the one that caused the Fukushima disaster? If we're going to start worrying about events outside the design spec we may as well talk about that one. If they're designing to tolerate an event, we can pretty reliably imagine a much worse event that will happen sooner or later and take the plant out. That is the nature of engineering.
I think you are missing the point. Obviously it is possible that a tsunami higher than any possible design threshold could occur (it is, after all, possible that an asteroid will strike the Pacific and kick up a wave of debris that wipes everything off the home islands). However, the tsunami that struck Fukushima Daiichi was no higher than a number of tsunamis that were recorded in Japan within the last century. The choice of DBA tsunami height was clearly an underestimate, and underestimates were identified for Fukushima and other plants prior to the accident but not acted upon. This was not a case of "a bigger wave is always possible"; it was a case where the design, operation, and supervision were wrong, and known (by some) to be so prior to the accident.
> The choice of DBA tsunami height was clearly an underestimate, and underestimates were identified for Fukushima and other plants prior to the accident but not acted upon.
Not much of a swiss cheese failure then though. The failure is just that they committed hard to an assumption that was wrong.
My point is that unless it is actually an example of multiple failures lining up then this is a bad example of a swiss-cheese model. Seems to be an example of a tsunami hitting a plant that wasn't designed to cope with it. And a plant with owners who were committed to not designing against that tsunami despite being told that it could happen. It is a one-hole cheese if the plant was performing as it was designed to. The stance was that if a certain scenario eventuated then the plant was expected to fail and that is what happened.
Swiss cheese failures are when there are supposed to be a number of independent or semi-independent controls in different systems that all fail, leading to an outcome. This is just that they explicitly chose not to prepare for a certain outcome. Not a lot of systems failing; it even seems like a pretty reasonable place to draw the line for failure if we look at the outcomes. Expensive, unlikely, not much actual harm done to people, and likely to be forgotten in a few decades.
I don't think you understand how a swiss cheese failure happens. The layers aren't independent or semi-independent. Latent failures expose active failures, like:
"Committed hard to an assumption that was wrong"
That let the tsunami damage the seawater pumps along the shoreline and flood the emergency diesel generators.
Which caused total loss of AC and DC power.
And the loss of AC and DC power caused the reactors to overheat.
There was a strong corporate cultural component to Fukushima as well. Tepco had spent decades telling the Japanese public that nuclear power was completely safe. A tall order in Japan obviously, but by and large it worked.
During the operation of Fukushima Daiichi, various studies had been done that recommended upgraded safety features like enlarging the seawall, moving the emergency generators above ground so they couldn't be flooded, etc.
In every case, management rejected the recommendations because:
1. They would cost money.
2. Upgrading safety would be tantamount to admitting the reactors were less than safe before, and we can't have that.
> Having to have thread safe code all over the place just for the 1% of users who need to have multi-threading in Python and can't use subinterpreters for some reason is nuts.
Way more than 1% of the community, particularly of the community actively developing Python, wants free-threaded. The problem here is that the Python community consists of several different groups:
1. Basically pure Python code with no threading
2. Basically pure Python with appropriate thread safety
3. Basically pure Python code with already broken threaded code, just getting lucky for now
4. Mixed Python and C/C++/Rust code, with appropriate threading behavior in the C or C++ components
5. Mixed Python and C or C++ code, with C and C++ components depending on GIL behavior
Group 1 gets slightly reduced performance. Groups 2 and 4 get a major win from free-threaded Python, being able to use threading through their interfaces to C/C++/Rust components. Group 3 is already writing buggy code and will probably see worse consequences from their existing bugs. Group 5 will have to either avoid threading in their Python code or rewrite their C/C++ components.
Right now, a big portion of the Python language developer base consists of Groups 2 and 4. Group 5 is basically perceived as holding Python-the-language and Python-the-implementations back.
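To make group 3 concrete, here is a minimal sketch (invented for illustration) of threaded code whose race the GIL only makes intermittent; free-threading doesn't introduce the bug, it just makes it fire more often:

    import threading

    counter = 0  # shared mutable state, no lock

    def work(n):
        global counter
        for _ in range(n):
            counter += 1  # read-modify-write, not atomic even under the GIL

    threads = [threading.Thread(target=work, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Expected 400000, but updates can be lost whenever a thread is
    # switched out between the load and the store, GIL or no GIL.
    print(counter)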
Where is the major win? Sorry but I just don't see the use case for free-threading.
Native code can already be multi-threaded so if you are using Python to drive parallelized native code, there's no win there. If your Python code is the bottleneck, well then you could have subinterpreters with shared buffers and locks. If you really need to have shared objects, do you actually need to mutate them from multiple interpreters? If not, what about exploring language support for frozen objects or proxies?
The only thing that free threading gives you is concurrent mutations to Python objects, which is like, whatever. In all my years of writing Python I have never once found myself thinking "I wish I could mutate the same object from two different threads".
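And the shared-buffer part doesn't even have to wait for subinterpreter APIs to settle; a minimal sketch with stdlib multiprocessing.shared_memory (Python 3.8+) already gets two interpreters mutating the same bytes with no pickling:

    from multiprocessing import Process, shared_memory

    def worker(name):
        # Attach to the existing block by name and mutate it in place.
        shm = shared_memory.SharedMemory(name=name)
        shm.buf[0] = 42  # raw shared bytes, nothing serialized
        shm.close()

    if __name__ == "__main__":
        shm = shared_memory.SharedMemory(create=True, size=16)
        p = Process(target=worker, args=(shm.name,))
        p.start()
        p.join()
        print(shm.buf[0])  # 42, written by the other interpreter
        shm.close()
        shm.unlink()  # release the block when done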
> Native code can already be multi-threaded so if you are using Python to drive parallelized native code, there's no win there.
When using something like boost::python or pybind11 to expose your native API in Python, it is not uncommon to have situations where the native API is extensible via inheritance or callbacks (which are easy to represent in these binding tools). Today with the GIL you are effectively forced to choose between exposing the native API parallelism or exposing the native API extensibility; e.g. you can expose a method that performs parallel evaluation of some inputs, OR you can expose a user-provided callback to be run on the output of each evaluation, but you cannot evaluate those inputs and run a user-provided callback in parallel.
The "dumbest" form of this is with logging; people want to redirect whatever logging the native code may perform through whatever they are using for logging in Python, and that essentially creates a Python callback on every native logging call that currently requires a GIL acquire/release.
Could some of this be addressed with various Python-specific workarounds/tools? Probably. But doing so is probably also going to tie the native code much more tightly to problematic/weird Pythonisms (in many cases, the native library in question is an entirely standalone project).
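You can see the shape of the problem without any native code at all; in this rough sketch (timings will vary by machine), a CPU-bound Python callback serializes a thread pool the same way user callbacks serialize parallel native workers today:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def py_callback(x):
        # Stand-in for a user-provided Python callback invoked from
        # parallel native code; pure-Python work holds the GIL.
        total = 0
        for i in range(2_000_000):
            total += i
        return total

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=4) as pool:
        list(pool.map(py_callback, range(4)))
    # Under the GIL this takes roughly 4x the single-call time;
    # free-threaded builds can actually overlap the four callbacks.
    print(f"{time.perf_counter() - start:.2f}s")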
> The only thing that free threading gives you is concurrent mutations to Python objects, which is like, whatever.
The big benefit is that you get concurrency without the overhead of multi-process. Shared memory is always going to be faster than having to serialize for inter-process communication (let alone that not all Python objects are easily serializable).
> The big benefit is that you get concurrency without the overhead of multi-process.
Bigger thing imo is that multiprocessing is just really annoying. In/out has to be pickleable, anything global gets rerun in each worker which often requires code restructuring, it doesn't work with certain frameworks, and other weird stuff happens with it.
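A toy sketch of the pickling annoyance (example invented): identical code works with threads and blows up with processes, because process pools must pickle the function and its arguments:

    from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

    square = lambda x: x * x  # lambdas cannot be pickled

    if __name__ == "__main__":
        with ThreadPoolExecutor() as pool:  # threads share objects directly
            print(list(pool.map(square, range(4))))
        with ProcessPoolExecutor() as pool:  # processes pickle everything
            print(list(pool.map(square, range(4))))  # raises PicklingError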
> In practice CPython reliably calls it cuz it reference counts ... In a world where more people were using PyPy we could have pressure from that perspective to avoid leaning into it
A big part of the problem is that much of the power of the Python ecosystem comes specifically from extensions/bindings written in languages with manual (C) or RAII/ref-counted (C++, Rust) memory management, and having predictable Python-level cleanup behavior can be pretty necessary to making cleanup behavior in bound C/C++/Rust objects work. Breaking this behavior or causing too much of a performance hit is basically a non-starter for a lot of Python users, even if doing so would improve the performance of "pure" Python programs.
> That cleanup can be explicit when needed by using context managers.
It certainly can be, but if a large part of the Python code you are writing involves native objects exposed through bindings then using context managers everywhere results in an incredible mess.
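As an illustration (all the names here are hypothetical stand-ins, not a real binding), compare leaning on prompt ref-count cleanup against having to scope every bound object by hand:

    import contextlib

    class Handle:
        # Minimal stand-in for a pybind11-style bound C++ object; a real
        # binding would free native memory in __exit__/the destructor.
        def __enter__(self):
            return self

        def __exit__(self, *exc):
            pass

    def load_mesh(path):  # hypothetical binding helpers
        return Handle()

    def convex_hull(mesh):
        return Handle()

    # Ref-count style: the intermediate mesh is released as soon as its
    # last reference dies, with no ceremony at the call site.
    hull = convex_hull(load_mesh("part.stl"))

    # Context-manager style: every intermediate must be named and scoped,
    # and the nesting compounds across a whole script of such calls.
    with contextlib.ExitStack() as stack:
        mesh = stack.enter_context(load_mesh("part.stl"))
        hull = stack.enter_context(convex_hull(mesh))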
> Mixing resource handling with object lifetime is a bad design choice
It is a choice made successfully by a number of other high-performance languages/runtimes. Unfortunately for Python-the-language, so much of the utility of Python-the-ecosystem depends on components written in those languages (unlike, for example, JVM or CLR languages where the runtime is usually fast enough to require a fairly small portion of non-managed code).
The point is that almost all of the signatories considered themselves to be immune to a "real war" in their futures at the time they signed. E.g. basically all of the European signatories assumed that the end of the cold war and existence of NATO would ensure the end of any possible threat. Given that assumption, as obviously flawed as it was, signing on to a ban was cheap PR (literally cheap, too, because it meant they could divest those weapons and their delivery mechanisms to reduce defense expenditures).
> Given that assumption, as obviously flawed as it was, signing on to a ban was cheap PR (literally cheap, too, because it meant they could divest those weapons and their delivery mechanisms to reduce defense expenditures).
Doubly so, since they understood themselves to be backed up by a non-signatory (the US).
I think the story is a bit more complicated. Core succeeded precisely because Intel had both the low-power experience with Pentium-M and the high-power experience with Netburst. The P4 architecture told them a lot about what was and wasn't viable and at what complexity. When you look at the successor generations from Core, what you see are a lot of more complex P4-like features being re-added, but with the benefits of improved microarch and fab processes. Obviously we will never know, but I don't think you would get to Haswell or Skylake in the form they were without the learning experience of the P4.
In comparison, I think Arm is actually a very strong cautionary tale that focusing on power will not get you to performance. Arm processors remained pretty poor performers until designers from other CPU families entirely (PowerPC and Intel) took Arm on at Apple and basically dragged it to the performance level it is at today.
> In comparison, I think Arm is actually a very strong cautionary tale that focusing on power will not get you to performance.
Hugely underappreciated. Someone involved fully understood that "you don't get to the moon by climbing progressively taller trees".
The other two times Arm had great performance were the StrongArm, when it was implemented by DEC people off the Alpha project, and the initial ones, which were quite esoteric and unusually suited to the situation of the late 80s.
If you look at the V-22 safety record in the context of the level of technical development, it is pretty good (e.g. compare to helicopters and aircraft from the 60s). The first production generation of a brand new type of vehicle is always going to be complicated, and virtually all of the V-22 mishaps come from the "new" components and procedures.
The fundamental tradeoff with tiltrotor platforms is that you trade significantly increased speed for significantly increased complexity. What that means is your battlefield survivability goes up when dealing with any opponent with meaningful air defenses, but at the cost of increasing your "resting" accident rate when most peacetime accidents are consequences of maintenance and/or procedural issues.
Because Sikorsky can't make them work. Sure, they can take off and fly fast in a straight line, but they haven't been able to demonstrate sufficient maneuverability due to vibration problems in the rotor head. They are also very tall, prohibitively so for existing shipboard hangars, which would otherwise seem to be their advantage over tiltrotor platforms.
> The hardware is built around a stackable 10×10cm compute module with two ARM Cortex-A55 SBCs — one for ROS 2 navigation/EKF localisation, one dedicated to vision/YOLO inference — connected via a single ethernet cable.
I will preface this by saying that I have nothing against ARM per se, that my employer/team supported a good chunk of the work for making ROS 2 actually work on arm64, and that there is some good hardware out there.
I really don't understand why startups and research projects keep using weird ARM SBCs for their robots. The best of these SBCs is still vastly shittier in terms of software support and stability than any random Chinese Intel ADL-N box. The only reasons to use (weird) ARM SBCs in robots are that either (1) you are using a Jetson for Jetson things (i.e. Nvidia libraries), or (2) you have a product which requires serious cost optimization to be produced at a large scale. Otherwise you are just committing yourselves and your users/customers to a future of terrible-to-nonexistent support and adding significantly to the amount of work you need to bring up the new system and port existing tools to it.
> The only reasons to use ARM SBCs in robots are...
Obviously, anyone can have their own opinion on this.
I work in robotics, we are quite happy with our A53 and M4. Though, we use a SOM, not a SBC, if you feel like splitting hairs.
You probably aren't using some weird SOM, though. There is a bit of an unstated exception of "unless said SBC/SOM has specific hardware that is necessary/particularly valuable for your product/project". For example, if you need GMSL you are probably not going to be picking Intel, even though ADL-N and the bigger processors support MIPI, simply because no one else does and the documentation/support for it is basically nonexistent. Designs with closely-coupled A/M/R cores, or CPU/MCU/FPGA hybrids like Zynq would be others.
But generally projects which are choosing some random SBC aren't using any of these features, and are just suffering the pain/imposing it on their users for no good reason.
again, just an opinion, but it feels really weird to hear you find "exception after exception", when the net result is that you've ruled out more real-world robotics projects on ARM than likely exist on the x86 you're suggesting should be the "norm".
you've ruled out the entire NXP ecosystem, the entire Nvidia Jetson ecosystem, the entire AMD/FPGA/Zynq ecosystem, even perfectly good options like beagle-board .... who else?
incidentally, you've also ruled out this project - as they are using an M7 microcontroller to meet their hard-real-time timing constraints...
The other poster had said nothing about microcontrollers, e.g. about the various MCU models based on Cortex-M cores.
Some things are best done with a microcontroller, and those are not suitable for being done with a general-purpose CPU either based on Intel/AMD or on Cortex-A cores. Actually there are many projects that mistakenly use something like a Raspberry Pi instead of a better and cheaper implementation with a microcontroller, e.g. one based on Cortex-M7 or its successor, Cortex-M85.
The other poster said that where you do not want a microcontroller, but you want to run a standard operating system, e.g. Linux, the best choice is much more frequently a SBC with an Intel Alder Lake N or Twin Lake CPU, as these not only have better performance per dollar than the ARM-based SBCs, but they also avoid software problems and future maintainability problems.
Unfortunately, during the last few months the price of Intel-based SBCs has been affected by the fact that most of them do not have soldered memory but they use one SODIMM memory module. While you can buy an Intel Alder Lake N based SBC for $100, buying today a SODIMM for it may cost as much or more, depending on the amount of memory with which you are content.
The ARM SBCs that come with soldered LPDDR memory have initially been less affected by the price hikes, though now even for them the prices are rising.
I think you're missing my point entirely. If your project needs specific hardware, you have to use that specific hardware (the obvious examples of which would be Jetsons or Zynq/Zynq-like or something ASIL-D or something that tightly couples "A"/M/R cores together, or you are stuck using a SoC from Qualcomm for cell connectivity). There are a lot of projects that do fall into that category.
There are also a (much smaller) number of projects that will legitimately see the kind of scale of production that justifies aggressive cost optimization for the compute platform, either in terms of designing their own around a SoC or picking some SBC/SoM that they can get a good deal on, where the significant additional up-front engineering cost is outweighed by the production savings (and where the desire/need to keep a fixed platform means the often limited platform support from the vendor is less restrictive).
But a large number of robotics projects (basically everything in the research sphere) - this one very much included - just need "some computer" for general-purpose use. They are already separating realtime control onto a separate microcontroller board. For these projects, picking some weird SBC is almost always a "premature pessimization". You are signing up for worse CPU and GPU performance, stability, and development future for very little reward.
There are a variety of x86 products with Coreboot support, if what you are looking for is firmware openness. If what you are looking for is PCB design openness, the options are much fewer, but at that point you are probably optimizing for an overly niche objective.
> Part of the point of this for me is to see what's possible with open hardware (down to chip level at least)
I appreciate the idea, but this is essentially saying "this project will prioritize a specific choice of one (core) piece of hardware to the detriment of everything else, users included". Approximately none of your potential users are going to benefit from the "openness" of the SBC versus that of a more broadly-supported platform (I say "openness" because the reality of SBCs is that actually finding a usefully performant one that is completely blob-free is almost impossible). Open hardware means very little if it isn't running an upstream kernel and userland.
The South-Korean Hardkernel ODROID H4 models are open hardware. There is no need to send one to you, as you can order one yourself from their on-line shop or from local shops.
You get their schematics/PCB documentation and their BIOS has features that are missing in most mini-PCs and laptops with Alder Lake N/Twin Lake, e.g. you can enable in-band ECC for the memory. You can choose various variants of the SBC and you can buy cheaply various accessories, e.g. several case variants and additional peripheral interfaces. Those ODROID H4 SBCs are also correctly designed for cooling inside a box like that used in this project, because the PCB is attached to a big heatsink and you can attach the heatsink directly to an aluminum wall from inside the box, ensuring good thermal contact with pads or grease, so that the electronics will be cooled well.
Most technical information can be found in their Korean site, but there is a UK distributor (though the prices appear greatly inflated here; so much that it might be cheaper to buy from South Korea, depending on shipping costs and applicable taxes):
Also, the Chinese Radxa has a Raspberry Pi sized SBC with an Intel N100, which is open hardware, with complete schematics/PCB documentation (but unlike the ODROID H4, which has excellent cooling and can be used without a fan, it is unclear how easy it is to cool the Radxa SBC).
Moreover, unlike for many Intel/AMD CPUs, which no longer have public documentation, for Alder Lake N Intel still provides public datasheets, which contain e.g. the thousands of control registers for the on-chip peripherals. Most ARM Cortex-A based CPUs are undocumented, with few exceptions like Rockchip RK3588 and the very expensive NVIDIA Orin/Thor (or the obsolete Xavier). All Cortex-A based CPUs have secret boot loaders, so you can never be certain that your programs really run on bare metal, as the CPU vendor can implement the equivalent of the Intel System Management Mode, where the proprietary vendor firmware can take control from your own operating system whenever it wants.
There are somewhat more ARM-based SBCs than Intel-based SBCs that are open hardware, but there are also plenty of undocumented ARM SBCs that are much worse from this PoV than the Intel/AMD based computers, where at least the IBM PC standards and the later standards pushed by Intel, e.g. ACPI/UEFI, apply. The Allwinner CPU used in this robot has almost non-existent documentation, in comparison with Intel Alder Lake N, so it is much farther from "open hardware".
You have mentioned the NVIDIA Jetson modules, which are based on Thor/Orin/Xavier. Those have excellent documentation, but you have to register at NVIDIA, for a free account, in order to access it. The documentation is not the problem with them, but the fact that they are greatly overpriced, like almost anything made by NVIDIA. Unless your application critically depends on some feature provided by NVIDIA, for which no acceptable alternatives exist, choosing Jetson is a very bad decision, because the alternatives are usually both better and cheaper.
The SBCs based on Cortex-A55 are the cheapest for the purpose of running Linux that still have a decent performance and they may be sufficient for many applications.
However, the SBCs based on either Intel Alder Lake N or on ARM Cortex-A7x cores are in a completely different class of performance, so they are more future-proof, as they can enable applications that were not taken into consideration in the beginning. Moreover, as pointed out by the other poster, none of the Cortex-A55 SBCs implements any kind of standard, so migration to any different SBC may require significant work, unlike with the Intel/AMD SBCs, which are mostly interchangeable.
The Intel Alder Lake N/Twin Lake cores (Gracemont cores) have a performance similar to the ARM Cortex-A78 cores, which for now can be found only in few SBCs, which use Qualcomm, Mediatek or NVIDIA CPUs. The Cortex-A76 cores, which are used in Rockchip 3588 and in the latest Raspberry Pi, have a speed of only around 2/3 of the Gracemont/Cortex-A78 speed, at the same clock frequency.
Cortex-A55 cores are many times slower than any of these bigger cores. A single Intel SBC (or Cortex-A7x based SBC), can replace both Cortex-A55 SBCs of this design, at about the same cost, improving the cooling and probably lowering the power consumption, while also providing a significant performance headroom for future extensions.
While using 1 Cortex-A55 SBC for minimum cost may make sense, using 2 is a definite mistake, as they should be replaced by 1 better SBC.
I have mentioned the open-hardware Intel-based ODROID H4. The same company makes several models of ARM-based SBCs, which I would trust much more in an outdoor robot than the choice made in the parent article, because the cooling behavior of all of them is carefully tested and reported on their site, and because it is a company that has been around for many years, demonstrating reliable hardware. Avaota provides much less information about their product than Hardkernel, i.e. they only give schematics/PCB information, without any information about power consumption, and especially none about thermal behavior, which is essential in a robot application.