According to my doctor, raising vitamin D through diet is not realistic. It's sunlight, or pills/shots if sunlight won't do it (genetics or climate).
In peak summer, being outdoors 2+ hours every day and drinking fortified almond milk daily, my vitamin D was 30 (<30 is inadequate). In winter it drops to 20 with similar outdoor time. I've been on a 50,000 IU pill once a week since.
Maybe someone will share a well-informed diet that contradicts my doctor.
I get the sense that you have to eat a lot of fish, which introduces heavy metal concerns because of modern fishing, which is why my doctor went the route he did. And how do you know a given mushroom actually got the right light to contain natural vitamin D? Raw ingredients like mushrooms don't usually carry nutrition labels.
FYI, spiking your blood vitamin D levels weekly might not be the best idea, though it's not exactly proven. There's a theory that spiking vitamin D like that can promote blood vessel calcification, and a further theory that vitamin K administered at the same time might help.
It could be safer to take 5,000 IU daily (35,000 IU/week total) than a single 50,000 IU spike once a week.
Watch out, though. I was on a similar daily dose and ended up with Vitamin D levels touching the upper limit. Too much Vitamin D is not good for you.
> and drank fortified almond milk daily.
Can't say without seeing the labels, but I wouldn't expect a cup of almond milk to have more than 10-20% of your daily value.
Cod liver oil is probably the best choice, as it also includes DHA, EPA and vitamin A, providing most things that would not be provided by vegetable oil (the only essential fatty substance that is neither in vegetable oil nor in cod liver oil is vitamin K2).
Well-made cod liver oil is tasty and you can add it to food together with whatever other kind of oil you prefer (after the food is cooked, not before, as it is heat sensitive). No more than 10 mL/day is necessary.
At least in the analysis reports that I have seen in the EU, fish oil has never been found to have a high content of mercury, even if the fish from which it was extracted were likely contaminated with mercury. Moreover, cod liver oil is sold in the EU as recommended for children and pregnant women. I doubt that any company would have the guts to sell such products here without taking care to make frequent chemical analyses to ensure that the product is never contaminated.
Chicken liver is also rich in vitamin D, but it is not advisable to eat great quantities, because it may contain too much vitamin A (which is toxic in excessive amounts). The amount of vitamin A in chicken liver or turkey liver is pretty much unpredictable, because it may vary by more than an order of magnitude between various producers, depending on how they feed the birds.
Most vitamin D3 pills contain vitamin D3 that is produced from sheep wool (i.e. from lanolin).
The substance in mushrooms (ergocalciferol, a.k.a. vitamin D2) has a structure similar to the true vitamin D (cholecalciferol, a.k.a. vitamin D3).
Nevertheless, it seems that it is not able to substitute vitamin D in all its functions. Therefore it is not advisable to count on it as a source of vitamin D.
One company has claimed to have discovered a species of lichen that contains true vitamin D. Nevertheless, their advertising seemed highly suspicious and looked more like a scheme to separate naive vegans from their money.
Even if it were true, exploiting wild lichen would be much more unethical than eating the normal vitamin D3 supplements made from sheep wool. The reason is that wild lichens grow very slowly and exploiting a species for a food supplement would cause a very high risk of extinction for that species.
In any vertebrate animal, the liver is the part with the greatest content of vitamin D.
AFAIK, Apple's Core Foundation CFArray also works similarly[0].
NSMutableArray works a little differently (it uses a circular buffer). From the always excellent Ciechanowski[1].
Sounds like an interesting idea at first until the point where they will probably create a Swift/ObjC API around it instead of the standard webgpu.h C API, and at that point you can just as well use Metal - which is actually a bit less awkward than the WebGPU API in some areas.
I recently used Odin in a commercial project and had a great experience. For me the biggest hurdle was not the language, but having to write programs without an IDE like Visual Studio/Xcode. Having to write my own build scripts (shell or batch files) and maintain them is a PITA.
But I'm glad I did it because it checks off all the "C but nicer" checkboxes.
The fact that it not only doesn't have a built-in build tool/package manager, but that the author has also said he doesn't believe in them and will never make one, was very off-putting to me. I love how languages like Rust have cargo, or Gleam has it built right into the compiler. I'm so fed up with how in C++ or Python there's a billion competing tools. The language looked really cool, but without good author-blessed tooling, I doubt I'll ever use it myself.
> The fact that it not only doesn’t have a built in build tool/package manager...
I've been programming in Odin for a few months now, and I've come to actually like this choice.
I still use the occasional dependency, and installing it is even easier than with a package manager!
I download a repo as zip from github, extract it in my project, and voilà, it's ready to use. No compilation, transpilation, peer dependencies, locking versions, etc.
Another positive of this approach is that I can now easily read the dependency source code and if needed modify it, as it's become a part of my project, not some transpiled and minified version of that code sitting in an unversioned folder.
Overall, in Odin I use dependencies much more sparingly than when I work with JS. The reason is that the core and vendor packages of the language already include a surprising amount of things you'd normally reach for npm/cargo for. Need linear algebra primitives? Specialized data structures like a priority queue? SDL2? stbi? It's all included in the language (and so much more), ready to use.
I've come to realize that more often than not it's fine to reinvent the wheel to solve your specific problem rather than relying on a generalized (and thus unoptimized) 3rd party library.
Thanks for taking the time to tell us about your experience! I’m happy it works well for you! With that said, that doesn’t sound like it’s for me.
This sounds like what I already do in C++ (except I use git submodules for dependencies because it makes it easier to pull new versions, check versions, etc), and tbh I don't much like it. I do it out of necessity. I'd much rather keep tabs on all my libraries and versions in a project file and have a tool that will download the version for me, build it, tell me when new versions are available, update to the latest version (if I so choose), and so on. In C++, I have to manually go to each dependency and check if there's a new version, and pull it if I decide I want it. In, eg, Gleam, I can ask gleam what's new.
In Odin, it sounds like I have to do the same — either use submodules or download the release files by hand. I have to manually check for updates and then replace my local files.
It’s just not something I personally like to do. The author is entitled to be opinionated about this, but it clashes with my own opinion, so that means I probably won’t try the language even though it looks pretty good from a language design point of view.
As someone who writes C# for a living, I see some great advantages with package managers. We have set up a system where our own libraries are published to our local NuGet repository and the results have been positive.
With that said, could we do this without a package manager? I mean... yes... instead of a NuGet folder structure it would be a DLL folder structure. Certainly possible, generally speaking.
I guess there are negatives on either side. Without a package manager you have to do more by hand. With one, updating is easy... and I know developers who update without looking into the details. There was one example where NuGet said there was an update, so the developer applied it and it caused errors. That's because the update was for a later .NET version.
However, when it comes to Odin, I have found it to be a pleasant experience doing a `git pull` of any Odin library I need into a 'thirdparty' directory and importing it into my Odin code.
So we have the built-in ones like core, vendor, etc., then I have thirdparty. If any of those libraries in thirdparty made their way into Odin's vendor, it would be a simple change in the code.
I use Odin in my own personal projects, but if I used it at my current workplace, I would likely set up a structure similar to my C# setup, with a shared directory holding the libraries we need -- simple git pulls, etc.
These dependency managers are something of a double-edged sword. They avoid a lot of work if your project has a lot of external packages, but they also encourage pulling in lots of external dependencies without much thought. Every one of those, and every one of their sub-dependencies, exposes users of the software to significantly more risk. It's a breeding ground for common vulnerabilities and supply chain attacks.
Partly because of this, I try to avoid external dependencies as much as possible. When I need something that's not built in to the language, I choose in the following order of preference:
1. The standard library and target platform libraries, augmenting any inadequate features with my own extensions if necessary.
2. A very well known, well maintained, and widely used library with few dependencies of its own. Something that could almost be mistaken for #1.
3. Write my own minimal version of what I need, if I can do so with a reasonable level of effort.
4. A lesser-known third-party library, if it appears well maintained, and if I am willing to audit it and every future update to it.
A happy side effect is that a language with no built-in dependency manager is still perfectly viable for me, since it wouldn't be saving me much work anyway.
Well… we can’t save people from themselves. It’s always prudent to ensure the libraries you use are high quality, stable, and well maintained and your list is a good order of reference to live by, in any language.
If those things on PyPI depend on binary libraries that you need to compile yourself, there is going to be some fun, depending on the state of the current system.
Static sites are great until you need to have a contact form or want to add basic comments. Yes, you can deploy javascript that uses external services to add such functionality to static sites- but with a basic WordPress site, you get everything right out of the box.
A contact form is a terrible alternative to an email address. Many sites have dropped comments altogether. Yes, these things might be nice-to-haves, but they shouldn't be the factor that determines whether you have a static site or a scripted one.
I have to politely disagree. If you ever run a website for business/portfolio etc. the number of people more likely to contact you using a contact form is far greater than just telling them to email you. Contact forms also scale well if you need to categorize and channel the queries to different people or need more specific info.
Fair enough. I would argue that we're probably talking about different use cases. If you're at the stage of categorising and channelling queries to different people, your site is probably already "heavyweight" enough to justify a backend anyway.
I came here to write this exact comment. The article is wrong in assuming that WP is wasteful. It gives huge optionality to the users: engineers can probably afford to go with a static page and then change the entire architecture of their webpage once they need some interactivity, but non-engineers want a scalable solution: they start with contact info and slowly end up with a personal shop or whatnot, without reinventing the setup at each phase transition.
Speaking of optionality and opportunity costs: many engineers are trained to see the unseen opportunity costs in technology ("YAGNI" and "tech debt" are often used terms), but often fail to see the economic opportunity costs: those that would waste time and cognitive effort of human beings, not the machines. Example: many engineers like to fantasize about micropayments architectures "because efficiency", but people cannot calculate those. They are better off with a nice round monthly subscription just to minimize number of microdecisions they have to go through daily.
It uses the microui lib written by rxi. Check out the other single-header C libs written by rxi[0]. They're really elegant C in my view.
[0] - https://github.com/rxi
Can mostly survive on autopilot, and it's so much easier to shut something down for not having > X0 million users than it is to shut down a clear charity case because it doesn't make money (it's not supposed to!).
After 7 years there, the greatest lesson I walked away with is that motivated reasoning really matters.
Very very little at Google survives on autopilot. There’s a constant drumbeat of required changes, whether they be accessibility, regulatory (DMA, hurrah), internally motivated changes (material design++) etc. For anything to survive there must be someone who cares and is willing to invest in it. This is a large part of why stuff gets killed, the default there is for things to die. Yes there are strategy changes too, but very often things die because the person who could champion a thing is no longer there to do so, or has changed roles significantly enough that they can’t plausibly do so anymore.