> Not sure about this event but in Europe you have to hire a professional security company.
This is not universal across Europe.
I've been part of organizing computer events in the 5k participant range without any hired security or medical staff. I think it greatly depends on the standing and culture of volunteer work in your country.
This comment scares me a little, having recently re-watched The Handmaid's Tale...
I don't really understand the premise. We're already pushing several limits for global sustainability due to the human (over-)population (climate change, food supply, loss of fauna diversity).
Why is it not a good thing to reduce the population size?
Why should we not work towards a sub-2 TFR for a few generations, to reduce human impact on our planet a little?
Why should governments actively work against people's right to choose whether to have a child or not, by increasing fertility through additives to the drinking water?
> I don't really understand the premise. We're already pushing several limits for global sustainability due to the human (over-)population (climate change, food supply, loss of fauna diversity).
> Why is it not a good thing to reduce the population size?
Because the shareholders expect constant growth (of their investments)
Once fertility declines, it's extremely hard to reverse the decline without a massive cultural shift. Countries like South Korea and Hungary have made heroic efforts -- without much success. People raised in a generational culture of small families tend to have small families themselves. So either your culture shifts, or your culture is quickly replaced by high-fertility subcultures such as the Amish, Quiverfull, and Haredim. ("Quickly" in generational terms, of course.)
My thinking is that if you value your culture and its values, you should want to see it persist. The only way for it to persist in meaningful terms is for it to reproduce to at least a replacement level. The alternative -- which might be better for the environment in the short term, but not on very long timescales -- represents not a temporary lull but probably a permanent cultural decline.
The Ministry of Defence, the Ministry of Justice and Public Security, and the Ministry of Foreign Affairs are the only ministries not affected. The press conference did not name the affected platform or supplier, but they let slip that they're working with Microsoft to address the issue.
This was not what I expected when I started reading, but an interesting article nonetheless!
My first reaction is that the article focuses on a rather specific (if generic-sounding) way of conceptualizing knowledge and skills, which are fuzzy attributes to start with. However, I don't feel that the specific framework and examples (including IAM roles) can be adopted well as-is; there are a lot of assumptions about the team and org structure surrounding this, along with the processes leading up to defining the roles, which I fear can easily lead to cargo-culting.
My takeaway from the article is that it could act as input for teams to better understand their needs and roles, but to not act as a direct blueprint to be implemented :)
My current employer uses OKRs to align the different levels of the org and to help each team structure its work. We use an approach that is mostly owned at the lowest level: company leadership sets long- and medium-term objectives for the company, area/department heads set goals for each 4-month period, and each team gets to work out its own OKRs for the next period. The team objectives do not need to match area objectives 1-to-1; area objectives are mostly guiding.
I started rather recently, and I'm currently in my second OKR period. This has been my first experience with OKRs, and so far it's been on the strong side of positive; we get some guiding principles to work towards (but nothing too concrete or checkbox-final) and it works well. Sure, we won't be able to solve everything we write down, but our team is aligned on its own direction, a course that we, to a large degree, control ourselves, while still staying within the overall goals of the company.
In my experience, software engineering is about 20% creating the solution, 15% tuning and debugging the solution and 65% understanding the problem.
Within this perspective, the work of talking through the problems that your team is working to solve, and contextualizing why you're solving them, is highly valuable and counts towards understanding the problem. The process of defining OKRs, within the correct frame of reference and expectations, can work very well for this.
IMO, using the backlog to define upcoming work can enrich this process as well; it brings context, but should not become the final OKR "product" alone.
I've only ever encountered OKRs on a team level, but I cannot grasp the value they bring as individual goals. The real value in OKRs lies in the process leading up to defining them, not the objectives and results themselves.
A recurring theme in the horror stories I've read regarding corporate strategies is that they tend to be implemented with a goal of rigidity rather than fluidity. And rigid processes that aren't compatible with team autonomy bring with them helplessness and alienation between the goal-setters and those working to deliver.
> The real value in OKRs lies in the process leading up to defining them, not the objectives and results themselves.
I couldn't agree more, and the article is headed in completely the wrong direction.
Companies fail at using OKRs when they are rigid about treating OKRs as a measure of successfulness of the team. In my experience, the true goals almost always become clearer as the quarter progresses, and hitting the OKR objectives you set months ago is a sign that your team is not flexible enough to solve the real problems. Oversimplifying your work into key results also hides the true status. It overemphasizes measurable, but meaningless, metrics over truly checking the work for quality.
I find it really sweet that the author of the article illustrates the perspective of our place in the universe, as her father is known for just the same thing :D
We're a small company (~50 customers) delivering SaaS using Django/Postgres/uWSGI for a niche B2B market where privacy and data confidentiality are paramount.
Currently we deploy one DB + unique uWSGI instances for each customer. This has some drawbacks which have made us look a bit into multi-tenancy as well. Everything is served on dedicated hardware, using a common codebase, and each customer is served on a unique sub-domain.
The two primary drawbacks of running unique instances for each customer are the complexity of deployment and poor utilization of resources.
When a new customer is deployed we need to set up the database, run migrations, set up DNS, deploy the application, deploy the task runner and configure the HTTP vhost. Most of this is painfully manual right now, but we're looking into automating at least parts of the deployment.
In the future, we aim to offer an online solution for signup and onboarding, where (potential) customers can trigger the provisioning of a new instance, even for a limited demo. If we were doing multi-tenancy that would just require a new row in the database + some seed data, which would make the deployment process exceptionally simpler.
The other issue is the utilization of resources. Running a few instances of the application with a big worker pool would be much easier to scale than running 50+ instances with their own isolated worker pool.
We're considering maybe going for a hybrid multi-tenant architecture, where each customer has their own isolated DB, but with a DB router in the application. That would give us a compromise between security (isolated databases - SQL queries don't cross into another customer's data) and utilization (shared workers across customers). But this would add another level of complexity and new challenges for deployment.
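A minimal sketch of what that hybrid approach could look like, using Django's documented `DATABASE_ROUTERS` hook. The thread-local and the `set_current_tenant` helper are made-up names for illustration; in practice some middleware would call the helper with a database alias derived from the request's sub-domain:

```python
import threading

_state = threading.local()

def set_current_tenant(alias):
    """Record which customer's DB alias the current request should use.
    (Hypothetical helper -- would be called from middleware.)"""
    _state.tenant = alias

class TenantRouter:
    """Route all ORM reads/writes to the active customer's database,
    so one shared worker pool serves many isolated databases."""

    def db_for_read(self, model, **hints):
        return getattr(_state, "tenant", None)

    def db_for_write(self, model, **hints):
        return getattr(_state, "tenant", None)

    def allow_relation(self, obj1, obj2, **hints):
        # Never allow relations to cross customer databases.
        return obj1._state.db == obj2._state.db

    def allow_migrate(self, db, app_label, **hints):
        # Every customer DB gets the full schema.
        return True
```

The router class itself is plain Python; Django picks it up via `DATABASE_ROUTERS = ["myapp.routers.TenantRouter"]` in settings, with one entry per customer in `DATABASES`. The hard parts this sketch glosses over are running migrations across all customer databases and making sure the thread-local is always set (and cleared) per request.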
We deploy a unique Kubernetes cluster per client, each with its own application, database, task runner and, yes, DNS. Unlike your situation, most of it is completely automated ( :) ) on Azure and AWS using Terraform, good old bash scripts and a custom Go CLI we maintain.
Each client is billed for their own resource usage and we can have version disparities between clusters.
On the downside, maintenance, upgrades and deployments take more time, but we are thinking about potential solutions for managing a fleet of k8s clusters.
This approach makes a lot of sense for B2B customers, and I would add that it's better to separate everything down to the infrastructure level rather than stopping at the database schema. I would probably do it again in a similar situation!
The BanID SIM-application has to be installed over the air and activated through online banking. It's bound to one physical SIM, so an attacker would need to get into the online banking in the first place to reinstall the SIM app onto the new card. I believe the auth keys are stored on the SIM as part of this solution, and regenerated every time it's reactivated, invalidating the existing SIM.
In Sweden the banks issue the BankID instead; the certificate is tied to the phone/PC it’s downloaded to and is not connected to the phone# at all. You can, however, connect for example ”Swish” to your bank accounts for seamless transactions through your phone#, but it too has to be authenticated with BankID.
I’ve never really heard of a case where the BankID authentication to any Swedish bank has been compromised, the exception being users who are tricked into signing fraudulent actors in themselves.
This is .. remarkably sensible, and a good example of using the secure elements of the SIM card for the intended purpose. Makes me wonder why more places don't do this.
It's possible to use OTP and password as well, which requires a physical OTP generator. But that's actually more cumbersome than using the SIM alternative in my experience.
I believe using the SIM adds layers of security that OTP apps can't compete with, including making it much harder to clone the private key. I assume that accessing the relevant parts of the SIM is way harder and requires completely different attack vectors than attacking the OS.
Since the early 2000s, banks in Europe have issued physical OTP devices. While somewhat inconvenient if you don't have one with you, I still liked that better than the alternatives that are popping up lately:

- SMS-based authentication
- an app that generates a code from a QR-like pattern displayed on your computer screen (neat, but they didn't think of the case where the screen displaying the pattern is the phone itself, or the fact that you're letting their app see whatever else is on your screen)
- paper cards with a finite amount of numbers on them
In fact I'd prefer TOTP as supported by authenticator as a better phone based alternative since it's standard and you can control if and how you want to securely back up the codes rather than have a plethora of different systems.
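For reference, the TOTP scheme those authenticator apps implement (RFC 6238) is small enough to sketch with the Python standard library alone, which also shows why backup is easy: the only secret state is the shared base32 key.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Minimal RFC 6238 TOTP (HMAC-SHA1 flavor), stdlib only."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Counter = number of time steps since the Unix epoch.
    counter = (int(time.time()) if at is None else at) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: secret "12345678901234567890",
# time 59 s, 8 digits -> "94287082"
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8))  # 94287082
```

Because the algorithm is this standard, any app (or your own script) can generate the same codes from the same secret, which is exactly the "control your own backups" property.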
A SIM card contains a crypto module that can perform operations (signing, encrypting, etc) while not allowing the device to read the private key. Some phones include a chip like that too, but many don't.
How does this actually work on something like iOS, which I believe is a lot more restrictive and may not allow access to the SIM except through carrier services (which are in turn susceptible to attacks, including bribes, social engineering, etc.)?
The carrier is involved in transmitting and triggering the challenge as well, and I'm pretty confident that it works on iOS, though I've never tried myself.
The authentication works like this:
1. User fills out a form with enough public and semi-private information to securely identify the user (usually phone number and date of birth or social security number)
2. The user is presented with a random two-word string
3. The same message appears on the user's phone. If the words are the same, the user proceeds to input a PIN. The PIN is only stored on the SIM, and is chosen by the user.
4. A response is sent from the phone and the user gets logged in.
I assume that the challenge response employs asymmetric authentication, storing a private key for the SIM and public key for BankID on the SIM.
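I don't know BankID's actual protocol, but the general shape of an asymmetric challenge-response can be illustrated with a toy Schnorr identification round. All parameters here are tiny demo values for readability; real systems use standardized groups/curves and keep the private key inside the secure element:

```python
import secrets

# Toy Schnorr identification over a small prime-order subgroup.
# DEMO PARAMETERS ONLY -- far too small for real security.
P = 2039   # safe prime, P = 2*Q + 1
Q = 1019   # prime order of the subgroup
G = 4      # generator of the order-Q subgroup

def keygen():
    x = secrets.randbelow(Q - 1) + 1   # private key (never leaves the "SIM")
    y = pow(G, x, P)                   # public key registered with the service
    return x, y

def prover_commit():
    r = secrets.randbelow(Q - 1) + 1
    return r, pow(G, r, P)             # commitment t, sent to the verifier

def prover_respond(r, x, c):
    return (r + c * x) % Q             # response proves knowledge of x

def verify(y, t, c, s):
    # Accept iff g^s == t * y^c (mod P).
    return pow(G, s, P) == (t * pow(y, c, P)) % P

# One round: the service issues a random challenge, the device answers.
x, y = keygen()
r, t = prover_commit()
c = secrets.randbelow(Q)               # service's random challenge
s = prover_respond(r, x, c)
assert verify(y, t, c, s)
```

The point of the structure is that the verifier learns nothing usable about `x` from `(t, c, s)`, and a fresh random challenge each time prevents replay, which matches the "same message appears on the user's phone, then confirm with PIN" flow described above.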
I'm not familiar enough with how the underlying crypto works to guess what kind of attacks it would be susceptible to, but considering that the authentication is used for most public services in Norway (including taxes, welfare, medical records and document signing) as well as some private services (banking, insurance), I'll trust that the proper due diligence has been done.
There is a big focus on using these platforms securely, and BankID recently ran an ad campaign with some TV spots telling people that they should never share their BankID login, not even with their loved ones - https://youtu.be/OFJmX7A--w4