Hacker News

This is entirely unrelated to what we're talking about here. It's not about trust, it's about good practice. If someone needs access to something, they request it, you grant it. Simple. But there should be common-sense controls on data access, for everyone's benefit.

You're talking about a hostile work environment. I'm talking about a secure one.



That's more than just an oversimplification.

Security is inherently hostile, I don't see any other way of putting things. We tolerate a certain amount of hostility in order to reap the benefits that security gives us, and we tolerate a certain amount of vulnerability in order to reap the benefits that laxness gives us.

> If someone needs access to something, they request it, you grant it. Simple.

The saying goes that for every complex problem, like security, there is a simple solution, like that one, and that solution is wrong. The cost of such a process is just too high for most companies. You have to request access, explain why you need access, someone has to review it, then grant it. That's how it worked at the company I was complaining about. Processes that should take minutes took hours, those that should take hours took days.

Security, like so many things, is subject to cost-benefit analysis. Better security systems use the full triad: prevention, detection, and response. From the article, it sounds like these are working as intended. The security team detected the exfiltration and responded with a lawsuit. Trying to rely only on prevention will just lead to paralysis.

I might also be jaded, because when I hear phrases like "good practice" my instinct is that it means "omitting the cost-benefit analysis."


The problem is, neither prevention nor detection seems to have been working here at all. Even though they clearly collected enough data to reconstruct what happened after the fact, they failed to prevent it, lacking even common-sense access protections, and they failed to detect it when it occurred, only finding out because a supplier ratted out another client.

These are not high-cost processes; these are basic, common-sense practices we're talking about, and nobody really has any excuse not to have them in place.

You are continuing to let your experience with a single, hostile work environment cloud your openness to something that... isn't even controversial. If you aren't locking down your files to only those who need them, you aren't equipped to be in business. If this is somehow uncommon in Silicon Valley, it explains why so many "they stole our trade secrets" lawsuits are going on right now.


> You are continuing to let your experience with a single, hostile work environment cloud your openness to something that... isn't even controversial.

I'm flattered that you want to talk about me, but really, I'm not the subject of the discussion here, and it's inappropriate to talk about what's going through my head or to try and psychoanalyze me.

> These are not high cost processes, these are basic, common sense practices we're talking about, that nobody really has any excuse to not have in place.

I've worked at a few different places on this spectrum in my career. Three of them have been fairly open, internally, like the way Google apparently operates. Maybe there are some high-value IP repositories you don't have access to, but you mostly have access to any source code you want to look at without getting access reviewed first. These companies were very open about the risks that this entailed, and openly discussed the fact that leaks were possible. The benefits became rather clear the longer I worked at each place. Whenever a system I worked on interacted with another system, I could follow what the other system was doing and even submit patches to other systems if necessary.

Saying that restrictive security is "common sense" or "not even controversial" is begging the question and argumentum ad populum, respectively. My argument here is that there are benefits to open access to most company IP, and that these benefits are important enough that the decision should be made on a company-by-company basis.

The access controls that would have prevented this particular case from happening would have to be rather draconian indeed. Anthony Levandowski's work was basically the genesis of autonomous vehicles at Google. Google purchased Levandowski's autonomous driving startup, 510 Systems, in 2011. I don't know what kind of access controls you'd need to prevent a startup founder from accessing the technology built on top of his company's IP.


So, the head of my department doesn't have access to... most of what I do. That isn't to say she isn't in charge, or doesn't have every right to see that information. But it isn't her job, and she doesn't need that access, so she doesn't have it. If she needed it, she'd get it. There's no trust issue here; she is completely trustworthy. But her not having the access protects her just as much as it protects the rest of the organization, because her credentials can't be used to compromise data she was never granted access to.

In this case, Waymo has a design server. Anthony clearly didn't work on its contents, because he didn't already have the software to access that server on his computer. Therefore, regardless of the source of the IP (which isn't his, he sold it), he really shouldn't have ever been given access to it. When the server was first spun up, access should've been given to... the people who would be using it, and nobody else.

Of course, if at some point he did need to access those files, he could ask and be granted that access. That doesn't need to be a difficult process (granting access to things takes an IT person a minute or two), but there is now an additional person who knows that user was recently granted access. Even informally, this is a pretty good security measure, because in most cases it should be fairly obvious why someone needs something. And if it's not obvious, and maybe that employee has been, as the article says, talking about leaving the company and replicating the technology elsewhere... suddenly that IT person maybe has a reason to mention the issue up the chain.
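The value of the request-then-grant step described above is cheap auditability: the grant itself takes a minute, but it leaves a durable record of who asked, who approved, and why. A minimal sketch of that idea (hypothetical names and structure, not any real IAM system):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AccessRegistry:
    """Hypothetical least-privilege registry: access is empty by default,
    and every grant is logged alongside the approval."""
    acl: set = field(default_factory=set)
    audit_log: list = field(default_factory=list)

    def grant(self, user: str, resource: str, approver: str, reason: str) -> None:
        # The grant is cheap; the point is the record it leaves behind.
        self.acl.add((user, resource))
        self.audit_log.append({
            "user": user,
            "resource": resource,
            "approver": approver,
            "reason": reason,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def can_access(self, user: str, resource: str) -> bool:
        return (user, resource) in self.acl

registry = AccessRegistry()
# Nobody has access until someone explicitly grants it and says why.
registry.grant("alice", "design-server", approver="it-desk",
               reason="joining lidar design team")
assert registry.can_access("alice", "design-server")
assert not registry.can_access("bob", "design-server")
```

The audit log is what turns an informal "that's odd, why does he need that?" into something the IT person can actually point to when raising the issue up the chain.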


Where I work, a change to what I'm working on can occasionally break things far away. In well-designed systems this doesn't happen too often, but you might be surprised how a seemingly insignificant change can make a system fail somewhere else, because someone made an assumption that is no longer true.

So I can make a change, see that it breaks some test somewhere else (a failed CI test), and peer into the diffs on the opposite side of the code base to decide what to do about it. It has proven quite useful from time to time. I've seen weird problems like hitting pessimal access patterns in software developed halfway around the world.

> Anthony clearly didn't work on its contents, because he didn't already have the software to access that server on his computer.

That doesn't follow. I work on a source code repository every day, but I'd need special software to exfiltrate a copy. Same with the various design documents and things I work with—all stored in a private cloud. If I wanted to exfiltrate it I'd get a script to do it automatically.

Remember, this wasn't just the guy who started the autonomous driver project. He wasn't just the "department head". He's an industrial engineer who founded a Lidar startup. The idea that he should be denied access to Lidar design documents is patently absurd.


Apple implements very restrictive internal secrecy, yet leaks haven't stopped. So what's the point? Live in a police state and still get leaks, or live with freedom and its benefits and get leaks. I'll take the latter.



