
I was a very happy FastMail customer until a hacker asked them to reset my password. After _incorrectly_ answering a handful of questions asked by the FastMail support, the recovery email address was changed and a password reset link sent. From there, the hacker attempted password resets on other services.

Initially, FastMail was dismissive, calling this a simple "mix-up", and didn't disable the hacker's access until 7.5 hours after my report.

To their credit, FastMail gave me a list of the emails accessed and the headers of the messages the hacker sent from my account (and then deleted -- unrecoverable).

Until and unless FastMail addresses the human factor of security, their technical security mindset is of secondary importance.



Now to write a more detailed response. If this winds up out of order later, I first posted: https://hackertimes.com/item?id=15856609

Again, ghouse, I'm really really sorry about what happened to your account. It was wrong and we screwed up. As other comments have already noted, it was during the transition to a new security system which was designed precisely to remove the human factor from decision making.

I'm an Australian, and I'm a great fan of our "100 points of ID" system, which is designed to remove the human factor from identifying people.

https://en.wikipedia.org/wiki/100_point_check

While I wasn't aware of your account's issue at the time (and we'll be having some discussions internally about why not!), FastMail management were all aware that we needed to get the human factor out of decision making about account access, particularly since somebody tried to pull a similar swindle on our domains!

https://blog.fastmail.com/2014/04/10/when-two-factor-authent...

We spent a lot of 2016 and 2017 working on an automated account recovery system which allows recovery of locked accounts via a carefully audited set of automated steps, which includes a 24 hour lockout to allow the owner to notice an attempt on their account.

If this had existed in 2016 then we would have sent you there rather than having a human make a (poor in this case) judgement call!

https://www.fastmail.com/help/account/icantlogin.html


I use Fastmail and I like it but this is extremely disturbing. I appreciate you being relatively candid with us, but it doesn’t change the fact that you allowed a customer’s account to be compromised by the most basic attack out there.

Complexity and vulnerability go hand-in-hand. A product providing a critical service like email should make any form of recovery that doesn't rely on pure secrets provided directly by the user (or provided to an already logged-in user for this specific purpose) strictly opt-in. Failing that, there should at least be an opt-out for such dangerous recovery methods.
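As a sketch of what I mean, imagine a per-account policy where every non-secret recovery channel defaults to off and there's a hard opt-out switch (all names here are hypothetical, not anything FastMail actually implements):

```python
# Hypothetical per-account recovery policy: every non-secret recovery
# channel is opt-in (defaults to False), with a global hard opt-out.
from dataclasses import dataclass

@dataclass
class RecoveryPolicy:
    allow_recovery_email: bool = False    # opt-in
    allow_sms_recovery: bool = False      # opt-in
    allow_support_override: bool = False  # human support may never intervene
    recovery_disabled: bool = False       # hard opt-out: no recovery at all

    def channel_allowed(self, channel: str) -> bool:
        # The hard opt-out wins over everything else.
        if self.recovery_disabled:
            return False
        return getattr(self, f"allow_{channel}", False)
```

The point is that "no" is the default for everything, and the opt-out can't be talked around by a support rep.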

I’m going to watch this thread and your blog for a while, and I hope you can provide some real assurances for security-minded technical folks like me for whom email really is the “keys to the kingdom”. Failing that, I may have to look for another email provider.


https://blog.fastmail.com/2017/12/06/security-account-recove... is probably what you're looking for. I've also made additional responses in this thread.


I don't understand this response. I'm glad you're working to minimize human factors. Can you explain how exactly you're doing that? I asked some specific questions here:

https://hackertimes.com/item?id=15856755


Please consider adding an option to never allow recovery of the account without the password, similar to how Gandi does it.

My email account / domain is my central hub for all my accounts. All of them can be taken over through FastMail (with the exception of my domain and other extremely crucial services) if an attacker happens to obtain access to it. I want the assurance that this attack cannot happen to me.


I'm assuming you have 2FA turned on already.

It sounds from what you're saying like what you (and at least a few other Hacker News posters) want is an even stricter "no seriously, I promise I won't ever screw up" mode.

We try not to have those kinds of modes, because (for example):

https://ianix.com/pub/dnssec-outages.html

It turns out, black and white security models lead to massive losses of availability when people screw them up - and people do. Though I have to confess to being amazed to see Tony Finch amongst the recent "oops". NASA is maybe not so much of a surprise.

Having said that - if there's enough demand, that would be a worthwhile feature. Accounts that aren't being used are cheap for us to run, and that flag would make the security team's job really easy - just say "no, go find your own way in" without having to review anything!


Bron, I think your concerns are justified and understandable. Thanks for entertaining the idea.

I am one of those advocates and would enable such an option if given. That said, I did have an instance where I had to call AWS support because of their own screw-up. I closed the AWS portion of my account but not the Amazon.com shopping portion. Thinking it was safe after the closure, I removed my 2FA device, then found I could no longer disable 2FA on the AWS portion because the account was already closed. Because of their faulty system design, the closed AWS account was still enforcing 2FA on my Amazon.com portion, preventing me from accessing it. In that case, the support agent helped me regain access.

That support agent's ability to fix their faulty system design is both good and a potential liability. I wouldn't want a "I won't ever screw up" mode there.

In the case of email though, when certain conditions are met, it becomes a safer thing to do compared to getting screwed over by support staff.

The pre-conditions are: 1) The user is using custom domains only 2) The user has past emails backed up on his/her own devices

When these conditions are met, the user has complete control of their email destiny. In the case of losing FastMail account access, they can continue to receive email because they control the domain. They also have complete email history because they back it up.

That said, I believe your clearer response elsewhere in this thread is good enough for me personally. I was concerned before because of the vague responses. I think for FastMail, the risk perhaps outweighs the better security for me personally even if I would welcome it.


There is no demand for password-only protection without recovery because it is not available on the mass market. Just like there was zero demand for CryptoKitties a few months ago and now there is significant demand. You can only see demand if there is an option for something and people use or don't use it, or after conducting a poll.


I get your point...

though I'm not sure that cryptokitties are a great way to sell your idea here. They're the kind of tulip/fidget spinner craze that we'd invest a ton of effort into, sell a few for a while, have to support for the next 10 years and still face a noisy backlash from a few annoyed users when we finally retired it. Overall, net loss.

In the case of "no recovery allowed" accounts, the development effort is minimal, but the number of people who would turn it on ("it says higher security and someone on Hacker News told me to, it must be good") and then proceed to lose their account... I bet they'd be noisy when they realised they'd not only lost all their email, they'd lost their payment to us, because they'd have no authority to request a refund.

Oh wait, they would - chargebacks. Notoriously hard to fight with an online service, particularly when you're not providing said service any more. And it's always the full amount charged back too, not just the unused portion of the service.

I'll float the idea of allowing people to push right hard up to the "do not resuscitate" tattoo on their account, but I'm not going to pretend it doesn't come with some risks to us.


As a paying fastmail customer, I would appreciate such an option.

However, you do have to make sure that if I lose access to that account, I should be able to create a new FastMail account and have new traffic to my domain be directed to the new account. i.e. you do need some way to migrate a custom domain to a new account, if the new user can prove ownership and/or control of the domain name.


Yes, that has to be an option anyway, if somebody sells a domain and doesn't release it from FastMail. I have a blog post for this advent series (already written and everything) about why we don't allow split billing on a single domain.


> a 24 hour lockout to allow the owner to notice an attempt on their account.

I've been pretty careful to ensure that I don't lock myself out of my account (multiple U2F keys, strong password saved in password manager with backups)

But if a determined attacker kicks this off just as I'm stepping on a flight from Sydney to London, 24 hours isn't going to be enough.

(I should add also - I'm a mostly happy Fastmail customer)


You can't even get to the 24 hour lockout unless you've successfully passed the security checks.

We add the 24 hour lockout as an additional level of protection for 2fa accounts (even though they've given two factors of recovery by then) or if we can't confirm that you are resetting from a computer which has successfully logged in to that account before.
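To sketch the shape of that delay (an assumed design, not our actual code): a passing reset is queued rather than applied, the owner is notified, and the reset only takes effect after 24 hours unless it was cancelled in the meantime.

```python
# Illustrative sketch of a delayed password reset with owner veto.
# This is an assumed design, not FastMail's real implementation.
LOCKOUT_SECONDS = 24 * 60 * 60

class PendingReset:
    def __init__(self, account: str, now: float):
        self.account = account
        self.effective_at = now + LOCKOUT_SECONDS  # earliest completion time
        self.cancelled = False
        # At this point the real owner would be notified on every channel.

    def cancel(self) -> None:
        """Called when the real owner responds to the notification."""
        self.cancelled = True

    def is_effective(self, now: float) -> bool:
        """The reset only completes after the lockout, if never vetoed."""
        return not self.cancelled and now >= self.effective_at
```

The legitimate owner pays a 24-hour wait; an attacker has to survive 24 hours of the owner being notified.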


It sounds like if I use Fastmail, and I go on vacation (and thus go a day without checking my email), someone can max out the automated system and then get a human being at Fastmail to potentially reset my recovery email. Is this the case?


You can't max out the automated system.

Our procedures have to balance the concerns of very different groups of people.

Some people have explicitly directed us to enforce stringent account security requirements by enabling multi factor authentication. For those people, we assume that they have their own security practices and are diligent in maintaining them. Those people are aware of the risk of losing access to their mail if they lose their credentials.

The other, much larger group of our customers come to us because they want email that has support. Many of these customers forget their passwords and still need to get to their email (which is more common than you might imagine if you are surrounded by a Hacker News demographic!)

Our procedures have to balance between those two sets of needs, and they evolve over time. This incident came up in a period of transition. It should never have happened, and it's a great object lesson to us about how to do better in future transitions.

Having said that, based on this conversation today we are reviewing all our processes around re-establishing access for regular people who haven't requested additional security by enabling second factors. We absolutely can and will do better than we did in 2016.


For an attacker to exploit this, they would have to know that you are going on such a trip. This means that attackers who don't know much about you already are less likely to bother, and it also raises the bar even for focused attackers.

Nothing is foolproof, but many things can be useful.


If a well-resourced attacker was targeting me specifically, it wouldn't be too difficult for them to find out about my short-to-medium term travel plans. A bit of social engineering with the airlines could tell them exactly which flight I'm on.

They could also compromise other people who need to know my plans and don't have the same security practices as me.

I think about this stuff and minimise as best I can, but my account security shouldn't be dependent on it.


If a "well-resourced attacker" was targeting you specifically, you're toast. Period.


Attackers come in all shapes and sizes. If the NSA comes after you, you’re toast, but there’s a lot of high level organized crime out there as well.


I wish more people understood this.


This is like the employer I used to work with who said "well google has been hacked so therefore us storing passwords in plaintext is okay". Maybe I'm toast if the NSA takes an interest, but there are an awful lot of bad actors out there without the level of funding or resources of the NSA. Of course I'll never be "100% secure", but making it as impossible, or at least as difficult as possible, for someone in Russia to socially engineer their way into my email is worth spending time and money on.


What I'm hearing is that the human aspect remains and there is absolutely no prevention of this happening in the future.

The other response contains weasel words like "For instance, _some cases_ take 24 hours before the reset password goes into effect". Why "some cases"? Why isn't it all cases?

I think we as customers deserve complete transparency on this and know what prevention will be in place.


I've had conflicting advice about complete transparency - if we give the entire algorithm, then that helps attackers find the exact surface that will get them in, so we don't publish the full ruleset we use.

Here's an example of some inputs that go into it though: we store a cryptographic token in a cookie which tells us the first time your account successfully authenticated from a computer. If we have a history of you using the same computer over multiple years, that's different than a new computer. But cookies can be cloned, so it's only a signal, not proof in itself.

If it's from the same IP address as multiple successful logins in the past, that's a signal.
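To illustrate how weak signals like these might combine (the names, weights, and threshold below are entirely made up, not our actual ruleset):

```python
# Illustrative only: combine weak identity signals into a recovery score.
# Signal names and weights are invented for the sketch; no single signal
# is proof, but together they can cross a decision threshold.
SIGNAL_WEIGHTS = {
    "device_token_age_years": 15,    # per year the browser cookie has been seen
    "known_ip": 25,                  # request from an IP with past logins
    "recovery_email_confirmed": 30,  # clicked a link sent to the recovery address
    "payment_method_verified": 20,   # matches the payment details on file
}

def recovery_score(signals: dict) -> int:
    score = 0
    for name, value in signals.items():
        weight = SIGNAL_WEIGHTS.get(name, 0)
        # Boolean signals contribute their full weight; numeric ones scale.
        score += weight * (int(value) if isinstance(value, bool) else value)
    return score

def decide(signals: dict, threshold: int = 60) -> str:
    """Above threshold: allow reset after a 24h lockout; below: deny."""
    if recovery_score(signals) >= threshold:
        return "reset-after-24h-lockout"
    return "denied"
```

The key property: cloning one signal (say, a cookie) isn't enough on its own to reach the threshold.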

We're not the only site that uses methods like this to help identify people when they've lost their password. People make mistakes. Taking a hard line "you lose your password, you lose your entire email account with all its history, and you don't get your money back either" might sound attractive to a certain demographic. They are not the bulk of our userbase. Even locking people out for 24 hours is a pretty big imposition that you want to avoid if you're really confident (algorithmically) that it's the same person.

If people in the "security is more important than easy recovery" demographic haven't turned on 2FA yet, then they certainly haven't signaled that they want things locked down in case of doubt. Even of those who HAVE turned on 2FA, you'd be surprised at how many lose one or both of their factors.

It's easy to say "I won't mess up", but people do. Which is why our post today says in bold "If that happens, you will lose access to your account permanently."


Thank you. I appreciate your response and understand that you've made improvements since this incident in 2016.


The blog post was in 2014. This security bypass happened in 2016.

I think what we're witnessing here is that despite best intentions and past experience, humans are going to be humans. I actually felt good after reading that blog post in 2014 thinking that you guys are going to be better than most companies here.

Nope.

But I think a lesson can be learned here. The lesson is simply that humans are the weakest link. As much as you might try to add process and try to minimize it, the best is having zero human capability at all. So when tptacek asks _who_ has the ability to change things about an account, we really do want to know. Because those people are the weakest links. (I don't mean naming names, but understanding who in general has those powers.)

I mentioned elsewhere. I own my domain. I backup my emails. It's way more likely for a FastMail human loophole to screw me over than for me to need human assistance on login (which is never).


https://blog.fastmail.com/2017/12/06/security-account-recove... - it's currently 3 people who have that ability. It had to be more before we had the automated tooling; those three people couldn't have handled 3 figures per day (I'm not kidding) of regular password losses by regular users.


'I'm an Australian, and I'm a great fan of our "100 points of ID" system, which is designed to remove the human factor from identifying people.'

According to the page you linked, a birth certificate and bank statement (or even a 'Document issued by <SNIP> or registered corporations.') would be enough. So if I get your birth certificate and have an Australian corporation, I can issue a letter saying you're a customer for a year. So I have 100 points and can pretend to be you?

That doesn't seem secure at all. The birth certificate has no photo (or, if it does, it won't be useful except to determine ethnicity), and the document from a registered corporation can be trivially faked.


We don't use the 100 points of ID system of course, because we're an online service. The 100 points of ID is something that's used in person to decide whether you can open a new bank account using that name.

The concept behind the 100 points of ID is that there's a fixed standard and it's not a per-time decision made by a human, it's a consistent set of rules applied without fear or favour.
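The appeal is that the check is purely mechanical. A rough sketch (document names and point values are approximate, taken loosely from the scheme, not an authoritative tally):

```python
# A sketch of the "100 points" idea: a fixed, mechanical tally replaces
# per-case human judgement. Point values are approximate, for illustration.
DOCUMENT_POINTS = {
    "birth_certificate": 70,  # primary document
    "passport": 70,           # primary document
    "drivers_licence": 40,    # secondary, with photo and name
    "bank_statement": 25,     # tertiary
    "utility_bill": 25,       # tertiary
}

def passes_100_point_check(documents: list) -> bool:
    """Sum the points for the presented documents against a fixed bar."""
    return sum(DOCUMENT_POINTS.get(d, 0) for d in documents) >= 100
```

Whether the particular documents and weights are well chosen is a separate question; the design goal is that no clerk gets to decide on the fly what counts.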


"We don't use the 100 points of ID system of course, because we're an online service."

Right, but you said you're a 'great fan' of it. My reading of the wikipedia description that you linked suggests that the system is wholly inadequate (mainly as you can satisfy the 100 points without photo ID).

So I'd like to know: are you really a fan of that system (the particulars of its rules), or just a fan of the idea behind it (that there exists a consistent set of rules)?


Tell me how you bootstrap photo ID in your country, and I'll tell you whether I think photo ID means anything.

In our case, we don't care at all what you look like, just that you're the same person we were talking to earlier - and ideally that you're the owner of the method being used to pay, though that's not always true or necessary. So the photo is meaningless to us.

Besides: who is looking at the photo and confirming that it's the same person as the one in front of them? Yep, a human. The whole point of this discussion is stopping the human from making human-factor judgement calls.


I think we're talking past each other. I'm saying the Australian 100 points system isn't adequate for bank account opening, because it doesn't have photo ID, so I'm not sure why you admire it.

You're saying that you don't need photo ID for your use case. I agree with that for your purpose, but it's not relevant to my criticism of the 100 points system for its purpose.


I'm wondering how exactly you GET a photo ID in the first place. You need to identify yourself to whoever is taking that photo.

I lived in Norway for a couple of years. There I just opened a bank account online, giving them my person number - and they posted something to my address as registered with the government. But in Australia our privacy advocates killed the "Australia Card" idea, so instead we have a tax file number with all the disadvantages of a national ID number and none of the advantages...

Anyway, back to the main point. To be totally frank with you, I think photo IDs are largely bullshit security theatre. You're asking a human factor[tm] to look at a fuzzy photo taken 10 years ago and confirm that it looks similar enough to the person in front of them.


"I'm wondering how exactly you GET a photo ID in the first place."

In the UK, for a passport, there's a chain of trust. You need a professional or some other trusted community member (vicar, doctor, lawyer etc.) to sign the back of the photo saying it's you, and to provide their contact info for further verification.

Not perfect, but I don't think many people are skilled enough to successfully procure a passport where the photo isn't of the named person.

"To be totally frank with you I think photo IDs are largely bullshit security theatre."

They're not 100% reliable, sure, but they're extremely useful in establishing whether the person in front of you matches a particular identity.

One excellent use case for photo ID: consumer lending. If you lend someone money, you need to establish that the person you give the money to is actually agreeing to pay you back, and to pay you interest.


It's a consistent, fixed standard that you hope you have trained your employees to adhere to. Fingers crossed. /s


It's not really clear to me. As a paying fastmail customer, can someone still contact your support and get the recovery email address changed?


This is death. Your email provider absolutely cannot under any circumstances have this vulnerability. Wow.

Just the idea that there's a human in the process making subjective decisions about security questions and answers that can, on their own recognizance, change a recovery email address. Forget the immediate mistake that one rep made, and go down a couple levels deeper into the company policy design mistakes at play here.

Thank you for relating this.


I've been a fastmail customer for a bit over a year and I agree.

The offerings over at https://protonmail.com/signup have been nagging me to give it a try.

I now have a reason to try and switch. I'll lose functionality found in fastmail but gain a lot in security.


Unless you have a set of objectives that are very different from what I consider "as secure as e-mail gets", please consider GSuite and not Protonmail. (I don't speak for 'tptacek, but I'm pretty sure he'd agree.)

As a corollary: if you really care, use Signal for stuff you can't say over e-mail. Whatsapp's fine too. But they solve a very different security problem than the one you need e-mail to solve, which is mostly "don't leak my emails" and also "don't reset my password for attackers who ask nicely".


Just gonna drive by mention https://landing.google.com/advancedprotection/, which is a physical-2fa-security-key-only version of gmail. To my knowledge it also disallows mail forwarding, and the account recovery procedure in the event of losing both second factors is intended to be a long process that involves proof of identity and multiple attempts to notify the account owner.

(I work on gmail, but I'm not intimately familiar with this option, other than knowing that it exists and is intended for high value targets like celebrities and politicians).


Yep. I don’t recommend it by default (most people I work with use GSuite in a work context, so recovery is normally administrator-mediated), but the fact that this exists is pretty awesome.


The pricing is very unclear, even three clicks in from that link - only that it is some sort of add-on service.


I don't use it, but from what I can see, it appears to be free, apart from the keys themselves needing to be purchased.


I know that GSuite would like to differentiate its enterprise products by features, but allowing basic/business plans to force U2F would be great. Since it's also available in GCP's Cloud Identity product (which is free), I hope this is coming down the road.


That also means you cannot use this account as your Android phone account without an NFC U2F token, correct?


That looks amazing. Does any other company provide anything like this?


>"Whatsapp's fine too. But they solve a very different security problem than the one you need e-mail to solve,"

Can you elaborate on this? What security problem are they trying to solve?


Sure! Signal and WhatsApp are good at having private conversations. Email is very tough to add private conversation capability to, for a variety of reasons. What you do need your mail provider (and by extension your DNS provider) to do is to not give up access to an attacker who asks nicely, because for most services, email access is account takeover.

This makes discussions about email security confusing, because most security professionals I know are thinking about a very different threat model (pop all of your services) than what a lot of people think about (confidentiality). Google is pretty good at not letting random people auth to GSuite as you. (Still turn off SMS recovery, though.)

Does that answer your question?


I get the impression that when non-security people talk about "security" these days it's almost always in the context of preventing government surveillance.

So even though Google has a great track record of keeping hackers from taking over your accounts, the news stories about them cooperating with governments makes them seem less "secure" to some people.

What's weird is when it leads to a fallacy where people trust services that are less verified and tested in terms of security just because there isn't the association with government cooperation.


This is irrational. It might be a complicated question if the foreign-jurisdiction alternatives were more secure, rather than drastically less secure. But since that's not the case, switching from Google Mail actually gets you the worst of both worlds: a mail service that is materially less secure, operating in a jurisdiction where there are literally no rules preventing USG-level adversaries from exploiting it.


Agreed. Don't throw the baby out with the bathwater.


Yes, thanks for the clarification on that point. Cheers.


You have been a customer for over a year but based on someone else's anecdote you feel the need to post a signup link for a competitor?


> This is death.

Even allowing for some hyperbole, I think this is an overreaction. I agree that they made a mistake, but we only know of one user it affected. They didn't leak an entire database of user data or expose a vulnerability for which the attack can be automated.

You've commented many times that email is inherently insecure and that (IIRC the conclusion precisely) there is little point in focusing on securing it. Instead, use a secure messaging system such as Signal. Email just isn't going to be secure. Fastmail seems to put more effort into their security than most mail providers. For example, the proxied images [0] sound fantastic and they address a threat that affects almost every email user daily.

[0] https://blog.fastmail.com/2014/09/16/better-security-and-pri...


The inherent insecurity of e-mail doesn't change the fact that popping someone's email account means popping most of their services. Therefore it makes sense to hold e-mail providers to a higher standard than a median company.

Furthermore, while we only know 1 user affected, in the rest of the thread, Fastmail has been cagey at best about answering what they feel the process is now, let alone what it was back when this incident occurred.


You're aware this is how the vast majority of legacy email providers operated, right? (And sadly a few still do.)

E.g. in this case likely a 2 point auth system (security question and e.g. payment details (last four of latest payment meth/etc))

Seems you're shocked that a lower tier support agent can auth this kind of request when the reality for most email hosts is that they can.

They (likely a new employee) got socialed.

Yes, they should have systems in place to prevent this from being possible in the first place; no, I do not find your incredulity genuine, albeit rational.


FastMail isn't some random legacy email provider. It's a premium one that bills itself as secure. It's not some free mailbox you got with your budget domain registrar. Hence, it's reasonable to hold them to a higher standard rather than fatalistically observing that the median email provider sucks.


Good morning. I'm going to be here to answer specific questions, and I owe you a personal response to this as well, which I'm about to start working on!

There is no doubt that in this specific case our human factor screwed up, and I'm really sorry about that.

First I'm going to post the standard response that our team has written for any new support tickets that come in about this today, then write my own personal apology and response here.

---

Thanks for getting in touch with us about the report on Hacker News about our security procedures.

As we say in our recent post about security at https://blog.fastmail.com/2017/12/05/the-fastmail-security-m..., security is a process, not a checkbox. We do our best to be continually improving and upgrading our security procedures, and offering our security-minded customers the most robust, industry-standard options possible.

However, we have been less diligent about forcing older accounts to upgrade their security settings. With a range of possible security option states, customer support is occasionally placed in a position to make a judgement call. As the post indicates, the incident in question happened immediately after a major round of security changes. There’s no way around it; someone made an exception they shouldn't have.

Social engineering is always one of our biggest concerns. As any number of well-known break-ins have demonstrated, the "best" security hack is often to sidestep it. Since that incident, we have taken substantially more aggressive steps to close off avenues of attack and human review. We are constantly trying to narrow the number of accounts that even can go to a human for review, and for those that must go to a human to provide as much notice as possible to the account owner before possibly allowing the attacker to have access. For instance, some cases take 24 hours before the reset password goes into effect. If you are a legitimate account owner, this has the often frustrating side effect of locking you out of your account for 24 hours. But, if you have been attacked, this gives you the opportunity to keep the attacker out.

Thank you for sharing your concern with us, and I hope we’ve addressed yours.


Exactly which employees in your organization have the ability to alter recovery email settings?

How many of those employees are there?

In what fashion do you audit and track the activities of those employees?

What training are these employees given to avoid social engineering? What firm provides the courseware?

What's the escalation process for complicated, non-no-brainer reset situations? If a support person isn't absolutely sure whether they should reset something, how do they get a second opinion?

Are the support people who are entitled and able to make these changes incentivized to close tickets as quickly as possible?

Do you monitor "out-of-process" changes to recovery email and password settings, so that you can see trends over time and by particular staff members?

Has any third party security firm assessed your service recently specifically for this attack vector, for instance by conducting social engineering testing against your support staff? What's the firm?

How are you MINIMIZING, rather than just improving, the "human factors" involved in assessing whether accounts can be altered based on anonymous incoming callers and requesters?

This is a HUGE, TERRIFYING vulnerability. Email providers are the single most important security service people use; if your email is compromised, many (most!) of your other services are compromised as well.


Replying directly because I'm not sure if you'll get notified otherwise, but here's a longer response to this: https://hackertimes.com/item?id=15859024


Wow, that's a lot of questions, and I can't answer all of them without creating security risks!

Our absolute focus is on minimizing the human factors.

In the past year and a bit since that incident, we have improved our escalation policies and support training, as well as let some support staff go.

But more importantly, we now have an automated account recovery system which can be used to verify ownership of the account using a number of different factors (not all of which I'd like to talk about in public - again, if an attacker knows the full algorithm it helps them game it).


> Wow, that's a lot of questions, and I can't answer all of them without creating security risks

Questions like these are not unreasonable for a customer to ask a service provider with respect to identity management and protection of that customer’s proprietary and confidential information.

With respect to the first question “Exactly which employees have the ability to alter recovery email settings.” Not being able to have a prepared answer for this question suggests that you don’t have a formal policy or standard procedure around role based capabilities in your operation.

The second question is an extension of the first.

“In what fashion do you audit and track the activities of these employees?” Not being able to answer that question suggests that you don’t have an auditing process around employee actions with respect to account changes.

“What training are these employees given to avoid social engineering?” Not being able to answer this question suggests that you don’t have such training in place.

“What’s the escalation process for non-no-brainer reset situations?” If your processes are written down and staff are trained in them, a very simple description here would not create a security risk of any kind. Not doing so suggests that the process is not formally specified or is quite ad hoc.

“Are the support people who are enabled and entitled to close the tickets incentivized to close the tickets as soon as possible?” It seems that your internal security posture would make that clear, and it is unclear how stating that correctness is more important than speed in user account modification poses a security risk.

I’ll pause here and summarize. Answering any of these questions is not going to pose a security risk unless such answers expose to your users reasonable measures that you are not taking or haven’t thought of.


Which employees? At the time of this compromise, that list was all support staff as well as the technical staff in Melbourne. It is a specific role that's granted to specific people, to answer your question about having a procedure or policy.

Today, that role is granted to a much more limited set of senior security staff (currently 3 people). Regular support staff can not alter security-sensitive details about accounts. If your account is owned by someone else (e.g. family or business, or part of a resold package) then they can still alter recovery options, as they own the account.

In 2016 before we had automated account recovery, lost password was in the top 3 categories of ticket every single week! Every member of the support team dealt with multiple account-loss tickets per day, both forgotten password or stolen account.

Stolen account losses are way down now that we have app passwords; we often only have to block a single app password and notify the user rather than locking the entire account. Forgotten passwords have not reduced, but most people are able to recover using the automated tooling.

---

In what fashion do we audit and track? A few ways - we log every API call at the lowest level. We log each override when the support person accesses user accounts against the ticket that they come in through, so we can see why they were accessing that user.

We could always do with better tooling to introspect logs, but the data is all captured and can be followed through after the fact. Support staff have no way to wipe their audit trail.
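To illustrate the kind of audit trail described above, here's a minimal sketch of support-action logging tied to ticket IDs. All names and the record schema are illustrative assumptions, not FastMail's actual system:

```python
import json
import time


def log_support_action(log_file, staff_id, ticket_id, account, action):
    """Append an audit record tying a support override to the ticket
    it came in through. The file is append-only, so support staff
    cannot wipe their own trail."""
    record = {
        "ts": time.time(),
        "staff": staff_id,
        "ticket": ticket_id,
        "account": account,
        "action": action,
    }
    log_file.write(json.dumps(record) + "\n")


def actions_for_staff(lines, staff_id):
    """Follow one staffer's activity through the log after the fact."""
    return [json.loads(l) for l in lines if json.loads(l)["staff"] == staff_id]
```

With every override logged against a ticket, "why was this account accessed?" becomes a log query rather than a question for the staffer's memory.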

---

Training - in 2016 we didn't have much formal training for our support staff - they learned on the job from each other. We are very aware that this was a failing at that time.

We have more training now. We did a lot of work at unifying our support teams across the FastMail and Pobox/Listbox family throughout 2017, and that led to better training and induction materials, as well as better internal reference material for support staff to use.

Early in the induction process for all new support staff is a description of how social engineering works and a warning that urgency is often used in social engineering attempts, so when in doubt, slow down and get a second opinion (which leads to complaints about slow support, but that's the tradeoff here.)

---

Escalation process - as mentioned earlier, if you had 2fa enabled then it has always gone straight to the senior security team, which is based in Melbourne and consists of our most experienced and trusted people. Neil (author of the blog post this HN refers to) is of course one of these people.

With lost passwords no longer a highly common support request, all support tickets requesting manual account recovery are escalated to our senior team for review.

---

Support people have no incentive to close tickets quickly. Absolutely. That is a bad metric, and it's not a metric we have ever used.

Time to first response and time to followup responses are tracked, but there's no incentive to close tickets.

This answer is a no brainer and I should have answered it in the first response - sorry. I was still rushing through initial responses at that time, and there were too many points in that post to think about them all at once and still respond quickly. The real-time nature of this hacker-news medium encourages fast answers above complete answers. I hope this longer response helps clear up remaining questions, at least to those who see it!

---

There's another blog post coming soon about the account recovery system in particular, which addresses exactly how we're minimising human involvement in recovery decisions while not excessively punishing real human frailty amongst our customers.


hi Bron, thank you for this response. Much clearer and I think this is what everyone wanted to see.

Can I just clarify some things for peace of mind?

1) When you say regular support staff cannot alter security-sensitive details. How is that done? Do they only perform changes through a limited set of UI?

2) When you say if 2fa is enabled it goes to senior security team, is that an automated process such that support staff don't see that ticket at all? The support ticket interface doesn't seem to have anything that helps to automatically route password reset requests.

3) Was the security incident involving ghouse through support tickets?

4) Do the senior security team have direct data access? i.e. do they also change things through a UI or do they have capability to directly change data?

Thanks


1) yes, support staff have a limited UI. There is always a balance between limiting support access and having them able to provide meaningful help. I have the same level of access as a support staffer, and I still get tagged in to work on some issues (particularly calendaring issues; a lot of people have died on the hill of calendaring and I'm currently still our primary expert on some parts of it), and often I need to view people's calendars and the emails related to scheduling in order to debug their issue. The nature of the job is that many issues can only be understood and resolved "in situ". Have I mentioned yet how horrible calendaring is? Thanks for reminding me :(

The UI given to support staff doesn't have the ability to update security credentials for users because they no longer have the "can update security credentials" role like they did in 2016. I don't even have it any more.

2) front line support still see all the tickets first, and they route them as appropriate. Sure, this takes longer; we don't have 24-hour coverage of senior security staff (not entirely true: we have 24-hour coverage for emergencies, but somebody forgetting their password is not an emergency in this context).

3) the security incident involving ghouse was entirely via support tickets. His description was accurate: front line support sent the pro-forma "we need a bunch of these details" request, got back some pretty half-arsed details that didn't meet the bar of what was supposed to be provided, and helpfully made the change despite our policy. The helpfulness of humans is a major bug with any security system, and this particular human tried to be too helpful.

4) The senior security team also use a UI. Operationally, they all have the ability to write code that directly changes things under the hood, but that code also has an audit trail and goes through review. It's always quicker and easier to use the UI, so that's what they do.

The UI is not just available to those three people, it's also available to anybody who has a multi-user account and needs to administer their own users. It's still a standard part of our system, just restricted in who can use it at an "any arbitrary Fastmail customer" level.


I'm a FastMail customer. Your response is troubling to me in that it didn't answer most of tptacek's questions. It's troubling enough for me to start looking at other email providers. :-(

I would like FM to provide something akin to Google's advanced protection program. Those of us who are careful not to lose our login credentials should not have to suffer a weak recovery process for the convenience of those who do. I personally would rather opt my account into a stronger recovery process even if I can't access my account for several days or a week or more.


I have now responded in more detail - at the time I was busy trying to spread the love around, and also support my team as they dealt with the support requests and digesting the response on here.

Check out the longer response here:

https://hackertimes.com/item?id=15859024


Thank you for the additional clarity. I apologize for not being more patient in allowing you to reply.


Which of these questions can you not answer without creating security risks? I didn't ask you anything about your automated system.

Is it possible under any set of circumstances for your human employees to alter accounts? If the automated system fails, are accountholders out of luck?


If the automated system fails and you have 2fa, then it gets escalated to the two most senior members of the security team.

In some cases we haven't had sufficient information on the account to ever verify that account's owner, and they never got their account back. Some users refuse to give us enough information to allow us to later positively identify them - so yes, those people will be out of luck if they lose their credentials.


At this point, you should write a blog post to address the myriad of concerns popping up in this thread.


We're working on that! It may not be finished today.


That's fair.

I respect that, as CEO, you're genuinely responding to your customers in this thread instead of fobbing it off to someone else.

Hopefully, this can all be explained, resolved, and /or remedied in good time.


I agree with hitekker, I'm feeling pretty nervous about being a FastMail customer right now and will start looking for a more secure alternative now. The main reason I moved to FastMail is because I stopped trusting Google to keep my mail secure.


Google is the gold standard for email account service. Nobody in the industry does a better job at that one thing than Google does.


I've just switched away from Chrome (because I'd like to support Firefox) and am a FastMail customer.

But I've started to think about moving back to Chrome for "high security mail".

My private mail is pretty bland and uninteresting, so I don't care too much about not using GMail there, but for my Apple account, Google account, Microsoft account etc. it might be a good idea to compartmentalize those "high value" things from everyday mail and go to GMail with the Advanced Protection Program (so no access from smartphone or iPad, I guess).

And looking at their web site I've learned that GSuite Business is affordable and allows adding domains hosted elsewhere. Good.

What do people think about this?

But then the next step: what about losing my domain? My registrar is a reputable German domain hoster, but certainly no Google. On the other hand, Google doesn't register domains, but has "domain partners" like "domaindiscount24" (that I've never heard of before), so I guess not much to win there.


Could you share some info/links on what makes it the Gold Standard?


They have one of the largest information security teams in the world, that team includes what is probably the best corporate vulnerability research team in the world. They're one of a small number of companies that is actively defining modern TLS and thus modern transport encryption; their operations and security teams are almost certainly the world's most sophisticated users of TLS. They ship the most secure browser in the world (if it's not, it's a dead-even tie with Edge --- but, since Google outclasses every other major vendor in vulnerability research, I doubt it's really a tie) and thus have a far better understanding of browser security and the interaction between serverside applications and clientside JS/HTTP applications than any other company. They spend more per year on external vulnerability assessment than most startups do... for everything. They're a constant state-level adversary target and have, over the last decade, evolved a secops and monitoring team to match those adversaries.

How many engineering employees does Fastmail even have? How much better would each of them have to be than one of the best-paying security teams in the entire industry for them to match up?

I could go on, but to me, you don't really even have to think hard about this.


So it's because Google has deep/best skills in security? It automatically applies and makes all their products more secure than everyone else's, even if their design is weakened as a result of their business model? e.g. Does Google's 1st class security team + unencrypted emails + tracking makes it more secure than a company like Proton Mail that's focused on providing Secure mail?

Does that make the claims that Protonmail is more secure than Gmail false? - https://protonmail.com/blog/protonmail-vs-gmail-security/


Sorry, I missed an important sentence.

${All the things I said previously}. And, Google Mail is one of their flagship products.

Most of what is on that ProtonMail page is nonsensical. The claim that is relevant to the discussion here --- that ProtonMail has a "smaller attack surface" and is thus structurally more secure than Google Mail --- assumes significant facts not in evidence.

See downthread for my response to the claim that using a mail services outside the US somehow insulates you from NSA snooping.


They have every incentive to ensure the highest security possible. Their entire business model and most of their revenue is predicated on consumers and businesses moving not just some, but all of their data, straight over to Google's custody and control. Indeed, it damn well had better be secure.

But I think they're compromised by those same business models. Google wants to provide intelligence, and probably more important to them, marketing data. This requires that the consumer is an open book to them, and their business decisions incorporate that. Up until recently, they were actively scanning email for marketing insights. In addition, Google's operating complexity, both business and technical, increases the opportunity for failure. And their other business objectives compromise their security work. That's glaringly apparent for their Android platform. There's more surface. And in a Google world, the email account grants direct access to everything — location data, purchasing history, passwords, documents... everything.

For another dedicated email provider, what they have to protect is also simpler. There are fewer moving parts. There's less to protect, which means that there don't need to be as many engineers. That means a careful and well thought out email provider /can/ be as secure, by carefully limiting their exposure, doing one thing, and doing it well.

There's something to be said for careful application of open standards and open source software, a smaller and more responsive team, and not building a massive single point of failure. I am a current Fastmail customer, and hope to remain, depending on the outcome of this review.


s/They/Apple/g

Err... This could have been said about Apple Inc a few weeks ago, and then they go and have the root password issue.


Can you think of some info/links that would suggest the opposite?


I'm not looking to discredit the claim, I'm genuinely curious to learn about what they've done to earn the Gold Standard from @tptacek

Google were previously reading our emails for ad purposes and some of their employees are still able to read our emails; their privacy policy also indicates they will hand over our emails if requested by law enforcement, which suggests it's weaker than protonmail.com's end-to-end encryption:

> All emails are secured automatically with end-to-end encryption. This means even we cannot decrypt and read your emails. As a result, your encrypted emails cannot be shared with third parties.

If this is the case, how is Google being held as the Gold Standard?


Elsewhere in the thread I mentioned advanced protection[0]. Gmail/Google is also the only company to my knowledge that gives you a warning like this one[1], and it was certainly the first to do so.

A lot of this comes down to your threat model.

Unless your threat model is "The NSA gives my hosting provider a court order" or "an employee of my hosting provider goes rogue", it's pretty clear that GMail is categorically the best option. And in those two cases, it's not clear that there are significantly better options.

[0]: https://landing.google.com/advancedprotection/

[1]: https://techcrunch.com/2017/03/24/what-to-do-about-those-gov...


I'm genuinely curious to learn

I get and am not questioning that. It's just that your curiosity doesn't seem to have motivated you to do a first pass of, I don't want to call it 'research', but just basic poking around on the topic. You want links and info from some dude on the internet because what he says contradicts stuff you know from... something a vendor said about their product.

It's a totally sensible question but it's not some particularly arcane mystery to dig into. In tptacek's case, in a jiffy, you can bring up the 60-odd comments of his that mention 'Gmail' and get a reasonable idea of what he thinks of it and why. And if you think he's got it wrong, you can say, hey, tptacek, I think you're full of poop when you said [...]. And then maybe you can hash it out and one or both of you will learn something. But 'Citation, please', especially on trivially searchable topics mostly says 'I'm kind of curious, but I don't really care'. The person you're asking probably isn't going to care either.


I was hoping there was a quick resource of someone having done a deep analysis dive into advanced techniques Gmail does that makes it more secure than everyone else but judging by tptacek's response it sounds like it's because they have the best security team and by extension all products they make are naturally more secure.

If all we have is the same claim being repeated, and the only way to learn what makes Gmail the most secure email provider is to trawl through 1000's of comments, then Gmail is always going to be perceived as more secure even when it may not be, because relatively no-one is going to trawl through 1000's of comments to make an informed assessment otherwise.


trawl through 1000's of comments. It means Gmail is always going to perceived as more secure even when they may not be, because relatively no-one is going to trawl through 1000's of comments to make an informed assessment otherwise.

60ish is not 1000s. 69ish if you add the 9 about Protonmail. The guy posts on HN so much you can fairly safely go to https://hn.algolia.com and type author:tptacek [topic of interest] and find out what he thinks about it. If there was, inexplicably, a comic universe about HN mutants, he'd be The Citation.


If there was, inexplicably, a comic universe about HN mutants, he'd be The Citation.

This is getting weird. But I'll allow it.


I think you're conflating several different things here. Their vulnerability to hackers is not at all related to the extent to which they are willing to cooperate with the US Government or to exactly how their GMail ads work. You have to define exactly what your threat model is, and no service can really be the best at all of them. It's perfectly consistent with the worst interpretation of your other assertions that Google is still the gold standard for making sure that no hacker can ever compromise your GMail account, reset your passwords to your services, and hold your data and accounts on other services hostage.


This can be enough for me to consider leaving depending on how it's fixed.

This response says absolutely nothing about how the vulnerability is prevented in the future. It's just a bunch of vague promises and mumbo jumbo. What specific procedures are in place to prevent it? At a minimum, I expect to see something specific like when you guys almost lost your domain because of Gandi [1].

And even then, can I have an option to select absolutely no human intervention possible? Having any human intervention is simply not acceptable.

I already have multiple ways of recovering my account, and I never, ever want human assistance on this. I use a password manager, and I will never, ever need FastMail assistance on login.

[1]: https://blog.fastmail.com/2014/04/10/when-two-factor-authent...


At a minimum: with the other big mail providers, you normally wouldn't get an explanation on a public forum at all.

Especially because technical folks like us don't always know how to communicate well, words can be misinterpreted, etc. It's actually not a good strategy to respond to such concerns in public.

Also in my opinion, people that make threats of leaving in public unless certain demands are met usually have their mind set already and can’t be swayed.

As for never needing human assistance, never say never — if relying solely on your password manager, I hope you have a digital last will for your spouse or children.


Oh, and also, I use my own domain on top of having a backup of my emails.

What this means is that if all recovery options are not working and I'm actually locked out, I can fix it.

I own the domain, I own my past emails, I can still get emails. Maybe I'll lose some emails for a day, but that's it. If I want to prove my identity to FastMail, I can also prove that I own the domain.

But the point is, getting locked out of something as important as email is not gonna happen due to my screw-up. It's more likely for a support loophole to screw me over.


I understand what you're saying, but I think perhaps we can agree that the current response is insufficient?

Have you taken a look at the link I supplied above where FastMail wrote about how 2fa protection could be bypassed at Gandi? They were very specific and clear about the recommendations being implemented.

Now, compare that to their current response. I think definitely the difference can be seen.

This is serious stuff.

And, I'm absolutely serious about never needing human assistance. I already have a mechanism set up for my family to retrieve my digital assets should I disappear tomorrow. I worked on this together with my wife. I know most people have not thought about this and you're right in your skepticism, but I'm serious.


For what is worth, I’d also like a toggle in the settings to never involve human assistance.

But we technical folks are very odd and I’m assuming they have users that really need human assistance.

I do agree the response is insufficient, my point is that such discussions in public are dangerous for the company and it’s not the norm for company reps to give detailed explanations without prior preparations.


> And even then, can I have an option to select absolutely no human intervention possible?

> I already have multiple ways of recovering my account, and I never, ever want human assistance on this.

Yep, I'll second this feature request. Put as many disclaimers and confirmation mechanisms on it as you need to in order to keep people from accidentally enabling it. I will happily assume responsibility for it.


Also, could you please add ability to get a phone call (instead of text message) to receive recovery options?

That way I can set up my grandparents' phone number or something obscure as yet another recovery option.

And then, please let me lock down any possibility of your support staff screwing up.


That's a possibility. We've talked about using something like Twilio to read the message out. We haven't had feedback that this is in high demand.

It also brings issues of its own. It's hard to block a caller's number on a home phone, and it could be used to troll people in the middle of the night. We need to consider those risks too - it's not a simple and obvious win.


Hi Bron. I'm a customer of both Fastmail and GSuite, and I have enjoyed your service for a few years now. I still use Fastmail for some things, like sieve, and very much will continue paying just for the ongoing development of open-standard email like JMAP. But there are definitely a few things that I can't shake when I learned about them that very much pertains to the security mindset that prevents me from moving my primary emails onto Fastmail.

Security paradigms have been steadily moving beyond a hard-boundary-soft-center, to a defense-in-depth, distrust-your-own-services model. I was alarmed to learn last year, for example, that you use OpenVPN with fixed symmetric keys (--secret) rather than TLS with any forward secrecy (--tls-auth) for VPN between your NYI and AMS datacenters. https://blog.fastmail.com/2016/12/19/secure-datacentre-inter...

Presumably, running datalinks like this means you would have to have perfect trust in your long term key management and rotation. Is that something you plan on improving in the future?
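For readers unfamiliar with the distinction being drawn here, this is roughly the difference in OpenVPN configuration terms. The file names and hosts below are purely illustrative; the point is that static-key mode uses one long-lived shared key (so a captured key decrypts all recorded traffic), while TLS mode negotiates ephemeral session keys:

```
# Static key mode (--secret): no forward secrecy.
# Both endpoints share one long-lived symmetric key.
dev tun
remote peer.example.com
secret static.key

# TLS mode: ephemeral key exchange per session gives forward secrecy.
dev tun
remote peer.example.com
tls-client
ca ca.crt
cert client.crt
key client.key
# Optional extra HMAC layer on the control channel:
tls-auth ta.key 1
```

Under static keys, compromise of the one key retroactively exposes everything; under TLS with ephemeral key exchange, a stolen private key does not decrypt previously captured sessions.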

Similarly -- I stumbled on this entirely by accident after your blog post about moving datacenters -- that your head of security ops & infrastructure tweeted "I will probably root my phone soon because Samsung's emoji set is worse than not having convenient OTA updates" https://twitter.com/robn/status/919194089920311296

I don't want to conflate anything -- a tweet on an engineer's own time about their personal devices isn't by itself a security problem. But it does reflect on the security mindset. If you had a BYOD policy, and this phone did end up being flashed to Lineage and be 3 patch levels behind (esp with Android's track record of RCE-via-media CVEs), this could definitely become a weakness on your entire infrastructure, and thereby on all of us as customers.

This is the type of thing I couldn't shake after learning about it. Of course, trust has to be placed somewhere. You have to be able to place trust on your ops and your infrastructure, but that's also a process, not a checkbox. People and devices can be trusted a little less in the overall security system, to provide redundant security. Could you clarify your position on how your staff is trained about the human weak points, security as a lifestyle if you're security and ops, and how your security mindset incorporates defense in depth?


If an Android phone connecting to the company’s WiFi or the user’s email and whatnot is enough to compromise the infrastructure, then the company has bigger problems.

I’ve worked in companies with liberal BYOD policies for portable devices, but also tasted really restricted environments and such environments are basically highly regulated security theaters.

Users do stupid things of course and in corporations it’s worth it to restrict their devices, but restricting developers on what they can install and do on their own devices has a negative ROI and doesn’t go well. If you can’t trust a dev to manage his own phone, you can’t trust him to build your infrastructure either.

And yes, we make mistakes as we are only human, which is why a phone should not be enough to compromise that infrastructure anyway.

PS: your mention of that Twitter account is creepy.


Absolutely! Our wifi network in the office is treated like an untrusted network. All authentication is done directly from our work laptop or desktop machines and requires a second factor (TOTP, not SMS!)


> PS: your mention of that Twitter account is creepy.

With no context, I agree. But I'm not exactly stalking engineers here - there was literally a direct link to that twitter from the Fastmail updates mailing list that went out, when customers were notified of the NYI datacenter move. Made me do a double take.


We don't consider looking at our staff public twitter accounts to be creepy FYI. We mention that we're at FastMail, and we do indeed link to our own twitter accounts occasionally.

Cheers.


Flippant comments on twitter definitely don't reflect security policies! That phone doesn't have production access for obvious reasons.

You're right that security is a process. We're always working to harden and segment our internal services, as is best practice these days.

Ongoing professional development and training is important for our security staff (indeed, all our staff, because everyone matters for security). The security landscape is always changing, and it's not something that's ever "solved" - it's a situation to stay on top of.


Dear FastMail, I am a happy customer, but very concerned by this report. Would you mind commenting?


I just forwarded this comment to their customer support. Let's see what their comment will be.


Please tag me when they answer. (does HN have tags/notifications?)


> does HN have tags/notifications?

No, but if you visit your Threads page (link at the top of every page) you can see any replies to any of your comments. There's nothing special that marks a new reply, though.

I have a habit of upvoting nearly every reply anyone makes to any comment of mine, as a way of thanking them for the comment. This also happens to help when I skim my Threads page, since it's easy to spot comments that still have the voting button(s).


Dan Grossman's http://www.hnreplies.com/ works wonders


Likewise. Social engineering is a big concern. I understand the risks of getting locked out of my account, but would much prefer a stricter system -- along with published guidelines on Fastmail's process for handling these cases.

Being able to persuade a customer service rep to provide access to an account (even if indirectly by changing a recovery email) should never be possible.


Just curious...was this incident before or after they re-architected their authentication system? I believe that was done last July[1]. The new system is really nice, now implementing separate app-specific passwords as well as new emergency recovery mechanisms. I wonder if they updated their internal support policies with respect to assisted account recovery when they implemented the new system...seems like they should have made the bar higher...

[1]: https://blog.fastmail.com/2016/07/25/two-step-verification-a...


The incident was late July, 2016, but before the date on that blog post.


I don't actually recall when the new system went live for all users, but the old system was live simultaneously for at least a couple months to ease the transition. I wonder if customer support actually lowered the bar during the transition because of a perceived or actual increase in customers locking themselves out.


That is a very weird thing to do, and easily fixed. Just track the account's average interval between logins, and do not accept any password reset (from customer support) until that average interval (plus some margin of uncertainty) has elapsed since the last time the email was checked.

How come they accepted the reset? Were you not logging in your account?
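The heuristic suggested above could be sketched like this. This is purely a hypothetical policy check, not anything FastMail actually does; the safety factor and the decision to allow resets on thin history are arbitrary assumptions:

```python
from statistics import mean


def reset_allowed(login_times, now, safety_factor=2.0):
    """Refuse a support-assisted reset unless the account has been idle
    for longer than its typical gap between logins, times a safety margin.

    login_times: ascending epoch timestamps of recent logins.
    """
    if len(login_times) < 2:
        return True  # not enough history to judge (assumed policy)
    gaps = [b - a for a, b in zip(login_times, login_times[1:])]
    avg_gap = mean(gaps)
    idle = now - login_times[-1]
    return idle > safety_factor * avg_gap
```

For a daily-login account, a reset request an hour after the last login would be refused, while one after several idle days would pass, which matches the parent's point: an attacker resetting an actively used account should trip an alarm.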


I was logging into my account. I discovered and reported the incident within 45 minutes of the compromise.


I've been a long-term customer of them, but I'm continually under-whelmed by them.

They admit, if you push them, that they economize on front-line support. I think what you relate is a consequence of that.

In a previous thread, I went on a massive whinge-fest about how they had "sun-setted" the one-time $15 payment member account that I set up for my father and that they had previously advertised using the words "never expires". I stand by that because they were in breach of contract.

On the other hand, I don't think they are charging enough really. I would probably be prepared to pay more than I do if I had confidence that they weren't using low-skilled labor for front-line support.


That's completely unacceptable, and enough to make me consider leaving FastMail - I've used them happily for over 7 years and have recommended them to many people, but their support having the ability to do that is giving me pause.


> I was a very happy FastMail customer until a hacker asked them to reset my password. After _incorrectly_ answering a handful of questions asked by the FastMail support, the recovery email address was changed and a password reset link sent. From there, the hacker attempted password resets on other services.

When did this happen?


July 2016


Just wondering, is your FastMail login email the same email as what you typically use?


Did you also have 2fa?


No, I did not. And I certainly should have.

However, 2fa would not have prevented the problem. The problem is twofold -- 1) account recovery via email, SMS, or anything other than a secret key is an effective attack vector, SMS especially; 2) a human with the authority to change account recovery settings (in my case, FM changing the account recovery email address).
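The "secret key" alternative mentioned above is just a high-entropy token the user stores offline, so recovery needs no SMS, no email, and no human judgment. A minimal sketch of generating one (the function name and formatting are my own, not FastMail's):

```python
import secrets

def generate_recovery_key(n_bytes=20):
    """Return a random recovery key with 8 * n_bytes bits of entropy,
    grouped into 4-character chunks for readability."""
    raw = secrets.token_hex(n_bytes)  # cryptographically secure randomness
    return "-".join(raw[i:i + 4] for i in range(0, len(raw), 4))
```

The point is that such a key can't be guessed over the phone or intercepted via a SIM swap; if the user loses it along with their other credentials, the account is simply unrecoverable, which is the trade-off being discussed in this thread.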


Hmm, you think they would have bypassed your 2fa as well? I wonder if FM can comment on that - it would be concerning. The "sms backdoor" is the same with Gmail, etc. unless you explicitly disable it.


Our account recovery process won't allow you through at all in that case: if you lose your password, and your 2FA, and your recovery key, then you're not getting that account back.


What is the sms backdoor?


Probably this [0]. Attackers can get phone companies to issue a SIM for your number, which will then receive the SMS reset or 2FA code.

[0]: https://www.wired.com/2016/06/hey-stop-using-texts-two-facto...



How recently was this?


"FastMail has always been an engineering-focused company, from the top down. As such there is a strong culture of no-bullshit, and an intense dislike of security theatre."

After the OP's comment, that's hilarious. And yes, I am a FastMail user as well. How long will it take an "engineering-focused company" to understand that humans are humans?



