Hacker News | new | past | comments | ask | show | jobs | submit | gitlab-security's comments

Thank you for your post and questions. We would welcome you to apply for any role at GitLab that looks interesting to you. Our guideline for security researchers is that GitLab employees are not eligible for a bounty award through our public bug bounty program. Other than that, independent security research is fine, as long as it does not call out our customers in a negative manner. Here is our hiring page: https://about.gitlab.com/jobs/faq/


Thank you for your feedback and suggestions. Unfortunately, for each of these proposals, we're likely to have users asking us why we are restricting and/or blocking access.

A better defense-in-depth strategy would be to scan each public repo for credentials, and act accordingly when credentials are discovered in repos. We are working on this strategy, currently.
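A scan like the one described above is often built from a set of secret-detection rules run over each file's contents. Here is a minimal sketch, assuming simple regex patterns; the rule names and patterns are illustrative and are not GitLab's actual implementation, which uses many more rules:

```python
import re

# Hypothetical patterns; a real credential scanner uses far more rules,
# plus entropy checks to catch random-looking tokens.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(
        r"(?i)\b(?:api|secret)[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan_blob(text):
    """Return (rule_name, matched_text) findings for one file's contents."""
    findings = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings
```

Each public repo's blobs would be fed through `scan_blob`, with any findings triggering the "act accordingly" step (e.g. notifying the owner or revoking the credential with the issuing provider).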


That doesn’t help stop attacks using breach lists, which are even more prevalent.

You could start with email warnings of suspicious activity and fine-tune the model parameters based on feedback from false positives. But generally, a login from a device with no previous cookie, from an ASN the account has never used before (especially if that ASN is a known data center), that then immediately attempts a destructive action should be a pretty big warning flag.
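The heuristic above can be sketched as a simple additive risk score. The signal weights, field names, and ASN list here are all illustrative assumptions, not anyone's actual model:

```python
# Hypothetical set of ASNs belonging to cloud/data-center providers.
DATACENTER_ASNS = {16509, 14618, 8075}

def login_risk(event, known_asns, has_prior_cookie):
    """Score one login event; a higher score is more suspicious.

    `event` is a dict with an "asn" field and an optional
    "destructive_action" flag; `known_asns` is the set of ASNs
    previously seen on this account.
    """
    score = 0
    if not has_prior_cookie:
        score += 1          # device has no cookie from a prior session
    if event["asn"] not in known_asns:
        score += 2          # ASN never used by this account before
        if event["asn"] in DATACENTER_ASNS:
            score += 2      # ...and it's a known data center
    if event.get("destructive_action"):
        score += 3          # e.g. immediately deleting a repo
    return score

def should_warn(event, known_asns, has_prior_cookie, threshold=5):
    """Decide whether to send an email warning for this login."""
    return login_risk(event, known_asns, has_prior_cookie) >= threshold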


To provide additional context: on GitLab.com, we maintain two weeks of backups. The last time we restored a single project repo, it was a significant effort that took many hours of an SRE's time to complete.


What’s crazy isn’t the loss of data (that happens, and I don’t really expect most cloud services to save me from explicit deletion requests) but rather framing it as no data being lost because your customers have it elsewhere.


"many hours" is too high. Regardless of this incident, if backups take too long to restore, you may as well not have them.


I agree, the longer it takes to recover, the less valuable the backups are. However, in this context, where we're restoring individual repos (and all the metadata baggage involved), it's very different from a DB restore or other disaster recovery/prevention mechanisms.

The per-repo restore time today is not where it should be. We're working to speed this up so we can help users recover and get back to a productive state quickly.


Does support have the ability to restore keeparound refs? Internally (at least as of the late 10 series), anything shown in the UI (e.g., merge requests) is also copied to refs/keeparound/sha1, which isn't presented to users.


Confirming the previous poster's comment on GitLab Pages domain ownership verification functionality, which was rolled out in February of 2018. https://about.gitlab.com/2018/02/05/gitlab-pages-custom-doma...


Thank you for your feedback and comment. The message is generated by our automation. We want to keep hackers engaged by confirming that their issue was successfully submitted. We usually review reports sooner than promised, but want to set expectations accordingly. The automation calculates the number of business days from the current number of pending reports, so the message will not always be the same.


Thank you for the feedback. One of our core values is transparency. Very few companies are as transparent as GitLab. We take our users' data security extremely seriously. Since total CVE count is only one metric to measure the security maturity of an organization, allow me to provide you with other metrics that may help you understand what we're doing on our users' behalf.

Over the last 7 months, we have been focusing on mitigating high-impact security vulnerabilities, those affecting at least 25% of our users. Since then, we've brought the mean-time-to-mitigation (MTTM) for new, high-impact vulnerabilities to under 30 days, which is below the industry average for security vulnerability mitigations. Of course, we are not done securing GitLab, and we are also working on maturing the security vulnerability mitigation process. Here are some goals we've achieved over the last 6 months:

1. Developed and put into place two separate security release processes - a monthly non-critical security release process, focusing on reducing security debt, and a critical release process (on demand, as needed) when there is a new vulnerability discovered that impacts a significant number of users. https://gitlab.com/gitlab-org/release/docs/blob/master/gener...

2. GitLab continues to work with security researchers through the HackerOne program, recognizing and rewarding their contributions with bounties. We have plans in place to expand the existing HackerOne program by the end of 2018. The HackerOne program has been effective in helping us scale our security vulnerability mitigation work, because we currently have a small security team at GitLab. https://hackerone.com/gitlab

3. Our 2018 (and beyond) Security Vision and Hiring Plan includes growing GitLab's internal security team further, and we will be making security research hires, in order to accelerate the security vulnerability mitigation efforts that we are working on maturing. https://about.gitlab.com/handbook/engineering/security/

If you have any further questions, please feel free to contact us directly at security@gitlab.com


The monthly security release for GitLab was today, and this release was coordinated with the Git security release. https://about.gitlab.com/2018/05/29/security-release-gitlab-...

In addition to our recently implemented monthly non-critical security release process (we already had a critical release process before), we are making a number of changes in how we secure GitLab.com, which includes expanding our HackerOne program this year to be a public bounty program. As always, we appreciate the contributions of security researchers.


We apologize that you haven't received an email notification. We sent email notifications to as many customers as possible, but evidently did not reach everyone.

Rest assured that when domain verification rolls out, you will receive an email notification describing a grace period in which to complete the required user actions before re-verification.

If you have any further questions about this plan, feel free to contact us directly at security@gitlab.com


This is a good suggestion, and we have created an issue to implement it: https://gitlab.com/gitlab-org/gitlab-ce/issues/43186. Thank you for the constructive feedback.

