I'm going to add to your message box unnecessarily, but I want to say I love GitLab; it's a shining example of a transparent company. I still have ambitions to work there someday, and this event is hopefully a net gain in the end, in that everyone here and there learns about backups.
Thanks for the transparency. Doesn't always feel good to have missteps aired in public, but it makes us all a little better as a community to be clear about where mistakes can be made.
I've been there myself; it was at the start of my career and almost ended it. I know how incredibly emotional this kind of thing can be. Just understand you aren't the first, you won't be the last, and shit happens. If you are ever in Philly I'll buy you a beer if you drink, and a dinner if you don't.
Was just trying to push earlier today and found out about the issue. Sorry, man! Drinks on me in Montevideo, Uruguay. This stuff happens more than most of us are willing to accept, so here's to your transparency. And you know: fix it, learn from it, and on you go!
Dude, we've all been there. You are neither the first nor the last. It's never a single person. One day the entire technical side of our little startup (back then just two engineers plus a technical lead) went to a conference while our server DoS'd itself through broken mail processing... we spent a rough night in a Belgian hotel figuring out who had attacked the site, only to realize it was us. The 10k block of missing image IDs has always stood as a reminder not to leave even a low-traffic site unmonitored. It happens.
Hey man, don't beat yourself up over this. It's shitty but you found some flaws in the process, in the setup, and y'all can make things better because of that.
Yorick! Thank you for the transparency, I know how tough incidents like these can be. Stop the bleeding, figure out how to handle this better in the future, but most of all, take care!
As much as I appreciate GitLab's extreme openness, that's maybe something that, by policy, shouldn't be part of published reports. Internal process is one thing, but if something goes really bad, customers might not be so good at "blameless postmortems" if they have a name to blame.
As a customer, it seems to me that this shifts blame away from the company and onto a particular person. Blameless post-mortems are great, but when speaking to people outside the company I think it is important to own it collectively: "after a second or two we notice we ran it on db1.cluster.gitlab.com, instead of db2.cluster.gitlab.com." I believe this isn't your intention, but that is how I interpreted it.
In our postmortems we explicitly avoid referring to names and only refer to "engineers" or specific teams. There is no reason to refer to specific names if your intention is a systems/process fix.
To me those "engineers" read as faceless, replaceable cogs. The initials make it personal, and that's better: we can now say "YP", that's exactly you, hey, chin up. It sounds better than "engineering team 42".
You put the CEO's name on all your publications, of course, always taking the credit/glory, so why not let engineers do the same: take credit/ownership when making nice commits, and when fucking up. We're all people first, and we prefer to talk to people, not to an Engineering Team Mailbox at Enterprise Corporation.