
We self-host GitHub using GitHub Enterprise Server. It is a mature product that requires next-to-no maintenance and is remarkably stable. (We did have a period of downtime caused by running it on a VM that was underprovisioned for our needs, but since resolving that it hasn't had problems.)

Of course, we have a small and mostly unchanging number of users, don't have to deal with DDoS attacks, and can schedule the fairly infrequent updates during maintenance windows that are convenient for us (since we don't need 100% availability outside of US working hours).

I don't have the metrics in front of me, but I would say we've easily exceeded github.com's uptime in the last 12 months.



Things start to go sideways when you have tens of thousands of users.


> Things start to go sideways when you have tens of thousands of users.

If that’s really the case, run another GitHub instance then. Not all tens of thousands of users need access to the same codebases. In the kind of environment described, someone would want identity boundaries established around each project anyway…


It’s fairly stable, but with a large codebase I’ve seen it take a day or more to rebuild the search index, not to mention that GHES relies on GitHub.com for the allowed-actions list functionality, which is a huge PITA. It should not rely on the cloud-hosted version for any functionality. That having been said, I don’t think there’s much of an alternative, and I quite like it.


You don't have to manage access to Actions that way.

On GHES you can use https://github.com/actions/actions-sync/ to pull the actions you want down to your local GHES instance, turn off the ability to automatically use actions from github.com via GitHub Connect, and use the list of actions you sync locally as your whitelist.

My employer did this for years, and it worked very well. Once a day, we'd pull each action we had whitelisted into GHES, and the runners would use those instead of the actions on github.com.
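
For anyone wanting to replicate this, here is a minimal sketch of what that daily job might look like, assuming the actions-sync binary is on PATH and the whitelist is a plain-text file of owner/repo names. The URL, token variable, and file paths are placeholders, and the flags are from my reading of the actions-sync README, so check actions-sync --help on your version:

    #!/usr/bin/env python3
    # Sketch of a once-a-day sync job. Everything here (paths, env var name,
    # GHES URL) is a placeholder, not taken from the comment above.
    import os
    import subprocess

    GHES_URL = "https://ghes.example.com"      # your GHES instance
    TOKEN = os.environ["GHES_ADMIN_TOKEN"]     # PAT with rights to create the destination repos
    ALLOWLIST = "allowed-actions.txt"          # one "owner/repo" per line
    CACHE_DIR = "/var/cache/actions-sync"

    with open(ALLOWLIST) as f:
        repos = [line.strip() for line in f
                 if line.strip() and not line.startswith("#")]

    for repo in repos:
        # "sync" pulls the action from github.com and pushes it into GHES.
        subprocess.run(
            ["actions-sync", "sync",
             "--cache-dir", CACHE_DIR,
             "--destination-url", GHES_URL,
             "--destination-token", TOKEN,
             "--repo-name", repo],
            check=True,
        )

Run it from cron; with GitHub Connect's automatic use of github.com actions turned off, only the actions in that file end up available to the runners.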


I would have thought if you had tens of thousands of developers all needing access to the same git repos, then you'd probably have a follow-the-sun team of maybe 50 or 100 engineers working on your git infra.


> Things start to go sideways when you have tens of thousands of users.

Hm not really. I manage the GHES instance at my employer and we have 15k active users. We haven't needed to scale horizontally, yet.

GHES is amazingly reliable. Every outage we have ever had has been self-inflicted: either we were too cheap to give it the resources it needed to handle the number of users it had, or we tried to outsmart the recommended and supported procedures by doing things in an unsupported way.

Along the way we have learned to never deviate from the supported ways of doing things, and to keep user API quota as small as possible (the team that managed this service before mine would increase a user's quota any time anyone asked, which was a capital-M Mistake).


Most self-hosted instances would not have tens of thousands of users.


Agreed; that’s why products of that nature start to break when you do have that many users.


I was the administrator of a GitHub Enterprise Server instance back in 2015-2016 (I think 2014 too).

Rock-solid stability, for a company with 300+ microservices, 10+ big environments, 50+ microenvironments, and who knows how many Jenkins pipelines (more than 900, I’ll tell you that). We deployed several times a day; each service averaged about three deployments a week.

I think GitHub (the public github.com) should do better as a company, much better, given this is happening more frequently of late. But if big companies (even medium-sized ones) don’t have their own package caches, they are all in for a ride.

At a previous startup we had GitHub + GitHub Actions, and we were on AWS. We set up an OCI image cache. Sure, if GitHub went down we could not deploy new stuff, but at least it wouldn’t take us down. If we really needed the pipelines, I suppose we could have set up some backup CLI or AWS CodePipeline (eww) workflows.
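
For what it’s worth, if the images come from ghcr.io and you’re on AWS, ECR’s pull-through cache is one way to set up that kind of cache. A minimal boto3 sketch follows; the region, repository prefix, and secret ARN are placeholders (not what we actually ran), and the exact parameters are worth double-checking against the ECR docs:

    # Sketch: mirror ghcr.io images through an ECR pull-through cache.
    # Region, prefix, and secret ARN below are placeholders.
    import boto3

    ecr = boto3.client("ecr", region_name="us-east-1")

    # Images pulled as <account>.dkr.ecr.us-east-1.amazonaws.com/ghcr/<owner>/<image>
    # are fetched from ghcr.io on first pull; already-cached images keep being
    # served from ECR even if the upstream registry is unreachable.
    ecr.create_pull_through_cache_rule(
        ecrRepositoryPrefix="ghcr",
        upstreamRegistryUrl="ghcr.io",
        # ghcr.io needs credentials, stored in a Secrets Manager secret whose
        # name starts with "ecr-pullthroughcache/".
        credentialArn="arn:aws:secretsmanager:us-east-1:123456789012:secret:ecr-pullthroughcache/ghcr-creds",
    )

That only covers container pulls, of course; package registries (npm, PyPI, etc.) need their own mirrors or proxies to get the same isolation.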



