I was the administrator of a GitHub Enterprise Server instance back in 2015-2016 (I think 2014 too).
Rock-solid stability, for a company with 300+ microservices, 10+ big environments, 50+ microenvironments, and who knows how many Jenkins pipelines (more than 900, I’ll tell you that). We deployed several times a day; each service averaged about 3 deployments a week.
As a company, I think GitHub (public) should do better, much better, given that this is happening more frequently lately. But if big companies (even medium ones) don’t have their own package caches, they’re in for a rough ride.
At a previous startup we had GitHub + GitHub Actions, and we were on AWS. We set up an OCI image cache. Sure, if GitHub went down we couldn’t deploy new stuff, but at least an outage wouldn’t take us down. If we really needed the pipelines, I suppose we could have set up some backup CLI or AWS CodePipeline (eww) workflows.
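For the curious, the mirror job was conceptually something like this (a minimal sketch, not our actual code; the ECR registry URL and image list are placeholders, and it assumes the docker CLI is installed and already authenticated against your registry):

```python
import subprocess

# Placeholder private registry; swap in your real ECR (or other) registry.
MIRROR = "123456789012.dkr.ecr.us-east-1.amazonaws.com/mirror"

# Hypothetical list of upstream images your builds depend on.
IMAGES = [
    "docker.io/library/python:3.12-slim",
    "docker.io/library/node:20-alpine",
    "ghcr.io/example/base-image:latest",
]

def mirror_image(source: str) -> None:
    """Pull an upstream image and push a copy into the private cache."""
    # Keep the repo/tag suffix so builds can reference mirror/<repo>:<tag>.
    name = source.split("/", 1)[1] if "/" in source else source
    target = f"{MIRROR}/{name}"
    subprocess.run(["docker", "pull", source], check=True)
    subprocess.run(["docker", "tag", source, target], check=True)
    subprocess.run(["docker", "push", target], check=True)

if __name__ == "__main__":
    for image in IMAGES:
        mirror_image(image)
```

Run something like that on a schedule and point your Dockerfiles and pipelines at the mirror instead of the upstream registries, and a GitHub/ghcr.io outage stops being a build-blocking event.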