Hacker News

Why is it not best practice to do so?

The naive solution using the golang image is nearly 1GB. Why carry this extra complexity around?
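The usual fix is a multi-stage build: compile inside the full golang image, then copy only the resulting static binary into an empty final image. A minimal sketch, assuming a `main` package in the current directory (the image tag and binary name are illustrative):

```dockerfile
# Build stage: the ~1GB toolchain image is used only here and discarded.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
# CGO_ENABLED=0 yields a statically linked binary with no libc dependency.
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: scratch is an empty image, so it ships nothing but the binary.
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The final image is roughly the size of the binary itself, typically a few megabytes instead of nearly 1GB.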



It is best practice and I wish it was common practice, but there's a lot of friction in the way and some people will actively work against you.

Statically linking glibc is painful because its former maintainer had strong opinions against it: https://www.akkadia.org/drepper/no_static_linking.html

Most distros force you to dynamically link every dependency if you want them to package your software, so the default build for most projects is dynamic.
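You can see that dynamic-linking default on almost any distro binary. A quick sketch using `ldd` (assuming a glibc-based system where `ldd` is installed; `/bin/sh` is just a convenient example target):

```shell
# Distro-packaged binaries are almost always dynamically linked;
# ldd lists the shared libraries they pull in at run time.
ldd /bin/sh

# A fully static binary would instead report "not a dynamic executable".
```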

You've stumbled on a holy war between distros and guys like you, me, and Linus Torvalds[1] who want to deploy a binary and just have it work everywhere.

1: https://youtu.be/5PmHRSeA2c8?t=295 (Highly recommend watching Linus's answer ~6 minutes in)


Sure, this is understandable when we are building an OS.

But here we are using Docker, so we have full control over the application we are building. Why this craziness of shipping huge images containing who knows what?

Is it just because we can? And the cloud providers like us when we do it?


It mostly happened because we carried over the previous assumptions, practices, and limitations when moving into containers.

I agree with you and the parent commenter that this should be the default, but some people are against static linking even in cases where dynamic linking provides no advantages.



