I am waiting until 3.0, when they roll out their plan for ABI compatibility (it currently masquerades as jpeg62 but is not actually compatible) and settle on a de facto way to detect whether you are building against vanilla libjpeg(-turbo) or mozjpeg (currently you can check for some of their struct members, but they have not said whether those will go away once ABI compatibility is addressed).
At that point I plan to submit a patch to ImageMagick to detect it and enable the user options if mozjpeg is being used. I believe downstream support in libraries like ImageMagick is key to adoption, rather than adding makeup to a pig (input support for N formats in cjpeg).
It's interesting that Mozilla has chosen to use an image with so much red, which makes JPEG (at their settings) look relatively poor even at quality 100. You cannot optimise the limitations of 2x2 chroma subsampling.
I am also not yet convinced by trellis quantization. In my tests it introduced a faint blur. In fairness, though, I only tried 4:4:4 and not 4:2:0 as Mozilla apparently does; in the latter case the additional blur may not be noticeable.
You can 'optimize' encode time by using better chroma downsampling, but the real issue is usually client-side upsampling (a lot of libraries use terrible nearest-neighbor for 'speed').
As for trellis, you should play with the different metrics available (MS-SSIM, etc.), but I agree that I have not been wowed yet.
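For anyone wanting to experiment along these lines, a sketch assuming mozjpeg's cjpeg is on your PATH (flag names per mozjpeg's documentation; the generated in.ppm is just a stand-in for a real test image):

```shell
# Generate a trivial 2x2 PPM so the commands below have an input to chew on.
printf 'P3\n2 2\n255\n255 0 0 255 0 0 255 0 0 255 0 0\n' > in.ppm
command -v cjpeg >/dev/null 2>&1 || { echo "cjpeg not found; skipping"; exit 0; }

# Trellis tuned against different metrics (bail out gracefully if this
# cjpeg is stock libjpeg-turbo and lacks the mozjpeg-only flags):
cjpeg -quality 80 -tune-ms-ssim -outfile out-msssim.jpg in.ppm 2>/dev/null ||
  { echo "this cjpeg lacks mozjpeg's tuning flags; skipping"; exit 0; }
cjpeg -quality 80 -tune-ssim -outfile out-ssim.jpg in.ppm
# Disable trellis entirely for an A/B comparison:
cjpeg -quality 80 -notrellis -outfile out-plain.jpg in.ppm
# Keep full 4:4:4 chroma (no subsampling) instead of the 2x2 default:
cjpeg -quality 80 -sample 1x1 -outfile out-444.jpg in.ppm
```

Comparing out-msssim.jpg against out-plain.jpg on real photos is the quickest way to judge whether trellis and the metric tuning actually help your content.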
Yes, optimise is the wrong word. I meant optimise away really, but that's still not quite right. The WebP authors are actually working on improving chroma subsampling, because VP8 limits WebP to 2x2.
I'd love to be able to use jpegoptim or mozjpeg but the trouble is choosing the right quality level. As Steve Souders says in his comment on TFA, it's not possible to experiment with varying quality levels when you have hundreds of images to optimize.
Because we haven't been able to solve that problem yet, we use JPEGmini, a proprietary (but not that expensive) vendor solution.
jpegoptim optimizes Huffman tables. Mozjpeg's cjpeg always does it, plus:
* jpegcrush/jpegrescan trick: tweaks the details of progressive JPEG for maximum compression (each scan gets its own Huffman table, and JPEG can divide data into scans arbitrarily). That's a 5%-10% improvement over jpegoptim.
* if you're creating a new JPEG or lowering the quality of an existing one, it uses trellis quantization: in the lossy compression step, instead of naively throwing away data, it evaluates lots of combinations to find the best bang-for-the-buck. That's an extra 5% improvement in the quality/filesize ratio.
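To make the comparison concrete, a minimal sketch (file names are placeholders; a tiny JPEG is generated first so the commands are runnable, and steps are skipped if the tools aren't installed):

```shell
command -v cjpeg >/dev/null 2>&1 || { echo "cjpeg not found; skipping"; exit 0; }
# Make a small placeholder JPEG from a generated 2x2 PPM.
printf 'P3\n2 2\n255\n0 255 0 0 255 0 0 255 0 0 255 0\n' > in.ppm
cjpeg -quality 95 -outfile photo.jpg in.ppm

# jpegoptim: lossless Huffman-table optimization, rewrites the file in place.
command -v jpegoptim >/dev/null 2>&1 && jpegoptim --strip-all photo.jpg

# mozjpeg's cjpeg can take the JPEG itself as input; re-encoding at a lower
# quality engages trellis quantization on top of the progressive-scan and
# Huffman tricks described above. Stock cjpeg can't read JPEG input.
cjpeg -quality 75 -outfile photo-smaller.jpg photo.jpg 2>/dev/null ||
  echo "JPEG input requires mozjpeg's cjpeg"
```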
"cjpeg" is the basic command-line tool for making use of the mozjpeg library to create JPEG images.
libjpeg-turbo and the IJG JPEG library also ship "cjpeg" as their basic command-line encoder. mozjpeg's cjpeg works almost exactly the same way, but it has some extra option flags, can take JPEG input (the others can't), and redefines the "-baseline" option to something more intuitive (see the v2.1 release notes).
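For illustration, a sketch of the differing defaults (flag names per mozjpeg's documentation; the generated in.ppm is a placeholder input):

```shell
command -v cjpeg >/dev/null 2>&1 || { echo "cjpeg not found; skipping"; exit 0; }
printf 'P3\n2 2\n255\n0 0 255 0 0 255 0 0 255 0 0 255\n' > in.ppm

# mozjpeg's defaults differ from stock cjpeg (progressive output, trellis, etc.).
cjpeg -quality 80 -outfile moz-defaults.jpg in.ppm
# -revert restores the stock libjpeg defaults, for an apples-to-apples comparison:
cjpeg -revert -quality 80 -outfile stock-defaults.jpg in.ppm 2>/dev/null ||
  echo "-revert is mozjpeg-only; this looks like stock cjpeg"
```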
I hate progressive JPEGs. With Google Images I always sit there squinting for a second until the hi-res version pops in. Just let it render top to bottom and I'll take the 4% efficiency hit.
You can use progressive JPEGs without progressive rendering. Google Images isn't even using it; they display a very low-res photo immediately and load a higher-quality one in the background.
(and FYI, most sites that optimize their images (e.g. Facebook) use progressive jpegs for efficiency).
Pngout, optipng, deflopt, jpegtran, gifsicle, cwebp, zopfli, and so forth are all command line tools.
I'm not really sure why you think that this stuff is meant for designers. Recompressing images is a frontend optimization. It's something you automate. It's a task for machines, not humans.
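That automation can be as simple as a shell loop over your assets; a minimal sketch, assuming jpegtran is installed and using a placeholder "images" directory:

```shell
mkdir -p images   # placeholder; point this at your real asset directory
# -copy none drops metadata, -optimize rebuilds Huffman tables, -progressive
# writes progressive scans. jpegtran never re-runs the lossy step, so this
# whole pass is lossless. Only replace the original if jpegtran succeeds.
find images -name '*.jpg' -exec sh -c '
  for f do
    jpegtran -copy none -optimize -progressive -outfile "$f.tmp" "$f" &&
      mv "$f.tmp" "$f"
  done' sh {} +
```

Hooking a loop like this into a build step or a pre-commit hook is exactly the "task for machines" being described.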