
How does that work? Can I just dump two terabytes of video into Backblaze B2, set up a Cloudflare account, and have people watch those videos with it costing me only $10 a month? Because that doesn't sound right.


The Cloudflare ToS explicitly exclude that use-case.

2.8 Limitation on Serving Non-HTML Content The Service is offered primarily as a platform to cache and serve web pages and websites. Unless explicitly included as a part of a Paid Service purchased by you, you agree to use the Service solely for the purpose of serving web pages as viewed through a web browser or other functionally equivalent applications and rendering Hypertext Markup Language (HTML) or other functional equivalents. Use of the Service for serving video (unless purchased separately as a Paid Service) or a disproportionate percentage of pictures, audio files, or other non-HTML content, is prohibited.


In a previous HN post [1] where someone wrote up how to use B2 as an image host, the Cloudflare CEO chimed in and addressed rule 2.8 specifically, saying if Cloudflare workers are used for URL prettifying and redirecting, a different ToS is applied and that use-case would be fine.

Does that mean your video use-case would also be fine? I have no idea. An HN comment from the CEO doesn't seem like it would hold up if Cloudflare suddenly shut down your free account.

I'd love for Cloudflare to officially clarify the limits of the Cloudflare/B2 alliance in terms of external traffic. The confounding issue here is that B2, as a storage service, is not really intended for "serving web pages and websites" — it's for larger files, binaries, etc. — and therefore any traffic from B2 going through Cloudflare is sort of de facto in violation of 2.8.

[1]: https://news.ycombinator.com/item?id=20790857


Disclaimer: I work at Backblaze so I'm biased. :-)

> B2, as a storage service, is not really intended for "serving web pages and websites" — it's for larger files, binaries, etc

It might be missing a couple of features (which is a pet peeve of mine), but we SURELY intend for it to be used for serving web pages. That's one of the largest differences between "Backblaze Personal Backup" (our original product line) and Backblaze B2. The largest part of the redesign/refit when we originally did B2 was around the concept of what we call "Friendly URLs" (web page names, folder names) instead of just ugly 82-character hexadecimal file names like Backblaze Personal Backup stores all your files in.

For full disclosure, Backblaze B2 isn't a great "hosting" solution for something like WordPress because we lack two or three things, one of which is comically easy to fix, and I keep trying to convince everyone to do it. The issue is that for URLs ending in a "/" (trailing slash), the server basically needs to "guess" that what follows is an "index.html" or "index.php" or whatever. So the URL: https://f001.backblazeb2.com/file/ski-epic-c/full/2015_scotl... does not work, but the URL: https://f001.backblazeb2.com/file/ski-epic-c/full/2015_scotl... does work. All modern web servers do this filling in of "index.html" automatically, but it is missing from Backblaze B2 currently. And it would take just a day or two for one of our developers to fix it. And dang it, I'm going to get it done one of these days.


S3's use of separate servers for website hosting is actually very sensible. Options like the use of index.html and error.html only apply on the website servers and won't cause any surprises for people using the service as a key/object store.

That said, I would absolutely not consider using B2 without support for index.html, error.html, and Website-Redirect-Location.


You can trivially use Cloudflare Workers to implement that functionality on top of B2.
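For what it's worth, a minimal sketch of what such a Worker could look like (the B2 bucket host and name here are made up for illustration; the `worker` object would be the Worker's default export):

```typescript
// Hypothetical Worker: fill in "index.html" on trailing-slash URLs,
// then proxy the request to a B2 "Friendly URL".
const B2_ORIGIN = "https://f001.backblazeb2.com/file/my-bucket";

// "/", "/docs/" -> "/index.html", "/docs/index.html"; other paths pass through.
function rewritePath(path: string): string {
  return path.endsWith("/") ? path + "index.html" : path;
}

const worker = {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    return fetch(B2_ORIGIN + rewritePath(url.pathname), request);
  },
};
```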


If you do, please take inspiration from how Netlify handles URL rewriting through their netlify.toml file, if that turns out to be possible. :-)


Sounds like Workers are fine in general, especially on the non-free tier, which only adds $5 a month to the budget.

That covers 10 million requests, each of which could serve a 10MB chunk, or a 512MB cached file, or possibly more.

(A raw mp4 needs about 3 requests to start plus one per seek.)

But I wouldn't be surprised if that doesn't scale to enormous amounts of data.
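Back-of-the-envelope with those numbers (decimal units, and assuming every request actually moves a full 10MB chunk, which is optimistic):

```typescript
// Upper bound on monthly transfer through 10 million Worker requests,
// each serving a 10MB chunk out of the cache.
const requests = 10_000_000;
const chunkMB = 10;
const totalTB = (requests * chunkMB) / 1_000_000; // MB -> TB (decimal)
console.log(totalTB); // 100

// At ~3 requests to start an mp4 plus one per seek, call it 4 per view:
console.log(requests / 4); // 2500000 plays a month
```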


To clarify, the key line is "a disproportionate percentage of ... non-HTML content". Serving videos isn't prohibited outright, but CF still has to prevent free-plan websites from becoming huge loss leaders.

Another big asterisk is that this applies to all proxied (DDoS-protected) content, not just content that uses the CF cache. CF pays for all of its uplink bandwidth out of pocket regardless of whether it's cached. You can see what happens when you proxy multiple terabytes of content on the free plan in this thread[0] (again, all proxied bandwidth costs CF money, which is why this user had their zone unproxied).

0: https://community.cloudflare.com/t/the-way-you-handle-bandwi...


From what I can tell, that's more about their caching service. It mentions exceptions for hosting as part of a Paid Service, and, correct me if I'm wrong, wouldn't Backblaze be that paid service?


They are talking about their own paid service, obviously. I don't think they care whether you are paying for someone else's product or not.


Perhaps they are talking about a paid service that they offer, like Cloudflare Stream https://www.cloudflare.com/products/cloudflare-stream/


This is huge if true - folks keep saying Cloudflare can replace CloudFront for free. If we could put 20TB of binary content for software distribution / videos etc. on B2, then stream it through Cloudflare, I'd actually believe the claims.

Every time we've tried this with other "free" providers there always seems to be "fine print" when you start pushing huge bandwidth on these "unlimited" plans. It seriously is not worth the time at some point.


Cloudflare's main service excludes video.


This is key for many.


The way I understand it, you can do this with images; eastdakota confirmed that here earlier: https://news.ycombinator.com/item?id=20791660 (not sure about videos).



I believe you still pay Cloudflare costs, but the traffic between Cloudflare and B2 is free on both platforms. But it might be worth double-checking the fine print.


Isn't the Cloudflare CDN included in the free plan as well?


It is, but they cap the file size they will cache on the free plan at 512MB. So you would need to chunk up your videos to get free bandwidth.


> So you would need to chunk up your videos to get free bandwidth.

Having a functional video player on your site or in your app (e.g. one where you can skip to arbitrary times without requiring the video be buffered up to that point; or where a video can be "resumed" from the middle if you leave it and come back) already requires that you use MPEG-DASH or HLS; which in turn implies/necessitates pre-chunking, no?

Is there some use-case where people are currently serving 400MB contiguous video files from a CDN? I can't think of one. YouTube doesn't. Netflix doesn't. Even porn sites don't.

I guess Archive.org has some large video files on there in various places, that can be direct-downloaded; but the recommendation Archive.org itself makes, is to consume those via BitTorrent. Presumably they don't have a CDN partner willing to handle their unique workload for cheap.


You don't need DASH or HLS to seek to an arbitrary point quickly. The mp4 container format has an index which maps playhead time to byte offset. This index is either at the end of the file or the beginning, and tools like qt-faststart move the index to the beginning, which makes videos start much more quickly when served over HTTP. Browsers will use the index to issue range GET requests and be able to seek just fine.

Serving a 400MB video via a CDN is highly dependent on the CDN. Some will construct a cache entry from a slew of range GET requests, translating them to fetch the missing pieces themselves, and work brilliantly; other CDNs should be avoided.


ffmpeg.exe -i input.mp4 -c copy -movflags faststart -y output.mp4
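If you want to sanity-check that the moov atom really ended up in front, the top-level mp4 boxes are easy to walk: each box header is a 4-byte big-endian size followed by a 4-byte type. A sketch, not production parsing (it punts on 64-bit "large size" boxes):

```typescript
// List the top-level box types of an mp4 buffer, in file order.
// Each box header: uint32 big-endian size, then a 4-character type.
function topLevelBoxes(buf: Uint8Array): string[] {
  const types: string[] = [];
  let off = 0;
  while (off + 8 <= buf.length) {
    const size =
      ((buf[off] << 24) | (buf[off + 1] << 16) | (buf[off + 2] << 8) | buf[off + 3]) >>> 0;
    types.push(
      String.fromCharCode(buf[off + 4], buf[off + 5], buf[off + 6], buf[off + 7])
    );
    if (size < 8) break; // size 0 / "large size" boxes: bail out in this sketch
    off += size;
  }
  return types;
}

// faststart succeeded if the index (moov) comes before the media data (mdat).
function isFastStart(buf: Uint8Array): boolean {
  const boxes = topLevelBoxes(buf);
  return boxes.indexOf("moov") !== -1 && boxes.indexOf("moov") < boxes.indexOf("mdat");
}
```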


> Having a functional video player on your site or in your app (e.g. one where you can skip to arbitrary times without requiring the video be buffered up to that point; or where a video can be "resumed" from the middle if you leave it and come back) already requires that you use MPEG-DASH or HLS; which in turn implies/necessitates pre-chunking, no?

Browsers are smart. They only buffer a few megabytes at a time and can seek around pretty efficiently.


That requires the video to be encoded in a way where you can just start reading the stream from any random byte offset, and everything will still work. Video files are not usually encoded this way (any more.) Resume an MP4 or MKV video half-way through, without reading the TOC-ish stuff from the first chunk, and you'll get garbage that maybe resyncs after 20 seconds.

It's totally possible to "encode for streaming", but it usually results in both an increase in overhead [more keyframes] and a decrease in quality [inability to use predictive interpolation, instead relying only on forward-interpolation.]

Mind you, this streaming-enabled encoding is how things were done on the web, before the advent of MPEG-DASH/HLS; and it's still how e.g. the MP2 encoding of digital cable/satellite video works. But we don't really want to go back to those days. They kind of sucked.

Jumping to random byte offsets in a video also tends to screw with any embedded data streams like subtitles or thumbnails, which tend to just be stored in most media container formats as a single chunk at the beginning/end of the file, rather than being spread or copied across the stream. Again, the kind of captioning done back in the MP2 days is immune to this, but it kind of sucked as well (e.g. it wouldn't trigger if you happened to skip to the millisecond after the instruction for it appeared in the stream, often leaving you with ~30 seconds of untranslated audio.)


I don't think this is quite right. If you serve an mp4 with H.264 video statically on any basic webserver, it will just work in the browser through plain HTTP, with no need for MPEG-DASH/HLS. Every widely used media player/browser just downloads the nearest chunk (keyframe point) behind the time that was seeked to and can resume playback from there. This point is found through an index stored in the container format. For basically every video format these days (say, at least as new as H.264), regular settings make this only a few more seconds of video to download and decode before the seeked point, and it basically happens instantly for normal online consumption. In H.264, forward prediction (through two-pass encoding) will play back fine too.

I think what you're saying applies more to a setting where the video is being streamed live, so that you cannot access the start of the file to get keyframe metadata. In that case HLS and MPEG-DASH help.


That's odd. I can't recall ever having a problem with browsers playing mp4s in a vanilla <video> tag, as long as I encode them with the main H.264 profile, AAC audio, and the MOOV atom at the front (see [0] for the ffmpeg command). Obviously the server has to support Range: byte requests.

My impression is DASH/HLS are mostly useful for adjusting bitrate on the fly.

[0]: https://superuser.com/a/438471/402047
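The Range support the server needs is modest: parse `Range: bytes=start-end` and reply 206 with a `Content-Range` header. A simplified sketch handling single ranges only (which is what browsers send for `<video>`):

```typescript
// Parse a single-range "Range: bytes=start-end" header against a file
// of the given size; returns [start, end] inclusive, or null if the
// header is absent/unsatisfiable (serve the whole file with 200 then).
function parseRange(header: string | null, size: number): [number, number] | null {
  const m = header?.match(/^bytes=(\d*)-(\d*)$/);
  if (!m || (m[1] === "" && m[2] === "")) return null;
  let start: number, end: number;
  if (m[1] === "") {
    // Suffix range: last N bytes of the file.
    start = Math.max(size - Number(m[2]), 0);
    end = size - 1;
  } else {
    start = Number(m[1]);
    end = m[2] === "" ? size - 1 : Math.min(Number(m[2]), size - 1);
  }
  return start <= end && start < size ? [start, end] : null;
}
// A 206 response would then carry:
//   Content-Range: bytes ${start}-${end}/${size}
//   Content-Length: ${end - start + 1}
```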


When the parent talks about jumping to random byte offsets, they mean you don't have the first part of the file at all. You just have an arbitrary 512-MB chunk out of the middle.


But they were claiming that a single monolithic file would break too, which is not the case. The browser does a range request to get the first part, then a range request to get the part you're playing, and it works.


Right. Like tuning into a digital-cable signal "in the middle"†. You just get bytes of the stream starting from an arbitrary offset, without having seen/processed anything before that (and without even being able to request anything before that), and you need to resynchronize from what you've got.

† I mean, a digital-cable video stream is always "in the middle" unless you're just starting a VOD stream, but still.


The browser just reads the TOC-ish stuff from the first chunk. Trust me, it works. I regularly load plain old multi-hundred-megabyte mp4s in my browser, off a web server, and skip around without problems. The default keyframe interval from x264 is fine. You don't have to do any horrible things to the encoding, you just have to start loading a few seconds before the seek point. Which the browser does automatically.

Do browsers even support embedded subtitles?


Cloudflare's free plan can and does end at any moment; I wouldn't rely on it for any serious application.


Curious as well.



