fwiw, Bunny are the people who announced S3 compatibility for their object storage in Q2 2022 [1]
> We can’t wait to have this available as a preview later in Q2 and truly make global storage a breeze, so keep an eye out!
then apologised in September 2023 for missing that deadline [2]
> We initially announced that we were working on S3 support for Bunny Storage all the way back in 2022. Today, as 2023 is slowly coming to an end, many of our customers continue to follow our blog, hoping for good news about the release.
and pushed the roadmap to early 2024 [2]
> But we are working aggressively toward shipping S3 compatibility in early 2024.
That same post also has the beautiful "At bunny.net, we value transparency." quote.
It's early 2026, and they're ignoring my support requests asking what the roadmap looks like for this now.
So, do not trust their product or leadership at all.
Yeah, I'm in the same boat. I was pretty excited to bring stuff over from Cloudflare, but the missing S3 compatibility, and the communication around it, was (and still is) a dealbreaker for me.
Asking because I was looking at both Cloudflare and Bunny literally this week... and I feel like I don't know anything about it. Googling for it with "hackernews" as a keyword to avoid all the blogspam didn't bring up all that much.
(I ended up with Cloudflare and am sure that for my purposes it doesn't matter at all which I choose.)
- The free CDN is basically unusable with my ISP, Telekom Germany, due to a long-running and well-documented peering dispute. This is not necessarily an issue with Cloudflare itself, but it means I have to pay for the Pro plan for every domain if I want a functioning site in my home country. The $25 per domain/project adds up.
- Cloudflare recently had repeated, long outages that took down my projects for hours at a time.
- Their database offering (D1) had some unpredictable latency spikes that I never managed to fully track down.
- As a European, I'm trying to minimize the money I spend on US cloud services and am actively looking for European alternatives.
You don't have to get the Pro plan to solve the Deutsche Telekom issue. You can also use their Argo product for $5/month, but that only makes sense if your egress costs wouldn't exceed the Pro plan's pricing.
> This feature is currently in the closed beta stage. It is not available for use currently, but it's expected to be in the near future. We appreciate your interest in it and will mark your ticket so we can notify you when it's available.
You left out the part where they realized they couldn't ship S3 compatibility without rebuilding their storage service. So they have decided to rebuild their storage service. Not exactly a small project, so I can see how it's taking longer. At least they were transparent about it.
I've been struggling with Bunny the last couple of days.
Their log delivery API is delayed by over 3 days, despite their docs promising only "up to 5 minutes delay": https://docs.bunny.net/cdn/logging
Why isn't it on the status page, you might ask? Oh, that's because a delay is not "critical". But I fear I am losing log lines now; their retention is 3 days.
It's an interesting strategy for them, because it doesn't inspire confidence in me about their other offerings. When they can't reliably operate a log delivery API or be transparent about issues, it's hard to trust them with something as critical as a database.
The "Wait, what does “SQLite-compatible” actually mean?" subheading didn't answer my question to be honest. They're using (forked) libSQL under the hood - ok, cool. But how do I interface with it?
> Maybe I'm not the target market for this, but how hard is it REALLY to manage a RDBMS?
It depends:
- do you want multi region presence
- do you want snapshot backups
- do you want automated replication
- do you want transparent failover
- do you want load balancing of queries
- do you want online schema migrations with millisecond lock time
- do you want easy reverts in time
- do you want minor versions automatically managed
- do you want the auth integrated with a different existing system
- do you want...
There's a lot that hosted services with extra features can give you. You can do everything on the list yourself of course, but it will take time and unless you already have experience, every point can introduce some failure you're not aware of.
I would have no concerns around reliability or uptime running my own database.
I would have concerns around backups (ensuring that your backups are actually working, secure, and reliable seems like potentially time intensive ongoing work).
I also don't think I fully understand what is required in terms of security. Do I now have to keep track of CVEs and work out what actions I need to take in response to each one? You talk about firewall rules. I don't know what is required here either.
I'm sure it's not too hard to hire someone who does know how to do these things, but probably not for anything close to the $50/month or whatever it costs to run a hosted database.
Backups are a PITA. I wanted to go exactly this route, but even though I had VMs and compute, I couldn't let any production data hit them without bulletproof backups.
I set up a cron job to store my backups in object storage, but everything felt very fragile, because if any detail in the chain was misconfigured I'd basically have a broken production database. I'd have to watch the database constantly or set up alerts and notifications.
If there were a ready-to-go OSS Postgres with backups configured that you could deploy, I'd happily pay for that.
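For what it's worth, the "verify the backup actually works" step of that chain can be automated. Here's a minimal sketch using Python's stdlib `sqlite3` (the function name and paths are my own; the idea is to copy via SQLite's online backup API and then integrity-check the copy before uploading it anywhere):

```python
import sqlite3

def backup_and_verify(src_path: str, dest_path: str) -> bool:
    """Copy a live SQLite database and sanity-check the copy."""
    src = sqlite3.connect(src_path)
    dest = sqlite3.connect(dest_path)
    try:
        # SQLite's online backup API copies a consistent snapshot
        # even while the source database is being written to.
        src.backup(dest)
        # Check the copy now, so a corrupt backup fails loudly
        # today instead of silently at restore time.
        return dest.execute("PRAGMA integrity_check").fetchone()[0] == "ok"
    finally:
        src.close()
        dest.close()
```

Run something like this from the same cron job that uploads to object storage, and alert on a `False` return instead of hoping every link in the chain stayed configured.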
It's not about it being hard, it's about delegating. Many companies are a bit less sensitive to pricing and would rather pay monthly for someone else to keep their database up, rather than spending engineering hours on setting up a database, tuning it, updating it, checking its backups, monitoring it and making it scale if needed.
Sure, any regular SME can just install Postgres or MySQL without setting much up beyond `mysql_secure_installation`, a user with a password, and an 'app' database. But you may end up with 10-20 database installs you need to back up, patch, and so on every once in a while. And companies value that.
On the pricing bit, I have to say edge-driven SQLite/libSQL solutions (this covers a lot of them) can be a mixed bag.
Cloudflare, Fly.io's Litestream offering, and Turso are pretty reasonably priced, given the global coverage.
AWS with Aurora is more expensive for sure, and isn't edge-located if I recall correctly, so you don't get near-instant propagation of changes at the edge.
The bigger thing for me is how much control you have. So far with these edge database providers you don’t have a ton of say in how things are structured. To use them optimally, I have found it works best if you are doing database-per-tenant (or customer) scenarios or using it as a read / write cache that gets exfiltrated asynchronously.
And that, I believe, is where the real cost factor comes into play: flexibility.
Or at least they should. I’ve worked many places where thousands of dollars in engineering hours were wasted on something after they refused to use a service for a fraction of the cost. Some companies understand this but others don’t.
The vast majority of products with paying customers need better availability than “database went down on Friday and I was AFK until Monday, sorry for the 3 day downtime everyone”
While in public preview, Bunny Database is free.
When idle, Bunny Database only incurs storage costs. One primary region is charged continuously, while read replicas only add storage costs when serving traffic (metered by the hour).
Reads - $0.30 per billion rows
Writes - $0.30 per million rows
Storage - $0.10 per GB per active region (monthly)
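Plugging those rates into a hypothetical workload makes the usage-based model concrete (the workload numbers below are made up purely for illustration):

```python
READ_RATE = 0.30 / 1_000_000_000   # $ per row read
WRITE_RATE = 0.30 / 1_000_000      # $ per row written
STORAGE_RATE = 0.10                # $ per GB per active region, monthly

def monthly_cost(rows_read, rows_written, gb_stored, regions):
    """Estimate a monthly bill from the published per-unit rates."""
    return (rows_read * READ_RATE
            + rows_written * WRITE_RATE
            + gb_stored * STORAGE_RATE * regions)

# e.g. 2B rows read, 50M rows written, 5 GB replicated to 3 regions:
# 2 * $0.30 + 50 * $0.30 + 5 * $0.10 * 3 = $17.10/month
cost = monthly_cost(2_000_000_000, 50_000_000, 5, 3)
```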
The best thing about their pricing is that you can prepay. So if you have a runaway cost, it can stop before you run up a 5 or 6 figure bill, unlike Azure/AWS/GCP/CF.
Adding my voice to the chorus here: they've established a pattern of introducing new features and never really getting them past the 80% point. No qualms with the CDN; it's a sweet spot among providers. But their other offerings have been frustrating me for years now.
It does feel like they're spreading their resources pretty thin though, the S3-compatible interface for their file storage has been "coming soon" since 2022.
S3 is currently in closed preview with some users. It's quite easy to get added for those keen to try it. The more people using it and providing feedback, the quicker it'll become a public preview.
Huh, how? Did you have to modify your site a lot to make the switch?
I tried to test it out as a CDN replacement for Cloudflare, but the workflow was quite different. Instead of just using DNS to put it in front of another website and proxy the requests (the "orange cloud" button), I had to upload all the assets to Bunny and then rewrite the URLs in my app. It was kind of a pain.
When I tried it last year, their edge compute infra was just not there yet. It could not do any meaningful server-side rendering because of code size, compute and JS standard constraints.
Depending on your precise requirements, I think it might have changed.
I've been trying out Bunny recently and it looks like a very viable replacement for most things I currently do with Cloudflare. This new database fills one of the major gaps.
Their edge scripting is based on Deno, and I think is pretty comparable to e.g. Vercel. They also have "magic containers", comparable to AWS ECS but (I think) much more convenient. It sounds from the docs like they run containers close to the edge, but I don't know if it's comparable to e.g. Lambda@Edge.
I haven’t tried to do SSR in bunny but they also have bunny magic containers now where you run an entire container instead of just edge scripts (but still at the edge).
I have been using them for over a year. They have the same flow as Cloudflare: point your domain to their CDN, set the CDN pull zone to target your server. I haven't had to do anything.
They even support websockets.
What they can't do is the Tunnel stuff, or at least fake it. I have IPv6 servers, and I can't have the IPv4 Bunny traffic go to IPv6-only origins.
Will give this a spin. They're one of the few cloud-y providers that has both prepayment and a rate limiter that doesn't charge for requests blocked by the rate limit (it still blows my mind that providers charge for blocks).
Same, it's nice to use a no-BS CDN for personal projects (e.g. https://atlasof.space/). Their pricing is good and I actually appreciate that they have no free tier so that there's no "oh shit" moment when you suddenly exceed it and owe real $$$ (looking at you, Netlify). I probably won't use their database feature but I'll for sure keep using their CDN if they can keep things as straightforward as they currently are.
Reminds me of how we got scarred by parse.com -- it was also a promising database, and our customer insisted on it, but after lengthy development, just before our project's release, it turned out that they were shutting down and no one was working on it anymore. Literally, their support said: "uhm, sorry folks, we're all hired by Facebook, no one is working on parse.com anymore".
parse.com was my last straw for building on "as a service" startups because of this. DaaS is not even particularly good for hobby projects anymore, given how easy it is to work with SQLite.
This documentation page[1] seems pretty clear. One primary at a time, any number of read replicas that automatically proxy writes to the primary, when compute scales to zero the data is in object storage and a new primary can spin up elsewhere.
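To make that description concrete, here's a toy model (my own sketch, not Bunny's actual implementation) of the replica behaviour the docs describe: replicas serve reads from local state and transparently proxy writes to the single primary:

```python
class Primary:
    """The one writable node; holds the authoritative data."""
    def __init__(self):
        self.rows = {}

    def write(self, key, value):
        self.rows[key] = value

    def snapshot(self):
        # A copy of current state, as a replica would replicate it.
        return dict(self.rows)

class ReadReplica:
    """Serves reads locally; forwards any write to the primary."""
    def __init__(self, primary):
        self.primary = primary
        self.local = {}

    def write(self, key, value):
        # Transparent write proxying: the client talks to the
        # replica, the replica talks to the primary.
        self.primary.write(key, value)

    def sync(self):
        # Pull replicated state from the primary.
        self.local = self.primary.snapshot()

    def read(self, key):
        return self.local.get(key)
```

The scale-to-zero part (data parked in object storage, a new primary spun up elsewhere) isn't modelled here; this only shows the read/write split.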
Small companies often have much better technical support than large companies where you just get lost in the system. One of the reasons I moved away from R2 was that it was impossible to contact anyone about the serious issues I had with the product. I’m using Bunny for CDN and have found them to be very responsive.
Per million rows written: Bunny $0.30, Cloudflare $1.00 (first 50M/month free)
Per GB stored: Bunny $0.10/region, Cloudflare $0.75 (5GB free)
Bunny also has a lot better region selection, 41 available vs. Cloudflare's 6 (see https://developers.cloudflare.com/d1/configuration/data-loca...). Even though Bunny charges storage per region used where Cloudflare doesn't, Bunny still comes out cheaper with 7 regions selected. Bunny lets you choose how many and which regions to replicate across; Cloudflare's region replication is an on/off toggle that is in beta and requires you to use "the new Sessions API" (I don't know what this entails).
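A quick sanity check of that 7-region claim using the quoted storage rates (ignoring Cloudflare's 5 GB free tier):

```python
BUNNY_PER_GB_PER_REGION = 0.10   # $ per GB per active region
CLOUDFLARE_PER_GB = 0.75         # $ per GB, flat

def bunny_storage_cost(gb, regions):
    """Bunny charges storage once per replicated region."""
    return gb * BUNNY_PER_GB_PER_REGION * regions

# With 7 regions, Bunny works out to $0.70/GB,
# still under Cloudflare's flat $0.75/GB.
seven_region_cost = bunny_storage_cost(1, 7)
```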
The main reason I haven't tried out D1 is that it locks you into using Workers to access the database. Bunny says they have an HTTP API.
I plan to stick with VPSes for compute and storage, but I do like seeing someone (other than Amazon) challenge Cloudflare on their huge array of fun toys for devs to play with.
Not a technical reason, but Cloudflare's recent business practices, where they hold you hostage if you don't upgrade to an enterprise plan, are a pretty good reason to avoid them imo.
Some ISPs have bad peering with Cloudflare (e.g. Deutsche Telekom). Not Cloudflare's fault, but it makes it a bad choice if your customers are in Germany.
> Not every project needs Postgres, and that’s okay. Sometimes you just want a simple, reliable database that you can spin up quickly and build on, without worrying it’ll hit your wallet like an EC2.
Isn't the lower operational burden of SQLite the main selling point over Postgres (not one I subscribe to, but that's neither here nor there)? If it's managed, why do I care whether it's SQLite or Postgres? If anything, I would expect Postgres to be the friendlier option, since you won't have to worry about eventually discovering that you actually need some feature it lacks, even if you don't need it at the start of your project. Maybe there are projects that implement SQLite on top of Postgres, so you can gradually migrate away from SQLite if you eventually need Postgres features?
Marek here from bunny.net. We’re not saying SQLite is universally better than Postgres. The trade-off we’re optimizing for is cost model and operational simplicity.
Even as a managed service, Postgres DBaaS still tends to push users into capacity planning, instance tiers, and paying for idle headroom. Using a SQLite-compatible engine lets us offer a truly usage-based model with affordable read replication and minimal idle costs.
Some European companies migrate their dependencies from US clouds to European ones. Turso is registered in Delaware. Bunny HQ is in Slovenia. Different data related policies apply.
[1] https://bunny.net/blog/introducing-edge-storage-sftp-support...
[2] https://bunny.net/blog/whats-happening-with-s3-compatibility...