They added a feature that impressively fails to interoperate with the rest of the world.
> Added well-known type protos (any.proto, empty.proto, timestamp.proto, duration.proto, etc.). Users can import and use these protos just like regular proto files. Additional runtime support are available for each language.
From timestamp.proto:
// A Timestamp represents a point in time independent of any time zone
// or calendar, represented as seconds and fractions of seconds at
// nanosecond resolution in UTC Epoch time. It is encoded using the
// Proleptic Gregorian Calendar which extends the Gregorian calendar
// backwards to year one. It is encoded assuming all minutes are 60
// seconds long, i.e. leap seconds are "smeared" so that no leap second
// table is needed for interpretation.
Nice, sort of -- all UTC times are representable. But you can't display the time in normal human-readable form without a leap-second table, and even their sample code is wrong in almost all cases:
That's only right if you run your computer in Google time. And, damn it, Google time leaked out into public NTP the last time there was a leap second, breaking all kinds of things.
Sticking one's head in the sand and pretending there are no leap seconds is one thing, but designing a protocol that breaks interoperability with people who don't bury their heads in the sand is another thing entirely.
I think that the approach everything else uses is the "sticking your head in the sand approach". You basically pretend that there is no problem and that time is perfectly accurate, up until you have a minute with 59 or 61 seconds.
Just because suddenly trying to handle "Oh shit, everything is off by an entire second!" is the approach everything else uses doesn't mean it is the right approach.
No, I agree they did a bunch of good engineering for internal use.
But they didn't keep it internal properly -- the real world has leap seconds for better or for worse, and this library really does stick its head in the sand and pretend they don't exist. Google specifically says that this library is designed to be "the foundation of Google's new API platform". Yet they give a data type (as a headline feature) and a sample usage that is simply incorrect if you don't set your system to work using Google's "leap smear". It also seems quite likely that it'll result in blatantly wrong human-readable strings. I'll even quote a string from timestamp.proto [1]:
9999-12-31T23:59:59Z
That looks like an RFC 3339 string, and it even has the 'Z' suffix, which means it's UTC, which has an agreed-upon international definition. But this is not a valid UTC time. It's a time in a different time zone that Google made up.
Google easily could have done better: publish a spec for a different kind of time like:
9999-12-31T23:59:59s
where the little 's' means 'smeared'. Supply a serializer and deserializer for that. Now there's no ambiguity.
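As a sketch, the proposed serializer/deserializer pair might look like this in Python (the 's' suffix and both function names are the hypothetical proposal above, not anything protobuf actually defines):

```python
# A sketch of the hypothetical "smeared time" format proposed above:
# an RFC 3339-style string whose trailing 's' (instead of 'Z') marks
# the value as leap-smeared rather than true UTC. The suffix and both
# function names are made up for illustration.
from datetime import datetime, timezone

def serialize_smeared(dt: datetime) -> str:
    # Render like RFC 3339, but with the hypothetical 's' suffix.
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S") + "s"

def deserialize_smeared(s: str) -> datetime:
    if not s.endswith("s"):
        raise ValueError("not a smeared timestamp")
    return datetime.strptime(s[:-1], "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)
```

With an explicit suffix, a reader can never mistake a smeared value for true UTC.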
>You basically pretend that there is no problem and that time is perfectly accurate, up until you have a minute with 59 or 61 seconds.
Time is perfectly accurate, including all the minutes with 59 or 61 seconds. UTC is perfectly defined as atomic time (TAI) with an offset to keep it within 0.9 seconds of UT1 (time as measured by the rotation of the earth). Each increment or decrement of this offset is a leap second. But since 23:59:60 is a valid time (and distinct from 00:00:00 on days with leap seconds), there is no ambiguity here.
The problem here is how most computers handle this: introducing ambiguity by setting the clock backwards or forwards one second, instead of accounting for the fact that not all minutes have 60 seconds. Google did a pragmatic fix for their use case by squeezing leap seconds into the surrounding seconds, stretching them. It works for them, but now their "seconds" are not actual seconds anymore.
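For illustration, a linear smear over a 24-hour window could look like the sketch below. This is a toy under stated assumptions, not Google's exact algorithm (their smear shape and window have varied):

```python
# A sketch of a linear "leap smear": over a 24-hour window ending at
# the leap second, every smeared second is stretched slightly, so the
# clock absorbs the extra second without ever showing 23:59:60.
# Window placement and smear shape are assumptions for illustration.

SMEAR_WINDOW = 86400  # seconds over which the extra second is spread

def smear_offset(seconds_into_window: float) -> float:
    """How far (in seconds) the smeared clock lags true UTC."""
    clamped = min(max(seconds_into_window, 0.0), SMEAR_WINDOW)
    return clamped / SMEAR_WINDOW

# At the start of the window the clocks agree; by the end the smeared
# clock has fallen a full second behind, matching post-leap UTC.
```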
It's fine as a timestamp implementation, and great for many uses. But I think there is a big problem with the documentation. They start off by saying it's "at nanosecond resolution in UTC Epoch time", and then they go on to explain how it uses a completely different encoding that is neither compatible with UTC nor with TAI (atomic time, which ignores leap seconds). And then they jump ahead to sample code which again pretends that the timestamp is UTC.
No matter whether you like "google time" or not, this is horrible documentation. They are glossing over an issue which should be marked with big red letters.
The question of how to reconcile leap-second-smearing systems with other systems is an interesting and important one. I'm not sure that timestamp.proto changes this issue: prior to timestamp.proto systems would still communicate using UNIX time (smeared or non-smeared) using plain integer or double seconds. timestamp.proto just provides a structure for storing UNIX time with greater range and precision than a single integer or floating point number can provide.
What I'm trying to say is that I think this is a smearing systems vs. non-smearing systems issue, and not so much a timestamp.proto issue. timestamp.proto mentions smearing but really it's just a vehicle for storing the seconds/nanos from the system clock, with whatever semantics that system clock uses. Because in practice systems don't give you access to both the smeared and non-smeared values; you get whatever the system gives you. The remarks about being leap-second-ignorant apply whether the leap second is being smeared or repeated.
Google implemented leap-second smearing in 2011, before the big push towards cloud. So the need to communicate sub-second timestamps between internal Google systems and external systems was probably not so much on people's minds. But these days we're releasing a bunch of APIs, and sub-second timestamps might become a more important issue for some of them.
This is only an issue if you use the Timestamp to represent a human-readable time. There are more uses for timestamping than for display to a human operator. For example, one might use a timestamp in a software system to detect the passage of time, as in the use of a monotonic clock. In a real-time system you would ignore the presence of leap seconds because you will never examine the timing of your system relative to a Gregorian calendar. Rather, you just want to make sure that the station-keeping engine on your satellite burns for exactly 250 milliseconds, and leap seconds are of no use in that application.
I think you have it exactly backwards, if I understand things correctly.
It _seems_ like their "UTC Epoch time" is the same thing as POSIX time, but the Google engineer's terminology is all fubar. The reliance on the Proleptic Gregorian Calendar is further proof, as that's a reference to a specific algorithm for calculating calendar dates.
POSIX time says that there are precisely 86400 "seconds" per day, which I think implies the same thing as saying there are precisely 60 "seconds" per minute. The logical consequence is, of course, that in neither case is "second" referring to the SI second.
Once you get over the fact that we're discussing different units of time, then you can see that POSIX time is _perfect_ for recording and manipulating civil calendar time. For the purposes of calendar manipulation, you rarely if ever need to know elapsed time in SI-unit seconds. All you care about is easily calculating past and _future_ calendar information. Your power company and credit card companies don't bill you by SI seconds, they bill you by the hour, day, week, or month.
Conversely, in those situations where you want accurate and precise SI-second measurements, you rarely if ever want to convert or display that data in terms of calendar time. When SpaceX sends a rocket into space, the view screen shows elapsed seconds since launch, not elapsed seconds since lunch. That's a big difference.
Interestingly, in neither case do leap seconds matter! They're irrelevant. Leap seconds play no part in either TAI or POSIX time.
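That point about POSIX time can be illustrated with the leap second at the end of 2015-06-30, using Python's leap-second-ignorant datetime as a stand-in for POSIX arithmetic:

```python
# An illustration of calendar math in POSIX time, using the day that
# ended with the 2015-06-30 leap second: POSIX time pretends that
# second never happened, so adding exactly 86400 always lands on the
# same wall-clock time the next day.
from datetime import datetime, timezone

t = datetime(2015, 6, 30, 12, 0, 0, tzinfo=timezone.utc).timestamp()
next_day = datetime.fromtimestamp(t + 86400, tz=timezone.utc)
# next_day is 2015-07-01 12:00:00 UTC, despite the intervening leap second
```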
There are some cases where you want both pieces of information, but I think it's usually a mistake to conflate them and try to shoehorn them into the same units. That misguided practice is behind all the anxiety about leap seconds in UTC time.
It's also worth noting that as clocks become increasingly precise and accurate that the whole leap second thing will fade away. UTC time is based on the fiction that there's an abstract, universal clock in the world that is measurable in SI seconds. There isn't. At some point the needs of routine industrial measurements will enter the realm where relativity governs, at which point the fiction will be laid bare. Calendar time, of course, doesn't rely on that fiction.
The move to uncouple civil time from solar time is totally misguided, IMO, and only exacerbates the improper way that software engineers conflate the purpose and function of various time measurements.
It's a serialization format containing seconds and microseconds. You can put whatever you want in there, including true (non-Google) UTC time, right? This seems more like a documentation problem than an actual problem with Protobuf.
It saddens me that this is the top comment. It's complete and total FUD unrelated in any way to what Proto is, and to boot, it's an optional type, provided if you want it, but otherwise not forced to be used in any way! Scroll down the page for much more worthwhile discussions of Proto.
I'm glad they're willing to break compatibility to push their approach, because I think it's a better one. UTC with leap seconds is the worst of all possible worlds - not suitable for human time, not suitable for system time either - as perennial leap second bugs in such high-profile projects as the linux kernel demonstrate. Everyone seems to have agreed for years that basing system time on something without leap seconds would be better - whether that be leap smears or TAI - but no-one bothers to take action.
It's not a full protocol. It's a data type for a serialization library. You can write your own data types and they serialize just as well as the built-in types.
> that breaks interoperability
Wait, what was "broken" here? What was working before that isn't with this new release? What does this inclusion of a utility data type in a serialization library break that previously was intact?
- removing optional values is actually quite nice. In practice, I end up checking for "missing or empty string" anyway.
- the "well-known types" boxed primitive types essentially add optional values back in. And depending on your language bindings, may look the same.
- extensions are still allowed in proto3 syntax files, but only for options - since the descriptor is still proto2. It seems odd to build a proto3 that couldn't represent descriptors.
- I still don't understand the removal of unknown fields. Reserialization of unknown fields was always the first defining characteristic of protobufs I described to people. I actually read many of the design/discussion docs internally when I worked at Google, and I still couldn't figure this one out. Although it's certainly simpler…
- Protobufs are the "lifeblood" (Rob Pike's words) of Google: the protobuf team is working to get rid of significant Lovecraftian internal cruft, after which their ability to incorporate open source contributions should improve dramatically.
Slight correction: optional values are not removed. Quite the opposite; the "optional" keyword is removed because now all fields are optional. It is actually required values which were removed.
> I could trust that if parsing succeeded, then I had a guarantee of a populated data structure
Using required fields has actually bitten Google more than once, and they were increasingly being considered harmful.
A canonical example is that you add a required field, and then update binaryA in production (which receives messages from binaryB), which immediately crashes or errors out because the new field is missing.
So practically speaking, you can never add required fields to any message where you can't guarantee binary version syncing amongst all instances of the message-dependent services. At scale, this is essentially operationally impossible.
And if you're not running an RPC-based service architecture, then why are you using protos anyway?
> A canonical example is that you add a required field ...
Yeah. Don't do that without versioning your protocol. It's even less difficult to handle than maintaining API/ABI compatibility in a library.
> So practically speaking, you can never add required fields to any message where you can't guarantee binary version syncing amongst all instances of the message-dependent services.
Sure you can. If you version things at the protocol or per-request level, you can negotiate protocol conformance just fine.
Having a message type defined as "Message_V1" OR "Message_V2" is still simpler than having "any or none of the fields from any iteration of the message definition, where consistency is solely defined in terms of the field/message validation code you write in every protocol consumer".
> And if you're not running an RPC-based service architecture, then why are you using protos anyway?
It's a very serviceable compact serialization mechanism for at-rest data.
> Yeah. Don't do that without versioning your protocol. It's even less difficult to handle than maintaining API/ABI compatibility in a library.
Actually, the whole point of that was so you don't have to version your protocol. Protocol versioning actually tends to make code maintenance a pain in the posterior, and working through old data really annoying. Instead, you do optional fields.
If you don't want that, go ahead and just write raw bytes and don't bother with the serialization layer.
> Having a message type defined as "Message_V1" OR "Message_V2" is still simpler than having "any or none of the fields from any iteration of the message definition, where consistency is solely defined in terms of the field/message validation code you write in every protocol consumer".
But you don't have to do either. It seems like you aren't familiar with the use of protocol buffers. You just define optional fields with a reasonable default, and magically all the old protobufs get that default value.
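That backward compatibility is easy to see at the wire level. Here's a hand-rolled sketch (single-byte tags and varint values only; the field numbers and defaults are made up, and real protobuf generates all of this for you):

```python
# A minimal sketch of why adding an optional field is backward
# compatible on the wire: old messages simply lack the new tag, and
# the reader fills in the default. Assumes field numbers < 16 and
# varint (wire type 0) values only, for brevity.

def encode_varint(n: int) -> bytes:
    # Little-endian 7-bit groups; high bit set on all but the last byte.
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

def decode_message(data: bytes, defaults: dict) -> dict:
    fields = dict(defaults)  # unset fields keep their defaults
    i = 0
    while i < len(data):
        tag = data[i]; i += 1
        field_num = tag >> 3  # tag byte = (field_number << 3) | wire_type
        value = shift = 0
        while True:
            b = data[i]; i += 1
            value |= (b & 0x7F) << shift
            shift += 7
            if not b & 0x80:
                break
        fields[field_num] = value
    return fields

# An "old" message that only ever set field 1:
old_msg = bytes([1 << 3]) + encode_varint(300)
# A "new" reader that also knows optional field 2, default 5:
parsed = decode_message(old_msg, defaults={1: 0, 2: 5})
```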
> It's a very serviceable compact serialization mechanism for at-rest data.
That's fair, but then you run into the same issue -- adding a required field requires updating your entire store.
Depending on your store, that can range from onerous to outright impossible.
> Don't do that without versioning your protocol
I think it depends on your needs, but I think for most users, explicit versioning of messages is overkill and is just a more heavy way of encoding the same logic (e.g. I saw an older message, I will implicitly upgrade it by filling in these new fields, vs. just looking for the optional field that I just added)
Sure, if you can guarantee that your app will always be in that environment.
Otherwise, you run the risk of having to redo all your protos (+ downtime) if/when your app needs to scale up. I'm not sure whether that's worth avoiding some proto validation logic in the client code.
> You should be very careful about marking fields as `required`. If at some point you wish to stop writing or sending a required field, it will be problematic to change the field to an optional field – old readers will consider messages without this field to be incomplete and may reject or drop them unintentionally. You should consider writing application-specific custom validation routines for your buffers instead. Some engineers at Google have come to the conclusion that using `required` does more harm than good; they prefer to use only `optional` and `repeated`. However, this view is not universal.
In practice, to have decent compatibility as revisions changed, you really had to minimize the use of "required" fields anyway. While I agree it was sometimes nice to be able to avoid having to worry about it, in practice protobuf parsing imposes a very minimal set of constraints on data types. A successful protobuf parse was not nearly enough to ensure you had data integrity. I've run in to more than a few cases of developers using the wrong protobuf (v2) definition and not realizing their successful parse was still wrong.
I agree. In particular in languages without null you will have a lot of Option types in the mapping. You no longer can generate useful type definitions from the proto spec.
> Now, I have to check each field individually, in manually written code, to verify that no required fields are missing.
You always had to check the individual fields for the zero value. A required field in a proto2 message can be set, yet hold the default value, and still pass initialization.
> You always had to check the individual fields for the zero value.
No, you didn't. A required field has a value, period. If it defaults to a particular value, then that's the value it has.
If you had a non-required field, then you marked it 'optional', and checked for the field's existence (or mapped optional fields to a Maybe/Option monad representation, forcing the issue).
I think your wording is a bit difficult to parse. Does this convey what you are trying to say:
In proto2, default values are for optional fields: an optional field reads back as its default value, but whether the field was actually set is a separate question.
How does this compare or in general why would you pick this vs newer formats like Cap'n'proto or FlatBuffers?
From FlatBuffers overview I see this comparison:
---
Protocol Buffers is indeed relatively similar to FlatBuffers, with the primary difference being that FlatBuffers does not need a parsing/ unpacking step to a secondary representation before you can access data, often coupled with per-object memory allocation. The code is an order of magnitude bigger, too. Protocol Buffers has neither optional text import/export nor schema language features like unions.
I don't know, but I tried using protocol buffer once for mapbox vector files, the resulting C++ header was huge. It had templates and all sort of things, something like more than 1000 lines.
Cap'n'proto is more or less abandoned, I believe.
But it and the FlatBuffers approach give very fast serialization and deserialization (essentially taking zero time), but you pay a cost when you later access data, because the values you need are extracted on demand from the raw bytes.
I'm not sure it would often make much sense overall.
I would be very hesitant to call Cap'n Proto "abandoned". The Cap'n Proto developer is actively building a platform on top of it, and implements features in it as necessary, and as far as I've seen, actively works with pull requests for other features as well.
It's a pity that the "deterministic serialization" gives so few guarantees; I have worked on at least one project that really needed this.
(Basically, we wanted to parse a signed blob, do some work, and pass the original data on without breaking the signature; unfortunately, this requires keeping the serialized form around, since the serialized form cannot be re-generated from its parsed format.)
The main reason the deterministic serialization isn't canonical is unknown fields. Since the string and message types share the same wire type, when parsing an unknown string/message field the parser has no idea whether it should recursively canonicalize it.
The cross-language inconsistency is mainly due to string field comparison performance: Java/ObjC compare strings in their native UTF-16 encoding, which orders differently than UTF-8 because of surrogate pairs.
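That ordering discrepancy is easy to demonstrate: code points above U+FFFF become surrogate pairs (starting at 0xD800) in UTF-16, which sort before the U+E000..U+FFFF range:

```python
# A demonstration of the UTF-16 vs UTF-8 ordering discrepancy: in
# UTF-8, byte order matches code-point order, so U+E000 < U+10000.
# In UTF-16, U+10000 is encoded as the surrogate pair D800 DC00,
# whose leading unit sorts before U+E000 -- so the order flips.
a, b = "\ue000", "\U00010000"

utf8_order = a.encode("utf-8") < b.encode("utf-8")            # E000 sorts first
utf16_order = a.encode("utf-16-be") < b.encode("utf-16-be")   # D800 sorts first
```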
Feel free to start an issue on the github site asking for canonical serialization with your use case. We may change the deterministic serialization with stronger guarantee (e.g. cross language consistency) or add another API for canonical serialization.
This was years ago; I'd feel bad asking you to do a lot of work to support one niche use case in a research project that never quite made it to market. And protobufs ended up saving us quite a bit of development work, even if keeping the blob around is Wrong in a moral sense.
(You can find the niche use case in a response to your sibling comment, BTW.)
Think of a data flow A->B->C, with A e.g. being the server handling incoming messages, B being a spam/virus filter, and C holding the user's mailbox. Spam/virus filters are useful, but are also rather vulnerable - so C is willing to trust B's spam/non-spam judgement, but wants to ensure that B can't alter or make up messages.
If protobufs had one canonical encoding, B could unpack the message and re-pack it when done; with the current protobuf implementation, B needs to keep the original blob around. In either case, C needs to check the signature on whatever blob it receives.
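A sketch of the pattern this forces (the function names and the shared HMAC key are illustrative, not from any real system):

```python
# Without a canonical encoding, B must forward A's original bytes:
# re-serializing the parsed message might not reproduce a
# byte-identical blob, which would break the signature C checks.
import hashlib
import hmac

KEY = b"shared-secret"  # illustrative; a real system would manage keys properly

def sign(blob: bytes) -> bytes:
    return hmac.new(KEY, blob, hashlib.sha256).digest()

def filter_at_b(blob, sig):
    # B may parse `blob` to inspect it, but must forward the original
    # bytes untouched so C's signature check still passes.
    return blob, sig

def verify_at_c(blob: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(blob), sig)
```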
So wouldn't you stick with the original message from A, and just have B sign that? You wouldn't want to have B repack it, because then B has the potential to muck with things.
Imagine working on a team that wants to move quickly but whose output is both a product and an API that's consumed by multiple other teams. The product you are building uses said API, but so do other teams. Your code needs to be stable enough to support these other teams needs (an API which doesn't change under them) but you also want to be able to make changes to your own application quickly, thus needing to change the API regularly.
A reasonable move is to version said API and have an ops team that ensures that all in-use versions of the API stay running. Some consumers will be on the bleeding edge, your team's application for example while others will lag behind.
Using proto* in this case is a reasonable move because you gain multiple benefits, performance being perhaps the least important in this case. Having a defined schema for your API provides some level of natural documentation for the API. Code generation allows your team to publish trusted client libraries for multiple languages.
I'll specifically call out client libraries since I've seen it make a dramatic difference in organizational efficiency, mostly to do with team to team trust levels. Without a client library the testing situation becomes a significant burden, read up on contract testing. When the team that's publishing an API also creates the client that most directly calls that API, the client library is the testing surface instead of every consumer of the API needing to test the API itself for regressions.
We use them internally at Square for our RPC mechanism ("Sake", similar to "Stubby", Google's internal RPC mechanism), for our Kafka-based logging/metrics/queue infrastructure, and for defining external JSON APIs. We're in the process of switching from Sake to GRPC, which also uses Protobufs as its payload format (although you can sub in different transports).
> Protocol buffers are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data – think XML, but smaller, faster, and simpler. You define how you want your data to be structured once, then you can use special generated source code to easily write and read your structured data to and from a variety of data streams and using a variety of languages.
Yes, I read this. It tells me what Protocol Buffers are: faster, smaller, XML-like data structures for serialisation. What are the most common use cases though? And do people only use them for performance reasons?
The most common use cases line up with those of JSON: communication between programs that don't share an address space. The main advantage over JSON (in my opinion) is the definition of an explicit schema. The second (and also important) advantage is in the efficient size of the serialized data, which limits memory, disk, and bandwidth usage. Another (less important to me) advantage is in serialization and deserialization efficiency. A disadvantage is that it requires deserialization for human inspection - that is, it isn't plain text like JSON or XML.
It is similar to Apache Thrift, if you're looking for a non-Google project with similar ideas.
Serialization and deserialization efficiency is especially important for mobile apps, where the CPU used to parse/serialize JSON (or gzipped JSON) can become very prominent.
Apache Thrift, IIRC, is actually a reimplementation of protos, in the same way that Facebook's Buck is of Google's Bazel.
I have sometimes looked at "raw" binary protos to inspect the string fields, which happen(ed?) to be byte-aligned and so readable in a text editor. Not sure off the top of my head if that's always the case.
Performance is a nice benefit, but the standardization of message passing is by far the biggest benefit in my opinion. Within a given language, I know that any API I call will have certain unvarying semantics, I can see a highly readable yet formal spec of the data being exchanged, and the code to manipulate these messages will always be familiar and idiomatic.
Duplicating these benefits with XML or JSON would require defining your own grammar and parser, but wouldn't have the performance benefits. Recreating the performance gains would require a new serialization scheme, at which point you'd have broken from JSON and XML standard tools and recreated protobufs in everything but the proto definition language; at that point, why not create a DSL rather than bolting this functionality into an existing one?
In addition to smaller/faster than XML, protobufs make it extremely easy to declare the schema of data, validate data and version your schema. Then the generated wrappers and static type checking in various languages add additional guarantees that you're using the data correctly.
Plain XML still requires a lot to ensure compatibility when it's used across multiple places, protobufs attempt to minimize many sources of the incompatibilities.
Add in a bunch of tools such as protobuf->JSON, protobuf plaintext serialization, etc and it becomes more difficult to argue for using something such as XML or vanilla JSON.
Flatbuffers are still a nice solution for more performance-critical applications.
Yes, I think you are just using a different sense of "based on" than I am. gRPC is based on Stubby in the sense that it is influenced by the design of Stubby and uses the knowledge learned from creating Stubby.
I used protobuf as the output format for a web crawler. Workers read urls and sequentially write entire HTTP responses to disk. [0] Sure, you could serialize the responses to JSON, but the overhead of representing things like binary image data as escaped unicode strings was prohibitive in my case.
"Why not BSON?" Well, schemas can be nice when performance matters. Instead of solving a parsing problem at runtime, a C/C++ reader can contain a compiler-optimized deserializer for a given protobuf schema. It's almost like directly reading and writing an array of C structs, except protobuf is architecture-independent, and you can add new fields without breaking old readers.
There are plenty of reasons to not use protobuf. I particularly disliked the code generation step for C/C++. That makes even less sense in a language like Python, and yet that's exactly what the official python protobuf implementation from Google does (did?). I wrote a python protobuf library on top of a C protobuf library that avoids codegen: https://github.com/acg/lwpb
For me there are three main advantages: schema, performance and code generation.
Having a strict schema makes it a lot easier to maintain applications in a distributed system. Parsing protobuf is much faster than something like JSON. The multitude of code generators for protobuf make it really simple and easy to use multiple languages on the same data structures.
I used it in a trading system because it's a compact scheme for sending data across networks. It's also quite fast, and there's support for various languages. So you can have a feed handler blasting out prices using a c++ implementation, with a GUI drawing a chart written in c#.
Serializing data for RPC, network protocols or storage, description and serialization of configuration, serializable state, serializing complex types for cryptographic signing, etc.
Why is it useful? The schema both documents the data structure and allows mappings to natural APIs in many different languages. Parsers and encoders are generated for you, and are fast.
At Badoo we use them to have a unified API for all of our platforms (Web, Mobile Web, Android, iOS, Windows Phone etc). This would not have been possible without something like ProtoBuf.
Shocking! Google's started supporting more languages than just the ones they care about. I really hope this signals the death of their disdain culture.
Being a worthwhile Cloud provider means hiring experts in all sorts of languages and supporting their efforts.
Imagine a world where Google didn't just "support node" (YEARS late), but actually turned their v8 expertise into a Cloud product.
But that'd involve convincing Java-devs-turned-VPs to care about JavaScript, <2004>and EVERYONE knows that JavaScript is a terrible language.</2004>
Sadly the JSON format they chose isn't actually suitable for high-performance web apps. Web developers who use protobufs will continue to get by with various nonstandard JSON encodings.
The fields are indexed by field names (converted to lower camel case) instead of tag numbers. It's great for readability, but it's a lot more verbose, particularly for repeated fields.
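For comparison, the camel-casing described above can be sketched as follows (a simplified illustration; the real proto3 mapping also handles digits and the json_name override):

```python
# A sketch of proto3's JSON name mapping: snake_case proto field
# names become lowerCamelCase keys in the JSON output. Simplified
# for illustration; not the library's actual implementation.

def to_json_name(proto_name: str) -> str:
    head, *rest = proto_name.split("_")
    return head + "".join(part.capitalize() for part in rest)
```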
> Added a new field option "json_name". By default proto field names are converted to "lowerCamelCase" in proto3 JSON format. This option can be used to override this behavior and specify a different JSON name for the field.
You're right. The only people that would use it are people that a) care enough about optimization to switch to shorter tag names and b) don't care enough about optimization to switch to the binary format. Probably not many.
I've never looked at proto3, but proto2 has at least the following issues:
* No clue about namespacing. If you pick the wrong name for something, you can have name clashes within a protobuf, across uninterpreted option classes, with protobuf source code, with your own source code; and it's different if you're in Python or C. Nowhere are naming restrictions defined.
* The API is maddening and inconsistent, especially in Python. (It's totally different between Python and C.) Some things look like lists but really aren't (e.g. you can't assign a list to a repeated field in Python). Even basic reflection (e.g. to get at uninterpreted options) is a Lovecraftian nightmare, and the docs are wholly unhelpful.
* Good luck serializing a list. There's not really such a thing, even though the API pretends there is; there are only repeated fields. So you need a separate flag to distinguish "empty list" from "not present list".
* Abstruse implementation. There are so many layers of indirection in the generated source and the core library that I wouldn't know where to start debugging.
Not sure if they fixed any of these issues with proto3.
Reflection in the C++ version is as bad or worse given that you can't mess around with it in a REPL to figure out how it really works. And the C++ version has most of the namespacing issues (e.g. any field starting with "set_" has potential to clash with another field).
Both implementations are equally bad, even though they seem to have been written by two separate teams that didn't communicate with each other.
I think it's more that GRPC (Google's RPC-over-HTTP2 protocol) directly supports Protobuf, and not Flatbuffers. All of Google's Cloud APIs use Protobuf (for example the [Speech API](https://cloud.google.com/speech/reference/rpc/) ).
I have to say, GRPC is pretty great. It's statically typed, supports loads of languages, the interfaces are simple to define (basically Protobuf), and it supports streaming requests! Most RPC systems omit that, or only have message streams (e.g. MQTT). Good RPC systems need both.
The only downside I find is that it is rather complicated (in design, not in use).
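For illustration, the four call shapes gRPC supports can all be declared in one service definition. This is a sketch with invented names, not any real API:

```proto
syntax = "proto3";

// Illustrative messages only.
message Request {}
message Reply {}
message Chunk { bytes data = 1; }
message Ack {}
message Topic { string name = 1; }
message Event { string payload = 1; }
message ChatMessage { string text = 1; }

service Feed {
  rpc Get(Request) returns (Reply);                          // unary
  rpc Upload(stream Chunk) returns (Ack);                    // client streaming
  rpc Subscribe(Topic) returns (stream Event);               // server streaming
  rpc Chat(stream ChatMessage) returns (stream ChatMessage); // bidirectional
}
```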
Been using flatbuffers in production for a high speed market feed for a month now. Love it. Decode/encode time is absurdly fast (~1-2 microseconds for a small to medium schema). If you're pushing 50k+ events/second it can be a great choice. Takes up almost no space on the wire too.
"With that said, my intuition is that SBE will probably edge Cap’n Proto and FlatBuffers on performance in the average case, due to its decision to forgo support for random access. Between Cap’n Proto and FlatBuffers, it’s harder to say. FlatBuffers’ vtable approach seems like it would make access more expensive, though its simpler pointer format may be cheaper to follow. FlatBuffers also appears to do a lot of bookkeeping at encoding time which could get costly (such as de-duping vtables), but I don’t know how costly.
For most people, the performance difference is probably small enough that qualitative (feature) differences in the libraries matter more."
> Because it’s easy to pick on myself. :) I, Kenton Varda, was the primary author of Protocol Buffers version 2, which is the version that Google released open source. Cap’n Proto is the result of years of experience working on Protobufs, listening to user feedback, and thinking about how things could be done better.
To be completely fair, Protobuf v3 is also the result of years of experience working on Protobufs, listening to user feedback, and thinking about how things could be done better :)
> primitive fields set to default values (0 for numeric fields, empty for string/bytes fields) will be skipped during serialization.
I don't totally understand this. Presumably during deserialization they will be set to defaults and not missing? Otherwise, coupled with the removal of required fields, it seems impossible to actually send a 0-value number or empty string, or to send a proto without a field and not have it set to 0 or "" (have to explicitly null the field?).
And how do you send an explicit zero so that the client knows the field was really set by the server and isn't just the default?
Or an explicit empty string?
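For what it's worth, the well-known types mentioned upthread include wrapper messages that restore field presence for exactly this case, since a message field (unlike a proto3 scalar) does carry a set/unset bit. A sketch with invented field names:

```proto
syntax = "proto3";

import "google/protobuf/wrappers.proto";

message Account {
  // Scalar field: 0 and "not set" are indistinguishable on the wire.
  int32 balance = 1;

  // Wrapper types are messages, so presence is tracked: an explicit 0
  // or "" serializes as a present (empty) submessage.
  google.protobuf.Int32Value credit_limit = 2;
  google.protobuf.StringValue nickname = 3;
}
```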
One case where this question is important is when you are updating a record stored by the server. You only want to send fields you are changing because the record might be huge. But then how does the server distinguish between fields you didn't set and fields you want to set back to the default? The solution is to also tell the server which fields you are changing in a separate message.
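That "separate message listing the changed fields" pattern is what `google.protobuf.FieldMask` provides. A hedged sketch of an update request (message names invented, but the shape follows Google's own update APIs):

```proto
syntax = "proto3";

import "google/protobuf/field_mask.proto";

message User {
  string display_name = 1;
  string email = 2;
}

message UpdateUserRequest {
  // Only the fields being changed need to be filled in here.
  User user = 1;

  // Names the fields the server should touch,
  // e.g. paths: ["display_name", "email"]. Anything not listed is
  // left alone, even if it is at its default value in `user`.
  google.protobuf.FieldMask update_mask = 2;
}
```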
I was hoping for packed serialization of non-primitive types. I once used Protobuf to serialize small point clouds, and ended up needing to serialize them as a packed double array and reconstruct the (x, y, z) structure at read time to avoid Protobuf malloc'ing each point individually. Not a huge deal, but it would be a real pain for more complex types.
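The workaround described above looks roughly like this in proto2 terms (names invented): trade the per-point structure for one packed scalar array and rebuild the (x, y, z) triples after decoding.

```proto
syntax = "proto2";

// Natural schema: decoding mallocs one Point message per point.
message PointCloudStructured {
  message Point {
    optional double x = 1;
    optional double y = 2;
    optional double z = 3;
  }
  repeated Point points = 1;
}

// Workaround: flatten to (x, y, z, x, y, z, ...). [packed=true]
// stores the whole array as a single length-delimited blob, so
// decoding is one allocation instead of one per point.
message PointCloudPacked {
  repeated double coords = 1 [packed = true];
}
```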
Could someone explain to me why you would use Protocol Buffers, Cap'n Proto, etc versus rolling your own type-length-value protocol besides API interop?
What if your team could write a smaller TLV protocol, and it was necessary to keep your codebase small? Would this not be wise? Are Protobufs and party not comparable to TLV protocols?
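To make the comparison concrete, a minimal TLV codec really can be tiny. Here's a hedged Python sketch with invented type tags (1-byte tag, 4-byte big-endian length, raw payload); a real in-house protocol would of course need versioning, error handling, and more types:

```python
import struct

# Invented type tags for illustration.
T_INT, T_STR = 0x01, 0x02

def encode(records):
    """Encode a list of (tag, value) pairs into TLV bytes."""
    out = bytearray()
    for tag, value in records:
        payload = (struct.pack(">q", value) if tag == T_INT
                   else value.encode("utf-8"))
        # 1-byte tag + 4-byte big-endian length, then the payload.
        out += struct.pack(">BI", tag, len(payload)) + payload
    return bytes(out)

def decode(data):
    """Decode TLV bytes back into a list of (tag, value) pairs."""
    records, i = [], 0
    while i < len(data):
        tag, length = struct.unpack_from(">BI", data, i)
        i += 5
        payload = data[i:i + length]
        i += length
        value = (struct.unpack(">q", payload)[0] if tag == T_INT
                 else payload.decode("utf-8"))
        records.append((tag, value))
    return records
```

The catch, as the replies note, is everything this sketch omits: schema evolution, cross-language codegen, and the debugging time the in-house version will consume.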
In the vast majority of cases, you want your team to spend their time doing something other than reinventing protos, debugging the in-house implementation, maintaining the library, etc.
It's not clear to me anyway how doing it yourself would help keep your codebase small vs using protos. In terms of code to maintain, doing it yourself is a net loss. In terms of binary size and method count, the proto libraries for Objective-C and Android are optimized like crazy.
Those are all reasons why I wanted to use protobufs to begin with. It sounded like it solved many issues for us.
But I'm thinking about scripting environments, where the data types used in protobufs don't exist in the host language. Simple things like this. I think in the implementations I've seen, they're just coerced or ignored. That's fine, imo.
But in terms of small codebases: a simple TLV protocol, where only limited data types are implemented, can be 1/10th of the size of any protobufs implementation.
My team has built out a high performance type-length-value system that doesn't require compiled schemas for game development, and we have a very small serialization lib that's smaller than any protobufs implementation for our target language.
I'd like to use protobufs to decrease the amount of modules we have to personally maintain, but I don't see the value in doing so for our particular situation.
I'm a bit confused: When you talk of size, are you talking of the compiled binary size of the runtime + generated code, or are you talking of lines of code?
If you're talking of binary size, I'm surprised that it'd be a problem given that you're using a scripting environment. Maybe you'd be willing to share more details?
If you're talking of lines of code, using someone else's library seems to me to always be better.
C# binary serialization is only useful in certain circumstances. It doesn't work outside the .NET world and it even has compatibility problems within the .NET world—you can break deserialization by making certain changes to your code. From the Microsoft documentation:
> The state of a UTF-8 or UTF-7 encoded object is not preserved if the object is serialized and deserialized using different .NET Framework versions.
Performance and data size are much better with protobufs: http://stackoverflow.com/questions/549128/fast-and-compact-o.... Built-in serializers are only workable when both ends are on the same platform (i.e. .Net), and even then class versioning can be a problem.