Hacker News | nbbnbb's comments

Let's take it a level up. I'm still not sure why we even need a message bus in there in the first place. The whole Linux boot/init/systemd pile is a Rube Goldberg machine that is very difficult to understand at the moment. The only reason I suspect most people aren't complaining is that our abstraction level is currently a lot higher (Docker, Kubernetes, etc.), machines are mostly ephemeral cattle, and few people go near it.

As for JSON, please don't even get me started on that. It is one of the worst serialisation decisions we ever made as a society: poorly defined and unreliable primitive types, terrible schema support, and expensive to parse. In a kernel, it does not belong! Give it a few months and varlink will be YAML over the top of that.


I think it doesn't have to be a message bus per se; that design decision is mostly just because it's convenient. Even D-Bus can actually be used without the bus, if you really want to.

D-Bus is just a bus so it can solve a bunch of different IPC problems at the same time. e.g. D-Bus can handle ACLs/permissions on RPC methods, D-Bus can handle multiple producers and multiple consumers, and so on. I think ultimately all that's really needed is IPC, and having a unified bus is just to allow for some additional use cases that are harder if you're only using UNIX domain sockets directly.

If there are going to be systemd components that have IPC, then I'd argue they should probably use something standard rather than something bespoke. It's good to not re-invent the wheel.

Not that I think Varlink is any better. It seems like at best it's probably a lateral move. I hope this does not go forward.


> If there are going to be systemd components that have IPC, then I'd argue they should probably use something standard rather than something bespoke. It's good to not re-invent the wheel.

This is my point.

My favourite D-Bus situation, a number of years ago, was a CentOS 7 box on which reboot stopped working, with a cryptic D-Bus error that no one had ever seen before. I had to sync it, power-cycle the node from the iLO card and cross my fingers.

I really don't give a shit about this. I just wanted to run my jobs on the node, not turn into a sysadmin due to someone else's dubious architectural decisions.


Yes, but the systemd developers don't want to implement their own protocols with e.g. ACL checking, and given some of their track record I kind of think you don't want them to, either. I'm pretty sure the error conditions would be even more bespoke if they "just" used UNIX domain sockets directly. Don't get me wrong, there's nothing particularly wrong with UNIX domain sockets, but there are no "standard" protocols for communicating over UDS.


This is systemd we’re talking about. A service manager that already mucks with mount namespaces.

It would be quite straightforward to map a capability-like UNIX socket into each service’s filesystem and give it a private view of the world. But instead…

> Public varlink interfaces are registered system-wide by their well-known address, by default /run/org.varlink.resolver. The resolver translates a given varlink interface to the service address which provides this interface.

…we have well known names, and sandboxing, or replacing a service for just one client, remains a mess. Sigh.


Please, your trolling is not really welcome.

> It would be quite straightforward to map a capability-like UNIX socket into each service’s filesystem and give it a private view of the world. But instead…

Can you link to your PR where you solved the problem?


Well, there sort of is, but people don't tend to know about or use it. If it's within the same machine and architecture, which should be the case for an init system, then a fixed-size struct can be written and read trivially.


C structs are a terrible serialization format, since they are not a serialization format at all. Nothing guarantees that you will get consistent struct behavior on the same machine, but also, it only really solves the problem for C. For everything else, you have to duplicate the C structure exactly, including however it may vary per architecture (e.g. due to alignment.)

And OK fine. It's not that bad, most C ABIs are able to work around this reasonably OK (not all of them but sure, let's just call it a skill issue.) But then what do you do when you want anything more complicated than a completely fixed-size type? Like for example... a string. Or an array. Now we can't just use a struct, a single request will need to be split into multiple different structures at the bare minimum.

And plus, there's no real reason to limit this all to the same machine. Tunneling UNIX domain sockets over the network is perfectly reasonable behavior and most* SSH implementations these days support this. So I think scoping the interoperability to "same machine" is unnecessarily limiting, especially when it's not actually hard to write consistent de/serialization in any language.

* At least the ones I can think of, like OpenSSH[1], Go's x/crypto/ssh[2], and libssh2[3].

[1]: https://www.openssh.com/txt/release-6.7

[2]: https://pkg.go.dev/golang.org/x/crypto/ssh#Client.ListenUnix

[3]: https://github.com/libssh2/libssh2/pull/945


BTW, Lustre's RPC system's serialization is very much based on C structs. It's receiver-makes-right to deal with endianness, for example. It's a pain, but it's also fast.


Making an RPC serialization system that is zero-overhead, i.e. can use the same format on the wire as it does on disk, is not a terrible idea. Capnp is the serialization format that I've been suggesting as a potential candidate, and it is basically just taking the idea of C structs, dumping it into its own schema language, and adding the bare minimum to get some protobuf-like semantics.


Well, Lustre RPC doesn't use on-disk data structures on the wire, though that is indeed an interesting idea.

In Lustre RPC, _control_ messages go over one channel and they're all C structures with sender encoding hints so the receiver can make it right, and any variable-length payloads go in separate chunks trailing the C structures.

Whereas bulk _data_ is done with RDMA, and there's no C structures in sight for that.

Capnp sounds about right for encoding rules. The way I'd do it:

  - target 64-bit architectures
    (32-bit senders have to do work
     to encode, but 64-bit senders
     don't)
  - assume C-style struct packing
    rules in the host language
    (but not #pragma packed)
  - use an arena allocator
  - transmit {archflags, base pointer, data}
  - receiver makes right:
     - swab if necessary
     - fix interior pointers
     - fail if there are pointers
       to anything outside the
       received data
     - convert to 32-bit if the
       receiver is 32-bit
(That's roughly what Lustre RPC does.)

As for syntax, I'd build an "ASN.2" that has a syntax that's parseable with LALR(1), dammit, and which is more like what today's devs are used to, but which is otherwise 100% equivalent to ASN.1.


Out of curiosity, why not use offsets instead of pointers? That's what capnp does. I assume offset calculation is going to be efficient on most platforms. This removes the need for fixing up pointers; instead you just need to check bounds.


It's more work for the sender, but the receiver still has to do the same amount of work as before to get back to actual pointers. So it seems like pointless work.

Having actual interior pointers means not having to deal with pointers as offsets when using these objects. Now the programming language could hide those details, but that means knowing or keeping track of the root object whenever traversing those interior pointers, which could be annoying, or else encoding an offset to the root and an offset to the pointed-to-item, which would be ok, and then the programming language can totally hide the fact that interior pointers are offset pairs.

I've a feeling that fixing up pointers is the more interoperable approach, but it's true that it does more memory writes. In any case all interior pointers have to be validated on receiving -- I don't see how to avoid that (bummer).

What a fun sub-thread.


Note that staying within the domain of this problem was the point. Which means on the same machine, with the same architecture, and both ends being C, which is what the init system is written in.

You are adding more problems that don't exist to the specification.

As for strings, just shove a char[4096] in there. Use a bit of memory to save a lot of parsing.


> You are adding more problems that don't exist to the specification.

D-Bus does in fact already have support for remoting, and like I said, you can tunnel it today. I'm only suggesting it because I have in fact tunneled D-Bus over the network to call into systemd specifically, already!

> As for strings, just shove a char[4096] in there. Use a bit of memory to save a lot of parsing.

OK. So... waste an entire page of memory for each string. And then we avoid all of that parsing, but the resulting code is horribly error-prone. And then it still doesn't work if you actually want really large strings, and it also doesn't do much to answer arrays of other things like structures.

Can you maybe see why this is compelling to virtually nobody?


Even being run on the same machine doesn't guarantee two independent processes agree on C struct layout compiled from the same source. For one, you could have something as simple as one compiled for 32-bit and one for 64-bit, but even then, compiler flags can impact struct layout.


> As for strings, just shove a char[4096] in there.

For the love of God, use a proper slice/fat pointer, please.

Switching over to slices eliminates 90%+ of the issues with using C. Carrying around the base and the length eliminates a huge number of the overrun issues (especially if you don't store them consecutively).

Splitting the base and the offset gives a huge amount of semantic information and makes serialization vastly easier.


Broadly, I agree with you. C strings were a mistake. The standard library is full of broken shit that nobody should use, and worse, due to the myriad of slightly different "safe" string library functions (a completely different subset of which is supported on any given platform), which all have different edge cases, many people are over-confident that their C string code is actually correct. But is it? Is it checking errors? Does your function ensure that the destination buffer is null-terminated when it fails? Are you sure you don't have any off-by-one issues anywhere?

Correct as you may be, though, the argument here is that you should just write raw structs into Unix sockets. In this case you can't really use pointers. So, realistically, no slices either. In this context a fixed-size buffer is quite literally the only sensible thing you can do, but also, I think it's a great demonstration of why you absolutely shouldn't do this.

That said, if we're willing to get rid of the constraint of using only one plain C struct, you could use offsets instead of pointers. Allocate some contiguous chunk of memory for your entire request, place structs/strings/etc. in it, and use relative offsets. Then on the receiver side you just need some fairly basic validation checks to ensure none of the offsets go out of bounds. But at that point, you've basically invented part of Cap'n'proto, which raises the question: why not just use something like that instead? It's pretty much the entire reason they were invented.

Oh well. Unfortunately the unforced errors of D-Bus seem like they will lead to an overcorrection in the other direction, turning the core of our operating system into something that I suspect nobody will love in the long term.


> But at that point, you've basically invented part of Cap'n'proto

The only problem I have with Cap'n Proto is that the description is external to the serialization. Ideally I'd like the binary format to have a small description of what it is at the message head so that people can process messages from future versions.

i.e. something like: "Hmm, I recognize your MSG1 hash/UUID/descriptor, so I can do a fast path and just map you directly into memory and grab field FD. Erm, I don't recognize MSG2, so I need to read the description and figure out if it even has field FD and then where FD is in the message."


I thought about this for a bit. I think largely to do things with messages you don't know about is probably a bad idea in general; writing code that works this way is bound to create a lot of trouble in the future, and it's hard to always reason about from every PoV.

However, there are some use cases where dealing with types not known at compile-time is useful, obviously debugging tools. In that case I think the right thing to do is just have a way to look up schemas based on some sort of identity. Cap'n'proto is not necessarily the greatest here: it relies on a randomly-generated 64-bit file identifier. I would prefer a URL or perhaps a UUID instead. Either way, carrying a tiny bit of identity information means that the relatively-niche users who need to introspect an unknown message don't cause everyone else to pay up-front for describability, and those users that do need introspection can get the entire schema rather than just whatever is described in the serialized form.

It's better to design APIs to be extensible in ways that don't require dynamic introspection. It's always possible to have a "generic" header message that contains a more specific message inside of it, so that some consumers of an API can operate on messages even when they contain some data that they don't understand, but I think this still warrants some care to make sure it's definitely the right API design. Maybe in the future you'll come to the conclusion it would actually be better if consumers don't even try to process things they're not aware of, as the semantics they implement may someday be wrong for a new type of message.


> I think largely to do things with messages you don't know about is probably a bad idea in general

Versioning, at the least, is extremely difficult without this.

Look at the Vulkan API for an example of what they have to do in C to manage this. They have both an sType tag and a pNext extension pointer in order for past APIs to be able to consume future versions.


But how do you know that the field called "FD" is meaningful if the message is a totally different schema than the one you were expecting?

In general there's very little you can really do with a dynamic schema. Perhaps you can convert the message to JSON or something. But if your code doesn't understand the schema it received, then it can't possibly understand what the fields mean...


Varlink is not a message bus. Hence you should be happy?


Yeah, how will number/float serialization go? Are we going to serialize them as strings and parse them? That abstraction isn't handled the same way across multiple languages.


> I'm still not sure why we even need a message bus in there in the first place.

Because traditional POSIX IPC mechanisms are absolute unworkable dogshit.

> It is one of the worst serialisation decisions we ever made as a society.

There isn't really any alternative. It's either JSON or "JSON but in binary". (Like CBOR.) Anything else is not interoperable.


There are a world of serialization formats that can offer a similar interoperability story to JSON or JSON-but-binary formats. And sure, implementing them in every language that someone might be interested in using might require some work, but:

- Whatever: people in more niche languages are pretty used to needing to do FFI for things like this anyhow.

- Many of them already have a better ecosystem than D-Bus. e.g. interoperability between Protobuf and Cap'n'proto implementations is good. Protobuf in most (all?) runtimes supports dynamically reading a schema and parsing binary wire format with it, as well as code generation. You can also maintain backwards compatibility in these formats by following relatively simple rules that can be statically-enforced.

- JSON and JSON-but-binary have some annoying downsides. I really don't think field names of composite types belong as part of the ABI. JSON-like formats also often have to try to deal with the fact that JSON doesn't strictly define all semantics. Some of them differ from JSON in subtle ways, so supporting both JSON and sorta-JSON can lead to nasty side-effects.

Maybe most importantly, since we're not writing software that's speaking to web browsers, JSON isn't even particularly convenient to begin with. A lot of the software will be in C and Rust most likely. It helps a bit for scripting languages like Python, but I'm not convinced it's worth the downsides.


Sorry, but bash isn't a "niche language" and it doesn't have an FFI story.



I don't know how to tell you this, but, you don't need to implement an RPC protocol in bash, nor do you need FFI. You can use CLI tools like `dbus-send`.

I pray to God nothing meaningful is actually doing what you are insinuating in any serious environment.


I'm trying to tell you that something that isn't straceable and greppable doesn't belong in your system services stack.


Strace/ptrace is awful, and I don't know what "greppable" means here. Nothing stops us from adding DTrace probes.


FFI is the shell's only job.


This is quite frankly a ridiculous point. Most of that flak came from the HPC people, who built loads of stuff on top of it in the first place. It's absolutely fine for this sort of stuff. It's sending the odd little thing here and there, not running a complex HPC cluster.

As for JSON, are you really so short-sighted as to think it's the only method of encoding something? Is "oh well, it doesn't fit the primitive types, so just shove it in a string and add another layer of parsing" acceptable? Hell no.


> ...it's the only method of encoding something?

If you want something on the system level parsable by anything? Yes it is.


protobufs / asn.1 / structs ...

Edit: hell even XML is better than this!


Structs are a part of C's semantics. They are not an IPC format. You can somewhat use them like one if you take a lot of precautions about how they are laid out in memory, including padding and packing, but it's very brittle.

Asn.1 is both quite complicated and not very efficient.

They could certainly have gone with protobufs or another binary serialisation format but would it really be better than the current choice?

I don’t think the issue they are trying to solve is related to serialisation anyway. Seems to me they are unhappy about the bus part not the message format part.


ASN.1 BER/DER is more or less the same thing as CBOR. The perceived complexity of ASN.1 comes from the schema language and from specifications written in the convoluted telco/ITU-T style (and, well, the '80s type system that has roughly eight different types for "human readable string").


That "convoluted telco/ITU-T style" yields amazingly high quality specifications. I'll take X.68x / X.69x any day over most Internet RFCs (and I've written a number of Internet RFCs). The ITU-T puts a great deal of effort into its specs, or at least the ASN.1 working group did.

ASN.1 is not that complicated. Pity that fools who thought ASN.1 was complicated re-invented the wheel quite poorly (Protocol Buffers I'm looking at you).


For our sins, our industry is doomed to suffer under the unbearable weight of endless reinvented wheels. Of course it would have been better to stick with ASN.1. Of course we didn't, because of inexperience and hubris. We'll never learn.


It sure seems that way. Sad. It's not just hubris or inexperience -- it's cognitive load. It's often easier to wing something that later grows a lot than it is to go find a suitable technology that already exists.


One thing I liked about a Vernor Vinge sci-fi novel I read once was the concept of "computer archeologist". Spool the evolution of software forwards a few centuries, and we'll have layers upon layers of software where instead of solving problems with existing tooling, we just plaster on yet another NIH layer. Rinse and repeat, and soon enough we'll need a separate profession of people who are capable of digging down into those old layers and figure out how they work.


> The perceived complexity of ASN.1 comes from the schema language and specifications written in the convoluted telco/ITU-T style (and well, the 80's type system that has ~8 times two different types for “human readable string”).

I can’t resist pointing out that it's basically a longer way of saying quite complicated and not very efficient.


> I can’t resist pointing out that it's basically a longer way of saying quite complicated and not very efficient.

That's very wrong. ASN.1 is complicated because it's quite complete by comparison to other syntaxes, but it's absolutely not inefficient unless you mean BER/DER/CER, but those are just _some_ of the encoding rules available for use with ASN.1.

To give just one example of "complicated", ASN.1 lets you specify default values for optional members (fields) of SEQUENCEs and SETs (structures), whereas Protocol Buffers and XDR (to give some examples) only let you specify optional fields but not default values.

Another example of "complicated" is that ASN.1 has extensibility rules because the whole "oh TLV encodings are inherently extensible" thing turned out to be a Bad Idea (tm) when people decided that TLV encodings were unnecessarily inefficient (true!) so they designed efficient, non-TLV encodings. Well guess what: Protocol Buffers suffers from extensibility issues that ASN.1 does not, and that is a serious problem.

Basically, with a subset of ASN.1 you can do everything that you can do with MSFT RPC's IDL, with XDR, with Protocol Buffers, etc. But if you stick to a simple subset of ASN.1, or to any of those other IDLs, then you end up having to write _normative_ natural language text (typically English) in specifications to cover all the things not stated in the IDL part of the spec. The problem with that is that it's easy to miss things or get them wrong, or to be ambiguous. ASN.1 in its full flower of "complexity" (all of X.680 plus all of X.681, X.682, and X.683) lets you express much more of your protocols in a _formal_ language.

I maintain an ASN.1 compiler. I've implemented parts of X.681, X.682, and X.683 so that I could have the compiler generate code for the sorts of typed holes you see in PKI -all the extensions, all the SANs, and so on- so that the programmer can do much less of the work of having to invoke a codec for each of those extensions.

A lot of the complexity in ASN.1 is optional, but it's very much worth at least knowing about it. Certainly it's worth not repeating mistakes of the past. Protocol Buffers is infuriating. Not only is PB a TLV encoding (why? probably because "extensibility is easy with TLV!!1!", but that's not quite true), but the IDL requires manual assignment of tag values, which makes uses of the PB IDL very ugly. ASN.1 originally also had the manual-assignment-of-tags problem, but eventually ASN.1 was extended to not require that anymore.

Cavalier attitudes like "ASN.1 is too complicated" lead to bad results.


> That's very wrong. ASN.1 is complicated because it's quite complete by comparison to other syntaxes

So, it's quite complicated. Yes, which is what I have been saying from the start. If you start the conversation with "you can define a small subset of this terrible piece of technology which is bearable", it's going to be hard convincing people it's a good idea.

> Cavalier attitudes like "ASN.1 is too complicated" lead to bad results.

I merely say quite complicated not too complicated.

Still, ASN.1 is a telco protocol through and through. It shows everywhere: syntax, tooling. Honestly, I don't see any point in using it unless it's required by law or by contract (I had to, I will never again).

> but it's absolutely not inefficient unless you mean BER/DER/CER, but those are just _some_ of the encoding rules available for use with ASN.1.

Sorry, I'm glad to learn you can make ASN.1 efficient if you are a specialist and know what you are doing with the myriad available encodings. It's only inefficient in the way everyone uses it.


> So, it's quite complicated.

Subsets of ASN.1 that match the functionality of Protocol Buffers are not "quite complicated" -- they are no more complicated than PB.

> Still, ASN.1 is a telco protocol through and through.

Not really. The ITU-T developed it, so it gets used a lot in telco protocols, but the IETF also makes a lot of use of it. It's just a syntax and set of encoding rules.

And so what if it were "a telco protocol through and through" anyways? Where's the problem?

> It shows everywhere: syntax, tooling.

The syntax is very much a 1980s syntax. It is ugly syntax, and it is hard to write a parser for using LALR(1) because there are cases where the same definition means different things depending on what kinds of things are used in the definition. But this can be fixed by using an alternate syntax, or by not using LALR(1), or by hacking it.

The tooling? There's open source tooling that generates code like any XDR tooling and like PB tooling and like MSFT RPC tooling.

> Sorry, I'm glade to learn you can make ASN.1 efficient if you are a specialist and now what you are doing with the myriad available encodings. It's only inefficient in the way everyone use it.

No, you don't have to be a specialist. The complaint about inefficiency is about the choice of encoding rules made by whatever protocol spec you're targeting. E.g., PKI uses DER, so a TLV encoding, thus it's inefficient. Ditto Kerberos. These choices are hard to change ex-post, so they don't change.

"[T]he way everyone uses it" is the way the application protocol specs say you have to. But that's not ASN.1 -- that's the application protocol.


> The tooling? There's open source tooling that generates code like any XDR tooling and like PB tooling and like MSFT RPC tooling.

There is no open source tooling that combines an understanding of the schemas actually in use - in 5G this includes parameterized specifications per X.683 - with a decoder able to show a partially decoded message before an error, with a per-bit explanation of the rules that led to its encoding.

> E.g., PKI uses DER, so a TLV encoding, thus it's inefficient.

When it is used, a ~5% space saving is never worth the effort people spend diagnosing problems. I strictly vote for this "inefficiency".


> Structs are a part of C semantic.

Uh, no, structs, records, whatever you want to call them, are in many, if not most programming languages. "Structs" is not just "C structs" -- it's just shorthand for "structured data types" (same as in C!).

> Asn.1 is both quite complicated and not very efficient.

Every rich encoding system is complicated. As for efficiency, ASN.1 has many encoding rules, some of which are quite bad (BER/DER/CER, which are the first family of ERs, so many think ASN.1 == BER/DER/CER, but that's not the case), and some of which are very efficient (PER, OER). Heck, you can use XML and JSON as ASN.1 encoding rules (XER, JER).

> They could certainly have gone with protobufs or another binary serialisation format but would it really be better than the current choice?

Protocol buffers is a tag-length-value (TLV) encoding, same as BER/DER/CER. Having to have a tag and length for every value encoded is very inefficient, both in terms of encoding size as well as in terms of computation.

The better ASN.1 ERs -PER and OER- are much more akin to XDR and Flat buffers than to protobufs.

> I don’t think the issue they are trying to solve is related to serialisation anyway. Seems to me they are unhappy about the bus part not the message format part.

This definitely seems to be the case.


It seems you posit taglessness as a crucial, universal merit of any encoding scheme. This is good in an ideal world, heh.

I had the misfortune of working on 5G, which is full of PER-encoded protocols. Dealing with discrepancies in them - incompatible changes between 3GPP standard versions, different vendors' errors - combined with the usually low level of developers and managers in a typical corporation, was an utter nightmare.

IETF, in general, provides a good policy, combining truly fixed binary protocols where they are unavoidable (the IP/TCP/UDP levels) with flexible, often textual, protocols where there is no substantial overhead from their use. Their early moves, well, suffered from over-grammaticalization (as in RFC 822). CBOR is nice here because it combines tagging and compactness: a 3-bit basic tag combined with the value (if it fits) or a length. It is comparable to OER in efficiency but is decodable without a schema - and that is extremely useful in practice.


> Uh, no, structs, records, whatever you want to call them

It's plenty clear from the discussion context that the OP is talking about C structs, but yes, replace C with any language which suits you. It will still be part of the language's semantics and not an IPC specification.

The point is you can't generally use memory layout as an IPC protocol because you generally have no guarantee that it will be the same for all architectures.


If it's IPC, it's the same architecture (mostly; typically there's at most 3 local architectures). The receiver can always make right. If there's hidden remoting going on, the proxies can make things right.


> The receiver can always make right.

Certainly but that’s hardly structs anymore. You are implicitly defining a binary format which is aligned on the sender memory layout then.


It's "structs" when the sender and receiver are using the same architecture, and if they're using the same int/long/pointer sizes then the only work to do is swabbing and pointer validation / fixups. That's a lot less work than is needed to do just about any encoding like protocol buffers, but it's not far from flat buffers and capnp.


Thank goodness they didn’t pick YAML though.


Yet!


You don't quite understand how this works.

One requirement is being able to strace a misbehaving service and figure out quickly what it's sending and receiving.

This is a system-level protocol, not just a random user app.


See you wrote it in a clearly understandable way without abusing mathematics or giving any credence to mathematics being involved in the concept.


Using an equation to represent this is dishonest. It assumes linearity and proportionality between variables which may not be the case. Also none of the terms are really measurable. You might as well write statements instead.

I mean try defining waste and quality.

Fundamentally, and to use a non-mathematical term appropriately in context, it is a load of bollocks. It is used to make simple ideas look like they are rigorously defined to people without the tools to interpret them. And that is dishonest.

As for inappropriate statistical methods, survey companies are a breeding ground for providing tools whose results are not interpreted with any statistical rigour or language.

Source: annoyed mathematician.


+1.

One of the few comments on HN I fully agree with without reservation :-)

While folks may use symbols instead of long-winded statements, they need to make it clear that it is merely a shorthand and no mathematical rigour is implied.


I don't have any formal data I can share without losing anonymity and probably getting sued by my employer, but the introduction of them at my organisation correlates directly with a measurable rise in bugs and incidents. From causal analysis, the tools themselves are not directly responsible as such, despite having limited veracity; rather, people trust them and do not do their jobs properly. There is also a mystique around them being the solution for all validation processes, which leads to suboptimal attention at the validation stage, in the hope that some vendor we already have will magically make a problem go away like they said they would at the last conference. I figure at this point the gain might be a net negative from a social and human perspective from the moment the idea was commercialised.

Urgh. I can't wait to retire.


I wonder if this is a legal issue if they bypass HDCP and record or snapshot content from a privileged domain?


Echoing other comments: most likely the image is scaled down and/or only fingerprints (think: Shazam) are uploaded.

While the latter allows the remote party to gauge what you're looking at, it most likely doesn't infringe copyright. But, as you mention, it might very well violate some of the HDCP fine print.

