
The article doesn't do a great job of explaining that this isn't always just filtering; sometimes it's aggregation too.

A mobile client may need to call 20 different APIs to gather the data points for a single page. Even if every single backend offered options for filtering as efficiently as possible, you may still need an aggregation service to bundle those 20 calls up into a single service call (or a small set of them) to save on round-trip time.
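For a concrete (if simplified) picture, here's roughly what that aggregation endpoint could look like. This is just a sketch with made-up service URLs and an Express + fetch setup, not anything from the article:

    import express from "express";

    const app = express();

    // One URL per backend the mobile home page needs (hypothetical names).
    const BACKENDS = [
      "http://user-service/profile",
      "http://orders-service/recent",
      "http://recs-service/suggestions",
    ];

    app.get("/mobile/home", async (_req, res) => {
      // Fan out in parallel over the fast data-center network, then return
      // one merged payload over the single slow client hop.
      const parts = await Promise.all(
        BACKENDS.map((url) => fetch(url).then((r) => r.json()))
      );
      res.json(Object.assign({}, ...parts));
    });

    app.listen(3000);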



You still have to aggregate somewhere. You can do it on the client or in the frontend backend, but it still has to get done. In the case of the latter we’re adding one extra hop before the client gets their data.

This pattern is advocating for reduced technical performance to accommodate organizational complexity, which I think the parent finds odd. You either have the client call 20 service/data?client_type=ios or you have the frontend backend call 20 different service/data?client_type=ios (after the client has called it).


> In the case of [backend for frontend] we’re adding one extra hop before the client gets their data.

> You either have the client call 20 service/data?client_type=ios or you have the frontend backend call 20 different service/data?client_type=ios

The article touches on this point, and it mirrors what I've seen as well. The time from client -> backend can be significant, for reasons completely outside of your control.

By using this pattern, you have 1 slow hop that's outside of your control followed by 20 hops that are in your control. You could decide to implement caching a certain way, batch API calls efficiently, etc.

You could do that on the frontend as well, but I've found it more complex in practice.
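As a rough illustration of what's easy to do in the hops you control (this cache helper is hypothetical, not from the article):

    // Tiny in-memory TTL cache in the BFF, so repeated client requests can
    // skip some of the internal backend calls entirely.
    const cache = new Map<string, { data: unknown; expires: number }>();

    async function cachedFetch(url: string, ttlMs = 30_000): Promise<unknown> {
      const hit = cache.get(url);
      if (hit && hit.expires > Date.now()) return hit.data;

      const data = await fetch(url).then((r) => r.json());
      cache.set(url, { data, expires: Date.now() + ttlMs });
      return data;
    }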

Also a note: I'm not really a BFF advocate or anything, just pointing out the network hops aren't equal. I did a spike on a BFF server implemented with GraphQL and it looked really promising.


You won't necessarily have to have ?client_type=xyz params on your endpoints if the BFF can do the filtering, so you avoid building out custom filtering logic in each backend service. Of course, you'll pay the price in serialization time and data volume to transmit to the BFF, but that's negligible compared to the RTT of a mobile client.
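A sketch of that trade-off, with made-up field names: the full payload crosses the fast data-center link, and only the pruned version crosses the slow mobile link.

    // Instead of each backend growing ?client_type=ios logic, the BFF fetches
    // the full resource and keeps only the fields the mobile page renders.
    const MOBILE_FIELDS = ["id", "title", "thumbnailUrl"];

    function pickForMobile(item: Record<string, unknown>) {
      return Object.fromEntries(
        MOBILE_FIELDS.filter((f) => f in item).map((f) => [f, item[f]])
      );
    }

    async function mobileProducts(): Promise<unknown[]> {
      const items: Record<string, unknown>[] =
        await fetch("http://catalog-service/products").then((r) => r.json());
      return items.map(pickForMobile);
    }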

I'd much rather issue 20 requests across a data center with sub-millisecond latency and pooled connections than try to make 20 requests from a spotty mobile network that's prone to all sorts of transmission loss and delays, even with multiplexing.


> You still have to aggregate somewhere.

Tbh, I'm not entirely sold on this, although I see this (server-side aggregation across data sources) as the main idea behind GraphQL. So it seems like it belongs in your GraphQL proxy (which can proxy GraphQL, REST, and SOAP endpoints, maybe even databases).

But for the "somewhere" part: consider that your servers might be on a 10 Gbps interconnect (and on a 1 to 10 Gbps link to external servers), while your client might be on a 10 Mbps link, over bigger distances (higher latency).

Aggregating on the client could be much slower because each round trip is much slower.
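Back-of-envelope, with purely illustrative numbers: suppose the mobile RTT is ~100 ms and the in-data-center RTT is ~1 ms.

    Client aggregates (20 mobile round trips):   20 x 100 ms        = ~2,000 ms
    BFF aggregates (1 mobile + 20 internal):     100 ms + 20 x 1 ms = ~120 ms

Parallelism on the client narrows the gap, but as noted above, a spotty mobile link with loss and retries tends to keep the second shape faster in practice.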

In addition, you might be able to do some server-side caching of queries that are popular across clients.
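Sketching the GraphQL-proxy version of that (hypothetical schema and service URLs, using the graphql-js package; a popular-query cache could sit in front of this, keyed by the query string):

    import { buildSchema, graphql } from "graphql";

    // One query that stitches two REST backends together server-side.
    const schema = buildSchema(`
      type Home {
        userName: String
        recentOrderIds: [ID]
      }
      type Query {
        home: Home
      }
    `);

    const rootValue = {
      home: async () => {
        // Both calls stay on the fast interconnect; the client sends one query.
        const [user, orders] = await Promise.all([
          fetch("http://user-service/profile").then((r) => r.json()),
          fetch("http://orders-service/recent").then((r) => r.json()),
        ]);
        return { userName: user.name, recentOrderIds: orders.ids };
      },
    };

    export async function handleQuery(source: string) {
      return graphql({ schema, source, rootValue });
    }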


I agree with your assessment here, but one additional benefit is the ability to iterate faster on the backend. You have control over _where_ the aggregated data comes from, without waiting months for users to update their mobile app so that it sends requests to a new service, for example.



