Hacker News | vyshane's comments

Fountain pens are like mechanical keyboards. People use them because they make the act of writing _feel_ good. It's not about the tech, but rather about the pleasure of using the instrument. How smooth do you want the nib to be? How much feedback? How bouncy/flexible? How much line variation? Discovering (or modding) your preferred nib is a bit like discovering the one switch that really suits you. A bit of a rabbit hole... And then of course you get into inks.


> And then of course you get into inks.

Shimmering inks (inks with glitter suspended in them). You will probably need to use dip pens (I like glass ones), but the results are stunning. Not something you'd ever expect to see out of a pen.

Plus, you can also use thinned acrylic paints with dip pens...


No backpressure in RxJS.
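To illustrate the point: RxJS observables are push-based, so a consumer has no built-in way to signal demand back to the producer. A minimal sketch in plain TypeScript (no rxjs dependency; the `Observer`/`fastProducer` names are illustrative, not RxJS API) of why that matters:

```typescript
// Push-based source: the producer decides when to emit; the consumer
// has no channel to say "slow down". This mirrors RxJS's model.
type Observer<T> = (value: T) => void;

function fastProducer(observer: Observer<number>, count: number): void {
  for (let i = 0; i < count; i++) {
    observer(i); // emits as fast as it can; no demand signal from the consumer
  }
}

// A slow consumer can only buffer what it hasn't processed yet.
const pending: number[] = [];
fastProducer((v) => pending.push(v), 1_000);

// The backlog grows with the producer's rate. Reactive Streams
// implementations (RxJava, Project Reactor, Akka Streams) instead let the
// consumer request(n) items at a time, bounding the buffer.
console.log(pending.length); // 1000
```

RxJS offers lossy mitigations (`throttleTime`, `sampleTime`, `bufferCount`), but those drop or batch events rather than propagating demand upstream.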


This article resonates with me. I feel very strongly about spyware (Bossware is too kind). Views expressed here are my own, not those of my employer.

I work for a company that makes an automated time tracking product (WiseTime [1]). We migrated our infrastructure to EU/Germany because we wanted to fall under a jurisdiction that is one of the strictest when it comes to privacy. This is how we think about the problem.

- Many professionals (lawyers, contractors, ...) get paid for the time that they bill their clients

- Manual time tracking (start/stop stopwatch) sucks

- Automated time tracking is an order of magnitude more convenient

- If you are going to automate the problem away, make sure that the system cannot be abused to spy on people

- Otherwise no one will want to use it!

We view privacy as one of our most important features, and our systems were designed from the ground up to protect it.

- Your activity is captured into a private timeline that only you can see

- To make your time available to your team, you must select the activities that you want to share, and explicitly post them to the team. It's like sending an email. Your draft is private, but once you send it off, then your recipient has a copy of it.

- We allow you to anonymise your posted activity data when you leave a team

- We allow you to specify filters around what activities should and shouldn't be captured. Of course you can delete anything you want off of your private timeline.

- We provide user-level and team-level data retention settings. We automatically purge data that falls outside of your desired retention period.

- We silo our data layer so that we don't store any personal information with user activity data. User activity data is siloed away from posted team data, and so on.

- We take GDPR seriously and we even have automated processes to purge data from our Sales team's CRM

We are a remote-first team, and we wanted to build a system that we personally dogfood without any qualms.

[1]: https://wisetime.com


I often find myself thinking about problems in the shower or out on walks, and that's also when I have big breakthroughs. How does WiseTime ensure I'm paid for that time too, not just when my butt is in my seat?


That's a tough one to automate. Right now, it involves logging a manual time entry to your timeline (then posting it). If you walk away from your computer and come back, WiseTime will ask whether you want to log the time (or part of it).

If you wake up in the morning, jump into the shower, solve a problem there, and hop onto your computer, WiseTime will then offer to log the last several hours, including your sleep time. Edit that down to 10 minutes (or however long your shower was) and log it. A bit contrived, but that's the best I've got at this time. It's a tough problem to solve ;)


I use ManicTime and it marks the block as idle. From there I can tag the block as lunch/break/meeting/shoulder surfing a coworker or whatever.

The activity feed is a little creepy but I'm not using it in a team so it is 100% local with no cloud stuff involved.


3 years ago, I also wanted a bare metal cluster for my homelab. I wanted x86-64, low power consumption, small footprint, and low cost. I ended up building this 5 node nano ITX tower:

https://vyshane.com/2016/12/19/5-node-nano-itx-kubernetes-to...

I think that the exposed boards add to its charm. Doesn't help with dust though.


Yours is a lot neater than the four-node bare cluster I built a few years ago: https://rwmj.wordpress.com/2014/04/28/caseless-virtualizatio...

One issue with caseless machines is the amount of RF they emit. Makes it hard to listen to any broadcast radio near one and probably disturbs the neighbours similarly.

I'm now using a cluster of NUCs which is considerably easier to deal with although quite a lot more expensive: https://rwmj.wordpress.com/2018/02/06/nuc-cluster-bring-up/


Very nice; I've considered doing something similar and running some production sites on it from my home, but the limitation has always been my terrible Internet bandwidth through Spectrum.

We almost got Verizon gigabit fiber a few years ago... then AT&T ran fiber to the front of my neighborhood last year, and then never ran it to the houses. As it is, I'm stuck with a 10 Mbps uplink, which is not enough for most of what I would want to do with a more powerful local cluster.


This is very cool. Curious, roughly, how much did this setup cost?


The NIO version of Swift gRPC is currently at v1.0.0-alpha.6. Hopefully we'll see a 1.0 version soon.

Swift gRPC repo: https://github.com/grpc/grpc-swift


This post on the Swift forums gives a good overview of Swift gRPC: https://forums.swift.org/t/discussion-grpc-swift/29584


How about the equivalent of RxJava instead: https://developer.apple.com/documentation/combine

Unfortunately not available on Linux, so server side is out :(


There’s an open source implementation that can be worth exploring:

https://github.com/broadwaylamb/OpenCombine


TL;DR

I was pretty excited when Apple announced SwiftUI and Combine at dub dub this year. I have been following the Swift gRPC project, and when they released their first 1.0.0-alpha version not long after, I decided that the world was ready for CombineGRPC, a library that integrates Swift gRPC and the Combine framework.

I dreamt of beautiful, responsive UIs; of streaming data straight to my lists as the user scrolled. Then I woke up and got hacking.


We've been using gRPC and Protocol Buffers for the last couple of years. We write APIs using the Protobuf interface definition language, then generate client libraries and server side interfaces. Then it's a matter of implementing the server by filling in the blanks.
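That workflow can be sketched as follows. The service and message names here are made up for illustration; they are not from the post. Running `protoc` with the gRPC plugin over a definition like this produces client stubs and a server-side interface to implement:

```protobuf
// Hypothetical API definition; codegen fills in the transport plumbing.
syntax = "proto3";

package timetracking.v1;

message GetEntryRequest {
  string entry_id = 1;
}

message TimeEntry {
  string entry_id = 1;
  string description = 2;
  int64 duration_seconds = 3;
}

service TimeEntryService {
  // protoc generates the client library and a server interface for this RPC;
  // implementing the server is then "filling in the blanks".
  rpc GetEntry(GetEntryRequest) returns (TimeEntry);
}
```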


I love protobuf for this reason. Personally I've opted for Twirp instead of gRPC, as gRPC has a lot of baggage, and streaming is really not necessary for me.

We've had to drop-in replace services, or add a validation or access layer in front of something, and using protobuf has made this super easy. Anything interacting with that service is none the wiser.


gRPC has been solid for us on the JVM, and streaming has been great when consuming from Apache Flink jobs, integrating with message queues, receiving push notifications and so on. For async work it's useful to have more than just request/response.

I've been playing with the FoundationDB Record Layer for a personal project of mine, and with this setup I can generate not only the API implementation, but also the models used by the persistence layer:

Protobuf (Messages) -> gRPC -> Scala/Monix -> Protobuf (Models) -> FoundationDB


> Protobuf (Models)

Sounds really cool! Is this something that comes out of the box or generated by your own plugins?


FoundationDB Record Layer uses protocol buffers out of the box. They leverage the fact that you can evolve protobuf messages in a sane way. That's their equivalent of doing database schema migrations.
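The "sane evolution" being leaned on here is protobuf's field-numbering rules: old readers skip unknown fields, new readers see defaults for absent ones, and removed tags can be reserved so they are never reused. A hedged sketch (the message and field names are invented for illustration):

```protobuf
// Illustrative only. Adding timezone (tag 4) is a safe, migration-free
// change: old readers ignore the unknown field, and new readers see the
// empty-string default when reading records written before the change.
message UserRecord {
  reserved 3;               // tag of a field deleted in an earlier version;
  reserved "legacy_email";  // reserving it prevents accidental reuse
  string user_id = 1;
  string name = 2;
  string timezone = 4;      // newly added field with a fresh tag number
}
```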


We've used Apache Thrift for the same reason on some projects.


Are you writing internal APIs or exposing some to external developers also? Are those external developers able to start from JSON and make a request?


Both. And if your external clients would rather consume a JSON/REST API, it's easy to derive one from a gRPC API. You can do it right there in your protobuf definition. It's actually easier to do it that way than to deal with OpenAPI's wall of YAML.
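"Right there in your protobuf definition" most likely refers to `google.api.http` annotations, which tools like grpc-gateway (and Google's own HTTP/JSON transcoding) use to map RPCs onto REST routes. A sketch, with invented service and message names:

```protobuf
// Hypothetical example of HTTP/JSON transcoding via google.api.http.
syntax = "proto3";

package billing.v1;

import "google/api/annotations.proto";

message GetInvoiceRequest {
  string invoice_id = 1;
}

message Invoice {
  string invoice_id = 1;
  int64 total_cents = 2;
}

service BillingService {
  rpc GetInvoice(GetInvoiceRequest) returns (Invoice) {
    // Exposes the same RPC to REST clients as
    // GET /v1/invoices/{invoice_id}, returning the Invoice as JSON.
    option (google.api.http) = {
      get: "/v1/invoices/{invoice_id}"
    };
  }
}
```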


I had good success with the CoreOS Kubernetes Vagrant boxes [1]. However, I switched once Kubernetes became usable via Docker because the latter gives me super fast setups and teardowns, allowing me to iterate quickly on infrastructure code.

[1] https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagr...


GKE is why I am personally switching from AWS to GCP. I'm running Kubernetes on AWS at my current gig, but I'd rather not have to build and maintain the cluster myself if I don't have to.

